Today's edition of our weekly news digest is a special corporate one, devoted to ROI, EBITDA, TCO, IFRS, CRM, SLA, NDA, GAAP and the like. Just kidding – as always, we'll talk about the most important security news of the week. As it happens, this week's stories are all relevant to corporate security in one way or another. We'll cover companies being hacked, data being leaked, and companies reacting to the incidents.
What is the difference between end-user security and corporate security? First, while users enjoy the luxury of relatively simple security solutions, corporate security is a very complex affair – for several reasons, but mostly because of the complexity of corporate IT infrastructure. Second, to protect that infrastructure from threats, specific policies have to be applied at all levels of the organization.
Anyway, how are businesses performing in terms of security? To be honest, not that well. For example, Gartner thinks that in three years companies will spend 30% of their budget on security. Moreover, the old-school approach of role-based access, which used to be the cornerstone of corporate security, is hopelessly obsolete. Right now, 90% of effort is spent on preventing a breach of the perimeter and only 10% on detection and response.
— CiscoEnterprise (@CiscoEnterprise) October 9, 2015
That means that once an intruder manages to infiltrate the infrastructure, he finds himself in a very comfortable environment, which frequently has devastating consequences for the victim. So Gartner's recommendation to change this ratio to 60/40 makes sense. For instance, our report on Carbanak, a notorious campaign against banks and financial organizations, showed that the criminals remained undetected in this '10%' zone for quite a while.
The previous editions of Security Week can be found here.
Outlook Web App as the entry point into corporate infrastructure
Suppose a hacker compromises one of many corporate PCs, bugs the machine, and siphons off its data – what does he get? If it is a regular employee's laptop, the culprit can steal some work-related data and, possibly, other information from the file servers that employee had access to.
But the campaign is much more 'efficient' if the surveillance is installed on the computer of a boss or an admin, who typically enjoys higher privileges. Unauthorized access to a mail server would compromise an enormous amount of data sent through email. A report prepared by Cybereason shows just how far that can go.
Targeted attack exposes Outlook Web Access weakness: https://t.co/5rshOVB5iB
— Eugene Kaspersky (@e_kaspersky) October 7, 2015
As it usually happens, the root cause was not some vulnerability in Outlook Web App (a.k.a. Outlook Web Access, Exchange Web Connect and Outlook on the web – Microsoft has changed this program's name four times in 20 years). The attackers stole (most likely via phishing) an admin's login credentials and injected a malicious (unsigned!) DLL, thus gaining access to both mail AND Active Directory. The hackers were then able to send email on behalf of any employee.
Another implant was found on the IIS server, monitoring connections to webmail. The research showed that the culprits constantly kept an eye on who logged into the mail system, when, and from where. The researchers pointed out to Microsoft that an unsigned binary could easily be executed on an Outlook Web App server, but Microsoft claimed that a properly configured system would not allow this to happen.
Whatever that means, it's secondary. In a nutshell, here's what we have to date:
— A service that, by design, faces both the Internet and the intranet.
— Lax security on the IT specialist's side (the login and password were stolen from him, not from an ordinary employee).
— A flawed server configuration, which allowed hassle-free installation of a backdoor.
— Failure to detect the breach for a long time.
The bottom line is a bunch of problems that should be solved separately. Curiously, data integrity in Active Directory itself is quite OK: the service is heavily protected (although there are examples of this attack vector being used). There is, however, a chance of a less obvious compromise through the weakest link.
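The detection failure is arguably the cheapest of these problems to address. As an illustration (not the method used in the actual investigation), even a naive baseline integrity check over a web server's module directory would have flagged a freshly planted DLL; the directory layout and file names below are hypothetical:

```python
import hashlib
from pathlib import Path


def snapshot(directory: Path) -> dict:
    """Map each DLL path under `directory` to its SHA-256 digest,
    forming a baseline for later comparison."""
    return {
        str(p.relative_to(directory)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(directory.rglob("*.dll"))
    }


def changed_modules(baseline: dict, current: dict) -> list:
    """Return module names that are new or whose contents changed
    since the baseline was taken."""
    return sorted(name for name, digest in current.items()
                  if baseline.get(name) != digest)


if __name__ == "__main__":
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        root = Path(tmp)  # stand-in for the webmail server's bin directory
        (root / "oab.dll").write_bytes(b"legitimate module")
        baseline = snapshot(root)

        # An attacker drops a backdoor DLL (hypothetical file name).
        (root / "owaauth.dll").write_bytes(b"planted backdoor")
        print(changed_modules(baseline, snapshot(root)))  # ['owaauth.dll']
```

Real-world monitoring would of course also verify code signatures and run from a host the attacker cannot tamper with, but the point stands: an unexpected binary appearing on a perimeter-facing server is a loud, easily automated signal.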
Data on fifteen million T-Mobile subscribers stolen via hack of a supplier
Allow me a short introduction. In the US, the majority of mobile subscribers sign a long-term contract with a carrier, which covers both voice/data plans and a device (mobile phone, smartphone, or tablet). It seems like a very convenient option: you get a new device for little or no money; on the other hand, you cannot switch to another carrier until your current contract expires.
This approach presupposes that the carrier checks your creditworthiness – just as is done with bank loans. To initiate this process, a carrier sends an inquiry to a credit bureau. In the story of the T-Mobile leak, that credit bureau was Experian, and it was hacked.
— Kaspersky Lab (@kaspersky) October 2, 2015
According to the disclosure, the unauthorized access was 'an isolated incident over a limited period of time', yet it resulted in the alleged leak of data on 15 million T-Mobile subscribers collected over a two-year period. To give them credit, both companies handled the disclosure in a very open and adequate manner: both posted an accurate and detailed description of the breach on their websites.
All victims whose data was compromised were offered a free credit monitoring service. This case shows significant progress in post-breach processes compared with, say, the Target debacle, when the retailer, having suffered a leak of 40 million payment card credentials, confined itself to a short we-fixed-the-issue statement.
In the case of T-Mobile, credit card data remained intact, yet other personal data, including names, addresses, and driver's license numbers, was compromised. T-Mobile's CEO assured that the data was 'partially' encrypted – meaning, not encrypted very well.
This story is closely tied to the issue of privacy. Credit bureaus know an awful lot about their clients, and they collect this data from everywhere. Moreover, they sell the acquired data to other companies for big bucks, and their buyers are not always prominent, respectable global companies. It's not the first time Experian has messed around with customer data.
There was an incident that was not the result of a breach at all: a Vietnamese man, posing as a Singapore-based private investigator, legally paid for Experian's services and then sold the personal data of 200 million Americans to a number of cybercriminal groups specializing in identity theft.
ICYMI: 200 Million Consumer Records Compromised in Experian ID Theft Case – http://t.co/WSHMHLfBFT
— Threatpost (@threatpost) March 11, 2014
This is horrifying. For instance, to reset a forgotten password on an Amazon account, one needs to state a postal address, date of birth, or Social Security number – and all of this is held by companies like Experian.
One more thing: the data is most vulnerable in transit from one company to another – simply because their security policies and solutions may differ.
Surveillance cameras vendor blocks vulnerability disclosure, threatens to sue researcher
Gianni Gnesa, a researcher at the Swiss company Ptrace Security, prepared a report for the HITB GSEC conference in Singapore, in which he planned to cover some aspects of surveillance camera vulnerabilities. However, the talk never happened. His research included examples of vulnerabilities in several models of IP cameras by three vendors (which ones, we will now probably never know).
No one could have guessed how the situation would evolve: Gianni sent bug reports to the vendors, corresponded routinely with their security teams, then announced his intention to present the vulnerabilities at a conference and sought approval of his research. At that point the IT guys suddenly disappeared, replaced by corporate lawyers who suggested Gianni not go public with his research – or there would be consequences.
Sadly, this is not the first time this has happened, and it will not be the last. The reason is simple: the difference between a black hat and a white hat is obvious in terms of common sense (the latter 'does no harm') but not so obvious from a legal point of view. One telling example is the Wassenaar Arrangement – an international export control regime for dual-use goods and technologies.
In December 2013, intrusion software was added to this list. The idea of 'hacking for the greater good' is ambivalent per se, but in this case the regulators at least intended to make the developers of such software (like the notorious Hacking Team) more discriminating in choosing their customers.
— Ptrace Security GmbH (@ptracesecurity) October 8, 2015
However, the Wassenaar rules define practically anything as 'intrusion software'. That would not create significant obstacles for the bad guys, yet it makes the good guys' lives much harder – for instance, pentesters' work would be seriously hampered by the new regulation. As a result, HP was forced to withdraw from the Pwn2Own contest in Japan, as HP researchers presenting their findings overseas might be considered an 'export of dual-use goods and technologies'.
Too bad. And if things are hard with corporations, with legislation it's even worse. The motivation behind vendors' restrictions on disclosure is quite understandable: if you have a way to make a problem like that go away, why not use it? But how does that affect the security of the products?
If companies would use half of the energy they use to get after researchers, to fix their products, we would all be more secure. #HITBGSEC
— Gianni Gnesa (@GianniGnesa) October 4, 2015
I wouldn't say the 'disclose everything' approach is any better than the 'restrict everything' approach: in some cases, irresponsible disclosure of critical hardware or software vulnerabilities would backfire on users. The optimum, again, lies somewhere between those extremes.
What else happened:
Things are very, very bad with cybersecurity at nuclear energy facilities. Check out the relevant post on Eugene Kaspersky's blog. The key takeaway: if you think there is an air gap between your critical infrastructure and the Internet, think again. You may be wrong.
Cyber-saber rattling: bad; vulnerable nuclear power stations: v. bad. But there’s hope: https://t.co/owbzGS0eMo
— Eugene Kaspersky (@e_kaspersky) October 12, 2015
Gartner continues to foretell the future. Check this out: by 2018, we'll have to create machines to manage machines, as managing all IoT (Internet of Things) devices manually will become impossible (no objection here – I spent half of my weekend trying to manage just four Raspberry Pis).
— Forbes Tech News (@ForbesTech) October 7, 2015
Moreover, people might work for robo-bosses, and fitness trackers would be used not for fitness but for monitoring your daily activity. Brave new world! Well, that's what awaits you, provided you keep your job: in just three years the fastest-growing companies will employ three times more robots than people.
Drones can be hacked (no surprise there). Traditionally, each new breed of devices is susceptible to all kinds of security 'teething problems', just as small kids are susceptible to chickenpox. In this case, drones rely on insecure connectivity protocols with no authentication whatsoever.
The Hymn family
A family of resident viruses. They typically infect COM and EXE files when the files are run, closed, renamed, or have their attributes changed. If the number of the current month matches the day (as on January 1 (01.01) or February 2 (02.02)), the viruses destroy part of the system information in drive C's boot sector. Then they decrypt and display an image:
Then they play the USSR national anthem while nulling the bytes in the boot sector that contain the number of bytes per sector, the number of sectors per cluster, the number of FAT copies, etc. (9 bytes in total). Once these changes are applied to the boot sector of an MS-DOS computer, it will not boot from either the HDD or a floppy drive. To restore the information, one has to write one's own mini-loader or use special utilities. Hymn-1962 and Hymn-2144 also encrypt their bodies.
Quoted from “Computer viruses in MS-DOS” by Eugene Kaspersky, 1992. Page 36.
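The trigger condition and the destructive payload described in the quote are easy to model. Here's a sketch in modern Python; the field offsets follow the standard DOS boot-sector (BPB) layout, and since the quote names only three of the nine nulled bytes' fields, only those are modeled:

```python
import datetime


def payload_triggers(today: datetime.date) -> bool:
    """Hymn's trigger: the day of the month equals the month number
    (01.01, 02.02, ..., 12.12)."""
    return today.day == today.month


# Offsets and sizes of the BPB fields named in the quote, per the
# standard DOS boot-sector layout. The quote says 9 bytes are nulled
# in total; the remaining fields are not specified, so they are
# omitted here.
BPB_FIELDS = {
    "bytes_per_sector":    (0x0B, 2),
    "sectors_per_cluster": (0x0D, 1),
    "fat_copies":          (0x10, 1),
}


def null_bpb(boot_sector: bytes) -> bytearray:
    """Simulate the payload: zero the listed BPB fields in a copy
    of the boot sector."""
    damaged = bytearray(boot_sector)
    for offset, size in BPB_FIELDS.values():
        damaged[offset:offset + size] = bytes(size)
    return damaged


if __name__ == "__main__":
    sector = bytearray(512)
    sector[0x0B:0x0D] = (512).to_bytes(2, "little")  # 512 bytes per sector
    sector[0x0D] = 4                                 # sectors per cluster
    sector[0x10] = 2                                 # two FAT copies
    damaged = null_bpb(sector)
    print(payload_triggers(datetime.date(1992, 2, 2)))   # True
    print(int.from_bytes(damaged[0x0B:0x0D], "little"))  # 0
```

With these fields zeroed, DOS can no longer compute where the FATs and data area begin, which is exactly why the machine refuses to boot until the BPB is reconstructed by hand or by a recovery utility.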
Disclaimer: this column reflects only the personal opinion of the author. It may coincide with Kaspersky Lab position, or it may not. Depends on luck.