INSIGHTS | February 6, 2013

The Anatomy of Unsecure Configuration: Reality Bites

As a penetration tester, I encounter interesting problems with network devices and software. The most common problems that I notice in my work are configuration issues. In today’s security environment, we can accept that a zero-day exploit results in system compromise because details of the vulnerability were unknown earlier. But, what about security issues and problems that have been around for a long time and can’t seem to be eradicated completely? I believe the existence of these types of issues shows that too many administrators and developers are not paying serious attention to the general principles of computer security. I am not saying everyone is at fault, but many people continue to make common security mistakes. There are many reasons for this, but the major ones are:

  • Time constraints: This is hard to imagine, because an administrator’s primary job is to manage and secure the network. For example: How long does it take to configure and secure a network printer? If you ask me, it should not take much time to evaluate and configure secure device properties unless the network has significantly complex requirements. But even if there are complex requirements, it is worth investing the time to make it harder for bad guys to infiltrate the network.
  • Ignorance, fear, and a ‘don’t care’ attitude: This is a human problem. I have seen many network devices that are not touched or reconfigured for years, and administrators often forget about them. When I have discussed this with administrators, they have argued that secure reconfiguration might impact the reliability and performance of the device. As a result, they prefer to maintain the current functionality, but at the risk of an insecure configuration.
  • Problems understanding the issue: If complex issues are hard to understand, then easy problems should be easy to manage, right? But it isn’t like that in the real world of computer security. For example, every piece of software and every network device is accompanied by manuals or help files. How many of us spend even a few minutes reading the security sections of these documents? The people who are cautious about security read these sections, but the rest just want to enjoy the device or software’s functionality. Is it so difficult to read the security documentation that is provided? I don’t think so. For a reasonable security assessment of any new product or piece of software, reading the manuals is one key to impressive results. We cannot simply say we don’t understand the issue, and if we ignore the most basic security guidance, then attackers have no hurdles in exploiting the most basic known security issues.


Default Configuration – Basic Authentication


Here are a number of examples to support this discussion:

The majority of network devices can be accessed using the Simple Network Management Protocol (SNMP), with community strings that typically act as passwords and allow sensitive information to be extracted from target systems. In fact, weak community strings are in use on far too many devices across the Internet. Tools such as snmpwalk and snmpenum make SNMP enumeration easy for attackers. How difficult is it for administrators to change default SNMP community strings and configure more complex ones? Seriously, it isn’t that hard.
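
As a rough illustration of how little effort this takes to check, the sketch below wraps the snmpwalk tool mentioned above to flag devices that still answer queries for common default community strings. It is a minimal sketch, not a hardened scanner: the target addresses and community list are placeholders, and it assumes the net-snmp command-line tools are installed and that you are authorized to test the targets.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag devices that answer SNMPv2c queries for common
default community strings by wrapping the snmpwalk tool. Targets and
community strings below are illustrative placeholders."""
import subprocess

DEFAULT_COMMUNITIES = ["public", "private", "admin"]   # common factory defaults
TARGETS = ["192.0.2.10", "192.0.2.11"]                 # replace with your own devices

def snmp_responds(host: str, community: str) -> bool:
    """Return True if the host answers a query for sysDescr (1.3.6.1.2.1.1.1)."""
    try:
        result = subprocess.run(
            ["snmpwalk", "-v2c", "-c", community, "-t", "2", "-r", "0",
             host, "1.3.6.1.2.1.1.1"],
            capture_output=True, text=True, timeout=10)
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False
    return result.returncode == 0 and "STRING" in result.stdout

if __name__ == "__main__":
    for host in TARGETS:
        for community in DEFAULT_COMMUNITIES:
            if snmp_responds(host, community):
                print(f"[!] {host} answers SNMP with default community '{community}'")
```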

A number of devices, such as routers, use the Network Time Protocol (NTP) over packet-switched data networks to synchronize time between systems. The basic problem with the default configuration of NTP services is that NTP is enabled on all active interfaces. If any Internet-exposed interface listens on UDP port 123, a remote user can easily execute NTP readvar queries using ntpq to gather information about the target server, including the NTP software version, operating system, peers, and so on.
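
One quick way to check your own exposure is to issue the same readvar query an attacker would. The sketch below simply wraps the ntpq client named above; the host list is a placeholder, and it assumes ntpq is on the path and that you are authorized to query the targets.

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether hosts answer NTP mode-6 readvar queries on
UDP/123 by wrapping the ntpq client. Targets are illustrative placeholders."""
import subprocess

TARGETS = ["192.0.2.20", "ntp.example.com"]   # placeholders

def ntp_readvar(host):
    """Return the readvar output if the host responds, otherwise None."""
    try:
        result = subprocess.run(["ntpq", "-c", "readvar", host],
                                capture_output=True, text=True, timeout=10)
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return None
    # A responsive server echoes variables such as version=, system=, processor=
    return result.stdout if "version=" in result.stdout else None

if __name__ == "__main__":
    for host in TARGETS:
        banner = ntp_readvar(host)
        if banner:
            print(f"[!] {host} leaks NTP system variables:\n{banner}")
```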

Unauthorized access to multiple components and software configuration files is another big problem. This is not only a device and software configuration problem, but also a byproduct of design chaos, i.e., the way different network devices are designed and managed in their default state. Too many devices on the Internet allow guests to obtain potentially sensitive information using anonymous or direct access. In addition, one has to wonder how well administrators understand and care about security if they simply allow configuration files to be downloaded from target servers. For example, I recently conducted a small test to verify the presence of the file Filezilla.xml, which sets the properties of FileZilla FTP servers, and found that a number of target servers allow this configuration file to be downloaded without any authentication or authorization controls. This would allow attackers to gain direct access to these FileZilla FTP servers, because the passwords are embedded in these configuration files.

FileZilla Configuration File with Password
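
A probe as simple as the sketch below is enough to verify whether your own servers hand out configuration files like this. The base URL and candidate paths are illustrative placeholders, it relies on the third-party requests library, and it should only be pointed at systems you are authorized to test.

```python
#!/usr/bin/env python3
"""Minimal sketch: probe a web server for configuration files that should
never be downloadable anonymously. URL and paths are placeholders."""
import requests

BASE_URL = "http://www.example.com"   # placeholder target
CANDIDATE_PATHS = [
    "/FileZilla.xml",                 # FileZilla configuration, as discussed above
    "/filezilla.xml",
    "/sitemanager.xml",
    "/web.config",
]

def is_exposed(base_url: str, path: str) -> bool:
    """Return True if the file is served without any authentication."""
    resp = requests.get(base_url + path, timeout=5, allow_redirects=False)
    return resp.status_code == 200 and len(resp.content) > 0

if __name__ == "__main__":
    for path in CANDIDATE_PATHS:
        try:
            if is_exposed(BASE_URL, path):
                print(f"[!] {BASE_URL}{path} is downloadable without authentication")
        except requests.RequestException as exc:
            print(f"    {path}: request failed ({exc})")
```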

It is really hard to imagine that anyone would deploy sensitive files containing usernames and passwords on remote servers that are exposed on the network. What results do you expect when you find these types of configuration issues?


XLS File Containing Credentials for Third-party Resources
The presence of multiple administration interfaces widens the attack surface because each interface exposes different attack vectors. A number of network devices have SSH, Telnet, FTP, and web administration interfaces, and a default installation typically activates at least two or three of them for remote management. This allows attackers to target network devices by exploiting inherent weaknesses in the protocols used. In addition, the presence of default credentials can further ease exploitation. For example, it is easy to find a large number of unmanaged network devices on the Internet without proper authentication and authorization.

Cisco Catalyst – Insecure Interface
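
The sketch below illustrates how little it takes to see this for yourself: a handful of TCP connect attempts reveals how many management interfaces a device leaves open. The target address is a placeholder, and the port-to-service mapping simply lists the usual defaults.

```python
#!/usr/bin/env python3
"""Minimal sketch: enumerate which remote-management interfaces a device
leaves open by attempting TCP connects to the usual ports."""
import socket

MANAGEMENT_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP admin", 443: "HTTPS admin"}

def open_interfaces(host, timeout=2.0):
    """Return the management services that accept a TCP connection."""
    exposed = []
    for port, name in MANAGEMENT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                exposed.append(f"{name} ({port}/tcp)")
    return exposed

if __name__ == "__main__":
    target = "192.0.2.30"   # placeholder address
    found = open_interfaces(target)
    if len(found) > 1:
        print(f"[!] {target} exposes {len(found)} management interfaces: {', '.join(found)}")
```
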
How can we forget about backend databases? Default and weak passwords for the MS SQL system administrator (sa) account can still be found in the internal networks of many organizations. For example, network proxies that require backend databases are configured with default or weak passwords. In many cases administrators think internal network devices are more secure than external devices. But what happens if an attacker finds a way to penetrate the internal network? The simple answer is: game over. Integrated web applications with backend databases using default configurations are an “easy walk in the park” for attackers. For example, the following insecure XAMPP installation would be devastating.

XAMPP Security – Web Page
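
Returning to the MS SQL example above, a check for a default or trivial ‘sa’ password is just as easy to script. The sketch below uses the pymssql client library as one possible driver; the server address and password list are placeholders, and it should only be run against systems you own or are authorized to test.

```python
#!/usr/bin/env python3
"""Minimal sketch: check an internal MS SQL instance for an 'sa' account
protected by a blank or trivial password. Server and passwords are placeholders."""
import pymssql

SERVER = "10.0.0.25"                          # placeholder internal host
WEAK_PASSWORDS = ["", "sa", "password", "P@ssw0rd"]

def sa_login_works(server, password):
    """Return True if we can log in as 'sa' with the given password."""
    try:
        conn = pymssql.connect(server=server, user="sa", password=password,
                               login_timeout=5)
    except pymssql.Error:
        return False
    conn.close()
    return True

if __name__ == "__main__":
    for candidate in WEAK_PASSWORDS:
        if sa_login_works(SERVER, candidate):
            print(f"[!] {SERVER}: 'sa' accepts weak password {candidate!r}")
            break
```
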
The security community recently encountered an interesting configuration issue on GitHub (http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code), which resulted in the disclosure of private SSH keys on the Internet. The point is that when you own an account or repository on public servers, all administrator responsibilities fall on your shoulders. You cannot blame the service provider, GitHub in this case. If users upload their private SSH keys to a repository, then whom can you blame? This is a big problem, and developers continue to expose sensitive information in their public repositories, even though GitHub’s documentation explains how to manage and remove sensitive data.
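
A little due diligence goes a long way here. Something as simple as the sketch below, run against a local checkout before pushing, will catch the most obvious private key material; the PEM markers and the repository path are illustrative only.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan a local repository checkout for private key material
before pushing it to a public host. Repository path is a placeholder."""
import os

KEY_MARKERS = (
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN DSA PRIVATE KEY-----",
    b"-----BEGIN OPENSSH PRIVATE KEY-----",
    b"-----BEGIN PRIVATE KEY-----",
)

def find_private_keys(repo_path):
    """Yield paths of files that appear to contain a private key."""
    for root, _dirs, files in os.walk(repo_path):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as handle:
                    data = handle.read(64 * 1024)   # key headers sit near the top
            except OSError:
                continue
            if any(marker in data for marker in KEY_MARKERS):
                yield path

if __name__ == "__main__":
    for hit in find_private_keys("./my-repo"):      # placeholder path
        print(f"[!] possible private key committed: {hit}")
```
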
Another interesting finding shows that with Google Dorks (targeted search queries), the Google search engine exposed thousands of HP printers on the Internet (http://port3000.co.uk/google-has-indexed-thousands-of-publicly-acce). The search string “inurl:hp/device/this.LCDispatcher?nav=hp.Print” is all you need to gain access to these HP printers. As I asked at the beginning of this post, “How long does it take to secure a printer?” Although this issue is not new, the amazing part is that it still persists.
The existence of vulnerabilities and patches is a completely different issue than the security issues that result from default configurations and poor administration. Either we know the security posture of the devices and software on our networks but are being careless in deploying them securely, or we do not understand security at all. Insecure and default configurations make the hacking process easier. Many people realize this after they have been hacked and by then the damage has already been done to both reputations and businesses.
This post is just a glimpse into security consciousness, or the lack of it. We have to be more security conscious today because attempts to hack into our networks are almost inevitable. I think we need to be a bit more paranoid and much more vigilant about security.

INSIGHTS |

Hackers Unmasked: Detecting, Analyzing, And Taking Action Against Current Threats

Tomorrow morning I’ll be delivering the opening keynote to InformationWeek & Dark Reading’s virtual security event – Hackers Unmasked — Detecting, Analyzing, And Taking Action Against Current Threats.

You can catch my live session at 11:00am Eastern, “Portrait of a Malware Author,” where I’ll be discussing how today’s malware is more sophisticated – and more targeted – than ever before. Who are the people who write these next-generation attacks, and what are their motivations? What are their methods, how do they choose their targets, and how do they execute their craft? And what can you do to protect your organization?

The day’s event will have a bunch of additional interesting speakers too – including Dave Aitel and our very own Iftach Ian Amit.

Please come and join the event. I promise not to stumble over my lines too many times, and you’ll learn new things.

You’ll need to quickly subscribe in order to get all the virtual event connection information, so visit the InformationWeek & DarkReading event subscription page HERE.

— Gunter Ollmann, CTO — IOActive, Inc.

INSIGHTS | February 4, 2013

2012 Vulnerability Disclosure Retrospective

Vulnerabilities, the bugbear of system administrators and security analysts alike, keep on piling up – ruining Friday nights and weekends around the world as those tasked with fixing them work against ever-shortening patch deadlines.

In recent years the burden of patching vulnerable software may have seemed to be lessening; and it was, if you go by the annual number of vulnerabilities publicly disclosed. However, if you thought 2012 was a little more intense than the previous half-decade, you probably won’t be surprised to learn that last year bucked the downward trend and saw a rather big jump – 26% over 2011 – all according to the latest analyst brief from NSS Labs, “Vulnerability Threat Trends: A Decade in Review, Transition on the Way”.

Rather than summarize the fascinating brief from NSS Labs with a list of recycled bullet points, I’d encourage you to read it yourself and to view the accompanying video they constructed, which depicts the rate and diversity of vulnerability disclosures throughout 2012 (see the video – “The Evolution of 2012 Vulnerability Disclosures by Vendor”).

I was particularly interested in the Industrial Control System (ICS/SCADA) vulnerability growth – a six-fold increase since 2010! Granted, of the 5225 vulnerabilities publicly disclosed and tracked in 2012 only 124 were ICS/SCADA related (2.4 percent), but it’s still noteworthy – especially since I suspect very few vulnerabilities in this field are ever disclosed publicly.

Once you’ve read the NSS Labs brief and digested the statistics, let me tell you why the numbers don’t really matter and why the ranking of vulnerable vendors is a bit like ranking car manufacturers by the number of red cars they sold last year.

A decade ago, as security software and appliance vendors battled for customer dollars, vulnerability numbers mattered. It was a yardstick of how well one security product (and vendor) was performing against another – a kind of “my IDS detects 87% of high risk vulnerabilities” discussion. When the number of vulnerability disclosures kept on increasing and the cost of researching and developing detection signatures kept going up, yet the price customers were willing to pay in maintenance fees for their precious protection technologies was going down, much of the discussion then moved to ranking and scoring vulnerabilities… and the Common Vulnerability Scoring System (CVSS) was devised.

CVSS changed the nature of the game. It became less about covering a specific percentage of vulnerabilities and more about covering the most critical and exploitable. The ‘High, Medium, and Low’ of old got augmented with a formal scoring system and a bevy of new labels such as ‘Critical’ and ‘Highly Critical’ (which, incidentally, makes my teeth hurt as I grind them at the absurdity of that term). Rather than simply shuffling everything to the right, with the decade-old ‘Medium’ becoming the new ‘Low’, and the old ‘Low’ becoming a shrug and a sensible “if you can be bothered!”… we ended up with ‘High’ being “important, fix immediately”, then ‘Critical’ assuming the role of “seriously, you need to fix this one first!”, and ‘Highly Critical’ basically meaning “Doh! The Mayans were right, the world really is going to end!”
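
For readers who have never looked under the hood, the base score behind those labels is just a small piece of arithmetic over fixed metric weights. The sketch below follows the public CVSS v2 specification and uses the NVD convention for mapping scores to Low/Medium/High; it is illustrative rather than a complete scoring library.

```python
"""Sketch of the CVSS v2 base-score arithmetic, with the NVD convention for
turning scores into Low/Medium/High labels. Weights follow the public spec."""

ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA_IMPACT = {"N": 0.0, "P": 0.275, "C": 0.66}

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

def nvd_label(score):
    return "High" if score >= 7.0 else "Medium" if score >= 4.0 else "Low"

# Example: network vector, low complexity, no authentication, complete C/I/A loss
score = cvss2_base("N", "L", "N", "C", "C", "C")
print(score, nvd_label(score))   # 10.0 High
```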

But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter, and will continue to matter less in a practical sense as time goes by, is that they only reflect ‘Disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous to not do that.

Vulnerability markets and bug purchase programs – white, gray, and black – have changed the incentive to disclose publicly, as well as limiting the level of information that is made available at the time of disclosure. Furthermore, the growing professionalization of bug hunting has meant that vulnerability discoveries are valuable commercial commodities – opening doors to new consulting engagements and potential employment with the vulnerable vendor. Plus, there are a bunch of other lesser reasons why public disclosures (as a percentage of actual vulnerabilities found by bug hunters and reported to vendors) will go down.

The biggest reason why vulnerability disclosure numbers matter less and less to an organization (and those charged with protecting it) is that the software landscape has fundamentally changed. The CVSS approach was primarily designed for software that was bought, installed, and operated by multiple organizations – i.e., software packages that could be patched by their many owners.

With today’s ubiquitous cloud-based services, you don’t own the software and you don’t have any capability (or right) to patch it. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, your data and intellectual property are in the custodial care of a third party that doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within its backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!).

Why would someone assign a CVE number to the vulnerability? Who’s going to have all the disclosure details to construct a CVSS score, and what would it matter if they did? Why would the service provider issue an advisory? As a bug hunter who responsibly discloses the vulnerability to the cloud service provider you’d be lucky to even get any public recognition for your valuable help.

With all that said and done, what should we take away from the marvelous briefing that NSS Labs has pulled together? In essence, there are a lot of vulnerabilities being disclosed, and the vendors of the software we deploy on our laptops and servers still have a way to go in improving their security development lifecycle (SDL) – some more than others.

While it would be nice to take some solace in the ranking of vulnerable vendors, I’d be more worried about the cloud vendors and their online services and the fact that they’re not showing up in these annual statistics – after all, that’s where more and more of our critical data is being stored and manipulated.

— Gunter Ollmann, CTO — IOActive, Inc.

INSIGHTS | January 30, 2013

Energy Security: Less Say, More Do

Due to recent attacks on many forms of energy management technology, ranging from supervisory control and data acquisition (SCADA) networks and automation hardware devices to smart meters and grid network management systems, companies in the energy industry are significantly increasing the amount they spend on security. However, I believe these organizations are still spending money in the wrong areas of security. Why? The illusion of security, driven by over-engineered and over-funded policy and control frameworks and by the mindset that energy security must be regulated before making a start, is preventing, not driving, real-world progress.

Sadly, I don’t see organizations in the oil and gas exploration, utility, and consumer energy management sectors taking more visible and proactive approaches to improving the security of their assets in 2013 any more than they did in 2012.
It’s only January, you protest. But let me ask you: on what areas are your security teams going to focus in 2013?
I’ve had the privilege in the past six months of travelling to Asia, the Middle East, Europe, and the U.S. to deliver projects, and I have seen a number of consistent shortcomings in security programs in almost every energy-related organization I have dealt with. Specialized security teams within IT departments are commonplace now, which is great. But these teams have been in place for some time, and even though as an industry we spend millions on security products every year, the number of security incidents is also increasing every year. I’m sure this trend will continue in 2013. It is clear to me (and this is a global issue in energy security) that the great majority of organizations do not know where or how to correctly spend their security budgets.
Information security teams focus heavily on compliance, policies, controls, and the paper perception of what good security looks like when in fact there is little or no evidence that this is the case. Energy organizations do very little testing to validate the effectiveness of their security controls, which leaves these companies exposed to attacks and wondering what they are doing wrong.


For example, automated malware has been mentioned many times in the press and is a persistent threat, but companies are living under the misapprehension that having endpoint solutions alone will protect them from this threat. Network architectures are still being poorly designed and communication channels are still operating in the clear, leaving critical infrastructure solutions exposed and vulnerable.
I do not mean to detract from technology vendors who are working hard to keep up with all the new malware challenges, and let’s face it, we would be lost without many of their solutions. But organizations that purchase these products need to “trust but verify” them by requiring vendors and solution integrators to prove that the security solutions they are selling are in fact secure. The energy industry as a whole needs to focus on proving the existence of controls, not on relying on documents and designs that say how a system should be secure. Policies may make you look good, but how many people read them? And, if they did read them, would they follow them? How would you know? And could you put your hand on your heart and swear to the CEO, “I’m confident that our critical systems and data cannot be compromised”?


I say, “Less say, more do in 2013.” Energy companies globally need to stop waiting for regulations or for incidents to happen and must do more to secure their systems and supply. We know we have a problem in the industry and it won’t go away while we wait for more documents that define how we should improve our security defenses. Make a start. The concepts aren’t new, and it’s better to invest money and effort in improved systems rather than churning out more policies and paper controls and hoping they make you more secure. And it is hope, because without evidence how can you really be sure the controls you design and plan are in place and effective?


Start by making improvements in the following areas and your overall security posture will also improve (a lot of this is old news, but sadly is not being done):


Recognize that compliance doesn’t guarantee security. You must validate it.
  • Use ISA99 for SCADA and ISO27001/2/5 for security risk management and controls.
  • Use compliance to drive budget conversations.
  • Don’t get lost in a policy framework. Instead focus on implementing, then validating.
  • Always validate paper security by testing internal and external controls!
Understand what you have and who might want to attack it.
  • Define critical assets and processes.
  • Create a list of who could affect these assets and how.
  • Create a layered security architecture to protect these assets.
  • Do this work in stages. Create value to the business incrementally.
  • Test the effectiveness of your plans!
Do the basics internally, including:
  • Authentication for logins and machine-to-machine communications.
  • Access control to ensure that permissions for new hires, job changers, and departing employees are managed appropriately.
  • Auditing to log significant events for critical systems.
  • Availability by ensuring redundancy and that the organization can recover from unplanned incidents.
  • Integrity by validating critical values and ensuring that accuracy is always upheld.
  • Confidentiality by securing or encrypting sensitive communications.
  • Education to make staff aware of good security behaviors. Take a Health & Safety approach.
Trust but verify when working with your suppliers:
  • Ask vendors to validate their security, not just tell you “it’s secure.”
  • Ask suppliers what their security posture is. Do they align to any security standards? When was the last time they performed a penetration test on client-related systems? Do they use a Security Development Lifecycle for their products?
  • Test their controls or ask them to provide evidence that they do this themselves!
Work with agencies who are there to assist you and make them part of your response strategy, such as:
  • Computer Emergency Readiness Team (CERT)
  • Centre for the Protection of National Infrastructure (CPNI)
  • North American Electric Reliability Corporation (NERC)
Trevor Niblock, Director, ICS and Smart Grid Services
INSIGHTS | January 25, 2013

S4x13 Conference

S4 is my favorite conference. This is mainly because it concentrates on industrial control systems security, which I am passionate about. I also enjoy the fact that the presentations cover mostly advanced topics and spend very little time covering novice topics.

Over the past four years, S4 has become more of a bits-and-bytes conference, with presentations that explain, for example, how to upload Trojaned firmware to industrial controllers, and exposés that cover vulnerabilities (in the “insecure by design” and “ICS-CERT” senses of the word).

This year’s conference was packed with top talent from the ICS and SCADA worlds and offered a huge amount of technical information. I tended to follow the “red team” track, as these talks covered breaking into various levels of control systems networks.

Sergey Gordeychick gave a great talk on the vulnerabilities in various ICS software applications, including the release of an “1825-day exploit” for WinCC, which Siemens did not patch for five years. (The vulnerability was finally closed in December 2012.)

Alexander Timorin and Dmitry Sklyarov released a new tool for reversing S7 passwords from a packet capture. A common feature of many industrial controllers is homebrew hashing algorithms and authentication mechanisms that simply fall apart under a few days of scrutiny. Their tool is being incorporated into John the Ripper. A trend in the ICS space seems to be the incorporation of ICS-specific attacks into existing attack frameworks. This makes ICS hacking far more available to network security assessors, as well as to the “Bad Guys”. My guess is that this trend will continue in 2013.
Billy Rios and Terry McCorkle talked about medical controllers and showed a Philips Xper controller that they had purchased on eBay. The computer itself had not been wiped; it was running Windows XP and contained hard-coded accounts as well as trivially exploitable buffer overflow vulnerabilities.
Arthur Gervais released a slew of Schneider Modicon PLC-related vulnerabilities. Schneider’s service for updating Unity (the engineering software for Modicon PLCs) and other utilities used HTTP to download software updates, for example. I was sad that I missed his talk due to a conflict in the speaking schedule.
Luigi Auriemma and his colleague Donato Ferrante demonstrated their own binary patching system, which allows them to patch applications while they are executing. The technology shows a lot of promise. They are currently marketing it as a way to provide patches for unsupported software. I think that the true potential of their system is to patch industrial systems without shutting down the process. It may take a while for any vendor to adopt this technique, but it could be a powerful motivator to get end users to apply patches. Scheduling an outage window is the most-cited reason for not patching industrial systems, and ReVuln is showing that we can work around this limitation.


My favorite talk was one that was only semi-technical in nature and more defensive than offensive. It was about implementing an SDL and a fuzz-testing strategy at OSIsoft. OSIsoft’s PI server is the most frequently used data historian for industrial processes. Since C-level employees want to keep track of how their process is doing, historical data often must be exported from the control systems network to the corporate network in some fashion. In the best-case scenario, this is usually done by way of a PI server in the DMZ. In the worst-case scenario, a corporate system will reach into the control network to communicate with the PI server. Either way, the result is the same: PI is a likely target if an attacker wants to jump from the corporate network to the control network. It is terrific, and still all too rare, to see an ICS software company share its security experiences.


Digital Bond provides a nice “by the numbers” look at the conference.
If you are technically and internationally minded and want to talk to actual ICS operators, S4 is a great place to start.
INSIGHTS | January 22, 2013

You cannot trust social media to keep your private data safe: Story of a Twitter vulnerability

I’m always worried about the private information I have online. Maybe this is because I have been hacking for a long time, and I know everything can be hacked. This makes me a bit paranoid. I have never trusted web sites to keep my private information safe, and nowadays it is impossible not to have private information published on the web, such as on social media sites. Sooner or later you could get hacked; this is a fact.


Currently, many web and mobile applications give users the option to sign in using their Twitter or Facebook account. Keeping in mind the fact that Twitter currently has 200 million active monthly users (http://en.wikipedia.org/wiki/Twitter), it makes a lot of sense for third-party applications to offer users an easy way to log in. Also, since applications can obtain a wealth of information from your Twitter or Facebook account, most of the time you do not even need to register. This is convenient, and it saves time signing into third-party applications using Twitter or Facebook.


Every time I’m asked to sign in using Twitter or Facebook, my first thought is, “No way!”  I don’t want to give access to my Twitter and Facebook accounts regardless of whether I have important information there or not. I always have an uneasy feeling about giving a third-party application access to my accounts due to the security implications.


Last week I had a very interesting experience.

I was testing a web application that is under development. This application had an option to allow me to sign into Twitter. If I selected this option, the application would have access to my Twitter public feed (such as reading Tweets from my timeline and seeing who I follow). In addition, the application would have been able to access Twitter functionality on my behalf (such as following new people, updating my profile, posting Tweets for me). However, it wouldn’t have access to my private Twitter information (such as direct messages and more importantly my password). I knew this to be true because of the following information that is displayed on Twitter’s web page for “Signing in with Twitter”:


Image 1


After viewing the displayed web page, I trusted that Twitter would not give the application access to my password and direct messages. I felt that my account was safe, so I signed in and played with the application. I saw that the application had the functionality to access and display Twitter direct messages. The functionality, however, did not work, since Twitter did not allow the application to access these messages. In order to gain access, the application would have to request proper authorization through the following Twitter web page:


Image 2


The web page displayed above is similar to the previous web page (Image 1). However, it also says the application will be able to access your direct messages. Also, the blue button is different. It says “Authorize app” instead of “Sign in”. While playing with the application, I never saw this web page (image 2). I continued playing with the application for some time, viewing the functionality, logging in and out from the application and Twitter, and so on. After logging in to the application, I suddenly saw something strange. The application was displaying all of my Twitter direct messages. This was a huge and scary surprise. I wondered how this was possible. How had the application bypassed Twitter’s security restrictions? I needed to know the answer.


My surprise didn’t end here. I went to https://twitter.com/settings/applications to check the application settings. The page said “Permissions: read, write, and direct messages”. I couldn’t understand how this was possible, since I had never authorized the application to access my “private” direct messages. I realized that this was a huge security hole.


I started to investigate how this could have happened. After some testing, I found that the application obtained access to my private direct messages when I signed in with Twitter for a second or third time. The first time I signed in with Twitter on the application, it only received read and write access permissions. This gave the application access to what Twitter displays on its “Sign in with Twitter” web page (see image 1). Later, however, when I signed in again with Twitter without being already logged in to Twitter (not having an active Twitter session – you have to enter your Twitter username and password), the application obtained access to my private direct messages. It did so without having authorization, and Twitter did not display any messages about this. It was a simple bypass trick for third-party applications to obtain access to a user’s Twitter direct messages.


In order for a third-party application to obtain access to Twitter direct messages, it first has to be registered and have its direct message access level configured here: https://dev.twitter.com/apps. This was the case for the application I was testing.  In addition and more importantly, the application has to obtain authorization on the Twitter web page (see Image 2) to access direct messages. In my case, it never got this. I never authorized the application, and I did not encounter a web page requesting my authorization to give the application access to my private direct messages.
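
For developers, or for suspicious users with their own API credentials, there is a straightforward way to confirm what a token actually grants: call verify_credentials and inspect the x-access-level header Twitter returns (read, read-write, or read-write-directmessages). The sketch below uses the requests_oauthlib library; all four credential values are placeholders.

```python
"""Minimal sketch: confirm the access level actually granted to an OAuth
token by reading Twitter's x-access-level response header. Credentials are
placeholders."""
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="APP_CONSUMER_KEY",
    client_secret="APP_CONSUMER_SECRET",
    resource_owner_key="USER_ACCESS_TOKEN",
    resource_owner_secret="USER_ACCESS_TOKEN_SECRET",
)

resp = session.get("https://api.twitter.com/1.1/account/verify_credentials.json")
granted = resp.headers.get("x-access-level", "unknown")
print(f"Token access level: {granted}")

if granted == "read-write-directmessages":
    print("[!] This token can read direct messages - was that ever authorized?")
```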


I tried to quickly determine the root cause, but I had little time and could not pin it down. I therefore decided to report the vulnerability to Twitter and let them do a deeper investigation. The Twitter security team quickly answered and took care of the issue, fixing it within 24 hours. This was impressive. Their team was very fast and responsive. They said the issue occurred due to complex code and incorrect assumptions and validations.


While I think the Twitter security team is great, I do not think the same of the Twitter vulnerability disclosure policy. The vulnerability was fixed on January 17, 2013, but Twitter has not issued any alerts/advisories notifying users.


There must be millions of Twitter users (remember, Twitter has 200 million active users) who have signed in to third-party applications with Twitter. Some of these applications might have gained access, and might still have access, to Twitter users’ private direct messages (after the security fix, the application I tested still had access to direct messages until I revoked it).


Since Twitter has not alerted its users of this issue, I think we all need to spread the word. Please share the following with everyone you know:

Check third-party applications permissions here: https://twitter.com/settings/applications

If you see an application that has access to your direct messages and you never authorized it, then revoke it immediately.


Ironically, we could also use Twitter to help users. We could tweet the following:

Twitter shares your DMs without authorization, check 3rd party application permissions  https://ioactive.com/you-can-not-trust-social-media-twitter-vulnerable/ #ProtectYourPrivacy (Please RT)


I love Twitter. I use it daily. However, I think Twitter still needs a bit of improvement, especially when it comes to alerting its users about security issues when privacy is affected.


INSIGHTS | January 21, 2013

When a Choice is a Fingerprint

We frequently hear the phrase “Attribution is hard.” And yes, if the adversary exercises perfect tradecraft, attribution can be hard to the point of impossible. But we rarely mention the opposite side of that coin, how hard it is to maintain that level of tradecraft over the lifetime of an extended operation. How many times out of muscle memory have you absent-mindedly entered one of your passwords in the wrong application? The consequences of this are typically nonexistent if you’re entering your personal email address into your work client, but they can matter much more if you’re entering your personal password while trying to log into the pwned mail server of Country X’s Ministry of Foreign Affairs. People make mistakes, and the longer the timeframe, the more opportunities they have to do so.

This leads me to the recent release from Kaspersky Lab about a malware campaign referred to as “Red October”, which they have attributed to Russian hackers. There are a number of indications pointing to Russian origination, including Russian words in the source code, a Trojan dropper that enables Cyrillic before installation, and targets concentrated in Russia’s sphere of influence. Although Kaspersky has avoided naming the sponsor of the campaign as the Russian government, the targets of the malware are strongly suggestive of government sponsorship. The campaign seemed to selectively target governments, diplomatic facilities, defense, research institutes, etc. These are targets consistent with sponsors seeking geo-political intelligence, not criminals seeking profit. Kaspersky hypothesizes that the perpetrators may have collected the data to sell it, but I would argue that this is fallacious. The customer of this information would be a government, and if a government is paying criminals for the information, I would argue that’s state sponsorship.
With that context, the one data point that was most interesting to me in Kaspersky’s release was the inclusion of the word “zakladka”. As Kaspersky mentions in their report, “zakladka” is a Russian word that can mean “bookmark.” In slang it can also mean “undocumented feature” or a brick with an embedded microphone, like the kind you would sneak into an adversary nation’s embassy.
It’s delightfully poetic, then, that in a piece of malware apparently intended to target embassies someone (presumably Russian) would choose to name a module “zakladka.” The United States and Russia have a rich history of attempting to bug each other’s diplomatic facilities. As early as 1945 the Soviet Union infiltrated an ingenious listening device into the office of the US ambassador to Moscow, hiding it in a wooden US Seal presented as a gift [1]. By 1964 the Soviets were able to collect extensive classified information from the US embassy through hidden microphones [2]. In 1985 construction work stopped on a new US Embassy building in Moscow after it was determined that the building was so riddled with microphones, which had been integrated into the construction, that it could never be considered secure [3].
Presumably in homage to this history, a programmer decided to name his module of code “zakladka”, which would be included in malware that is effectively the evolution of a microphone hidden in drywall. Zakladka is an appropriate name, but the very elegance with which its name matches its function undermines the deniability of the malware. In this case, it was a choice made by a programmer years ago, and it has repercussions as forensic experts attempt to unravel the source of the malware today.
It’s a reminder of how humans make mistakes. Defenders often talk about the difficulty of attribution, but as the offense we seldom talk about the challenge in gaining and maintaining network access on a target system while remaining totally unnoticed. We take it for granted.
Seemingly innocuous decisions made days, weeks, or months ago can unravel what was otherwise sound tradecraft. In this case, I just found it fascinating that the choice of name for a module–an elegantly appropriate choice–can be as strong a fingerprint for attribution as anything else.
INSIGHTS | January 17, 2013

Offensive Defense

Before the holiday break, I presented at Seattle B-Sides on a topic I called “Offensive Defense.” This blog post summarizes the talk. I feel it’s relevant to share due to the recent discussions on desktop antivirus (AV) software.

What is Offensive Defense?

The basic premise of the talk is that a good defense is a “smart” layered defense. My “Offensive Defense” presentation title might be interpreted as fighting back against your adversaries, much like the “Sexy Defense” talk my co-worker Ian Amit has been presenting.

My view of “Offensive Defense” is about being educated on existing technology and creating a well-thought-out plan and security framework for your network. The “Offensive” word in the presentation title relates to being as educated as any attacker who is going to study common security technology and know its weaknesses and boundaries. You should be as educated as that attacker to properly build a defensive and reactionary security posture. The second part of an “Offensive Defense” is security architecture. It is my opinion that too many organizations buy a product either to meet minimal regulatory requirements, to apply “band-aid” protection (thinking in a point manner instead of a systematic manner), or because the organization thinks it makes sense even though it has not actually made a plan for it. Moreover, many larger enterprise companies have not stepped back and designed a threat model for their network or defined the critical assets they want to protect.

At the end of the day, a persistent attacker will stop at nothing to obtain access to your network and to steal critical information from it. Your overall goal in protecting your network should be to protect your critical assets. If you are targeted, you want to be able to slow down your attacker and the attack tools they are using, forcing them to customize their attack. In doing so, they give away their position, resources, capabilities, and details. Ultimately, you want to be alerted before any real damage has occurred and be able to halt their ability to exfiltrate any critical data.

Conduct a Threat Assessment, Create a Threat Model, Have a Plan!

This process involves either having a security architect in-house or hiring a security consulting firm to help you design a threat model tailored to your network and assess the solutions you have put in place. Security solutions are not one-size-fits-all. Do not rely on marketing material or sales teams, as these typically oversell the capabilities of their own product. I think in many ways this overselling is how, as an industry, we have begun to rely too heavily on security technologies, assuming they address all threats.

There are many quarterly reports and resources that technical practitioners turn to for advice, such as Gartner reports (including the Magic Quadrant) and testing houses such as AV-Comparatives, ICSA Labs, NSS Labs, EICAR, and AV-Test. AV-Test, in fact, reported this year that Microsoft Security Essentials failed to recognize enough zero-day threats, with a detection rate of only 69%, where the average is 89%. These are great resources to turn to once you know what technology you need, but you won’t know that unless you have first designed a plan.

Once you have implemented a plan, the next step is to actually run exercises and, if possible, simulations to assess the real-time ability of your network and the technology you have chosen to integrate. I rarely see this done, and, in my opinion, large enterprises with critical assets have no excuse not to conduct these assessments.

Perform a Self-assessment of the Technology

AV-Comparatives has published a good quote on their product page that states my point:

“If you plan to buy an Anti-Virus, please visit the vendor’s site and evaluate their software by downloading a trial version, as there are also many other features and important things for an Anti-Virus that you should evaluate by yourself. Even if quite important, the data provided in the test reports on this site are just some aspects that you should consider when buying Anti-Virus software.”

This statement reinforces my point: companies should familiarize themselves with a security technology to make sure it is right for their own network and security posture.

There are many security technologies that exist today that are designed to detect threats against or within your network. These include (but are not limited to):

  • Firewalls
  • Intrusion Prevention Systems (IPS)
  • Intrusion Detection Systems (IDS)
  • Host-based Intrusion Prevention Systems (HIPS)
  • Desktop Antivirus
  • Gateway Filtering
  • Web Application Firewalls
  • Cloud-Based Antivirus and Cloud-based Security Solutions

Such security technologies exist to protect against threats that include (but are not limited to):

  • File-based malware (such as malicious Windows executables, Java files, image files, mobile applications, and so on)
  • Network-based exploits
  • Content-based exploits (such as web pages)
  • Malicious email messages (such as email messages containing malicious links or phishing attacks)
  • Network addresses and domains with a bad reputation

These security technologies deploy various techniques that include (but are not limited to):

  • Hash-detection
  • Signature-detection
  • Heuristic-detection
  • Semantic-detection
There are, of course, other techniques (which I won’t go into in great detail in this blog), for example:
  • Reputation-based
  • Behavioral-based
It is important to realize that there is no silver bullet defense out there, and given enough expertise, motivation, and persistence, each technique can be defeated. It is essential to understand the limitations and benefits of a particular product so that you can create a realistic, layered framework that has been architected to fit your network structure and threat model. The following are a few example attack techniques against each protection technique and technology (these have been widely publicized):

[Table: widely publicized example evasion techniques for each detection technique and technology]

For the techniques that I have not listed in this table, such as reputation, refer to my CanSecWest 2008 presentation “Wreck-utation”, which explains how reputation detection can be circumvented. One major example of this is a rising trend in hosting malicious code on a compromised legitimate website or running a C&C server on a compromised legitimate business server. Behavioral sandboxes can also be defeated with methods such as time-lock puzzles, anti-VM detection, or environment-aware code. In many cases, behavioral-based solutions allow the binary or exploit to pass through and, in parallel, run the sample in a sandbox. This allows what is referred to as a one-victim approach, in which the user receiving the sample is infected because the malware was allowed to pass through; however, if the sample is determined in the sandbox to be malicious, all other users are protected. My point here is that all methods can be defeated given enough expertise, time, and resources.
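
To make the fragility of the simplest technique concrete, the snippet below shows hash-based detection in miniature: an exact match is caught, but appending a single byte to the sample produces a new hash that sails past the blocklist. The sample bytes are a harmless stand-in, not real malware.

```python
"""Toy illustration of hash-based detection and the trivial modification
that defeats it. The 'sample' bytes are a harmless stand-in."""
import hashlib

original = b"MZ\x90\x00 imagine a real malware sample here"   # stand-in bytes
blocklist = {hashlib.sha256(original).hexdigest()}            # vendor knows this exact sample

def is_known_bad(sample: bytes) -> bool:
    """Hash detection: exact SHA-256 match against the blocklist."""
    return hashlib.sha256(sample).hexdigest() in blocklist

repacked = original + b"\x00"   # trivially modified variant of the same sample

print(is_known_bad(original))   # True  - exact match is detected
print(is_known_bad(repacked))   # False - one appended byte evades the hash check
```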

Big Data, Machine Learning, and Natural Language Processing

My presentation also mentioned something we hear a lot about today… BIG DATA. Big Data plus Machine Learning coupled with Natural Language Processing allows a clustering or classification algorithm to make predictive decisions based on statistical and mathematical models. This technology is not a replacement for what exists. Instead, it incorporates what already exists (such as hashes, signatures, heuristics, semantic detection) and adds more correlation in a scientific and statistic manner. The growing number of threats combined with a limited number of malware analysts makes this next step virtually inevitable.
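
As a toy illustration of the kind of supervised classification being described, the sketch below trains a model on a handful of invented static file features and scores an unknown sample. It uses scikit-learn for convenience; the features, data, and labels are made up for illustration and do not represent any vendor's actual pipeline.

```python
"""Toy sketch: a classifier over invented static file features, standing in
for the 'big data' correlation described above. Not a real detection model."""
from sklearn.ensemble import RandomForestClassifier

# Each row: [file_size_kb, entropy, imported_api_count, is_packed]
X_train = [
    [120, 7.8, 3, 1],    # packed, high entropy, few imports -> labeled malicious
    [450, 7.5, 5, 1],
    [300, 4.2, 120, 0],  # typical benign application profile
    [800, 5.1, 240, 0],
]
y_train = [1, 1, 0, 0]   # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

unknown_sample = [[200, 7.9, 4, 1]]
print("malicious probability:", model.predict_proba(unknown_sample)[0][1])
```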

While machine learning, natural language processing, and other artificial intelligence sciences will hopefully help in supplementing the detection capabilities, keep in mind this technology is nothing new. However, the context in which it is being used is new. It has already been used in multiple existing technologies such as anti-spam engines and Google translation technology. It is only recently that it has been applied to Data Leakage Prevention (DLP), web filtering, and malware/exploit content analysis. Have no doubt, however, that like most technologies, it can still be broken.

Hopefully most of you have read Imperva’s report, which found that less than 5% of antivirus solutions are able to initially detect previously non-cataloged viruses. Regardless of your opinion on Imperva’s testing methodologies, you might have also read the less-scrutinized Cisco 2011 Global Threat report that claimed 33% of web malware encountered was zero-day malware not detectable by traditional signature-based methodologies at the time of encounter. This, in my experience, has been a more widely accepted industry statistic.

What these numbers are telling us is that the technology, if looked at individually, is failing us, but what I am saying is that it is all about context. Penetrating defenses and gaining access to a non-critical machine is never desirable. However, a “smart” defense, if architected correctly, would incorporate a number of technologies, situated on your network, to protect the critical assets you care about most.


The Cloud versus the End-Point

If you were to attend any major conference in the last few years, most vendors would claim “the cloud” is where protection technology is headed. Even though there is evidence to show that this might be somewhat true, the majority of protection techniques (such as hash-detection, signature-detection, reputation, and similar technologies) simply were moved from the desktop to the gateway or “cloud”. The technology and techniques, however, are the same. Of course, there are benefits to the gateway or cloud, such as consolidated updates and possibly a more responsive feedback and correlation loop from other cloud nodes or the company lab headquarters. I am of the opinion that there is nothing wrong with having anti-virus software on the desktop. In fact, in my graduate studies at UCSD in Computer Science, I remember a number of discussions on the end-to-end arguments of system design, which argued that it is best to place functionality at end points and at the highest level unless doing otherwise improves performance.

The desktop/server is the end point where the most amount of information can be disseminated. The desktop/server is where context can be added to malware, allowing you to ask questions such as:


  • Was it downloaded by the user and from which site?
  • Is that site historically common for that particular user to visit?
  • What happened after it was downloaded?
  • Did the user choose to execute the downloaded binary?
  • What actions did the downloaded binary take?

Hashes, signatures, heuristics, semantic-detection, and reputation can all be applied at this level. However, at a gateway or in the cloud, generally only static analysis is performed due to latency and performance requirements.

This is not to say that gateway or cloud security solutions cannot observe malicious patterns at the gateway, but constraints on state and the fact that this is a network bottleneck generally make analysis anywhere other than the end point less thorough. I would argue that both desktop and cloud or gateway security solutions have their benefits, though, and if used in conjunction, they add even more visibility into the network. As a result, they supplement what a desktop antivirus program cannot accomplish and add collective analysis.


Conclusion

My main point is that to have a secure network you have to think offensively by architecting security to fit your organization’s needs. Antivirus software on the desktop is not the problem. The problem is the lack of planning that goes into deployment, as well as the lack of understanding of the capabilities of desktop, gateway, network, and cloud security solutions. What must change is the haste with which network teams deploy security technologies without having a plan, a threat model, or a holistic organizational security framework in place that takes into account how all security products work together to protect critical assets.

With regard to the cloud, make no mistake that most of the same security technology has simply moved from the desktop to the cloud. Because it operates at the network level, its ability to analyze the file/network stream is weaker, and fewer checks are performed for the sake of user performance. People want to feel safe relying on the cloud’s security and feel assured knowing that a third party is handling all security threats, and this might be the case. However, companies need to make sure a plan is in place and that they fully understand the capabilities of the security products they have chosen, whether they be desktop, network, gateway, or cloud based.

If you found this topic interesting, Chris Valasek and I are working on a related project that Chris will be presenting at An Evening with IOActive on Thursday, January 17, 2013. We also plan to talk about this at the IOAsis at the RSA Conference. Look for details!

INSIGHTS | January 7, 2013

The Demise of Desktop Antivirus

Are you old enough to remember the demise of the ubiquitous CompuServe and AOL CDs that used to be attached to every computer magazine you ever bought between the mid-’80s and mid-’90s? If you missed that annoying period of Internet history, maybe you’ll be able to watch the death of desktop antivirus instead.

65,000 AOL CDs as art

Just as dial-up subscription portals and proprietary “web browsers” represent a yester-year view of the Internet, desktop antivirus is similarly being confined to the annals of Internet history. It may still be flapping vigorously like a freshly landed fish, but we all know how those last gasps end.

To be perfectly honest, it’s amazing that desktop antivirus has lasted this long. To be fair though, the product you may have installed on your computer (desktop or laptop) bears little resemblance to the antivirus products of just three years ago. Most vendors have even done away with the “antivirus” term – instead they’ve tried renaming their products as “protection suites” and “prevention technology” and throwing in a bunch of additional threat detection engines for good measure.

I have a vision of a hunchbacked Igor working behind the scenes stitching on some new appendage or bolting on an iron plate for reinforcement to the Frankenstein corpse of each antivirus product as he tries to keep it alive for just a little bit longer…

That’s not to say that a lot of effort doesn’t go into maintaining an antivirus product. However, with the millions upon millions of new threats each month it’s hardly surprising that the technology (and approach) falls further and further behind. Despite that, the researchers and engineers who maintain these products try their best to keep the technology as relevant as possible… and certainly don’t like it when anyone points out the gap between the threat and the capability of desktop antivirus to deal with it.

For example, the New York Times ran a piece on the last day of 2012 titled “Outmaneuvered at Their Own Game, Antivirus Makers Struggle to Adapt” that managed to get many of the antivirus vendors riled up – interestingly enough not because of the claims of the antivirus industry falling behind, but because some of the statistics came from unfair and unscientific tests. In particular there was great annoyance that a security vendor (representing an alternative technology) used VirusTotal coverage as their basis for whether or not new malware could be detected – claiming that initial detection was only 5%.

I’ve discussed the topic of declining desktop antivirus detection rates (and evasion) many, many times in the past. From my own experience, within corporate/enterprise networks, desktop antivirus detection typically hovers at 1-2% for the threats that make it through the various network defenses. For newly minted malware that is designed to target corporate victims, the rate is pretty much 0% and can remain that way for hundreds of days after the malware has been released into the wild.

You’ll note that I typically differentiate between desktop and network antivirus. The reason for this is because I’m a firm advocate that the battle is already over if the malware makes it down to the host. If you’re going to do anything on the malware prevention side of things, then you need to do it before it gets to the desktop – ideally filtering the threat at the network level, but gateway prevention (e.g. at the mail gateway or proxy server) will be good enough for the bulk of non-targeted Internet threats. Antivirus operations at the desktop are best confined to cleanup, and even then I wouldn’t trust any of the products to be particularly good at that… all too often reimaging of the computer isn’t even enough in the face of malware threats such as TDL.

So, does an antivirus product still have what it takes to earn the real estate it takes up on your computer? As a standalone security technology – no, I don’t believe so. If it’s free, never ever bothers me with popups, and I never need to know it’s there, then it’s not worth the effort uninstalling it and I guess it can stay… other than that, I’m inclined to look at other technologies that operate at the network layer or within the cloud; stop what you can before it gets to the desktop. Many of the bloated “improvements” to desktop antivirus products over recent years seem to be analogous to improving the hearing of a soldier so he can more clearly hear the ‘click’ of the mine he’s just stood on as it arms itself.

I’m all in favor of retraining any hunchbacked Igor we may come across. Perhaps he can make artwork out of discarded antivirus DVDs – just as kids did in the 1990’s with AOL CD’s?

— Gunter Ollmann, CTO — IOActive, Inc.