INSIGHTS | July 25, 2013

Las Vegas 2013

That time of year is approaching again: thousands of people from the security community are preparing to head to Las Vegas for the most important hacking events, Black Hat USA and DefCon. IOActive will (as we do every year) have an important presence at these conferences.

We have some great researchers from our team presenting at Black Hat USA and DefCon. At Black Hat USA, Barnaby Jack will be presenting “Implantable medical devices: hacking humans”, and Lucas Apa and Carlos Mario Panagos will be presenting “Compromising industrial facilities from 40 miles away”. At DefCon, Chris Valasek will be presenting “Adventures in automotive networks and control units”.
These will probably be among the most talked-about presentations, so don’t miss them!
During Black Hat USA, IOActive will also be hosting IOAsis. This event gives you an opportunity to meet our researchers, listen to some interesting presentations, participate in a hacking hardware workshop, and more—all while enjoying great drinks, food, and a massage.

 

Also back by popular demand, and for the third time in a row, IOActive will be sponsoring and hosting Barcon. This is an invitation-only event where our top, l33t, sexy (maybe not) researchers meet to drink and talk.

 

Last (but not least), we are once again hosting “Freakshow”, our biggest and most popular DefCon party, on Saturday, August 3rd at 9am at The Rio pools.

 

For your convenience, here are the details on our talks at Black Hat USA and DefCon:

 

IMPLANTABLE MEDICAL DEVICES: HACKING HUMANS
Who: Barnaby Jack
Where & When: Black Hat USA, August 1st, 2:15pm

 

In 2006, approximately 350,000 pacemakers and 173,000 ICDs (Implantable Cardioverter Defibrillators) were implanted in the US alone. 2006 was an important year: it was when the FDA began approving fully wireless-based devices. Today there are well over 3 million pacemakers and over 1.7 million ICDs in use.
In this talk, I will focus on the security of wireless implantable medical devices and discuss how these devices operate and communicate and the security shortcomings of the current protocols. I will reveal IOActive’s internal research software that uses a common bedside transmitter to scan for and interrogate individual medical implants. Finally, I will discuss techniques that manufacturers can implement to improve the security of these devices.

 

COMPROMISING INDUSTRIAL FACILITIES FROM 40 MILES AWAY
Who: Lucas Apa and Carlos Mario Panagos
Where & When: Black Hat USA, August 1st, 3:30pm

 

The evolution of wireless technologies has allowed industrial automation and control systems (IACS) to become strategic assets for companies that rely on processing plants and facilities in industries such as energy production, oil, gas, water, utilities, refining, and petrochemical distribution and processing. Effective wireless sensor networks have enabled these companies to reduce implementation, maintenance, and equipment costs and enhance personal safety by enabling new topologies for remote monitoring and administration in hazardous locations.
However, the manner in which sensor networks handle and control cryptographic keys is very different from the way in which they are handled in traditional business networks. Sensor networks involve large numbers of sensor nodes with limited hardware capabilities, so the distribution and revocation of keys is not a trivial task.
In this presentation, we will review the most commonly implemented key distribution schemes, their weaknesses, and how vendors can more effectively align their designs with key distribution solutions. We will also demonstrate some attacks that exploit key distribution vulnerabilities, which we recently discovered in every wireless device developed over the past few years by three leading industrial wireless automation solution providers. These devices are widely used by many energy, oil, water, nuclear, natural gas, and refined petroleum companies.
An untrusted user or group within a 40-mile range could read from and inject data into these devices using radio frequency (RF) transceivers. A remotely and wirelessly exploitable memory corruption bug could disable all the sensor nodes and forever shut down an entire facility. When sensors and transmitters are attacked, remote sensor measurements on which critical decisions are made can be modified. This can lead to unexpected, harmful, and dangerous consequences.
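As a generic illustration of why key handling in these networks is fragile, consider a scheme (not any specific vendor’s implementation) in which every node’s key is derived from a single plant-wide master key and a public node ID. Anyone who extracts the master key from one captured node, or from the configuration software, can compute the key of every other node:

    import hashlib
    import hmac

    # Illustration only: one plant-wide master key, per-node keys derived
    # from public node IDs. Compromising any single node (or the tool that
    # holds MASTER_KEY) compromises the whole network.
    MASTER_KEY = b"plant-wide-master-key"   # placeholder value

    def node_key(node_id: bytes) -> bytes:
        """Derive a node's link key from the shared master key."""
        return hmac.new(MASTER_KEY, node_id, hashlib.sha256).digest()

    # Node IDs are typically broadcast in the clear, so an attacker holding
    # the master key can reproduce every key in the network.
    for node_id in (b"pressure-01", b"valve-17", b"flow-42"):
        print(node_id.decode(), node_key(node_id).hex())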

 

ADVENTURES IN AUTOMOTIVE NETWORKS AND CONTROL UNITS
Who: Chris Valasek
Where & When: DefCon, August 2nd, 10:00am
Automotive computers, or Electronic Control Units (ECUs), were originally introduced to help with the fuel efficiency and emissions problems of the 1970s, but they have evolved into integral parts of in-car entertainment, safety controls, and enhanced automotive functionality.
In this presentation, I will examine some controls in two modern automobiles from a security researcher’s point of view. I will first cover the requisite tools and software needed to analyze a Controller Area Network (CAN) bus. I will also demo software that shows how data can be read from and written to the CAN bus. Then I will show how certain proprietary messages can be replayed by a device hooked up to an OBD-II connection to perform critical car functionality, such as braking and steering. Finally, I will discuss aspects of reading and modifying the firmware of the ECUs installed in modern automobiles.
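For readers who want to experiment before the talk, here is a minimal sketch of a CAN bus logger using the open-source python-can library. It is not the talk’s tooling; it assumes a Linux host with SocketCAN and an interface named can0 reached through an OBD-II cable:

    import can  # pip install python-can

    # Log raw frames seen on the bus: timestamp, arbitration ID, and payload.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    try:
        while True:
            msg = bus.recv(timeout=1.0)
            if msg is not None:
                payload = " ".join(f"{b:02X}" for b in msg.data)
                print(f"{msg.timestamp:.3f}  ID=0x{msg.arbitration_id:03X}  [{payload}]")
    except KeyboardInterrupt:
        bus.shutdown()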

 

INSIGHTS | July 24, 2013

DefCon 21 Preview

Hi Internet!
You may have heard that Charlie Miller (@0xcharlie) and I (@nudehaberdasher) will present a car hacking presentation at DefCon 21 on Friday, August 2 at 10:00am.
“Adventures in Automotive Networks and Control Units” (Track 3)
(https://www.defcon.org/html/defcon-21/dc-21-schedule.html)
I wanted to put up a blog post explaining, in a bit more detail than the abstract provided, what exactly we’ll be talking about. Our abstract was purposefully vague because we weren’t really sure what we were going to release at the time of submission, but we obviously have a much more concrete set of items now.

Also, we want to remind everyone that although we did not focus on remote attack vectors, intricate knowledge of a car’s internals and CAN network would be necessary to achieve any amount of control (steering, braking, acceleration, etc.) after remotely compromising the vehicle.
Talking points
  1.  We will briefly discuss the ISO/protocol standards that our two automobiles use to communicate on the CAN bus, and we will provide a Python and C API that can be used to replicate our work. The API is pretty generic, so it can easily be modified to work with other makes/models.
  2.  The first type of CAN traffic we’ll discuss is diagnostic CAN messages. These types of messages are usually used by mechanics to diagnose problems within the automotive network, sensors, and actuators. Although meant for maintenance, we’ll show how some of these messages can be used to physically control the automobile under certain conditions (a generic sketch of this kind of diagnostic exchange appears after this list).
  3.  The second type of CAN data we’ll talk about is the normal CAN traffic that the car regularly produces. These messages are much more abundant but also more difficult to reverse engineer and categorize (i.e., they are proprietary). Although time consuming, we’ll show how these messages, when replayed on the CAN network, can control the most safety-critical features of the automobile.
  4.  Finally, we’ll talk about modifying the firmware and using the proprietary re-flashing processes used by each of our vehicles. Firmware modification is most likely necessary for any sort of persistence when attempting to permanently modify an automobile’s behavior. It will also show just how different this process is for each make/model, proving that ‘just ask the tuning community’ is not a viable option a majority of the time.
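As mentioned in item 2, here is a minimal sketch of a single diagnostic-style exchange using the open-source python-can library. This is not our actual tooling, and it uses a standard OBD-II request (mode 01, PID 0x0D, vehicle speed) rather than any of the proprietary messages discussed above; SocketCAN on Linux with an interface named can0 is assumed:

    import can  # pip install python-can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Standard OBD-II request: functional address 0x7DF, mode 01, PID 0x0D.
    request = can.Message(
        arbitration_id=0x7DF,
        data=[0x02, 0x01, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00],
        is_extended_id=False,
    )
    bus.send(request)

    # A typical ECU reply arrives on 0x7E8 as [0x03, 0x41, 0x0D, speed_kmh, ...].
    reply = bus.recv(timeout=1.0)
    if reply is not None and reply.arbitration_id == 0x7E8:
        print("Vehicle speed:", reply.data[3], "km/h")
    bus.shutdown()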
So there you have it. While we are NOT covering any remote attack vectors/exploits, we will be releasing documentation, code, tools, sample traffic from each vehicle, and more. At the very least you will be able to recreate our results, and with a little work you should be able to start hacking your own car!
Make sure you come by DefCon Friday morning at 10am to see our talk. We promise that it will be worth getting up that early (or staying up through the night). Also, please keep checking back as we’ll post our paper, slides, code, and videos after DefCon.

P.S. If you’re lucky, you too can completely brick your car!

INSIGHTS | July 16, 2013

2013 ISS Conference, Prague

I had the opportunity to attend the 2013 ISS conference in Prague a few weeks ago. The conference is a place where company representatives and officials from law enforcement (and other government agencies) can meet to share ideas and product information (such as appliances). Even though I had a sore throat, I still found it quite interesting, although not necessarily because of the products and presentations, which I felt were overall a bit flat.

It was easy to differentiate between company representatives and government officials. Government officials wore yellow ID tags, while company representatives wore purple ID tags. These tags stated the individual’s first and last name and the company or government agency they represented.

I didn’t know what to expect, because I had never been to an ISS conference. However, I quickly realized that the conference itself could be an attacker’s paradise. For example, one could easily use a camera phone to take undetected photographs of the various government officials.

Being inquisitive by nature, I decided to conduct an experiment. First, I turned my ID tag (which I was supposed to visibly wear around my neck at all times) backwards so that nobody could see who I was. Even with hotel security guards stationed at all entrances, nobody stopped me or checked my badge.

This is an attack scenario in and of itself. My only interaction with a security guard occurred when I tried to take a shortcut to exit the conference—but even then, I was not asked to show my badge.

Many presentations were given on the first day. I attended a few of these but found that they were not particularly interesting. A law enforcement official illustrated how to find and check EXIF data in pictures. I asked a question about a page with “right-click” protection turned on. He answered by saying that they needed a third-party program (an open source one) to download the page. I was baffled and wondered whatever happened to “View Source”, which exists in most browsers. What about blocking JavaScript? What happened to File > Save As? This disturbed me, but not as much as what happened later in the day.
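To the point above: no third-party program is needed to get at a page’s markup. A few lines of standard-library Python (the URL is a placeholder) pull down the raw HTML, “right-click” protection or not:

    import urllib.request

    # Fetch the raw HTML of a page that blocks right-clicking; the source,
    # image URLs, and any embedded metadata are all right there.
    url = "http://example.com/photo-gallery.html"
    with urllib.request.urlopen(url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        html = resp.read().decode(charset)
    print(html[:500])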

Some of the presentations required a yellow (government) ID tag to attend. With security guards stationed outside of each presentation, I wanted to see if I could successfully enter any of these. I used a yellow piece of paper, borrowed the hotel printer to create a new ID tag, and wrote my name as Mr. Sauron, Chief of Security, Country of Mordor. Just like that! I smiled, nodded, and entered the presentation as a yellow-badged participant. To be clear, the presentation had not yet begun, so I quickly exited after a minute or two.

Later in the day, I attended a presentation on TOR, Darknet, and so on. During the presentation, I overheard a number of yellow-badged participants indicating they had never heard of TOR. As the presentation went on, I got the general feeling that the presenter viewed TOR as a safe way to stay anonymous. However, I see it as a network through which attackers who run their own TOR nodes can harvest a substantial amount of data (usernames, passwords, credentials, and so on).

There were problems with Wi-Fi access at the hotel. Guests had to pay, for example, 80 euros per day for a 1 Mb line. No cables or wiring had been set up beforehand for attendees, so technicians were busy setting these up during the conference. I found this to be a bit dangerous.

When I checked the wireless networks, I found that the hotel had a “free” access point, to which I quickly connected. I found it ironic that, at an ISS conference of all places, the hotel’s free connection relied on insecure, clear-text protocols!

If you happen to represent law enforcement in your country, do not (I repeat, DO NOT)…
  • Connect to anything anywhere in the area,
  • Accept invalid SSL certificates,
  • Use your Viber or WhatsApp messenger to send messages (clear-text protocols),
  • Use non-encrypted protocols to check your email,
  • Publicly display your name and the agency you represent unless asked to do so by a security representative wearing the proper badge.

 

The same rules that apply at the DefCon and Black Hat conferences should apply at the ISS conference, or at any security conference for that matter!

If I had been an evil attacker at the ISS conference, I could have easily sat in the lounge downstairs all day and overheard all kinds of conversations about products, firewalls, and solutions used by a variety of countries. Also, by simply using the “free” hotel Wi-Fi, I could have gained access to a number of participant email messages, text messages, and web pages sending username and password credentials in clear text. Imagine what I could have done with a hotel voucher containing a locked account!

A colleague of mine attending the conference decided to perform a quick experiment using the SSLstrip tool to test for hotel network vulnerabilities. Moxie Marlinspike introduced this tool at the Black Hat DC 2009 conference to demonstrate how attackers can perform SSL stripping attacks. Our setup presented users with an invalid certificate, which they could accept or reject. Much to our surprise, ISS conference participants accepted our invalid certificate. My colleague and I were completely baffled and blown away by this! I would like to note that we were not performing this experiment for malicious reasons. We simply wanted to verify the network vulnerability at the conference and provide our feedback to ISS conference and hotel stakeholders in this blog.

Using a tool similar to SSLstrip, an attacker would not even have to enter the main conference area to perform attacks. He could sit back in the smokers’ lounge, order a beverage of choice, set up sniffing, lean back on the couch, and let participants do the rest of the work!

Don’t get me wrong. The conference offered a lot of interesting topics and presentations. Someone presented a board, running Linux as its base operating system, equipped with Bluetooth, wireless, and a 3G module (for listening to calls). Anyone can buy this, not just government officials. The potential of an attacker getting this into his hands is huge, and it is only the size of a Raspberry Pi.

Another security concern at the conference involved the use of Bluetooth and Wi-Fi. People everywhere were using the Internet on their phones and had Bluetooth activated. You have to ask yourself: would these be activated at a Black Hat conference?

It’s obvious that law enforcement and other governmental agencies need training with regard to the popular hacking techniques used at conferences. We encourage such agencies to contact us at IOActive for help in this arena.

 

Perhaps you are reading this blog post and realizing that you too have used free Wi-Fi to check email, turned Bluetooth/Wi-Fi on in public places, or accepted a faulty SSL certificate. Promise me one thing… at the next conference you attend, make sure everything at the hotel is safe and turn Bluetooth/Wi-Fi off on your devices. Do not talk loudly about things that are supposed to be confidential; save those conversations for after the conference! Also, if you are an organizer of the next ISS conference, please be sure to properly check participant badges, and consider using something more secure than a paper ID tag with a name on it.
INSIGHTS | July 11, 2013

Why Vendor Openness Still Matters

When the zombies began rising from their graves in Montana, it had already been more than 30 days since IOActive had reported issues with Monroe Electronics DASDEC devices.
 
And while it turned out in the end that the actual attacks that caused the false EAS messages to be transmitted relied on the default password never having been changed, this would have been the ideal point to publicize that there was a known issue and that a firmware update was available (or soon would be) to address this and other problems… maybe with a mitigation or two in the meantime, right?

At a minimum it would have been an ideal time to provide the simple mitigation:
“rm ~/.ssh/authorized_keys”
 
Unfortunately, this never happened, leading to a phenomenon I like to call “admin droop”. This is where an administrator, after discovering the details of a published vulnerability, determines that he’s not vulnerable because he doesn’t run the default configuration, sees that everything is working, and doesn’t bother to upgrade to the next version.
 
… it happens…
 
In the absence of reliable information, other outlets such as the FCC provided pretty minimal advice about changing passwords and using firewalls; I simply commented to the media that this advice was “inadequate”.
 
Eventually, somewhere around April 24, Monroe, with the assistance of CERT, began contacting customers about a firmware fix; we provided a Shodan XML file with a few hundred vulnerable hosts to help track them down. Finally it looked like things were getting fixed, but I was somewhat upset that I still had not seen an official acknowledgement of the issue from Monroe. Then, on June 13, Monroe finally published this security advisory: https://ioactive.com/wp-content/uploads/2013/07/130604-Monroe-Security-PR.pdf
It states “Removal of default SSH keys and a simplified user option to load new SSH keys”. OK, it’s not much of an “announcement”, but it’s something. And I know it says April 24, but both the filename and the metadata (pdfinfo) point to (cough) later origins…
 
Inside the advisory is this wonderful sentence: 
“The company notes that most of its users already have obtained this update.”… That sounds like something worth testing!
 
Then it happened, before I could say “admin droop”… 
 
Found/vulnerable hosts before we reported the issue: 222
Found hosts after the patch was released, as found by a survey on July 11: 412

Version            Hosts Found    Vulnerable (SSH Key)
1.8-5a                       1    Yes
1.8-6                        2    Yes
2.0-0                      110    Yes
2.0-0_a01                    1    Yes
2.0-1                       68    Yes
2.0-2 (patched)             50    No
unreachable                180    n/a
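For readers who want to run a similar survey, here is a minimal sketch using the Shodan Python library. The API key is a placeholder and the query string is an assumption (the real survey keyed off the DASDEC banners):

    import shodan  # pip install shodan

    api = shodan.Shodan("YOUR_SHODAN_API_KEY")   # placeholder key

    # Hypothetical query; refine it to match the banner you care about.
    results = api.search("DASDEC")
    print("Total hosts found:", results["total"])
    for match in results["matches"]:
        first_line = match["data"].splitlines()[0] if match["data"] else ""
        print(match["ip_str"], match["port"], first_line)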

While most users may have “obtained” the update, someone certainly didn’t bother applying it…
 
Yep… it’s worse now than it was before we reported the issue in January, and everyone thinks everything is just fine. While I’m sure this would still be somewhat of a problem had a proper security notice been issued, it surely wouldn’t be this bad.
 
I can’t say it any better than these awesome folks over at Google (http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html): “Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds.”

 

INSIGHTS | July 4, 2013

Why sanitize excessed equipment

My passion for cybersecurity centers on industrial controllers–PLCs, RTUs, and the other “field devices.” These devices are the interface between the integrator (e.g., HMI systems, historians, and databases) and the process (e.g., sensors and actuators). Researching this equipment can be costly because PLCs and RTUs cost thousands of dollars. Fortunately, I have an ally: surplus resellers that sell used equipment.

I have been buying used equipment for a few years now. Equipment often arrives to me having literally been ripped from a factory floor or even a substation. Each controller typically contains a wealth of information about its origin, and I can often learn a lot about a company from a piece of used equipment. Even decades-old control equipment has a lot of memory and keeps a long record of the previous owner’s process. With just a few hours of reverse engineering work on a controller, it is possible to learn the “secret recipe” by collecting company names, control system network layout, and production history. Even engineers’ names and contact information are likely to be stored in a controller’s log file. For a bad guy, this data could be useful for all sorts of purposes: social engineering employees, insider trading of company stock, and possibly direct attacks on the corporate network.
I reach out to the origin of used equipment when I find this type of information. I help them wipe the equipment, and I point them to where the rest of the equipment is being sold in an attempt to recall it before the stored information ends up in the wrong hands. I am not the only one doing this kind of work. Recently, Billy Rios and Terry McCorkle revealed surplus equipment that they had purchased from a hospital; it contained much of the same information about its origin.

These situations can be prevented by sanitizing the equipment before it’s released for disposal. Many equipment manufacturers should be able to provide instructions for this process. One option may be to send the controller back to the manufacturer to be sanitized and refurbished.
A way to provide another layer of protection against information disclosure is to have a robust and well-practiced Incident Response plan. Most places that I contact are great to work with and are receptive to the information.

Ignoring the issue, especially where a public utility is concerned, may be considered a violation. Set up an Incident Response program now and make sure that your process control engineers know to send equipment disposal issues through the IR group.

 

A great deal can be accomplished to keep a control system secure. With a little planning, proper equipment disposal is one of the cheapest things that can be done to keep proprietary process information safe.
WHITEPAPER | July 1, 2013

Best Practices for using Adobe Reader 9.0

Adobe has long touted how its products enable organizations to collaborate and share information in heterogeneous environments. However, a recent stream of vulnerabilities identified in Adobe products has caused a great deal of concern about the overall security threat associated with using these products. IOActive security experts offer suggestions for how to best protect your computer. (more…)

ADVISORIES |

TURCK BL20/BL67 Programmable Gateways undocumented hardcoded accounts

The affected products provide communication between the communications bus and I/O modules. According to TURCK, the BL20 and BL67 are deployed across several sectors. These include agriculture and food, automotive, and critical manufacturing. TURCK estimates that these products are used primarily in the United States and Europe with a small percentage in Asia.

This vulnerability allows an attacker to remotely access the device through its embedded FTP server by using the undocumented, hard-coded credentials. The attacker can then install trojanized firmware to control communications and processes. (more…)
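As a defensive sketch (not part of the advisory), the standard-library snippet below checks whether a gateway on your own network accepts a given FTP credential pair. The host, username, and password are placeholders; the advisory’s actual hard-coded credentials are intentionally not reproduced here:

    from ftplib import FTP, error_perm

    HOST, USER, PASSWORD = "192.0.2.10", "username", "password"   # placeholders

    try:
        ftp = FTP()
        ftp.connect(HOST, 21, timeout=5)
        ftp.login(USER, PASSWORD)
        print("Login accepted; directory listing follows:")
        ftp.retrlines("LIST")
        ftp.quit()
    except error_perm as exc:
        print("Login rejected:", exc)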

ADVISORIES |

Protocol Handling Issues in X.Org X Window System Client Libraries

X.Org believes all prior versions of these libraries contain the vulnerabilities discussed in this document, dating back to their introduction.

Versions of the X libraries built on top of the Xlib bridge to the XCB framework are vulnerable to fewer issues than those without. This is due to the added safety and consistency assertions in the XCB calls to read data from the network. However, most of these vulnerabilities are not caught by such checks. (more…)

ADVISORIES |

DASDEC Vulnerabilities

In 1997, the United States Emergency Alert System (EAS) replaced the older and better-known Emergency Broadcast System (EBS) used to deliver local or national emergency information. The EAS is designed to “enable the President of the United States to speak to the United States within 10 minutes” after a disaster occurs. In the past, these alerts were passed from station to station using the Associated Press (AP) or United Press International (UPI) “wire services”, which connected to television and radio stations around the U.S. Whenever a station received an authenticated Emergency Action Notification (EAN), it would disrupt its current broadcast to deliver the message to the public. (more…)

ADVISORIES |

ProSoft Technology RadioLinx ControlScape PRNG Vulnerability

The RadioLinx ControlScape application is used to configure and install radios in an FHSS radio network and to monitor their performance. ProSoft Technology states that the default values built into the software work well for initial installation and testing. The software generates a random passphrase and sets the encryption level to 128-bit AES when it creates a new radio network. (more…)
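To illustrate the general class of weakness an advisory like this points at (this is not ProSoft’s actual algorithm), a passphrase drawn from a pseudo-random generator seeded with the clock is only as unpredictable as its seed:

    import random
    import string
    import time

    def make_passphrase(seed, length=16):
        """Illustration only: a 'random' passphrase from a clock-seeded PRNG."""
        rng = random.Random(seed)
        alphabet = string.ascii_letters + string.digits
        return "".join(rng.choice(alphabet) for _ in range(length))

    created_at = int(time.time())            # network creation time
    passphrase = make_passphrase(created_at)

    # An attacker who can bound the creation time to a few minutes only has
    # to try a few hundred candidate seeds to recover the passphrase.
    candidates = {make_passphrase(s) for s in range(created_at - 300, created_at + 1)}
    print(passphrase in candidates)          # True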