RESEARCH | February 17, 2016

Remotely Disabling a Wireless Burglar Alarm

Countless movies feature hackers remotely turning off security systems in order to infiltrate buildings without being noticed. But how realistic are these depictions? Time to find out.
 
Today we’re releasing information on a critical security vulnerability in a wireless home security system from SimpliSafe. This system consists of two core components, a keypad and a base station. These may be combined with a wide array of sensors ranging from smoke detectors to magnet switches to motion detectors to create a complete home security system. The system is marketed as a cost-effective and DIY-friendly alternative to wired systems that require expensive professional installation and long term monitoring service contracts.
     
Looking at the FCC documentation for the system provides a few hints. It appears the keypad and sensors transmit data to the base station using on-off keying in the 433 MHz ISM band. The base station replies using the same modulation at 315 MHz.
 
After dismantling a few devices and looking at which radio(s) were installed on the boards, I confirmed the system is built around a star topology: sensors report to the base station, which maintains all system state data. The keypad receives notifications of events from the base station and drives the LCD and buzzer as needed; it then sends commands back to the base station. Sensors only have transmitters and therefore cannot receive messages.
 
Rather than waste time setting up an SDR or building custom hardware to mess with the radio protocol, I decided to “cheat” and use the conveniently placed test points found on all of the boards. Among other things, the test points provided easy access to the raw baseband data between the MCU and RF upconverter circuit.
 
I then worked to reverse engineer the protocol using a logic analyzer. Although I still haven’t figured out a few bits at the application layer, the link-layer framing was pretty straightforward. This revealed something very interesting: when messages were sent multiple times, the contents (except for a few bits that seem to be some kind of sequence number) were the same! This means the messages are either sent in cleartext or using some sort of cipher without nonces or salts.
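
To make that concrete, here is a minimal sketch (with made-up frame values) of the differential check involved: XOR repeated captures of the "same" message, and the bits that vary are the apparent sequence counter while everything else is static payload.

# Minimal sketch with made-up frame values: XOR repeated captures of the
# "same" message; varying bits are the apparent sequence counter, constant
# bits are static (unencrypted or unsalted) payload.
def varying_bits(frames):
    mask = 0
    for frame in frames[1:]:
        mask |= frames[0] ^ frame
    return mask

captures = [0x5557A2, 0x5557A6, 0x5557AA]   # hypothetical repeated frames
print(hex(varying_bits(captures)))          # -> 0xc: only two low bits vary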
 
After a bit more reversing, I was able to find a few bits that reliably distinguished a “PIN entered” packet from any other kind of packet.
 
I spent quite a while trying to figure out how to convert the captured data bytes back to the actual PIN (in this case 0x55 0x57 -> 2-2-2-2) but was not successful. Luckily for me, I didn’t need that for a replay attack.
 
To implement the actual attack I simply disconnected the MCUs from the base station and keypad, and soldered wires from the TX and RX basebands to a random microcontroller board I had sitting around the lab. A few hundred lines of C later, I had a device that would passively listen to incoming 433 MHz radio traffic until it saw a SimpliSafe “PIN entered” packet, which it recorded in RAM. It then lit up an LED to indicate that a PIN had been recorded and was ready to play back. I could then press a button at any point and play back the same packet to disarm the targeted alarm system.
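
In rough Python-flavored pseudologic (the real device ran a few hundred lines of C, and every helper name below is invented for illustration), the device's logic amounted to:

# Rough model of the replay device's logic; all helper names are invented.
recorded_frame = None

def on_rx_frame(frame):                   # called per demodulated 433 MHz frame
    global recorded_frame
    if is_pin_entered_packet(frame):      # match the distinguishing bits
        recorded_frame = frame
        led_on()                          # signal: PIN captured, ready to replay

def on_button_press():                    # operator triggers the disarm
    if recorded_frame is not None:
        tx_baseband(recorded_frame)       # replay raw baseband via the keypad TX path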
 
This attack is very inexpensive to implement – it requires a one-time investment of about $250 for a commodity microcontroller board, SimpliSafe keypad, and SimpliSafe base station to build the attack device. The attacker can hide the device anywhere within about a hundred feet of the target’s keypad until the alarm is disarmed once and the code recorded. Then the attacker retrieves the device. The code can then be played back at any time to disable the alarm and enable an undetected burglary, or worse.
 
While I have not tested this, I expect that other SimpliSafe sensors (such as entry sensors) can be spoofed in the same fashion. This could allow an attacker to trigger false/nuisance alarms on demand.
 
Unfortunately, there is no easy workaround for the issue since the keypad happily sends unencrypted PINs out to anyone listening. Normally, the vendor would fix the vulnerability in a new firmware version by adding cryptography to the protocol. However, this is not an option for the affected SimpliSafe products because the microcontrollers in currently shipped hardware are one-time programmable. This means that field upgrades of existing systems are not possible; all existing keypads and base stations will need to be replaced.
 
IOActive made attempts through multiple channels to contact SimpliSafe upon finding this critical vulnerability, but received no response from the vendor. IOActive also notified CERT of the vulnerability in the normal course of responsible disclosure. The disclosure timeline can be found in the release advisory.
 
SimpliSafe claims to have its units installed in over 300,000 homes in North America. Consumers of this product need to know that it is inherently insecure and vulnerable to even a low-level attacker. This simple vulnerability is particularly alarming because: (1) it exists within a “security product” trusted to secure hundreds of thousands of homes; (2) it enables an attacker to completely own the system (i.e., disable it, change PIN codes, etc.); and (3) many unsuspecting consumers prominently display window and yard signs promoting their use of this system, essentially self-identifying their home as a viable target for an attacker.
 
RESEARCH | October 17, 2014

Vicious POODLE Finally Kills SSL

The poodle must be the most vicious dog, because it has killed SSL.

POODLE is the latest in a rather lengthy string of vulnerabilities in SSL (Secure Sockets Layer) and its more recent successor, TLS (Transport Layer Security). Both protocols secure data being sent between applications to prevent eavesdropping, tampering, and message forgery.

POODLE (Padding Oracle On Downgraded Legacy Encryption) rings the death knell for our 18-year-old friend SSL version 3.0 (SSLv3), because at this point, there is no truly safe way to continue using it.

Google announced Tuesday that its researchers had discovered POODLE. The announcement came amid rumors about the researchers’ security advisory, a white paper detailing the vulnerability that had been circulating internally.

SSLv3 had survived numerous prior vulnerabilities, including SSL renegotiation, BEAST, CRIME, Lucky 13, and the RC4 weaknesses. Finally, its time has come; SSLv3 is long overdue for deprecation.

The security industry’s initial view is that POODLE will not be as devastating as other recent vulnerabilities such as Heartbleed, a TLS bug. After all, POODLE is a client-side attack; the others were direct server-side attacks.

However, I believe POODLE will ultimately have a larger overall impact than Heartbleed. Even the hundreds of thousands of applications that use a more recent TLS protocol still support SSLv3 for backward compatibility. In addition, some applications that directly use SSLv3 may not support any version of TLS; for these, there might not be a quick fix, if there is one at all.


POODLE attacks SSLv3’s block ciphers by abusing the non-deterministic nature of padding in CBC-mode ciphers. The Message Authentication Code (MAC), which checks the integrity of every message after decryption, does not cover these padding bytes. What does this mean? The padding can’t be fully verified. An attacker who can make a victim’s browser issue many slightly varied HTTPS requests can use the server’s acceptance or rejection of tampered records as an oracle, decrypting a secret such as a cookie one byte at a time. This is the heart of the problem. That might not seem like a huge issue until you consider that this may be a session cookie, and the user’s session could be potentially compromised.
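
To make the mechanics concrete, here is a toy, self-contained Python sketch of the padding-oracle step. AES-CBC stands in for an SSLv3 record, pycryptodome is assumed to be installed, and the “oracle” models a server that checks only the final padding byte; this is an illustration, not the original researchers’ code.

import os
from Crypto.Cipher import AES

key = os.urandom(16)
cookie = b"session=SECRET!!"              # toy 16-byte secret, one full block

def encrypt_request():
    iv = os.urandom(16)                   # fresh CBC chaining per request
    pad = bytes(15) + bytes([15])         # full padding block; bytes are arbitrary
    return iv, AES.new(key, AES.MODE_CBC, iv).encrypt(cookie + pad)

def oracle(iv, ct):                       # "server": accept iff last byte == 15
    return AES.new(key, AES.MODE_CBC, iv).decrypt(ct)[-1] == 15

tries = 0
while True:
    tries += 1
    iv, ct = encrypt_request()
    c1 = ct[:16]                          # ciphertext block holding the cookie
    if oracle(iv, c1 + c1):               # replace the padding block with c1
        # acceptance means D(c1)[15] ^ c1[15] == 15, so the cookie's last
        # byte is 15 ^ c1[15] ^ iv[15] (about one acceptance per 256 tries)
        print("recovered byte:", chr(15 ^ c1[15] ^ iv[15]), "tries:", tries)
        break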

TLS version 1.0 (TLSv1.0) and higher versions are not affected by POODLE because these protocols are strict about the contents of the padding bytes. Therefore, TLSv1.0 is still considered safe for CBC-mode ciphers. However, we shouldn’t let that lull us into complacency. Keep in mind that even clients and servers that support recent TLS versions can be forced down to SSLv3, which is often still supported, by an attacker who interferes with the handshake. This ‘downgrade dance’ can be triggered through a variety of methods. What’s important to know is that it can happen.

There are a few ways to prevent POODLE from affecting your communication:

Plan A: Disable SSLv3 for all applications. This is the most effective mitigation for both clients and servers.

Plan B: As an alternative, you could disable all CBC ciphers for SSLv3. This will protect you from POODLE, but leaves RC4 as the only remaining “strong” cipher, and as mentioned above, RC4 has had weaknesses of its own.

Plan C: If an application must continue supporting SSLv3 in order to work correctly, implement the TLS_FALLBACK_SCSV mechanism. Some vendors are taking this approach for now, but it is a coping technique, not a solution. It addresses problems with retried connections and prevents reversion to earlier protocols, as described in the document TLS Fallback Signaling Cipher Suite Value for Preventing Protocol Downgrade Attacks (Draft Released for Comments).

How to Implement Plan A

With no solution that would allow truly safe continued use of SSLv3, you should implement Plan A: Disable SSLv3 for both server and client applications wherever possible, as described below.

Disable SSLv3 for Browsers

Chrome: Add the command-line flag --ssl-version-min=tls1 so the browser uses TLSv1.0 or higher.

Internet Explorer: Go to IE’s Tools menu -> Internet Options -> Advanced tab. Near the bottom of the tab, clear the Use SSL 3.0 checkbox.

Firefox: Type about:config in the address bar and set security.tls.version.min to 1.

Adium/Pidgin: Clear the Force Old-Style SSL checkbox.

Note: Some browser vendors are already issuing patches and others are offering diagnostic tools to assess connectivity.

If your device has multiple users, secure the browsers for every user. Your mobile browsers are vulnerable as well.

Disable SSLv3 on Server Software

Apache: Add -SSLv3 to the SSLProtocol line.

IIS 7: Because this is an involved process that requires registry tweaking and a reboot, please refer to Microsoft’s instructions: https://support.microsoft.com/kb/187498/en-us

Postfix: In main.cf, set smtpd_tls_mandatory_protocols=!SSLv3 and ensure that !SSLv2 is present too.

Stay alert for news from your application vendors about patches and recommendations.

POODLE is a high risk for payment gateways and other applications that might expose credit card data and must be fixed in 30 days, according to Payment Card Industry standards. The clock is ticking.

Conclusion

Ideally, the security industry should move to recent versions of the TLS protocol. Each iteration has brought improvements, yet adoption has been slow. For instance, TLS version 1.2, introduced in mid-2008, was scarcely implemented until recently, and many services that support TLSv1.2 must still support the older TLSv1.0 because clients don’t yet support the newer protocol version.

A draft of TLS version 1.3 released in July 2014 removes support for features that were discovered to make encrypted data vulnerable. Our world will be much safer if we quickly embrace it.

Robert Zigweid

RESEARCH | September 10, 2014

Killing the Rootkit

Cross-platform, cross-architecture DKOM detection

To know if your system is compromised, you need to find everything that could run or otherwise change state on your system and verify its integrity (that is, check that the state is what you expect it to be).

“Finding everything” is a bold statement, particularly in the realm of computer security, rootkits, and advanced threats. Is it possible to find everything? Sadly, the short answer is no, it’s not. Strangely, the long answer is yes, it is.

By defining the execution environment at any point in time, predominantly through the use of hardware-based hypervisor or virtualization facilities, you can verify the integrity of that specific environment using cryptographically secure hashing.

Despite the relative ease of hash integrity checks, applying them to memory presents a number of significant challenges; the most difficult being identifying all of the processes that may execute on a system. System process control registers describe the virtual-to-physical memory layout. Only after all processes are found can integrity verification of code, data, and static/structural analysis be conducted.

“DKOM is one of the methods commonly used and implemented by Rootkits, in order to remain undetected, since this the main purpose of a rootkit.” – Harry Miller

Detecting DKOM-based processes has largely been conducted at the logical layer (see the seven different techniques https://code.google.com/p/volatility/wiki/CommandReference#psxview). This is prone to failure and iterative evasion since most process detection techniques are based on recognizing OS artifacts.
Finding Processes by Page Table Detection
When an OS starts up a process, it establishes the ability for virtual memory to be used (to enable memory protection) by creating a page table. The page table is itself a single page of physical memory (0x1000 bytes). It is usually allocated by way of a cache-optimized mechanism, which makes locating it somewhat complicated. Fortunately, we can identify a page table by understanding several established (hardware) requirements for its construction.

Even if an attacker significantly modifies and attempts to hide from standard logical object scanning, there is no way to evade page-table detection without significantly patching the OS fault handler. A major benefit of a DKOM rootkit is that it avoids code patches; that level of modification is easily detected by integrity checks and is counter to the goal of DKOM. DKOM is a codeless rootkit technique: it runs code without patching the OS to hide itself, and it only patches data pointers.

IOActive released several versions of this process detection technique. We also built it into our memory integrity checking tools, BlockWatch™ and The Memory Cruncher™.

Process-based Page Table Detection
Any given page of memory could be a page table. Typically a page table is organized as a series of page table entries (PTEs). These entries are usually traversed by selecting some bits from a virtual address and converting them into a series of table lookups.

The magic of this technique comes from the propensity of all OSs (at least Windows, Linux, and BSD) to organize their page tables into virtual memory. That way they can use virtual addresses to edit PTEs instead of physical memory addresses.

By making all of the offsets the same with the entry at that offset pointing back to the page table base value (CR3), the page table can essentially be accessed through this special virtual address. Refer to the Linux article for an exhaustive explanation of why this is useful.
Physical Memory Page 
 
If we consider any given page of random physical memory, we can test whether the entries at known offsets are valid PTEs. Windows has proven to consume, for every process, entry 0, entry 0x1ED (the self map), and a couple of additional kernel regions (consistent across all Win64 versions).
PTE Format
 
typedef struct _HARDWARE_PTE {
    ULONGLONG Valid : 1;             // indicates hardware or software handling (Mode 1 and 2)
    ULONGLONG Write : 1;
    ULONGLONG Owner : 1;
    ULONGLONG WriteThrough : 1;
    ULONGLONG CacheDisable : 1;
    ULONGLONG Accessed : 1;
    ULONGLONG Dirty : 1;
    ULONGLONG LargePage : 1;         // Mode 2
    ULONGLONG Global : 1;
    ULONGLONG CopyOnWrite : 1;
    ULONGLONG Prototype : 1;         // Mode 2
    ULONGLONG reserved0 : 1;
    ULONGLONG PageFrameNumber : 36;  // PFN, always incrementing (Mode 1 and 2)
    ULONGLONG reserved1 : 4;
    ULONGLONG SoftwareWsIndex : 11;  // Mode 2
    ULONGLONG NoExecute : 1;
} HARDWARE_PTE, *PHARDWARE_PTE;
By checking the physical memory offsets we expect and extracting a candidate entry, we can determine whether a physical page is a valid page table. There are a number of properties we understand about physical memory: the page frame number (PFN) will always increase relative to earlier pages and will not be larger than the current linear position plus the memory gap ranges.
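
As a rough illustration of the scan (using the Win64 self-map slot described above; memory-gap handling and the PFN monotonicity check are omitted for brevity), consider:

import struct

PAGE = 0x1000
PRESENT = 1
PFN_MASK = 0x0000FFFFFFFFF000            # bits 12..47 carry the page frame number

def find_candidate_cr3s(dump_path):
    # Scan a raw physical-memory dump for pages whose entry 0x1ED points
    # back at the page itself -- the Win64 self map described above.
    hits = []
    with open(dump_path, "rb") as f:
        page_no = 0
        while True:
            page = f.read(PAGE)
            if len(page) < PAGE:
                break
            entry = struct.unpack_from("<Q", page, 0x1ED * 8)[0]
            if entry & PRESENT and (entry & PFN_MASK) >> 12 == page_no:
                hits.append(page_no * PAGE)   # physical address of a candidate CR3
            page_no += 1
    return hits
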
What About Shadow Walker Tricks?
 
Shadow Walker abuses the nature of the TLB of a running system: execution may occur at a different address than reads. If you look at, scan, or check memory, reads are cloaked to return what you expect, while execution actually occurs somewhere different.
This is one reason why we analyze memory extracted from a hypervisor “guest” OS snapshot. Analyzing memory from behind a hypervisor establishes a “semantic gap” that ensures our static memory analysis includes all possible memory pages, unaffected by split I/D TLB games.
What About Hardware Rootkits?
 
Using a hypervisor makes verifying device memory easy; verification at the host or physical layer is extremely complicated. Different hardware vendors have vastly different ways to extract and interact with firmware; UEFI may be verifiable with MITRE’s Copernicus2 or other tools.
In order for a hypervisor to be affected by a hardware rootkit, the hypervisor has to have been “escaped”, which is currently a rare and valuable exploit. It is probably not worth burning such a valuable exploit on a hardware rootkit that can be mitigated at the network level. Extracting physical system memory in a consistent way (immune to attack and evasion) has historically been very hard.
If you are concerned about hardware rootkits, there are some extreme techniques that may help.
IOActive has Everything
 
Now that we have established a method for finding everything, the next task is relatively simple: check that what we found is what we expected. Using cryptographically secure hash checks in a whitelist fashion is a straightforward and hard-to-attack technique for integrity verification.
IOActive’s current solution, BlockWatch™, does just that. It manages memory extraction and hash checking that testifies to what we have found.
Weird Rootkits
I classify “weird rootkits” as anything from a ROP-based rootkit to some form of script injection or anything else where the attacker can coerce an application to behave in an unexpected (and rootkit-like) way.
Detecting a ROP chain is actually quite easy (stack checking a memory snapshot); I covered some of this in a CanSecWest presentation I gave earlier this year. Each return address on a stack must be preceded by a call instruction. You can then validate that such an opcode exists and that the return address is not spurious (as it is for a ROP attack). ROP stacks are also exceedingly large and atypical of normal threads.
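
A simplified sketch of that stack check (only the most common x64 CALL encodings are modeled, and read_mem is an assumed accessor into the memory snapshot):

def preceded_by_call(read_mem, ret_addr):
    # CALL encodings checked: E8 rel32 (5 bytes) and FF /2 indirect forms at
    # a few common lengths; a production tool would disassemble properly.
    checks = (
        (5, lambda b: b[0] == 0xE8),
        (2, lambda b: b[0] == 0xFF and (b[1] >> 3) & 7 == 2),
        (3, lambda b: b[0] == 0xFF and (b[1] >> 3) & 7 == 2),
        (6, lambda b: b[0] == 0xFF and (b[1] >> 3) & 7 == 2),
    )
    for length, looks_like_call in checks:
        window = read_mem(ret_addr - length, length)   # assumed helper
        if window and looks_like_call(window):
            return True
    return False

def rop_suspects(read_mem, stack_slots):
    # return-address candidates with no preceding CALL: typical of a ROP chain
    return [addr for addr in stack_slots if not preceded_by_call(read_mem, addr)]
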
What about other attacks, rootkits implemented in server scripts and anything else? If we have found the address spaces for all of the processes and are able to validate the integrity of all of the kernel code, then any scripts or weird rootkits will be observable through normal profiling and logging interfaces.
Summary
 
By leveraging the unique ability of a hypervisor to expose the physical memory of a system in a way that is consistent (not modified by an attacker), we can use a high-assurance process detection technique combined with integrity checking to detect any rootkit.

Shane Macaulay

Additional References
  • BlockWatch™
  • DEF CON 22 Presentation: “Weird-machine Motivated Practical Page Table Shellcode & Finding Out What’s Running on Your System”
  • PMODUMP
  • Windows Debugging Blog on understanding !PTE
RESEARCH | July 31, 2014

Hacking Washington DC traffic control systems

This is a short blog post, because I’ve talked about this topic in the past. I want to let people know that I have the honor of presenting at DEF CON on Friday, August 8, 2014, at 1:00 PM. My presentation is entitled “Hacking US (and UK, Australia, France, Etc.) Traffic Control Systems”. I hope to see you all there. I’m sure you will like the presentation.

I am frustrated with Sensys Networks’ (the vulnerable devices vendor) lack of cooperation, but I realize that I should be thankful. This has prompted me to further my research and try different things, like performing passive onsite tests on real deployments in cities like Seattle, New York, and Washington DC. I’m not so sure these cities are equally thankful, since they have to deal with thousands of installed vulnerable devices, which are currently being used for critical traffic control.

The latest Sensys Networks numbers indicate that approximately 200,000 sensor devices are deployed worldwide. See http://www.trafficsystemsinc.com/newsletter/spring2014.html. Based on a unit cost of approximately $500, approximately $100,000,000 worth of vulnerable equipment is buried in roads around the world, and anyone can hack it. I’m also concerned about how much it will cost taxpayers to fix and replace the equipment.

One way I confirmed that Sensys Networks devices were vulnerable was by traveling to Washington DC to observe a large deployment that I got to know.

When I exited the train station, the fun began.

INSIGHTS | April 30, 2014

Hacking US (and UK, Australia, France, etc.) Traffic Control Systems

Probably many of you have watched scenes from “Live Free or Die Hard” (Die Hard 4) where “terrorist hackers” manipulate traffic signals by just hitting Enter or typing a few keys. I wanted to do that! I started to look around, and while I couldn’t exactly do the same thing (too Hollywood style!), I got pretty close. I found some interesting devices used by traffic control systems in important US cities, and I could hack them 🙂 These devices are also used in cities in the UK, France, Australia, China, etc., making them even more interesting.
After getting the devices, it wasn’t difficult to find vulnerabilities (actually, it was more difficult to make them work properly, but that’s another story).

The vulnerabilities I found allow anyone to take complete control of the devices and send fake data to traffic control systems. Basically anyone could cause a traffic mess by launching an attack with a simple exploit programmed on cheap hardware ($100 or less).
I even tested the attack launched from a drone flying at over 650 feet, and it worked! Theoretically, an attack could be launched from 1 or 2 miles away with a better drone and hardware; I just used a common, commercially available drone and cheap hardware. Since it seems flying a drone in the US is not illegal, and anyone will be able to get drones on demand soon, I would be worried about attacks from the sky in the US.
It might also be possible to create self-replicating malware (worm) that can infect these vulnerable devices in order to launch attacks affecting traffic control systems later. The exploited device could then be used to compromise all of the same devices nearby.
 
What worries me the most is that if a vulnerable device is compromised, it’s really, really difficult and really, really costly to detect it. So there could already be compromised devices out there that no one knows about or could know about. 
 
Let me give you an idea of how affected some cities are. The following is from documentation from vulnerable vendors:
  • 250+ customers in 45 US states and 10 countries
  • Important US cities have deployments: New York, Washington DC, San Francisco, Los Angeles, Boston, Seattle, etc.
  • 50,000+ devices deployed worldwide (most of them in the US)
  • Countries include US, United Kingdom, China, Canada, Australia, France, etc. For instance, some UK cities with deployments: London, Shropshire, Slough, Bournemouth, Aberdeen, Blackburn with Darwen Borough Council, Belfast, etc. 
  • Australia has a big deployment on one of the most important and modern freeways.


As you can see, there are 50,000+ devices out there that could be hacked and cause traffic messes in many important cities.

Going onsite
I knew that the devices I have are vulnerable, but I wanted to be 100% sure that I wasn’t missing anything with my tests. Real-life deployments could have different configurations (different hardware/software versions), and it seemed that the vendor provided wrong information when I reported the vulnerabilities, so maybe what I found didn’t affect real-life deployments. I put my “tools” in my backpack and went to Seattle, New York, and Washington DC to do some “passive” onsite tests (“no hacking and nothing illegal” :)). Luckily, I could confirm that what I found applied to real-life deployments. BTW, many thanks to Ian Amit for the pictures and videos and for daring to go with me to DC :).

Vulnerabilities
As you may have already realized, I’m not going to share specific technical details here (for that, you will have to wait and go to Infiltrate 2014 in a couple of weeks). What I would like to mention is that the vendor was contacted a long time ago (September 2013) through ICS-CERT (the initial report to ICS-CERT was sent on July 31, 2013). I was told by ICS-CERT that the vendor said they didn’t think the issues were critical or even important. For instance, regarding one of the vulnerabilities, the vendor said that since the devices were designed that way (insecure) on purpose, they were working as designed, and that customers (state/city governments) wanted the devices to work that way (insecure), so there wasn’t any security issue. Yes, that was the answer; I couldn’t believe it.
 
Regarding another vulnerability, the vendor said that it’s already fixed in newer versions of the device. But there is a big problem: you need to get a new device and replace the old one. This is not good news for state/city governments, since thousands of devices are already out there, and the time and money it would take to replace all of them is considerable. Meanwhile, the existing devices are vulnerable and open to attack.
 
Another excuse the vendor provided is that because the devices don’t control traffic lights, there is no need for security. This is crazy, because while the devices don’t directly control traffic control systems, they have a direct influence on the actions and decisions of these systems.
I tried several times to make ICS-CERT and the vendor understand that these issues were serious, but I couldn’t convince them. In the end I said: if the vendor doesn’t think they are vulnerable, then OK, I’m done with this; I have tried hard, and I don’t want to continue wasting time and effort. Also, since DHS is aware of this (through ICS-CERT) and it seems that this is neither critical nor important to them, there isn’t anything else I can do except go public.
 
This should be another wake-up call for governments to evaluate the security of devices/products before using them in critical infrastructure, and also a request to providers of government devices/products to take security, and security vulnerability reports, seriously.
 
Impact
 
By exploiting the vulnerabilities I found, an attacker could cause traffic jams and problems at intersections, freeways, highways, etc. 
It’s possible to make traffic lights (depending on the configuration) stay green more or less time, stay red and not change to green (I bet many of you have experienced something like this as a result of driving during non-traffic hours late at night or being on a bike or in a small car), or flash. It’s also possible to cause electronic signs to display incorrect speed limits and instructions and to make ramp meters allow cars on the freeway faster or slower than needed. 
 
These traffic problems could cause real issues, even deadly ones, by causing accidents or blocking ambulances, fire fighters, or police cars going to an emergency call.
I’m sure there are manual overrides and secondary controls in place that can be used if anomalies are detected and could mitigate possible attacks, and some traffic control systems won’t depend only on these vulnerable devices. This doesn’t mean that attacks are impossible or really complex; the possibility of a real attack shouldn’t be disregarded, since launching an attack is simple. The only thing that could be complex is making an attack have a bigger impact, but complex doesn’t mean impossible.
 

Traffic departments in states/cities with vulnerable devices deployed should pay special attention to traffic anomalies when there is no apparent reason and closely watch the device’s behavior.
 
“In 2012, there were an estimated 5,615,000 police-reported traffic crashes in which 33,561 people were killed and 2,362,000 people were injured; 3,950,000 crashes resulted in property damage only.” US DoT National Highway Traffic Safety Administration: Traffic Safety Facts
 
“Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person” Association for Safe International Road Travel: Annual US Road Crash Statistics
 
If you add malfunctioning traffic control systems to the above stats, the numbers could be a lot worse.
RESEARCH | April 17, 2014

A Wake-up Call for SATCOM Security

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted.
 
We live in a world where an ever-increasing stream of digital data flows between continents. It is clear that those who control communications traffic have the upper hand.
 
Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:
  • Aerospace
  • Maritime
  • Military and governments
  • Emergency services
  • Industrial (oil rigs, gas, electricity)
  • Media
It is important to mention that certain international safety regulations, such as GMDSS for ships or ACARS for aircraft, rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysia Airlines MH370, Inmarsat engineers were able to determine the approximate position of where the plane crashed.
 
IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. 
 
Thus, in the last quarter of 2013 I decided to research a series of devices that, although widely deployed, had not received the attention they actually deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals.
 
In previous blog posts I’ve explained the common approach when researching complex devices that are not physically accessible. In those terms, this research is not much different from previous research: in most cases the analysis was performed by statically reverse engineering the firmware.

 
What about the results? 
 
Insecure and undocumented protocols, backdoors, hard-coded credentials…mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors.
 
Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities.
 
I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned.
The following white paper comprehensively explains all aspects of this research: IOActive_SATCOM_Security_WhitePaper
INSIGHTS | December 4, 2013

Practical and cheap cyberwar (cyber-warfare): Part II

Disclaimer: I did not perform any illegal attacks on the mentioned websites in order to get the information I present here. No vulnerability was exploited on the websites, and they are not known to be vulnerable.
 
Given that we live in an age of information leakage, where government surveillance and espionage abound, I decided in this second part to focus on a simple technique for gathering information on human targets. If an attacker is targeting a specific country, members of the military and defense contractors would make good human targets. When targeting members of the military, an attacker would probably focus on high-ranking officers; for defense contractors, an attacker would focus on key decision makers, C-level executives, etc. Why? Because compromising these people gives an attacker access to valuable information about their work, their personal lives, and their families. Data related to their work could help an attacker strategically by enabling them to build better defenses, steal intellectual property, and craft specific attacks. Data related to a target’s personal life could help attackers blackmail the target or attack the target’s family.
 
 
There is no need to work for the NSA and have a huge budget in order to get juicy information. Everything is just one click away; attackers only need to find ways to easily milk the web. One easy way to gather information about people is to get their email addresses, as I described last year here: http://blog.ioactive.com/2012/08/the-leaky-web-owning-your-favorite-ceos.html. Basically, you use a website’s registration form and/or forgotten-password functionality to find out whether an email address is already registered on that website. With a list of email addresses, attackers can easily enumerate the websites/services where people have accounts. Given the ever-increasing number of online accounts one person usually has, this can provide a lot of useful information. For instance, it makes phishing attacks and social engineering easier (see http://www.infosecurity-magazine.com/view/35048/hackers-target-mandiant-ceo-via-limo-service/). Also, if one of the sites where the target has an account is vulnerable, that website could be hacked in order to get the target’s account password. Due to password reuse, attackers can often compromise all of the target’s accounts.
 
 
This is intended to be easy and practical, so let’s get hands on. I did this research about a year ago. First, I looked for US Army email addresses. After some Google.com searches, I ended up with some PDF files with a lot of information about military people and defense contractors:

I extracted some emails and made a list. I ended up with:
 
1784 total email addresses: military (active and retired), civilians, and defense contractors.
 
I could have gotten more email addresses, but that was enough for testing purposes. I wasn’t planning on doing a real attack.
 
I had a very simple Python tool, about 50 LoC (thanks to my colleague Ariel Sanchez for quickly building the original tool!), that can be fed a list of websites and email addresses. I had already built the website database with 20 or so common, well-known websites (I should have included military-related ones too for better results, but it still worked well). I loaded the list of email addresses I had found and ran the tool. A few minutes later I ended up with a long list of email addresses and the websites where those addresses were used (meaning where the owners of those email addresses have accounts):
 
Site                         Accounts          %
Facebook                          308   17.26457
Google                            229   12.83632
Orbitz                            182   10.20179
WashingtonPost                    149    8.352018
Twitter                           108    6.053812
Plaxo                              93    5.213004
LinkedIn                           65    3.643498
Garmin                             45    2.522422
MySpace                            44    2.466368
Dropbox                            44    2.466368
NYTimes                            36    2.017937
NikePlus                           23    1.289238
Skype                              16    0.896861
Hulu                               13    0.728700
Economist                          11    0.616592
Sony Entertainment Network          9    0.504484
Ask                                 3    0.168161
Gartner                             3    0.168161
Travelers                           2    0.112108
Naymz                               2    0.112108
Posterous                           1    0.056054
 
Interesting to find accounts on the Sony Entertainment Network website; who says the military can’t play PlayStation 🙂
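
For reference, a minimal sketch of that kind of checker (this is not the original tool; the site entry and its "no such account" marker below are placeholders you would fill in per site):

import requests

SITES = {
    # name: (password-reset URL, email form field, "unknown account" marker)
    "example-site": ("https://example.com/password/reset", "email", "no account found"),
}

def sites_with_account(email):
    found = []
    for name, (url, field, miss_marker) in SITES.items():
        resp = requests.post(url, data={field: email}, timeout=10)
        if miss_marker not in resp.text.lower():   # no "unknown account" notice
            found.append(name)
    return found

with open("emails.txt") as fh:
    for line in fh:
        email = line.strip()
        if email:
            print(email, sites_with_account(email))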
 
I decided that I should focus on something juicier, not just random .mil email addresses. So, I did a little research about high ranking US Army officers, and as usual, Google and Wikipedia ended up being very useful.
 
Let’s start with US Army Generals. Since this was done in 2012, some of them could be retired now.

 
I found some retired ones that now may be working for defense contractors and trusted advisors:

 
Active US Army Generals seem not to use their .mil email addresses on common websites; however, we can see a pattern: almost all of them use orbitz.com. For the retired ones, since we got personal (not .mil) email addresses, we can see they use them on many websites.

After US Army Generals, I looked at Lieutenant Generals (we could say future Generals):

Maybe because they are younger, they seem to use their .mil email addresses on several common websites, including Facebook.com. What’s more, most of their Facebook information is available to the public! I was thinking about publishing the related Facebook information, but I will leave it up to you to explore their Facebook profiles.
 
I also looked for US Army Major Generals and found at least 15 of them:
 
Robert Abrams (robert.abrams@us.army.mil)
Found accounts on: orbitz.com, washingtonpost.com

James Boozer (james.boozer@us.army.mil)
Found accounts on: orbitz.com, facebook.com

Vincent Brooks (vincent.brooks@us.army.mil)
Found accounts on: facebook.com, linkedin.com

James Eggleton (james.eggleton@us.army.mil)
Found accounts on: plaxo.com

Reuben Jones (reuben.jones@us.army.mil)
Found accounts on: plaxo.com, washingtonpost.com

David Quantock (david-quantock@us.army.mil)
Found accounts on: twitter.com, orbitz.com, plaxo.com

Dave Halverson (dave.halverson@conus.army.mil)
Found accounts on: linkedin.com

Jo Bourque (jo.bourque@us.army.mil)
Found accounts on: washingtonpost.com

Kev Leonard (kev-leonard@us.army.mil)
Found accounts on: facebook.com

James Rogers (james.rogers@us.army.mil)
Found accounts on: plaxo.com

William Crosby (william.crosby@us.army.mil)
Found accounts on: linkedin.com

Anthony Cucolo (anthony.cucolo@us.army.mil)
Found accounts on: twitter.com, orbitz.com, skype.com, plaxo.com, washingtonpost.com, linkedin.com

Genaro Dellarocco (genaro.dellarocco@msl.army.mil)
Found accounts on: linkedin.com

Stephen Lanza (stephen.lanza@us.army.mil)
Found accounts on: skype.com, plaxo.com, nytimes.com

Kurt Stein (kurt-stein@us.army.mil)
Found accounts on: orbitz.com, skype.com
 
Later I found about 7 US Army Brigadier General and 120 US Army Colonel email addresses, but I didn’t have time to properly filter the results. These email addresses were associated with many website accounts.
 
Basically, the 1784 total email addresses included a broad list of ranking officers from the US Army.
 
Doing a quick analysis of the gathered information we could infer: 
  • Many have Facebook accounts exposing to the public the family and friend relations that could be targeted by attackers.
  • Most of them read and are probably subscribed to The Washington Post (makes sense, no?). This could be an interesting avenue for attacks such as phishing and watering-hole attacks.
  • Many of them use orbitz.com, probably for car rentals. Hacking this site could give attackers a lot of information about how they move, when they travel, etc.
  • Many of them have accounts on google.com, probably meaning they have Android devices (smartphones, tablets, etc.). This could allow attackers to compromise the devices remotely (by email, for instance) with known or 0-day exploits, since these devices are not usually patched and not very secure.
  • And last but not least, many of them, including Generals, use garmin.com or nikeplus.com. Those websites are related to GPS devices, including running watches, and allow users to upload GPS information, making them very valuable to attackers for tracking purposes. Attackers could learn what area a person usually runs in, where they travel, etc.

As we can see, it’s very cheap and easy to get information about ranking members of the US Army. People serving in the US Army should take extra precautions. They should not make their email addresses public and should only use them for “business” related issues, not personal activities.
INSIGHTS | November 11, 2013

Practical and cheap cyberwar (cyber-warfare): Part I

Every day we hear about a new vulnerability or a new attack technique, but most of the time it’s difficult to imagine the real impact. The current emphasis on cyberwar (cyber-warfare if you prefer) leads to myths and nonsense being discussed. I wanted to show real-life examples of large-scale attacks with big impacts on critical infrastructure, people, companies, etc.
 

The idea of this post is to raise awareness. I want to show how vulnerable some industrial, oil, and gas installations currently are and how easy it is to attack them. Another goal is to pressure vendors to produce more secure devices and to speed up the patching process once vulnerabilities are reported.


The attack in this post is based on research done by my fellow pirates, Lucas Apa and Carlos Penagos. They found critical vulnerabilities in wireless industrial control devices. This research was first presented at BH USA 2013. You can find the full presentation here: https://www.blackhat.com/us-13/archives.html#Apa
 
A common information leak occurs when vendors highlight how they helped Company X with their services or products. This information is very useful for supply chain attacks. If you are targeting Company X, it’s good to look at their service and product providers. It’s also useful to know what software/devices/technology they use.

In this case, one of the vendors that sells the vulnerable wireless industrial control devices was happy to announce in a press release that Company X has acquired its wireless sensors and is using them in the Haynesville Shale fields. So, as an attacker, we now know that Company X is using vulnerable wireless sensors at the Haynesville Shale fields. Haynesville Shale fields, what’s that? Interesting; a quick Google search turns up:
 
How does Google know about shale well locations? It’s simple: publicly available information. You can display wells by name, organization, etc.:
 
Even interactive maps are available:
 
You can find all of Company X’s wells along with their exact location (geographical coordinates). You know almost exactly where the vulnerable wireless sensors are installed.
 
Since the wells are in remote locations, exploiting the wireless sensor vulnerabilities becomes an interesting challenge. Enter drones: unmanned aerial vehicles (UAVs). Commercially available drones range from a couple hundred dollars to tens of thousands of dollars, depending on range, endurance, functionality, etc. You can even build your own and save some money. The idea is to put the attack payload on a drone, send it to the wells’ location, and launch the attack. This isn’t difficult to do, since drones can be programmed to fly to x,y coordinates and execute the payload while flying around the target coordinates (no need to return).
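
To illustrate how little programming that requires, here is a waypoint hop using the DroneKit-Python API (a real but later-published API, used purely as an illustration; the connection string and coordinates are placeholders):

from dronekit import connect, VehicleMode, LocationGlobalRelative

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)   # placeholder link
vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
vehicle.simple_takeoff(30)                                  # climb to 30 m
target = LocationGlobalRelative(32.3513, -93.7502, 30)      # placeholder lat/lon
vehicle.simple_goto(target)                                 # fly to the coordinates
# ...payload logic would run here while loitering near the target...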
 
Depending on your budget, you can launch an attack from a nearby location or very far away. Depending on the drone’s endurance, you can be X miles away from the target. You can extend the drone’s range depending on the radio and antenna used. 
 
The types of exploits you could launch from the drone range from bricking all of the wireless devices to causing physical harm to the shale gas installations. Manipulating device firmware or injecting fake data into radio packets could make the control systems believe things like temperature or pressure readings are wrong. Just bricking the devices could cost Company X significant money, since the devices would need to be reconfigured/reflashed. The exploits could interfere with shale gas extraction and even halt production. The consequences of an attack could be even more catastrophic depending on how the vulnerable devices are being used.
 
Attacks could be expanded to target more than just one vendor’s device. Drones could do reconnaissance first, scan and identify devices from different vendors, and then launch attacks targeting all of the specific devices.
 
In order to highlight the attack possibilities and impact, I extracted the following from http://www.onworld.com/news/newsoilandgas.html (the companies mentioned in this article are not necessarily vulnerable; this is just for illustrative purposes):
 
“…Pipelines & Corrosion Monitoring
Wireless flow, pressure, level, temperature and valve position monitoring are used to streamline pipeline operation and storage while increasing safety and regulatory compliance. In addition, wireless sensing solutions are targeted at the billions of dollars per year that is spent managing pipeline corrosion. While corrosion is a growing problem for the aging pipeline infrastructure it can also lead to leaks, emissions and even deadly explosions in production facilities and refineries….”
 
Leaks and deadly explosions can have sad consequences.
 
Cyber criminals, terrorists, state actors, etc. can launch big-impact attacks with relatively small budgets. Their attacks could produce economic losses, physical damage, and even explosions.
 
While isolated attacks have a small impact when put in the context of cyberwar, they can cause panic in populations, political crises, or geopolitical problems if combined with other, larger-impact attacks.
In a future post I will probably describe more of these kinds of large-scale, big-impact attacks.
INSIGHTS | October 17, 2013

Strike Two for the Emergency Alerting System and Vendor Openness

Back in July I posted a rant about my experiences reporting the DASDEC issues and the problems I had getting things fixed. Some months have passed and I thought it would be a good time to take a look at how the vulnerable systems have progressed since then.

Well, back then my biggest complaint was the lack of forthrightness in Monroe Electronics’ public reporting of the issues; they were treated as a marketing problem rather than a security one. The end result (at the time) was that there were more vulnerable systems available on the internet – not fewer – even though many of the deployed appliances had adopted the 2.0-2 patch.

What I didn’t know at the time was that the 2.0-2 patch wasn’t as effective as one would have hoped; in most cases bad and predictable credentials were left in place intentionally – as in I was informed that Monroe Electronics were “intentionally not removing the exposed key(s) out of concern for breaking things.”

In addition to not removing the exposed keys, it didn’t appear that anyone even tried to review or audit any other aspect of the DASDEC’s security before pushing the update out. If someone told you that you had a shared SSH key for root, you might say… check that the root password isn’t the same for every box too, right? Yeah… you’d think so, wouldn’t you!

After discovering that most of the “patched” servers running 2.0-2 were still vulnerable to the exposed SSH key, I decided to dig deeper into the newly issued security patch and discovered another series of flaws, which exposed more credentials (allowing unauthenticated alerts) along with a mixed bag of predictable and hardcoded keys and passwords. Oh, and there are web-accessible backups containing credentials.

Even new features introduced to the 2.0-2 version since I first looked at the technology appeared to contain a new batch of hardcoded (in their configuration) credentials.

Upon our last contact with CERT we were informed that ‘[t]hese findings are entering the realms of “not terribly serious” and “not something the vendor can practically do much about.”‘

Go team cyber-security!

So… on one hand we’ve had one zombie alert and a good handful of responsibly disclosed issues which began back in January 2013… on the other hand I’m not sure anything changed except for a few default passwords and some version numbers.

Let’s not forget that the EAS is a critical national infrastructure component designed to save lives in an emergency. Ten months on and the entire system appears more vulnerable than when we began pointing out the vulnerabilities.
INSIGHTS | July 11, 2013

Why Vendor Openness Still Matters

When the zombies began rising from their graves in Montana, it had already been over 30 days since IOActive had reported issues with Monroe Electronics’ DASDEC devices.
 
And while it turned out in the end that the actual attacks that caused the false EAS messages to be transmitted relied on the default password never having been changed, this would have been the ideal point to publicize that there was a known issue and that a firmware update was available, or soon would be, to address this and other problems… maybe a mitigation or two in the meantime, right?

At a minimum it would have been an ideal time to provide the simple mitigation:
“rm ~/.ssh/authorized_keys”
 
Unfortunately this never happened, leading to a phenomenon I like to call “admin droop”. This is where an administrator, after discovering the details of a published vulnerability, determines that he’s not vulnerable because he doesn’t run the default configuration, sees that everything is working, and doesn’t bother to upgrade to the next version.
 
… it happens…
 
In the absence of reliable information, other outlets such as the FCC provided pretty minimal advice about changing passwords and using firewalls; I simply commented to the media that this was “inadequate” advice.
 
Eventually, somewhere around April 24, Monroe, with the assistance of CERT, began contacting customers about a firmware fix; we provided a Shodan XML file with a few hundred vulnerable hosts to help track them down. Finally it looked like things were getting fixed, but I was somewhat upset that I still had not seen official acknowledgement of the issue from Monroe. On June 13 Monroe finally published this security advisory: /wp-content/uploads/2013/07/130604-Monroe-Security-PR.pdf, which stated “Removal of default SSH keys and a simplified user option to load new SSH keys”. OK, it’s not much of an “announcement”, but it’s something. And I know it says April 24, but both the filename and metadata (pdfinfo) point to, cough, later origins…
 
Inside the advisory is this wonderful sentence: “The company notes that most of its users already have obtained this update.” That sounds like something worth testing!
 
Then it happened, before I could say “admin droop”… 
 
Found/vulnerable hosts before we reported the issue: 222
Found hosts after the patch was released, per a survey on July 11: 412
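
For what it’s worth, a survey like this can be tallied in a few lines with the Shodan API (sketch only; the query string and version regex are placeholders, and the actual survey worked from an exported Shodan XML file):

import re
from collections import Counter
import shodan

api = shodan.Shodan("YOUR_API_KEY")
versions = Counter()
for match in api.search("DASDEC")["matches"]:             # placeholder query
    found = re.search(r"(\d\.\d-\d\w*)", match["data"])   # e.g. "2.0-2"
    versions[found.group(1) if found else "unknown"] += 1

for version, host_count in versions.most_common():
    print(version, host_count)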
 
Version            Hosts found   Vulnerable (SSH key)
1.8-5a                       1   Yes
1.8-6                        2   Yes
2.0-0                      110   Yes
2.0-0_a01                    1   Yes
2.0-1                       68   Yes
2.0-2 (patched)             50   No
unreachable                180

While most users may have “obtained” the update, someone certainly didn’t bother applying it…
 
Yep… it’s worse now than it was before we reported the issue in January, and everyone thinks everything is just fine. While I’m sure this would still be a problem had a proper security notice been issued.
 
I can’t say it any better than these awesome folks over at Google http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html : “Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds.”