RESEARCH | January 11, 2018

SCADA and Mobile Security in the IoT Era

Two years ago, we assessed 20 mobile applications that worked with ICS software and hardware. At that time, mobile technologies were widespread, but Internet of Things (IoT) mania was only starting. Our research concluded that the combination of SCADA systems and mobile applications had the potential to be a very dangerous and vulnerable cocktail. In the introduction to our paper, we stated, “convenience often wins over security. Nowadays, you can monitor (or even control!) your ICS from a brand-new Android [device].”


Today, no one is surprised by the appearance of the Industrial Internet of Things (IIoT). The idea of putting your logging, monitoring, and even supervisory/control functions in the cloud does not sound as crazy as it did several years ago. If you look at mobile application offerings today, many more ICS-related applications are available than two years ago. Previously, we predicted that the “rapidly growing mobile development environment” would redeem the past sins of SCADA systems.
The purpose of our research is to understand how the landscape has evolved and assess the security posture of SCADA systems and mobile applications in this new IIoT era.

SCADA and Mobile Applications
ICS infrastructures are heterogeneous by nature. They include several layers, each of which is dedicated to specific tasks. Figure 1 illustrates a typical ICS structure.

Figure 1: Modern ICS infrastructure including mobile apps

Mobile applications reside in several ICS segments and can be grouped into two general families: Local (control room) and Remote.


Local Applications

Local applications are installed on devices that connect directly to ICS devices in the field or process layers (over Wi-Fi, Bluetooth, or serial).

Remote Applications
Remote applications allow engineers to connect to ICS servers using remote channels, like the Internet, VPN-over-Internet, and private cell networks. Typically, they only allow monitoring of the industrial process; however, several applications allow the user to control/supervise the process. Applications of this type include remote SCADA clients, MES clients, and remote alert applications. 

Unlike local (control room) applications, which usually operate in an isolated environment, remote applications are often installed on smartphones with Internet connections, or even on personal devices in organizations that have a BYOD policy. In other words, remote applications are more exposed and face different threats.

Typical Threats and Attacks

In this section, we discuss the typical threats to this heterogeneous landscape of applications and how attacks could be conducted. We also map the threats to the application types.
 
Threat Types
There are three main possible ICS threat types:
  • Unauthorized physical access to the device or “virtual” access to device data
  • Communication channel compromise (MiTM)
  • Application compromise

Table 1 summarizes the threat types.

Table 1: SCADA mobile client threat list
 
Attack Types
Based on the threats listed above, attacks targeting mobile SCADA applications can be sorted into two groups.
 
Directly/indirectly influencing an industrial process or industrial network infrastructure
This type of attack could be carried out by sending data that would be carried over to the field segment devices. Various methods could be used to achieve this, including bypassing ACL/permission checks, accessing credentials with the required privileges, or bypassing data validation.
 
Compromising a SCADA operator to unwillingly perform a harmful action on the system
The core idea is for the attacker to create circumstances in which a SCADA system operator could make incorrect decisions, trigger alarms, or otherwise bring the system to a halt.
 
Testing Approach
Similar to the research we conducted two years ago, our analysis and testing approach was based on the OWASP Mobile Top 10 2016. Each application was tested using the following steps:
  • Perform analysis and fill out the test checklist
  • Perform client and backend fuzzing
  • If needed, perform deep analysis with reverse engineering
We did not alter the fuzzing approach since the last iteration of this research. It was discussed in depth in our previous whitepaper, so its description is omitted for brevity.
We improved our test checklist for this assessment. It includes:
  • Application purpose, type, category, and basic information 
  • Permissions
  • Password protection
  • Application intents, exported providers, broadcast services, etc.
  • Native code
  • Code obfuscation
  • Presence of web-based components
  • Methods of authentication used to communicate with the backend
  • Correctness of operations with sessions, cookies, and tokens 
  • SSL/TLS connection configuration
  • XML parser configuration
  • Backend APIs
  • Sensitive data handling
  • HMI project data handling
  • Secure storage
  • Other issues
Reviewed Vendors
We analyzed 34 vendors in our research, randomly selecting SCADA application samples from the Google Play Store. We did, however, favor applications for which we were granted access to the backend hardware or software, so that a wider attack surface could be tested.
 
Additionally, we excluded applications whose most recent update was before June 2015, since they were likely the subject of our previous work. We only retested them if there had been an update during the subsequent two years.
 
Findings
We identified 147 security issues in the applications and their backends. We classified each issue according to the OWASP Mobile Top 10 risks and added one additional category for backend software bugs.
 
Table 4 presents the distribution of findings across categories. The “Number of Issues” column reports the number of issues belonging to each category, while the “% of Apps” column reports how many applications have at least one vulnerability belonging to each category.
Table 4: Vulnerability statistics

In our white paper, we provide an in-depth analysis of each category, along with examples of the most significant vulnerabilities we identified. Please download the white paper for a deeper analysis of the findings in each OWASP category.

Remediation and Best Practices
In addition to the well-known recommendations covering the OWASP Top 10 and OWASP Mobile Top 10 2016 risks, there are several actions that could be taken by developers of mobile SCADA clients to further protect their applications and systems.

In the following list, we gathered the most important items to consider when developing a mobile SCADA application:

  • Always keep in mind that your application is a gateway to your ICS systems. This should influence all of your design decisions, including how you handle the inputs you will accept from the application and, more generally, anything that you will accept and send to your ICS system.
  • Avoid all situations that could leave the SCADA operators in the dark or provide them with misleading information, from silent application crashes to full subversion of HMI projects.
  • Follow best practices. Consider covering the OWASP Top 10, OWASP Mobile Top 10 2016, and the 24 Deadly Sins of Software Security.
  • Do not forget to implement unit and functional tests for your application and the backend servers, to cover at a minimum the basic security features, such as authentication and authorization requirements.
  • Enforce password/PIN validation to protect against threats U1-3. In addition, avoid storing any credentials on the device using unsafe mechanisms (such as in cleartext) and leverage robust and safe storing mechanisms already provided by the Android platform.
  • Never store sensitive data on SD cards or similar partitions without ACLs; such storage media cannot protect your sensitive data.
  • Provide secrecy and integrity for all HMI project data. This can be achieved by using authenticated encryption and storing the encryption credentials in the secure Android storage, or by deriving the key securely, via a key derivation function (KDF), from the application password.
  • Encrypt all communication using strong protocols, such as TLS 1.2 with elliptic-curve key exchange and signatures and AEAD encryption schemes. Follow best practices, and keep updating your application as best practices evolve. Attacks always get better, and so should your application.
  • Catch and handle exceptions carefully. If an error cannot be recovered, ensure the application notifies the user and quits gracefully. When logging exceptions, ensure no sensitive information is leaked to log files.
  • If you are using Web Components in the application, think about preventing client-side injections (e.g., encrypt all communications, validate user input, etc.).
  • Limit the permissions your application requires to the strict minimum.
  • Implement obfuscation and anti-tampering protections in your application.
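To make the storage recommendations above concrete, here is a minimal sketch of deriving a key from the application password via a KDF and protecting HMI project data with an integrity tag. It is written in Python rather than Android code, and the function names are ours, not from any particular SDK; a production application should use full authenticated encryption (e.g., AES-GCM) together with the platform's secure storage, while this sketch shows only the KDF and integrity-check halves using standard-library primitives.

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # PBKDF2-HMAC-SHA256: derive a 32-byte key from the application password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def protect(project_data: bytes, key: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so tampering with stored HMI data is detectable.
    return project_data + hmac.new(key, project_data, hashlib.sha256).digest()

def verify(blob: bytes, key: bytes) -> bytes:
    data, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest()):
        raise ValueError("HMI project data failed integrity check")
    return data

salt = os.urandom(16)          # stored alongside the data; it need not be secret
key = derive_key("operator-passphrase", salt)
blob = protect(b"<hmi project xml>", key)
assert verify(blob, key) == b"<hmi project xml>"
```

The salt and iteration count are illustrative choices; the point is that the key never appears in the application binary and any modification of the stored project data is detected before it reaches the operator.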

Conclusions
Two years have passed since our previous research, and things have continued to evolve. Unfortunately, they have not evolved with robust security in mind, and the landscape is less secure than ever before. In 2015, we found a total of 50 issues in the 20 applications we analyzed; in 2017, we found a staggering 147 issues in the 34 applications we selected. This represents an average increase of roughly 1.8 vulnerabilities per application (from 2.5 to 4.3 issues per application).

We therefore conclude that the growth of IoT in the era of “everything is connected” has not led to improved security for mobile SCADA applications. According to our results, more than 20% of the discovered issues allow attackers to directly misinform operators and/or directly/indirectly influence the industrial process.

In 2015, we wrote:

SCADA and ICS come to the mobile world recently, but bring old approaches and weaknesses. Hopefully, due to the rapidly developing nature of mobile software, all these problems will soon be gone.

We now concede that we were too optimistic and acknowledge that our previous statement was wrong.

Over the past few years, the number of incidents involving SCADA systems has increased, and the systems have become more interesting to attackers every year. Furthermore, widespread implementation of the IoT/IIoT connects more and more mobile devices to ICS networks.

Thus, the industry should start to pay attention to the security posture of its SCADA mobile applications, before it is too late.

For the complete analysis, please download our white paper here.

Acknowledgments

Many thanks to Dmitriy Evdokimov, Gabriel Gonzalez, Pau Oliva, Alfredo Pironti, Ruben Santamarta, and Tao Sauvage for their help during our work on this research.
 
About Us
Alexander Bolshev
Alexander Bolshev is a Security Consultant for IOActive. He holds a Ph.D. in computer security and works as an assistant professor at Saint-Petersburg State Electrotechnical University. His research interests lie in distributed systems, as well as mobile, hardware, and industrial protocol security. He is the author of several white papers on topics of heuristic intrusion detection methods, Server Side Request Forgery attacks, OLAP systems, and ICS security. He is a frequent presenter at security conferences around the world, including Black Hat USA/EU/UK, ZeroNights, t2.fi, CONFIdence, and S4.
 
Ivan Yushkevich
Ivan is the information security auditor at Embedi (http://embedi.com). His main area of interest is source code analysis for applications ranging from simple websites to enterprise software. He has vast experience in banking systems and web application penetration testing.
 
IOActive
IOActive is a comprehensive, high-end information security services firm with a long and established pedigree in delivering elite security services to its customers. Our world-renowned consulting and research teams deliver a portfolio of specialist security services ranging from penetration testing and application code assessment through to semiconductor reverse engineering. Global 500 companies across every industry continue to trust IOActive with their most critical and sensitive security issues. Founded in 1998, IOActive is headquartered in Seattle, USA, with global operations through the Americas, EMEA, and Asia Pacific regions. Visit our website for more information. Read the IOActive Labs Research Blog. Follow IOActive on Twitter.
 
Embedi
Embedi’s expertise is backed by extensive experience in the security of embedded devices, with special emphasis on attack and exploit prevention. Years of research are the genesis of the software solutions created. Embedi has developed a wide range of security products for various types of embedded/smart devices used in different fields of life and industry, such as wearables, smart homes, retail environments, automotive, smart buildings, ICS, smart cities, and others. Embedi is headquartered in Berkeley, USA. Visit our website for more information and follow Embedi on Twitter.
INSIGHTS | June 4, 2013

Industrial Device Firmware Can Reveal FTP Treasures!

Security professionals are becoming more aware of backdoors, security bugs, hardcoded certificates, and similar flaws within ICS device firmware. I want to highlight another flaw that is common in the firmware of critical industrial devices: the remote access some vendors provide between their devices and FTP servers for troubleshooting or testing. In many cases, this remote access could allow an attacker to compromise the device itself, the company the device belongs to, or even the entire vendor organization.
I discovered this vulnerability while tracking connectivity test functions within the firmware of an industrial device gateway. During my research, I landed on a script containing FTP server information that is transmitted in the clear:
The script tests connectivity and includes two subtests. The first is a simple ping to the host “8.8.8.8”. The second downloads a test file from the vendor’s FTP server and uploads the results back to the vendor’s server. The key instructions use the wget and wput commands shown in the screenshot.
In this script, the wget command downloads a test file from the vendor’s FTP server. When the test is complete, the wput command inserts the serial number of the device into the file name before the file is uploaded back to the vendor.
This is probably done to ensure that file names identify unique devices and provide traceability.
This practice actually involves two vulnerabilities:
  • Using the device serial number as part of a file name on a relatively accessible FTP server.
  • Hard-coding the FTP server credentials within the firmware.
This combination could enable an attacker to connect to the FTP server and obtain the devices’ serial numbers.
The risk increases for the devices I am investigating, because their serial numbers are also used by the vendor to generate default admin passwords. This knowledge could allow an attacker to build a database of admin passwords for all of this vendor’s devices.
Another security issue comes from a different script that points to the same FTP server, this time involving anonymous access.
This vendor uses anonymous access to upload statistics gathered from each device, probably for debugging purposes. These instructions are part of the function illustrated below.
As in the first script, the name of the zipped file that is sent back to the FTP server includes the serial number.
The script even prompts the user to add the company name to the file name.
An attacker with this information can easily build a database of admin passwords linked to the company that owns the device.

 

The third script, which also involves anonymous open access to the FTP server, gathers the device configuration and uploads it to the server. The only difference between this and the previous script is the naming function, which uses the configs- prefix instead of the stats- prefix.
The anonymous account can only write files to the FTP server, which prevents anonymous users from downloading configuration files. However, the server is running an old version of the FTP service that is vulnerable to public exploits, which could allow full access to the server.
A quick review shows that many common information security best practice rules are being broken:
  1. Naming conventions disclose device serial numbers and company names. In addition, these serial numbers are used to generate unique admin passwords for each device.
  2. Credentials for the vendor’s FTP server are hard-coded within the device firmware. This would allow anyone who can reverse engineer the firmware to access sensitive customer information such as device serial numbers.
  3. Anonymous write access to the vendor’s FTP server is enabled. This server contains sensitive customer information, which can expose device configuration data to an attacker on the public Internet. The FTP server also contains sensitive information about the vendor.
  4. Sensitive and critical data such as industrial device configuration files are transferred in clear text.
  5. A server containing sensitive customer data and running an older version of the FTP service that is vulnerable to a number of known exploits is accessible from the Internet.
  6. Clear text protocols are used to transfer sensitive information over the Internet.
Based on this review we recommend some best practices to remediate the vulnerabilities described in this article:
  1. Use secure naming conventions that do not involve potentially sensitive information.
  2. Do not hard-code credentials into firmware (see the previous blog post by Ruben Santamarta).
  3. Do not transfer sensitive data using clear text protocols. Use encrypted protocols to protect data transfers.
  4. Do not transfer sensitive data unless it is encrypted. Use strong encryption to encrypt sensitive data before it is transferred.
  5. Do not expose sensitive customer information on public ftp servers that allow anonymous access.
  6. Enforce a strong patch policy. Servers and services must be patched and updated as needed to protect against known vulnerabilities.
INSIGHTS | May 23, 2013

Identify Backdoors in Firmware By Using Automatic String Analysis

The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) this Friday published an advisory about some backdoors I found in two programmable gateways from TURCK, a leading German manufacturer of industrial automation products.
Using hard-coded account credentials in industrial devices is a bad idea. I can understand the temptation among manufacturers to include a backdoor “support” mechanism in the firmware for a product such as this. This backdoor allows them to troubleshoot problems remotely with minimal inconvenience to the customer.

On the other hand, it is only a matter of time before somebody discovers these ‘secret’ backdoors. In many cases, it takes very little time.

The TURCK backdoor is similar to other backdoors that I discussed at Black Hat and in previous blog posts. The common denominator is this: you do not need physical access to an actual device to find its backdoor. All you have to do is download device firmware from the vendor’s website. Once you download the firmware, you can reverse engineer it and learn some interesting secrets.
For example, consider the TURCK firmware. The binary file contains a small header, which simplifies the reverse engineering process.

The value at offset 0x1c identifies the base address. Directly after the header come the ARM opcodes. This is all the information you need to load the firmware into the IDA disassembler and debugger.
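As a rough illustration, pulling the base address out of such a header takes only a few lines of Python. The little-endian, 32-bit layout and the sample bytes below are assumptions for the example, not TURCK's documented format:

```python
import struct

def read_base_address(firmware: bytes, offset: int = 0x1C) -> int:
    # Read a 32-bit value at the given header offset. The offset and
    # little-endian byte order are assumptions for this sketch.
    (base,) = struct.unpack_from("<I", firmware, offset)
    return base

# A fabricated header: 0x1c bytes of padding, then the base address.
header = b"\x00" * 0x1C + struct.pack("<I", 0x08000000)
print(hex(read_base_address(header)))  # 0x8000000
```

With the base address in hand, the image can be loaded at the correct location so that cross-references in the disassembly resolve properly.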

A brief review revealed that the device has its own FTP server, among other services. I also discovered several hard-coded credentials that are added when the system starts. Together, the FTP server and hard-coded credentials are a dangerous combination. An attacker with those credentials can completely compromise this device.
Find Hidden Credentials Fast
You can find hidden credentials such as these using manual analysis, but I have a method to discover them more quickly. It all started during a discussion with friends at the LaCon conference about these kinds of flaws. One friend (hello @p0f!) made an obvious but interesting comment: “You could just find these backdoors by running the ‘strings’ command on the firmware file, couldn’t you?”

This simple observation is correct. All of the backdoors that are already public (such as Schneider, Siemens, and TURCK) could have been identified by looking at the strings … if you know common IT/embedded syntax. You still have to verify that potential backdoors are valid. But you can definitely identify suspicious elements in the strings.
There is a drawback to this basic approach: firmware usually contains thousands of strings, and sifting through them can take a lot of time. It can be more time-consuming than simply loading the firmware in IDA, reconstructing symbols, and finding the ‘interesting’ functions.
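For reference, the extraction step itself is trivial; here is a minimal Python stand-in for the Unix strings command (the sample bytes are fabricated for the example):

```python
import re

def extract_strings(blob: bytes, min_len: int = 6) -> list[str]:
    # Mimic `strings`: find runs of printable ASCII of at least min_len chars.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

blob = b"\x00\x01GetSrvAddressById\x00\xff\x02Xi$11ksMu8!Lma\x00ab\x03"
print(extract_strings(blob))  # ['GetSrvAddressById', 'Xi$11ksMu8!Lma']
```

The hard part, as discussed next, is deciding which of the thousands of extracted strings deserve a closer look.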
So how can one collect intelligence from strings in less time? The answer is an analysis tool we created and use at IOActive called Stringfighter.
How Does Stringfighter Work?
Imagine dumping strings from a random piece of firmware. You end up with the following list of strings:
[Creating socket]
ERROR: %n
Initializing 
Decompressing kernel
GetSrvAddressById
NO_ADDRESS_FOUND
Xi$11ksMu8!Lma

 

From your security experience you notice that the last string seems different from the others—it looks like a password rather than a valid English word or a symbol name. I wanted to automate this kind of reasoning.
This is not code analysis. We only assume certain common patterns (such as sequential strings and symbol tables) usually found in this kind of binary. We also assume that compilers under certain circumstances put strings close to their references.
As in every decision process, the first step is to filter input based on selected features. In this case we want to discover ‘isolated’ strings. By ‘isolated’ I’m referring to ‘out-of-context’ elements that do not seem related to other elements.
A well-known metric, the Levenshtein distance, is frequently used to measure string similarity.
To apply this algorithm we create a window of n bytes that scans for ‘isolated’ strings. Inside that window, each string is checked against its ‘neighbors’ based on several metrics including, but not limited to, the Levenshtein distance.
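The distance itself is a textbook dynamic program. As a sketch (this is the standard algorithm, not Stringfighter's actual code):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Symbol-table neighbors score close; a password-like string does not.
print(levenshtein("GetSrvAddressById", "GetSrvNameById"))  # relatively small
print(levenshtein("GetSrvAddressById", "Xi$11ksMu8!Lma"))  # much larger
```

Within a window, a string whose distance to every neighbor stays large is a candidate ‘isolated’ string.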
However, this approach poses some problems. After removing blocks of ‘related strings,’ some potentially isolated strings are false positive results. For example, an ‘isolated’ string may be related to a distant ‘neighbor.’
We therefore need to identify additional blocks of ‘related strings’ (especially those belonging to symbol tables) formed between distant blocks that later become adjacent.
To address this issue we build a ‘first-letter’ map and iterate until we can no longer remove additional strings.
At this point we should have a number of strings that are not related. However, how can we decide whether the string is a password? I assume developers use secure passwords, so we have to find strings that do not look like natural words.
We can treat each string as a first-order Markov source and calculate its entropy rate.
We need a large dictionary of English (or any other language) words to build a transition matrix. We also need a black list to eliminate common patterns.
We also want to avoid the false positives produced by symbols. To do this we detect symbols heuristically and ‘normalize’ them by summing the entropy for each of its tokens rather than the symbol as a whole.
These features help us distinguish strings that look like natural language words or something more interesting … such as undocumented passwords.
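For a first-order Markov source with transition matrix P and stationary distribution mu, the entropy rate is H = -sum_i mu_i sum_j P_ij log2 P_ij. As a hedged sketch of the per-string scoring idea (this is our simplification, not Stringfighter's actual metric), the average surprisal of a string's letter transitions under a dictionary-trained bigram model already separates natural-looking words from random-looking candidates:

```python
import math
from collections import defaultdict

def train_bigrams(words):
    # Estimate letter-transition probabilities P[a][b] from a dictionary.
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip(w, w[1:]):
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: n / total for b, n in row.items()}
    return probs

def entropy_score(s, probs, floor=1e-6):
    # Average per-transition surprisal in bits; unseen bigrams get a
    # floor probability, so random-looking strings score high.
    bits = 0.0
    for a, b in zip(s, s[1:]):
        p = probs.get(a, {}).get(b, floor)
        bits += -math.log2(p)
    return bits / max(len(s) - 1, 1)

# A toy dictionary; a real run would train on a large word list.
words = ["address", "server", "initializing", "decompressing", "kernel", "socket"]
probs = train_bigrams(words)
print(entropy_score("addressserver", probs) < entropy_score("Xi$11ksMu8!Lma", probs))
```

A black list and the symbol-token normalization described above would then be layered on top of this raw score.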
The following image shows a Stringfighter analysis of the TURCK BL20 firmware. The first string in red is highlighted because the tool detected that it is interesting.
In this case it was an undocumented password, which we eventually confirmed by analyzing the firmware.
The following image shows a Stringfighter analysis of the Schneider NOE 771 firmware. It revealed several backdoor passwords (VxWorks hashes). Some of these backdoor passwords appear in red.

Stringfighter is still a proof-of-concept prototype. We are still testing the ideas behind this tool. But it has exceeded our initial expectations and has detected backdoors in several devices. The TURCK backdoor identified in the CERT advisory will not be the last one identified by IOActive Labs. See you in the next advisory!
INSIGHTS | January 30, 2013

Energy Security: Less Say, More Do

Due to recent attacks on many forms of energy management technology, ranging from supervisory control and data acquisition (SCADA) networks and automation hardware devices to smart meters and grid network management systems, companies in the energy industry are significantly increasing the amount they spend on security. However, I believe these organizations are still spending money in the wrong areas of security. Why? The illusion of security, driven by over-engineered and over-funded policy and control frameworks, and the mindset that energy security must be regulated before making a start, is preventing, not driving, real-world progress.

Sadly, I don’t see organizations in the oil and gas exploration, utility, and consumer energy management sectors taking more visible and proactive approaches to improving the security of their assets in 2013 any more than they did in 2012.
It’s only January, you protest. But let me ask you: on what areas are your security teams going to focus in 2013?
I’ve had the privilege in the past six months of travelling to Asia, the Middle East, Europe, and the U.S. to deliver projects, and I have seen a number of consistent shortcomings in security programs in almost every energy-related organization I have dealt with. Specialized security teams within IT departments are commonplace now, which is great. But these teams have been in place for some time, and even though as an industry we spend millions on security products every year, the number of security incidents is also increasing every year. I’m sure this trend will continue in 2013. It is clear to me (and this is a global issue in energy security) that the great majority of organizations do not know where or how to correctly spend their security budgets.
Information security teams focus heavily on compliance, policies, controls, and the paper perception of what good security looks like when in fact there is little or no evidence that this is the case. Energy organizations do very little testing to validate the effectiveness of their security controls, which leaves these companies exposed to attacks and wondering what they are doing wrong.

 

For example, automated malware has been mentioned many times in the press and is a persistent threat, but companies are living under the misapprehension that having endpoint solutions alone will protect them from this threat. Network architectures are still being poorly designed and communication channels are still operating in the clear, leaving critical infrastructure solutions exposed and vulnerable.
I do not mean to detract from technology vendors who are working hard to keep up with all the new malware challenges, and let’s face it, we would be lost without many of their solutions. But organizations that purchase these products need to “trust but verify” by requiring vendors and solution integrators to prove that the security solutions they are selling are in fact secure. The energy industry as a whole needs to focus on proving the existence of controls, not relying on documents and designs that say how a system should be secure. Policies may make you look good, but how many people read them? And if they did read them, would they follow them? How would you know? Could you place your hand on your heart and swear to the CEO, “I’m confident that our critical systems and data cannot be compromised”?

 

I say, “Less say, more do in 2013.” Energy companies globally need to stop waiting for regulations or for incidents to happen and must do more to secure their systems and supply. We know we have a problem in the industry, and it won’t go away while we wait for more documents that define how we should improve our security defenses. Make a start. The concepts aren’t new, and it’s better to invest money and effort in improved systems than to churn out more policies and paper controls and hope they make you more secure. And it is hope, because without evidence how can you really be sure the controls you design and plan are in place and effective?

 

Start by making improvements in the following areas and your overall security posture will also improve (a lot of this is old news, but sadly is not being done):

 

Recognize that compliance doesn’t guarantee security. You must validate it.
·         Use ISA99 for SCADA and ISO27001/2/5 for security risk management and controls.
·         Use compliance to drive budget conversations.
·         Don’t get lost in a policy framework. Instead focus on implementing, then validating.
·         Always validate paper security by testing internal and external controls!
Understand what you have and who might want to attack it.
·         Define critical assets and processes.
·         Create a list of who could affect these assets and how.
·         Create a layered security architecture to protect these assets.
·         Do this work in stages. Create value to the business incrementally.
·         Test the effectiveness of your plans!
Do the basics internally, including:
·         Authentication for logins and machine-to-machine communications.
·         Access control to ensure that permissions for new hires, job changers, and departing employees are managed appropriately.
·         Auditing to log significant events for critical systems.
·         Availability by ensuring redundancy and that the organization can recover from unplanned incidents.
·         Integrity by validating critical values and ensuring that accuracy is always upheld.
·         Confidentiality by securing or encrypting sensitive communications.
·         Education to make staff aware of good security behaviors. Take a Health & Safety approach.
Trust but verify when working with your suppliers:
·         Ask vendors to validate their security, not just tell you “it’s secure.”
·         Ask suppliers what their security posture is. Do they align to any security standards? When was the last time they performed a penetration test on client-related systems? Do they use a Security Development Lifecycle for their products?
·         Test their controls or ask them to provide evidence that they do this themselves!
Work with agencies who are there to assist you and make them part of your response strategy, such as:
·         Computer Emergency Readiness Team (CERT)
·         Centre for the Protection of National Infrastructure (CPNI)
·         North American Electric Reliability Corporation (NERC)
Trevor Niblock, Director, ICS and Smart Grid Services
INSIGHTS | January 25, 2013

S4x13 Conference

S4 is my favorite conference. This is mainly because it concentrates on industrial control systems security, which I am passionate about. I also enjoy the fact that the presentations cover mostly advanced topics and spend very little time covering novice topics.

Over the past four years, S4 has become more of a bits-and-bytes conference, with presentations that explain, for example, how to upload Trojaned firmware to industrial controllers and exposés that cover vulnerabilities (in the “insecure by design” and “ICS-CERT” sense of the word).

This year’s conference was packed with top talent from the ICS and SCADA worlds and offered a huge amount of technical information. I tended to follow the “red team” track, as these talks covered breaking into varying levels of control system networks.

Sergey Gordeychik gave a great talk on vulnerabilities in various ICS software applications, including the release of an “1825-day exploit” for WinCC, which Siemens did not patch for five years. (The vulnerability was finally patched in December 2012.)
 

Alexander Timorin and Dmitry Sklyarov released a new tool for recovering S7 passwords from a packet capture. Many industrial controllers rely on homebrew hashing algorithms and authentication mechanisms that simply fall apart under a few days of scrutiny, and their tool is being incorporated into John the Ripper. A trend in the ICS space is the incorporation of ICS-specific attacks into existing attack frameworks. This makes ICS hacking far more accessible to network security assessors, as well as to the “Bad Guys.” My guess is that this trend will continue in 2013.
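Why this class of tool works is easy to show. The actual S7 encoding is implemented in the released tool; the sketch below substitutes a hypothetical weak, unsalted hash to illustrate the offline dictionary attack against a hash sniffed from a capture.

```python
import hashlib

# Illustrative stand-in for a homebrew controller hash: fast and unsalted,
# so any hash captured off the wire can be attacked offline at full speed.
def weak_hash(password: bytes) -> str:
    return hashlib.md5(password).hexdigest()

def crack(captured_hash: str, wordlist):
    """Offline dictionary attack against a hash taken from a packet capture."""
    for candidate in wordlist:
        if weak_hash(candidate.encode()) == captured_hash:
            return candidate
    return None

captured = weak_hash(b"plc1234")  # pretend this came from a pcap
print(crack(captured, ["admin", "siemens", "plc1234"]))  # plc1234
```

Incorporating such routines into John the Ripper simply swaps this toy loop for a mature, GPU-capable cracking engine.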
Billy Rios and Terry McCorkle talked about medical controllers and showed a Philips Xper controller that they had purchased on eBay. The machine had not been wiped and, running Windows XP, contained hard-coded accounts as well as trivially exploitable buffer overflow vulnerabilities.
Arthur Gervais released a slew of Schneider Modicon PLC-related vulnerabilities. For example, Schneider’s service for updating Unity (the engineering software for Modicon PLCs) and other utilities downloaded software updates over plain HTTP. I was sad to miss his talk due to a conflict in the speaking schedule.
Luigi Auriemma and his colleague Donato Ferrante demonstrated their own binary patching system, which allows them to patch applications while they are executing. The technology shows a lot of promise. They are currently marketing it as a way to provide patches for unsupported software. I think that the true potential of their system is to patch industrial systems without shutting down the process. It may take a while for any vendor to adopt this technique, but it could be a powerful motivator to get end users to apply patches. Scheduling an outage window is the most-cited reason for not patching industrial systems, and ReVuln is showing that we can work around this limitation.

 

My favorite talk was one that was only semi-technical in nature and more defensive than offensive. It was about implementing an SDL and a fuzz-testing strategy at OSISoft. OSISoft’s PI server is the most frequently used data historian for industrial processes. Since C-level employees want to keep track of how their process is doing, historical data often must be exported from the control systems network to the corporate network in some fashion. In the best-case scenario, this is done by way of a PI server in the DMZ. In the worst-case scenario, a corporate system reaches into the control network to communicate with the PI server. Either way, the result is the same: PI is a likely target if an attacker wants to jump from the corporate network to the control network. It is terrific, and still all too rare, to see an ICS software company share its security experiences.

 

Digital Bond provides a nice “by the numbers” look at the conference.
If you are technically minded, internationally focused, and want to talk to actual ICS operators, S4 is a great place to start.
INSIGHTS | October 30, 2012

3S Software’s CoDeSys: Insecure by Design

My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.

Before the assignment, I had a fellow security nut give me some tips on this project to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.

The PLC runtime is pretty cool, from a hacker perspective. CoDeSys is an unusual ladder logic runtime for a number of reasons.

 

Different vendors have different strategies for executing ladder logic. Some run ladder logic on custom ASICs (or possibly interpreter/emulators) on their PLC processor, while others execute ladder logic as native code. For an introduction to reverse-engineering the interpreted code and ASIC code, check out FX’s talk on Decoding Stuxnet at C3. It really is amazing, and FX has a level of patience in disassembling code for an unknown CPU that I think is completely unique.

 

CoDeSys is interesting to me because it doesn’t work like the Siemens ladder logic. CoDeSys compiles your ladder logic as byte code for the processor on which the ladder logic is running. On our Wago system, it was an x86 processor, and the ladder logic was compiled x86 code. A CoDeSys ladder logic file is literally loaded into memory, and then execution of the runtime jumps into the ladder logic file. This is great because we can easily disassemble a ladder logic file, or better, build our own file that executes system calls.
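The load-and-jump behavior described above can be illustrated with a short sketch. This is not CoDeSys-specific and assumes Linux on x86-64: a blob of machine code is written into executable memory and called directly, with nothing validating what the bytes actually do.

```python
import ctypes
import mmap

# Stand-in for a compiled "ladder logic" blob: x86-64 machine code for
# `mov eax, 42; ret`. Nothing inspects what these bytes will do.
blob = b"\xb8\x2a\x00\x00\x00\xc3"

# Map a read/write/execute page and copy the blob in (Linux/x86-64 only).
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(blob)

# Jump straight into the blob, much as the runtime jumps into the file.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
logic = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(logic())  # prints 42 on Linux/x86-64
```

Swap the benign stub for shellcode and you have the attack: whoever can write the file controls what the processor executes, at whatever privilege the runtime holds.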

 

I talked about this oddity at AppSec DC in April 2012. All CoDeSys installations seem to fall into three categories: the runtime executes on top of an embedded OS that lacks code privilege separation; the runtime executes on Linux with a uid of 0; or the runtime executes on top of Windows CE in single-user mode. All three are bad for the same reasons.

 

All three mean of course that an unauthenticated user can upload an executable file to the PLC, and it will be executed with no protection. On Windows and Linux hosts, it is worse because the APIs to commit Evil are well understood.

 

I had said back in April that CoDeSys is in an amazing and unique position to help secure our critical infrastructure. Their product is used in thousands of product lines made by hundreds of vendors. Their implementation of secure ladder logic transfer and an encrypted and digitally signed control protocol would secure a huge chunk of critical infrastructure in one pass.

 

3S has published an advisory on setting passwords for CoDeSys as the solution to the ladder logic upload problem. Unfortunately, the password is useless unless the vendors (the PLC manufacturers who run CoDeSys on their PLCs) make extensive modifications to the runtime source code.

 

Setting a password on CoDeSys protects code segments. In theory, this can prevent a user from uploading a new ladder logic program without knowing the password. Unfortunately, the shell protocol used by 3S has a command called delpwd, which deletes the password and does not require authentication. Further, even if that little problem was fixed, we still get arbitrary file upload and download with the privileges of the process (remember the note about administrator/root?). So as a bad guy, I could just upload a new binary, upload a new copy of the crontab file, and wait patiently for the process to execute.

 

The solution that I would like to see in future CoDeSys releases would include a requirement for authentication prior to file upload, a patch for the directory traversal vulnerability, and added cryptographic security to the protocol to prevent man-in-the-middle and replay attacks.
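The cryptographic part of that wish list is straightforward in principle. The sketch below (my own illustration, not anything 3S ships) shows one common construction: every frame carries a monotonic sequence number and an HMAC, so both tampered and replayed frames are rejected. Key provisioning is out of scope; the shared key is a placeholder.

```python
import hashlib
import hmac
import struct

# Placeholder: in practice this key comes from secure provisioning.
KEY = b"pre-shared-key-from-secure-provisioning"

def sign(seq: int, payload: bytes) -> bytes:
    """Frame = 8-byte big-endian sequence number + payload + HMAC-SHA256."""
    header = struct.pack(">Q", seq)
    mac = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + mac

class Receiver:
    def __init__(self):
        self.last_seq = -1

    def verify(self, frame: bytes) -> bytes:
        header, payload, mac = frame[:8], frame[8:-32], frame[-32:]
        expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("tampered frame")        # defeats MITM edits
        (seq,) = struct.unpack(">Q", header)
        if seq <= self.last_seq:
            raise ValueError("replayed frame")        # defeats replay
        self.last_seq = seq
        return payload

rx = Receiver()
print(rx.verify(sign(1, b"download ladder.bin")))  # accepted
try:
    rx.verify(sign(1, b"download ladder.bin"))     # same seq: rejected
except ValueError as e:
    print(e)  # replayed frame
```

An unauthenticated `delpwd`-style command is impossible under such a scheme: without the key, an attacker can neither forge a valid frame nor usefully replay an old one.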
INSIGHTS | June 28, 2012

Thoughts on FIRST Conference 2012

I recently had the opportunity to attend the FIRST Conference in Malta and meet Computer Emergency Response Teams from around the world. Some of these teams and I have been working together to reduce the internet exposure of Industrial Control Systems, and I met new teams who are interested in the data I share. For those of you who do not work with CERTs, FIRST is the glue that holds together the international collaborative efforts of these teams: it makes trusted introductions and vets new teams and researchers (such as myself).

It was quite an honor to present a talk to this audience of 500 people from strong technical teams around the world. However, the purpose of this post is not my presentation, but rather to focus on all of the other great content that can be found in such forums. While it is impossible to mention all the presentations I saw in one blog post, I’d like to highlight a few.
A session from ENISA and RAND focused on the technical and legal barriers to international collaboration between National CERTs in Europe. I’m interested in this because during the process of sharing my research with various CERTs, I have come to understand they aren’t equal, they’re interested in different types of information, and they operate within different legal frameworks. For example, in some European countries an IP address is considered private information and will not be accepted in incident reports from other teams. Dr. Silvia Portesi and Neil Robinson covered a wealth of this material in their presentation and report, which can be found at the following location:
In the United Kingdom, this problem has been analyzed by Andrew Cormack, Chief Regulatory Advisor at Janet. If I recall correctly, our privacy model is far more usable in this respect, and Andrew explained it to me like this:
If an organization cannot handle private data to help protect privacy (which is part of its mission), then we are inhibiting the mission of the organization with our interpretation of the law.
This is relevant to any security researcher who works within incident response frameworks in Europe and who takes a global view of security problems.
Unfortunately, by attending this talk—which was directly relevant to my work—I had to miss a talk by Eldar Lillevik and Marie Moe of the NorCERT team. I had wanted to meet with them regarding some data I shared months ago while working in Norway. Luckily, I bumped into them later and they kindly shared the details I had missed; they also spent some of their valuable time helping me improve my own reporting capabilities for CERTs and correcting some of my misunderstandings. They are incredibly knowledgeable people, and I thank them for both their time and their patience with my questions.
Of course, I also met with the usual suspects in ICS/Smart Grid/SCADA security: ICS-CERT and Siemens. ICS-CERT was there to present on what has been an extraordinary year in ICS incident response. Of note, Siemens operates the only corporate incident response team in the ICS arena that’s devoted to their own products. We collectively shared information and renewed commitments to progress the ICS agenda in Incident Response by continuing international collaboration and research. I understand that GE-CIRT was there too, and apparently they presented on models of Incident Response.
Google Incident Response gave some excellent presentations on detecting and preventing data exfiltration, and network defense. This team impressed me greatly: they presented as technically-savvy, capable defenders who are actively pursuing new forensic techniques. They demonstrated clearly their operational maturity: no longer playing with “models,” they are committed to holistic operational security and aggressive defense.
Austrian CERT delivered a very good presentation on handling Critical Infrastructure Information Protection that focused on the Incident Response approach to critical infrastructure. This is a difficult area to work in because standard forensic approaches in some countries—such as seizing a server used in a crime—aren’t appropriate in control system environments. We met later to talk over dinner and I look forward to working with them again.
Finally, I performed a simple but important function of my own work: meeting people face-to-face and verifying their identities. This includes mutually signing crypto-keys, which allows us to find and identify other trusted researchers in case of an emergency. Now that SCADA security is a global problem, I believe it’s incredibly important (and useful) to have contacts around the world with whom IOActive already shares a secure channel.