WHITEPAPER | February 10, 2020

LoRaWAN Networks Susceptible to Hacking: Common Cyber Security Problems, How to Detect and Prevent Them

LoRaWAN is fast becoming the most popular wireless, low-power WAN protocol. It is used around the world for smart cities, industrial IoT, smart homes, etc., with millions of devices already connected.

The LoRaWAN protocol is advertised as having “built-in encryption” making it “secure by default.” As a result, users are blindly trusting LoRaWAN networks and not paying attention to cyber security; however, implementation issues and weaknesses can make these networks easy to hack.
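
To make concrete why a leaked, hardcoded, or vendor-default root key breaks that promise, here is a minimal sketch of the LoRaWAN 1.0.x OTAA session-key derivation, written against the pycryptodome library. The AppKey and nonce values are placeholders and the field byte order is simplified relative to the specification; the point is that anyone holding the AppKey can recompute both session keys from the join exchange.

```python
# Minimal sketch: LoRaWAN 1.0.x derives both session keys from the root AppKey
# with plain AES-128, so a compromised AppKey means compromised traffic.
from Crypto.Cipher import AES  # pycryptodome


def derive_session_keys(app_key: bytes, app_nonce: bytes, net_id: bytes, dev_nonce: bytes):
    """Return (NwkSKey, AppSKey) as derived during an OTAA join."""
    assert len(app_key) == 16 and len(app_nonce) == 3 and len(net_id) == 3 and len(dev_nonce) == 2
    aes = AES.new(app_key, AES.MODE_ECB)
    base = app_nonce + net_id + dev_nonce                      # values from the join exchange
    nwk_skey = aes.encrypt(bytes([0x01]) + base + bytes(7))    # zero-padded to 16 bytes
    app_skey = aes.encrypt(bytes([0x02]) + base + bytes(7))
    return nwk_skey, app_skey


# Placeholder AppKey and nonces; a vendor-default AppKey makes this trivial for an attacker.
nwk, app = derive_session_keys(bytes.fromhex("2B7E151628AED2A6ABF7158809CF4F3C"),
                               app_nonce=b"\x01\x02\x03", net_id=b"\x00\x00\x13",
                               dev_nonce=b"\xAB\xCD")
print(nwk.hex(), app.hex())
```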

Currently, cyber security vulnerabilities in LoRaWAN networks are not well known, and there are no existing tools for testing LoRaWAN networks or for detecting cyber attacks, which makes LoRaWAN deployments an easy target for attackers.

In this paper, we describe LoRaWAN network cyber security vulnerabilities and possible cyber attacks, and provide useful techniques for detecting them with the help of our open-source tools.

RESEARCH | April 17, 2014

A Wake-up Call for SATCOM Security

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted.
 
We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have an upper-hand.
 
Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:
  • Aerospace
  • Maritime
  • Military and governments
  • Emergency services
  • Industrial (oil rigs, gas, electricity)
  • Media
It is important to mention that certain international safety regulations, such as GMDSS for ships and ACARS for aircraft, rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysia Airlines MH370, Inmarsat engineers were able to determine the approximate position where the plane went down.
 
IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. 
 
Thus, in the last quarter of 2013 I decided to research a series of devices that, although widely deployed, had not received the attention they deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals.
 
In previous blog posts I’ve explained the common approach when researching complex devices that are not physically accessible. In that respect, this research was not much different from previous work: in most cases the analysis was performed by statically reverse engineering the firmware.

 
What about the results? 
 
Insecure and undocumented protocols, backdoors, hard-coded credentials…mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors.
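
To give a flavor of what static firmware analysis can surface, here is an illustrative sketch, not the tooling or methodology used in this research: a first pass often amounts to pulling printable strings out of an unpacked firmware image and flagging credential-like entries. The file name and search patterns below are placeholders.

```python
# Sketch: hunt for credential-looking strings in an extracted firmware image.
import re

CANDIDATE = re.compile(rb"(passw(or)?d|login|secret|admin|root)[=:]\S{0,40}", re.IGNORECASE)


def printable_strings(blob: bytes, min_len: int = 6):
    """Return runs of printable ASCII of at least min_len characters."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)


with open("terminal_firmware.bin", "rb") as f:   # hypothetical unpacked image
    data = f.read()

for s in printable_strings(data):
    if CANDIDATE.search(s):
        print(s.decode("ascii", errors="replace"))
```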
 
Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities.
 
I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned.
The following white paper comprehensively explains all aspects of this research: IOActive_SATCOM_Security_WhitePaper

INSIGHTS | February 17, 2014

INTERNET-of-THREATS

At IOActive Labs, I have the privilege of being part of a great team with some of the world’s best hackers. I also have access to really cool research on different technologies that uncovers security problems affecting widely used hardware and software. This gives me a solid understanding of the state of security for many different software and hardware devices, grounded in real-life experience rather than opinions based on theories.
 
Currently, the term Internet-of-Things (IoT) is becoming a buzzword used in the media, announcements from hardware device manufacturers, etc. Basically, it’s used to describe an Internet with everything connected to it. It describes what we are seeing nowadays, including:
  • Laptops, tablets, smartphones, set-top boxes, media-streaming devices, and data-storage devices
  • Watches, glasses, and clothes
  • Home appliances, home switches, home alarm systems, home cameras, and light bulbs
  • Industrial devices and industrial control systems
  • Cars, buses, trains, planes, and ships
  • Medical devices and health systems
  • Traffic sensors, seismic sensors, pollution sensors, and weather sensors
     …and more; you name it, and it is or soon will be connected to the Internet.
 
While the devices and systems connected to the Internet are different, they have something in common: most of them suffer from serious security vulnerabilities. This is not a guess. It is based on IOActive Labs’ security research into many of these types of devices currently being used worldwide. Sadly, we are seeing almost exactly the same vulnerabilities in these devices that have plagued software vendors over the last decade, vulnerabilities that the most important software vendors are trying hard to eradicate. It seems that many hardware companies are following really poor security practices when adding software to their products and connecting them to the Internet. What is worse, vendors sometimes don’t even respond to security vulnerability reports, or they downplay the threat and don’t fix the vulnerabilities. Many vendors don’t even know how to properly deal with the security vulnerabilities being reported.
 
Some of the common vulnerabilities IOActive Labs finds include:
  • Sensitive data sent over insecure channels
  • Improper use of encryption
    • No SSL certificate validation
    • Things like encryption keys and signing certificates easily available to anyone
  • Hardcoded credentials/backdoor accounts
  • Lack of authentication and/or authorization
  • Storage of sensitive data in clear text
  • Unauthenticated and/or unauthorized firmware updates
  • Lack of firmware integrity check during updates
  • Use of insecure, custom-made protocols
Vendors’ appetite for data also works against them and considerably increases the attack surface. For example, all collected data is sent to the "vendor cloud" and device commands are issued from the "vendor cloud", instead of letting users connect directly to and command their own devices. Hacking into the "vendor cloud" = thousands of devices compromised = lots of lost money.
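
One entry from the list above, missing SSL certificate validation, is worth a concrete sketch. The snippet below (the hostname is a placeholder) shows the difference between a client that validates the server certificate and one that disables validation, which is effectively what many embedded clients ship with.

```python
# Sketch: validating vs. ignoring TLS certificates.
import socket
import ssl


def connect_tls(host: str, port: int = 443, validate: bool = True) -> str:
    ctx = ssl.create_default_context()       # verifies the chain and the hostname
    if not validate:                          # what many IoT clients effectively do
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() or "unknown"


# With validate=True a man-in-the-middle presenting a bogus certificate raises an
# SSL verification error; with validate=False the connection silently succeeds.
print(connect_tls("example.com"))
```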
 
Why should we worry about all of this? Well, these devices affect our everyday life and will continue to do so more and more. We’ve only seen the tip of the iceberg when it comes to the attacks that people, companies, and governments face and how easily they can be performed. If the situation doesn’t change soon, it is just a matter of time before we witness attacks with tragic consequences.
 
If a headline like “+100K Digital Toilets from XYZ1.3 Inc. Found Sending Spam and Distributing Malware” doesn’t scare you because you think it’s funny and improbable, you could be wrong. We shouldn’t wait for headlines such as “Dozens of People Injured When Home Automation Devices Hacked” before we react.
 
Something must be done! From enforcing secure practices during product development to imposing high fines when products are hacked, action must be taken to prevent the loss of money and possibly even lives.
 
Companies should strongly consider:
    • Training developers on secure development
    • Implementing security development practices to improve software security
    • Training company staff on security best practices
    • Implementing a security patch development and distribution process
    • Performing product design/architecture security reviews
    • Performing source code security audits
    • Performing product penetration tests
    • Performing company network penetration tests
    • Staying up-to-date with new security threats
    • Creating a bug bounty program to reward reported vulnerabilities and clearly defining how vulnerabilities should be reported
    • Implementing a security incident/emergency response team
It is difficult to give advice to end users given that the best solution is often simply not to buy or use many products, because they are insecure by design. At this stage, it’s just a matter of being lucky and hoping that you won’t be hacked. Maybe opportunistic vendors will come up with some novel solution such as an IPS/anti* device that will protect all of your devices from attacks. Just pray that the protection device itself is not vulnerable.
 
Sometimes end users are forced to live with insecure devices since there is no way to turn them off or avoid using them. These include devices provided by TV cable companies, electricity and gas companies, public services companies, governments, etc. These companies and governments should take responsibility for deploying secure products.
 
This is not BS–in a couple of days we will be releasing some of the extensive research I mentioned and on which this blog post is based.
 
I intend for this post to be a wake-up call for everyone! I’m really concerned about the current situation. In the meantime, I will use the term INTERNET-of-THREATS (not Internet-of-Things). Maybe this new buzzword will make us more conscious of the situation. If it doesn’t, then at least I have tried.

INSIGHTS | November 11, 2013

Practical and cheap cyberwar (cyber-warfare): Part I

Every day we hear about a new vulnerability or a new attack technique, but most of the time it’s difficult to imagine the real impact. The current emphasis on cyberwar (cyber-warfare if you prefer) leads to myths and nonsense being discussed. I wanted to show real life examples of large scale attacks with big impacts on critical infrastructure, people, companies, etc.
 

The idea of this post is to raise awareness. I want to show how vulnerable some industrial, oil, and gas installations currently are and how easy it is to attack them. Another goal is to pressure vendors to produce more secure devices and to speed up the patching process once vulnerabilities are reported.


The attack in this post is based on research done by my fellow pirates, Lucas Apa and Carlos Penagos. They found critical vulnerabilities in wireless industrial control devices. This research was first presented at BH USA 2013. You can find their full presentation here: https://www.blackhat.com/us-13/archives.html#Apa
 
A common information leak occurs when vendors highlight how they helped Company X with their services or products. This information is very useful for supply chain attacks. If you are targeting Company X, it’s good to look at their service and product providers. It’s also useful to know what software/devices/technology they use.

 

In this case, one of the vendors that sells vulnerable wireless industrial control devices was happy to announce in a press release that Company X had acquired its wireless sensors and was using them in the Haynesville Shale fields. So, as an attacker, we now know that Company X is using vulnerable wireless sensors at the Haynesville Shale fields. Haynesville Shale fields, what’s that? A quick Google search tells you.
 
How does Google know about shale well locations? Simple: it’s publicly available information. You can display wells by name, organization, etc., and even interactive maps are available.
 
You can find all of Company X’s wells along with their exact locations (geographical coordinates). You know almost exactly where the vulnerable wireless sensors are installed.
 
Since the wells are in remote locations, exploiting the wireless sensor vulnerabilities becomes an interesting challenge. Enter drones: unmanned aerial vehicles (UAVs). Commercially available drones range from a couple hundred dollars to tens of thousands of dollars, depending on range, endurance, functionality, etc. You can even build your own and save some money. The idea is to put the attack payload in a drone, send it to the wells’ location, and launch the attack. This isn’t difficult to do, since drones can be programmed to fly to x,y coordinates and execute the payload while flying around the target coordinates (no need to return).
 
Depending on your budget, you can launch an attack from a nearby location or very far away. Depending on the drone’s endurance, you can be X miles away from the target. You can extend the drone’s range depending on the radio and antenna used. 
 
The types of exploits you could launch from the drone range from bricking all of the wireless devices to causing physical harm to the shale gas installations. Manipulating device firmware or injecting fake data into radio packets could make the control systems believe that values such as temperature or pressure are wrong. Just bricking the devices could cost Company X significant money, since the devices would need to be reconfigured or reflashed. The exploits could interfere with shale gas extraction and even halt production. The consequences of an attack could be even more catastrophic depending on how the vulnerable devices are being used.
 
Attacks could be expanded to target more than just one vendor’s device. Drones could do reconnaissance first, scan and identify devices from different vendors, and then launch attacks targeting all of the specific devices.
 
In order to highlight attack possibilities and impact consequences I extracted the following from http://www.onworld.com/news/newsoilandgas.html (the companies mentioned in this article are not necessarily vulnerable, this is just for illustrative purposes):
 
“…Pipelines & Corrosion Monitoring
Wireless flow, pressure, level, temperature and valve position monitoring are used to streamline pipeline operation and storage while increasing safety and regulatory compliance. In addition, wireless sensing solutions are targeted at the billions of dollars per year that is spent managing pipeline corrosion. While corrosion is a growing problem for the aging pipeline infrastructure it can also lead to leaks, emissions and even deadly explosions in production facilities and refineries….”
 
Leaks and deadly explosions can have sad consequences.
 
Cyber criminals, terrorists, state actors, etc. can launch high-impact attacks on relatively small budgets. Their attacks could produce economic losses, physical damage, and possibly even explosions.
 
While isolated attacks have a small impact when put in the context of cyberwar, they can cause panic in populations, political crises, or geopolitical problems if combined with other, larger-impact attacks.
In a future post I will probably describe more of these kinds of large-scale, high-impact attacks.

INSIGHTS | October 17, 2013

Strike Two for the Emergency Alerting System and Vendor Openness

Back in July I posted a rant about my experiences reporting the DASDEC issues and the problems I had getting things fixed. Some months have passed and I thought it would be a good time to take a look at how the vulnerable systems have progressed since then.

Well, back then my biggest complaint was the lack of forthrightness in Monroe Electronics’ public reporting of the issues; they were treated as a marketing problem rather than a security one. The end result (at the time) was that there were more vulnerable systems available on the internet – not fewer – even though many of the deployed appliances had adopted the 2.0-2 patch.

What I didn’t know at the time was that the 2.0-2 patch wasn’t as effective as one would have hoped; in most cases bad and predictable credentials were intentionally left in place. I was informed that Monroe Electronics was “intentionally not removing the exposed key(s) out of concern for breaking things.”

In addition to not removing the exposed keys, it didn’t appear that anyone had even tried to review or audit any other aspect of the DASDEC’s security before pushing the update out. If someone told you that you had a shared SSH key for root, you might say… check that the root password isn’t the same for every box too, right? Yeah… you’d think so, wouldn’t you!

After discovering that most of the “patched” servers running 2.0-2 were still vulnerable to the exposed SSH key, I decided to dig deeper into the newly issued security patch and discovered another series of flaws which exposed more credentials (allowing unauthenticated alerts), along with a mixed bag of predictable and hardcoded keys and passwords. Oh, and there are web-accessible backups containing credentials.

Even new features introduced to the 2.0-2 version since I first looked at the technology appeared to contain a new batch of hardcoded (in their configuration) credentials.
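
For operators of these appliances, one simple audit step is to check whether factory or previously disclosed credentials are still accepted after patching. Below is a minimal sketch using the paramiko library; the host and credential values are hypothetical placeholders, and it should only be run against systems you own or are authorized to test.

```python
# Sketch: test whether a device still accepts a known default password.
import paramiko


def accepts_credentials(host: str, username: str, password: str, port: int = 22) -> bool:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, port=port, username=username, password=password,
                       timeout=5, allow_agent=False, look_for_keys=False)
        return True                       # login succeeded: default creds still live
    except paramiko.AuthenticationException:
        return False                      # credentials rejected
    finally:
        client.close()


# Hypothetical host and factory credentials.
if accepts_credentials("appliance.example.local", "root", "factory-default-password"):
    print("Factory credentials still accepted; change them and rotate keys.")
```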

Upon our last contact with CERT we were informed that ‘[t]hese findings are entering the realms of “not terribly serious” and “not something the vendor can practically do much about.”‘

Go team cyber-security!

So… on one hand we’ve had one zombie alert and a good handful of responsibly disclosed issues dating back to January 2013… on the other hand, I’m not sure anything has changed except for a few default passwords and some version numbers.

Let’s not forget that the EAS is a critical national infrastructure component designed to save lives in an emergency. Ten months on and the entire system appears more vulnerable than when we began pointing out the vulnerabilities.
INSIGHTS | April 2, 2013

Spotting Fake Chips in the Supply Chain

In the information security world we tend to focus upon vulnerabilities that affect the application and network architecture layers of the enterprise and, every so often, some notable physical devices. Through various interrogatory methods we can typically uncover any vulnerabilities that may be present and, through discussion with the affected business units, derive a relative statement of risk to the business as a whole.

 

An area of business rarely dissected from an information security perspective however is the supply chain. For manufacturing companies and industrial suppliers, nothing is more critical to their continued business success than maintaining the integrity and reliability of their supply chain. For some industries – such as computer assembly or truck fabrication facilities – even the smallest hiccup in their just-in-time ordering system can result in entire assembly lines being gummed up and product not being rolled out the front door.

 

The traditional approach to identifying vulnerabilities within the supply chain is largely a paper-based audit process, sometimes top-and-tailed with a security assessment of the PC-based systems involved. Rarely (if ever) are the manufacturing machines and associated industrial control systems included in physical assessments or penetration tests, for fear of disrupting the just-in-time manufacturing line.

 

Outside the scope of information security assessments, and often beyond the capabilities of the automated quality assurance practices on an organization’s assembly line, lies the frailty of falling victim to a third-party supplier’s tainted supply chain.

 

For example, let’s look at a common microprocessor ordered through a tainted supply chain.

 

Dissecting a ST19XT34 Microprocessor

 

In early 2012 samples of the ST ST19XT34 were ordered from www.hkinventory.com.  The ST19XT34 is a secure microprocessor designed for very large volume and cost-effective secure portable applications (such as smartcards used within Chip&PIN technologies). The ST19X platform includes an internal Modular Arithmetic Processor (MAP) and DES accelerator – designed to speed up cryptographic calculations using Public Key Algorithms and Secret Key Algorithms.

 

The ST19XT34 chips that IOActive was charged to investigate were encapsulated within a standard SOIC package and were supposed to have 34 KB of EEPROM.

 

Upon visual analysis the devices appeared to be correct.  However, after decapsulation, it was clear that the parts provided were not what had been ordered.

 
 
 

In the above image we have a ‘fake’ ST19XT34 on the left with a sample of the genuine chip on the right.  It is almost impossible to tell the left device was altered unless you have a known original part.

 
 
 

After decapsulation of the various parts it was easy to immediately recognize the difference between the two SOIC parts.  The left ‘fake’ device was actually an ST ST19AF08, with the right being the genuine ST19XT34.

 
 

The ST19AF08 is a 600 nanometer 3 metal device (on left).  It contains an 8 KB EEPROM.

 

The ST19XT34 is a 350 nanometer 3 metal device (on right).  It contains a 34 KB EEPROM, making its die much larger than that of the older, smaller-capacity device.

 

Microprocessor Supply Chain Frailty

 

As the example above clearly shows, it is often very difficult to identify a tainted supply chain. While an x-ray in the above case could also have verified the integrity of the supplier if it had been part of the quality assurance process, it would not have detected more subtle modifications to the supplied microprocessors.

 

If it is so easy to taint the supply chain and introduce fraudulently marked microprocessors, how hard is it to insert less obvious – more insidious – changes to the chips? For example, what if a batch of ST19XT34 chips had been modified to weaken the DES/triple-DES capabilities of the chip, or perhaps the random number generator was rigged with a more predictable pseudo random algorithm – such that an organized crime unit or government entity could trivially decode communications or replay transactions?
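
To make the rigged-RNG scenario concrete, here is a toy sketch, not tied to any real chip, showing that if key material is drawn from a predictable PRNG with a small seed space, an attacker can recover the key simply by enumerating the seeds.

```python
# Toy model: a "secure" chip whose RNG was swapped for a 16-bit-seeded LCG.
import os


def lcg_bytes(seed: int, n: int) -> bytes:
    """Generate n bytes from a weak linear congruential generator."""
    out, state = bytearray(), seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % (2 ** 31)
        out.append(state & 0xFF)
    return bytes(out)


# The rigged device derives an 8-byte DES key from a 16-bit seed.
secret_seed = int.from_bytes(os.urandom(2), "big")
device_key = lcg_bytes(secret_seed, 8)

# The attacker brute-forces all 65,536 possible seeds and regenerates the key.
recovered_seed = next(s for s in range(2 ** 16) if lcg_bytes(s, 8) == device_key)
assert lcg_bytes(recovered_seed, 8) == device_key
print(f"recovered seed {recovered_seed}, key {device_key.hex()}")
```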

 

The frailty of today’s supply chain is a genuine concern for many. The capability of organized crime and foreign government entities to include backdoors, add malicious code, or subvert “secure” routines within fake or counterfeit microprocessors is not a science fiction story, but something that can occur today. The ability to inject these modified chips into the supply chain of any global manufacturer of goods is, quite frankly, trivial.

 

The cost of entry for organized criminals and government entities to undertake this kind of manipulation of the supply chain is high, but well within their financial capabilities – and, more importantly, they could reap great rewards from their investment. 

 

Identifying a tainted supply chain is not a simple task. It requires specialized equipment capable of dissecting microprocessors at the nanometer scale, fiddly extraction of microcode, and skilled security analysts to sift through the code looking for backdoors and deliberate weaknesses in the algorithms. 

 

It’s an expensive and time consuming proposition – but fast becoming  a critical component when it comes to assuring that today’s smartphones, Chip&PIN technologies and critical infrastructure control systems aren’t subject to organized subversion.

INSIGHTS | January 30, 2013

Energy Security: Less Say, More Do

Due to recent attacks on many forms of energy management technology, ranging from supervisory control and data acquisition (SCADA) networks and automation hardware devices to smart meters and grid network management systems, companies in the energy industry are significantly increasing the amount they spend on security. However, I believe these organizations are still spending money in the wrong areas of security. Why? Because the illusion of security, driven by over-engineered and over-funded policy and control frameworks and by the mindset that energy security must be regulated before making a start, is preventing, not driving, real-world progress.

Sadly, I don’t see organizations in the oil and gas exploration, utility, and consumer energy management sectors taking more visible and proactive approaches to improving the security of their assets in 2013 any more than they did in 2012.
It’s only January, you protest. But let me ask you: on what areas are your security teams going to focus in 2013?
I’ve had the privilege in the past six months of travelling to Asia, the Middle East, Europe and the U.S. to deliver projects and have seen a number of consistent shortcomings in security programs in almost every energy-related organization that I have dealt with. Specialized security teams within IT departments are commonplace now, which is great. But these teams have been in place for some time, and even though as an industry we spend millions on security products every year, the number of security incidents keeps increasing. I’m sure this trend will continue in 2013. It is clear to me (and this is a global issue in energy security) that the great majority of organizations do not know where or how to correctly spend their security budgets.
Information security teams focus heavily on compliance, policies, controls, and the paper perception of what good security looks like when in fact there is little or no evidence that this is the case. Energy organizations do very little testing to validate the effectiveness of their security controls, which leaves these companies exposed to attacks and wondering what they are doing wrong.

 

For example, automated malware has been mentioned many times in the press and is a persistent threat, but companies are living under the misapprehension that having endpoint solutions alone will protect them from this threat. Network architectures are still being poorly designed and communication channels are still operating in the clear, leaving critical infrastructure solutions exposed and vulnerable.
I do not mean to detract from technology vendors who are working hard to keep up with all the new malware challenges, and let’s face it, we would be lost without many of their solutions. But organizations that purchase these products need to “trust but verify” them by requiring vendors and solution integrators to prove that the security solutions they are selling are in fact secure. The energy industry as a whole needs to focus on proving the existence of controls and not rely on documents and designs that say how a system should be secure. Policies may make you look good, but how many people read them? And if they did read them, would they follow them? How would you know? And could you place your hand on your heart and swear to the CEO, “I’m confident that our critical systems and data cannot be compromised”?

 

I say, “Less say, more do in 2013.” Energy companies globally need to stop waiting for regulations or for incidents to happen and must do more to secure their systems and supply. We know we have a problem in the industry, and it won’t go away while we wait for more documents that define how we should improve our security defenses. Make a start. The concepts aren’t new, and it’s better to invest money and effort in improved systems than to churn out more policies and paper controls and hope they make you more secure. And it is hope, because without evidence how can you really be sure the controls you design and plan are in place and effective?

 

Start by making improvements in the following areas and your overall security posture will also improve (a lot of this is old news, but sadly is not being done):

 

Recognize that compliance doesn’t guarantee security. You must validate it.
·         Use ISA99 for SCADA and ISO27001/2/5 for security risk management and controls.
·         Use compliance to drive budget conversations.
·         Don’t get lost in a policy framework. Instead focus on implementing, then validating.
·         Always validate paper security by testing internal and external controls!
Understand what you have and who might want to attack it.
·         Define critical assets and processes.
·         Create a list of who could affect these assets and how.
·         Create a layered security architecture to protect these assets.
·         Do this work in stages. Create value to the business incrementally.
·         Test the effectiveness of your plans!
Do the basics internally, including:
·         Authentication for logins and machine-to-machine communications.
·         Access control to ensure that permissions for new hires, job changers, and departing employees are managed appropriately.
·         Auditing to log significant events for critical systems.
·         Availability by ensuring redundancy and that the organization can recover from unplanned incidents.
·         Integrity by validating critical values and ensuring that accuracy is always upheld.
·         Confidentiality by securing or encrypting sensitive communications.
·         Education to make staff aware of good security behaviors. Take a Health & Safety approach.
Trust but verify when working with your suppliers:
·         Ask vendors to validate their security, not just tell you “it’s secure.”
·         Ask suppliers what their security posture is. Do they align to any security standards? When was the last time they performed a penetration test on client-related systems? Do they use a Security Development Lifecycle for their products?
·         Test their controls or ask them to provide evidence that they do this themselves!
Work with agencies who are there to assist you and make them part of your response strategy, such as:
·         Computer Emergency Readiness Team (CERT)
·         Centre for the Protection of National Infrastructure (CPNI)
·         North American Electric Reliability Corporation (NERC)
Trevor Niblock, Director, ICS and Smart Grid Services
INSIGHTS | October 30, 2012

3S Software’s CoDeSys: Insecure by Design

My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.

Before the assignment, I had a fellow security nut give me some tips on this project to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.

The PLC runtime is pretty cool, from a hacker perspective. CoDeSys is an unusual ladder logic runtime for a number of reasons.

 

Different vendors have different strategies for executing ladder logic. Some run ladder logic on custom ASICs (or possibly interpreter/emulators) on their PLC processor, while others execute ladder logic as native code. For an introduction to reverse-engineering the interpreted code and ASIC code, check out FX’s talk on Decoding Stuxnet at C3. It really is amazing, and FX has a level of patience in disassembling code for an unknown CPU that I think is completely unique.

 

CoDeSys is interesting to me because it doesn’t work like the Siemens ladder logic. CoDeSys compiles your ladder logic into native code for the processor on which it will run. On our Wago system, that was an x86 processor, and the ladder logic was compiled x86 code. A CoDeSys ladder logic file is literally loaded into memory, and then execution of the runtime jumps into the ladder logic file. This is great because we can easily disassemble a ladder logic file, or better, build our own file that executes system calls.
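
Because the uploaded ladder logic is plain native code, a standard disassembler takes you a long way. Here is a minimal sketch using the Capstone engine, assuming an x86 target like the Wago system above; the file name and load address are placeholder assumptions.

```python
# Sketch: disassemble a dumped ladder-logic blob as 32-bit x86.
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

with open("ladder_logic.bin", "rb") as f:   # hypothetical dump of the uploaded file
    code = f.read()

md = Cs(CS_ARCH_X86, CS_MODE_32)
for insn in md.disasm(code, 0x10000):       # assumed load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```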

 

I talked about this oddity at AppSec DC in April 2012. All CoDeSys installations seem to fall into three categories: the runtime is executing on top of an embedded OS, which lacks code privilege separation; the runtime is executing on Linux with a uid of 0; or the runtime is executing on top of Windows CE in single user mode. All three are bad for the same reasons.

 

All three mean of course that an unauthenticated user can upload an executable file to the PLC, and it will be executed with no protection. On Windows and Linux hosts, it is worse because the APIs to commit Evil are well understood.

 

 I had said back in April that CoDeSys is in an amazing and unique position to help secure our critical infrastructure. Their product is used in thousands of product lines made by hundreds of vendors. Their implementation of secure ladder logic transfer and an encrypted and digitally signed control protocol would secure a huge chunk of critical infrastructure in one pass.

 

3S has published an advisory on setting passwords for CoDeSys as the solution to the ladder logic upload problem. Unfortunately, the password is useless unless the vendors (the PLC manufacturers who run CoDeSys on their PLC) make extensive modification to the runtime source code.

 

Setting a password on CoDeSys protects code segments. In theory, this can prevent a user from uploading a new ladder logic program without knowing the password. Unfortunately, the shell protocol used by 3S has a command called delpwd, which deletes the password and does not require authentication. Further, even if that little problem was fixed, we still get arbitrary file upload and download with the privileges of the process (remember the note about administrator/root?). So as a bad guy, I could just upload a new binary, upload a new copy of the crontab file, and wait patiently for the process to execute.
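
On the defensive side, a useful first step is simply knowing which hosts on your own network expose the runtime’s programming service at all, so they can be firewalled or isolated. A minimal sketch follows; TCP/2455 is the port commonly associated with the CoDeSys v2 runtime, but treat the port and the subnet as assumptions to adjust for your environment.

```python
# Sketch: inventory hosts on your own network listening on the PLC programming port.
import socket


def port_open(host: str, port: int = 2455, timeout: float = 0.5) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for last_octet in range(1, 255):            # placeholder subnet
    host = f"192.168.1.{last_octet}"
    if port_open(host):
        print(f"{host} exposes the PLC programming port")
```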

 

The solution that I would like to see in future CoDeSys releases would include a requirement for authentication prior to file upload, a patch for the directory traversal vulnerability, and added cryptographic security to the protocol to prevent man-in-the-middle and replay attacks.
INSIGHTS | October 24, 2012

The WECC / NERC Wash-up

Last week in San Diego, IOActive spoke at both the Western Electricity Coordinating Council (WECC) and NERC GridSec (GridSecCon) conferences. WECC is primarily an auditor audience and NERC-CIP is compliance-focused, while GridSecCon is the community and technical security authority for the electricity industry in the U.S. There was a great turnout for both conferences, with more than 200 attendees across three days per conference. IOActive security researcher Eireann Leverett presented “The Last Gasp of the Industrial Air-Gap…” at WECC and participated in a discussion panel on Industry Best Practice for Grid Security at GridSecCon.

 

WECC

 

An auditors’ forum… what can I say, other than that they do have a great sense of humor [they got Eireann’s Enron corporate failure joke], apparently enjoy a drink like everyone else, and definitely need our help and perspective when it comes to understanding the disparity between being compliant and being secure. Day 1 was a closed session for WECC members only, where they discuss god only knows what. Day 2 was where IOActive got involved, with the morning session focusing on Cyber Security and explaining in a little more detail the security challenges the industry faces, aligned to CIP audit and compliance programmes. Eireann, Jonathan Pollet and Tom Parker all gave great talks, which some could argue were a little too technical for the audience, but they were well received nonetheless. More work is definitely needed with forums such as WECC and the NERC standards council to ensure the gap between CIP compliance and the actual state of energy security is further reduced.

 

GridSecCon

 

The audience at GridSecCon was anything but a single area of expertise like we saw at the WECC conference. I engaged with folk from security, engineering, risk management, consulting, audit and compliance, and technology backgrounds. From that, it’s fair to say the Industrial Control System and Energy sector is a red-hot opportunity for new rules, new products, new ideas… and new failures! Eireann was in fine form again on the panel – which was made up of consulting, product vendor and Government representatives – throwing in a quote from Trotsky, a Russian Marxist revolutionary. I laughed, but I’m not sure the rest of the crowd appreciated the irony. European humor at work. I didn’t see as much as I would have liked on the supply chain side of things, including the term Security of Supply, which is widely used in Europe. More work is definitely needed in these areas and is something I will look at in 2013.

 

Day 1 saw some interesting talks on Social Engineering threats and a panel discussion on Malware threats to the sector. Awareness of the Social Engineering threat in the Energy sector is clearly on the rise. Positively, it seemed organizations were placing more emphasis [I said more, not enough] on educating SCADA Operations staff about Phishing and Telephony-based attacks. Tim Roxy from NERC oversaw the Malware panel as participants openly discussed the threat of new Malware targeting the Energy sector, in particular the effects of SQL Slammer on SCADA systems and a review of the recent attack on Saudi Aramco (Shamoon Malware). It was unclear whether the SCADA networks at Saudi Aramco were affected, but obviously there are similar challenges in store as SCADA and Corporate networks continue to converge. The incident also triggered an unprecedented response exercise involving reviews of up to 120 of Saudi Aramco’s plant sites across the Middle East region.

 
Day 2 was kicked off by an excellent keynote talk by Admiral Thad Allen [retired], US Coast Guard, on Incident Response and his view of the challenges national infrastructure security is facing in the US, which could easily be applied globally. Admiral Allen said complexity was undeniably the biggest challenge we face in securing existing and new national infrastructure. His talk gave examples from his experience in dealing with incidents such as Hurricane Katrina in New Orleans, in particular the importance of defining exactly what the problem is before even thinking about how to respond to it. Not correctly understanding the problem when coordinating a response can mean an expensive and ineffective solution, which is exactly where the Energy sector sits today – “stop admiring the problem, start working on the solution.”

Technical vs. Risk Management – the age-old conundrum

 

It still surprises me to see, after 15-odd years of our industry coming to the forefront and an estimated 50+ billion dollar spend on technical security measures, that the topic of technical vs. risk management still comes up at these conferences. If technical solutions were security nirvana, we wouldn’t be worried about anything today, would we? Of course we need both areas, and each is as important as the other. Sure, the technical stuff may seem more interesting, but if we can’t sell the importance of what the tech tells us in a business language, the overall security of the Energy industry will continue to struggle for traction. Likewise, the perceived notion that compliance with standards like CIP/ISO27000 etc. keeps us safe at night will continue to skew the real picture unless we can talk tech, risk and compliance at the same time.

 

What are the conferences missing?

 

Maybe I don’t attend enough conferences, and I understand client sensitivities in sharing this sort of information, but what these conferences need more of is a view from the field – what is actually going on below all the conversations about risk management, compliance and products. Again, stop admiring the problem, admit we have one by analyzing what’s actually going on in the field, and use this to inform programmes of work to solve the issues. We know what good looks like; only talking about it is as useful in the real world as a chocolate teapot…

 

Key Takeaways

 

Did I learn anything new? Of course I did; however, a lot of the core messages like “we need to talk tech and risk management” and “sector-wide information sharing” continue to be old wine in new bottles for me, especially while governments set strict rules on who they value information from [who they deem as appropriate] and how it can be shared [at their approval]. And it’s a little troubling if we have a whole industry sector, with its concerns around the security of national infrastructure, still trying to understand the importance of risk management or the gaps between compliance and actual security. Saying that, WECC and NERC are clearly making concerted efforts to move things in the right direction.

 

 Wicked Problems and Black Swans [Day 2 Key Note, Admiral Thad Allen]. Again, a great talk by Admiral Allen and some great perspective. A Wicked Problem: something we know is there but we don’t have an answer to [lack of Utilities investment in Grid security]. A Black Swan: an outcome so dire it doesn’t seem likely [Grid failure/compromise].

 

As I see it, we need to be more vocal in participating in the sector forums and share [in a generic fashion] what we are seeing in the field, which should further inform organizations like WECC and NERC with a view to continued security improvement across the sector.

 

 San Diego is hot, the Tex Mex food is great…and I’ll hopefully see you all at WECC & NERC in 2013!

 

INSIGHTS | October 11, 2012

SexyDefense Gets Real

As some of you know by now, the recent focus of my research has been defense. After years of dealing almost exclusively with offensive research, I realized that we have been doing an injustice to ourselves as professionals. After all, we eventually get to help organizations protect themselves (with the mindset that the best way to learn defense is to study offensive techniques), but nevertheless, when examining how organizations practice defense, one has the feeling that something is missing.
For far too long the practice (and art?) of defense has been entrusted to bureaucrats and reduced to a technical element that is a burden on the organization. We can see it in the way companies have positioned defensive roles: “firewall admin,” “IT security manager,” “incident handler,” and even the famous “CISO.” CISOs have been getting less and less responsibility over time, basically watered down to dealing with the network/software elements of the organization’s security. No process, no physical, no human/social. These are all handled by different roles in the company (audit, physical security, and HR, respectively).
This has led to the creation of the marketing term “APT”: Advanced Persistent Threat. The main reason why non-sophisticated attackers are able to deploy an APT is the fact that organizations are focusing on dealing with extremely narrow threat vectors; any threat that encompasses multiple attack vectors that affect different departments in an organization automatically escalates into an APT since it is “hard” to deal with such threats. I call bullshit on that.
As an industry, we have not really been supportive of the defensive front. We have been pushing out products that deal mainly with past threats and are focused on post-mortem detection of attacks. Anti-virus systems, firewalls, IDS, IPS, and DLP – these are all products that are really effective against attacks from yesteryears. We ignore a large chunk of the defense spectrum nowadays, and attackers are happily using this against us, the defenders.
When we started SexyDefense, the main goal was to open the eyes of defensive practitioners, from the hands-on people to executive management. The reason for this is that this syndrome needs to be fixed throughout the ranks. I already mentioned that the way we deal with security in terms of job titles is wrong. It’s also true for the way we approach it on Day 1. We make sure that we have all the products that industry best practices tell us to have (which are from the same vendors that have been pushing less-than-effective products for years), and then we wait for the alert telling us that we have been compromised for days or weeks.
What we should be doing is first understanding what we are protecting! How much is it worth to the organization? What kinds of processes, people, and technologies “touch” those assets, and how do they affect them? What kinds of controls are there to protect such assets? And ultimately, what are the vulnerabilities in the processes, people, and technologies related to said assets?
These are tough questions – especially if you are dealing with an “old school” practice of security in a large organization. Now try asking the harder question: who is your threat? No, don’t say hackers! Ask the business line owners, the business development people, sales, marketing, and finance. These are the people who probably know best what the threats to the business are, and who is out there to get it. Now align that information with the asset-related information, and you get a more complete picture of what you are protecting, and from whom. In addition, you can already see which controls are more or less effective against such threats, as it’s relatively easy to figure out the capabilities, intent, and accessibility of each adversary to your assets.
Now, get to work! But don’t open that firewall console or that IPS dashboard. “Work” means gathering intelligence on your threat communities, keeping track of organizational information and changes, and owning up to your home-field advantage. You control the information and resources used by the organization. Use them to your advantage to thwart threats, to detect intelligence gathering against your organization, to set traps for attackers, and yes, even to go the whole 9 yards and deal with counterintelligence. Whatever works within the confines of the law and ethics.
If this sounds logical to you, I invite you to read my whitepaper covering this approach [sexydefense.com] and participate in one of the SexyDefense talks at a conference close to you (or watch the one given at DerbyCon online: http://www.youtube.com/watch?v=djsdZOY1kLM).
If you have not yet run away, think about contributing to the community effort to build a framework for this, much like we did for penetration testing with PTES. Call it SDES for now: Strategic Defense Execution Standard. A lot of you have already been raising interest in it, and I’m really excited to see the community coming up with great ideas and initiatives after preaching this notion for a fairly short time.
Who knows what this will turn into?
