RESEARCH | March 9, 2018

Robots Want Bitcoins too!

Ransomware attacks have boomed over the last few years, becoming a preferred method for cybercriminals to profit by encrypting victims’ information and demanding a ransom for its return. The primary ransomware target has always been information. When victims have no backup of that information, they panic and are forced to pay to get it back.

EDITORIAL | January 31, 2018

Security Theater and the Watch Effect in Third-party Assessments

Before the facts were in, nearly every journalist and salesperson in infosec was thinking about how to squeeze lemonade from the Equifax breach. Let’s be honest – it was and is a big breach. There are lessons to be learned, but people seemed to have the answers before the facts were available.

It takes time to dissect these situations and early speculation is often wrong. Efforts at attribution and methods take months to understand. So, it’s important to not buy into the hysteria and, instead, seek to gain a clear vision of the actual lessons to be learned. Time and again, these supposed “watershed moments” and “wake-up calls” generate a lot of buzz, but often little long-term effective action to improve operational resilience against cyber threats.


At IOActive we guard against making on-the-spot assumptions. We consider and analyze the actual threats, ever mindful of the “Watch Effect.” The Watch Effect is simply explained: wear a watch long enough and you can’t even feel it.
I won’t go into what third-party assessments Equifax may or may not have had because that’s largely speculation. The company has probably been assessed many times, by many groups with extensive experience in the prevention of cyber threats and the implementation of active defense. And they still experienced a deeply damaging cyber incursion.

The industry-wide point here is: Everyone is asking everyone else for proof that they’re secure.

The assumption and Watch Effect come in at the point where company executives think their responses to high-level security questions actually mean something.

Well, sure, they do mean something. A questionnaire asks a company to perform a massive amount of tedious work, and if the questions come back filled in without gross errors, and without a “no” where there should have been a “yes,” that probably counts for something.

But the question is: how much do we really know about a company’s security from its responses to a security questionnaire?

The answer is, “not much.”

As a company that has been security testing for 20 years now, IOActive has successfully breached even the most advanced cyber defenses during penetration tests at countless companies – companies certified backwards and forwards by every group you can imagine. So, the question to ask is, “Do questionnaires help at all? And if so, how much?”
 
Here’s a way to think about that.

At IOActive we conduct full, top-down security reviews of companies that include business risk, crown-jewel defense, and every layer that these pieces touch. Because we know how attackers get in, we measure and test how effective the company is at detecting and responding to cyber events – and use this comprehensive approach to help companies understand how to improve their ability to prevent, detect, and ever so critically, RESPOND to intrusions. Part of that approach includes a series of interviews with everyone from the C-suite to the people watching logs. What we find is frightening.

We are often days or weeks into an assessment before we discover a thread to pull that uncovers a major risk, whether that thread comes from a technical assessment or a person-to-person interview or both.

That’s days—or weeks—of being onsite with full access to the company as an insider.

Here’s where the Watch Effect comes in. Many of the companies have no idea what we’re uncovering or how bad it is because of the Watch Effect. They’re giving us mostly standard answers about their day-to-day, the controls they have in place, etc. It’s not until we pull the thread and start probing technically – as an attacker – that they realize they’re wearing a broken watch.

Then they look down at a set of catastrophic vulnerabilities on their wrist and say, “Oh. That’s a problem.”

So, back to the questionnaire…

If it takes days or weeks for an elite security firm to uncover these vulnerabilities onsite with full cooperation during an INTERNAL assessment, how do you expect to uncover those issues with a form?

You can’t. And you should stop pretending you can. Questionnaires depend far too much on the capability and knowledge of the person or team filling them out, and they are often completed with only partial knowledge. How would anyone know if a firewall rule had been improperly updated to “any/any” in the last week if it is not tested and verified?

To be clear, the problem isn’t that third-party assessments give only 2/10 in security assessment value. The problem is that executives THINK they’re getting 6/10, or 9/10.

It’s that disconnect that’s causing the harm.

Eventually, companies will figure this out. In the meantime, the breaches won’t stop.

Until then, we as technical practitioners can do our best to help our clients and prospects understand the value these types of cursory, external glances at a company actually provide: very little. So, let’s prioritize appropriately.

EDITORIAL | January 24, 2018

Cryptocurrency and the Interconnected Home

There are many tiny elements to cryptocurrency that are not getting the awareness they deserve. To start, the very thing that attracts people to cryptocurrency is also the very thing that is seemingly overlooked as a challenge. Cryptocurrencies are not backed by governments or institutions, and their transactions allow the trader or investor to operate with anonymity. In the last year we have seen a massive increase in cyber bad guys hiding behind these inconspicuous transactions – ransomware demanding payment in bitcoin, and bitcoin ATMs being used by various dealers to effectively launder money.

Because there are few regulations governing crypto trading, we cannot see whether cryptocurrency is being used to fund criminal or terrorist activity. There is an ancient funds transfer system called Hawala, designed to avoid banks and ledgers. Hawala is believed to be the method by which terrorists are able to move money anonymously across borders with no governmental controls. Sound like what’s happening with cryptocurrency? There’s an old saying in law enforcement – follow the money. Good luck with that one.

Many people don’t realize that cryptocurrencies depend on multiple miners. This allows the processing to be spread out and decentralized. Miners validate the integrity of the transactions and, as a result, receive a “block reward” for their efforts. These rewards are cut in half every 210,000 blocks. The bitcoin block reward was 50 BTC when the network started in 2009; today it’s 12.5 BTC. There are about 1.5 million bitcoins left to mine before the reward halves again.
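To make that schedule concrete, here is a minimal sketch (my own illustration using the well-known consensus parameters, not something from the original post) of how the reward at a given block height can be computed:

```python
# Minimal sketch of bitcoin's block-reward schedule.
# Assumes the published consensus parameters: 50 BTC initial reward,
# halving every 210,000 blocks.

HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50.0

def block_reward(height: int) -> float:
    """Return the block subsidy (in BTC) at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:          # the reward has long since shrunk to zero
        return 0.0
    return INITIAL_REWARD / (2 ** halvings)

if __name__ == "__main__":
    print(block_reward(0))        # 50.0  (2009)
    print(block_reward(500_000))  # 12.5  (early 2018)
    print(block_reward(630_000))  # 6.25  (after the next halving)
```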

This diminishing reward leads to an interesting issue – as the reward decreases, miners will switch their attention from bitcoin to other cryptocurrencies. This will reduce the number of miners, making the network more centralized. That centralization creates greater opportunity for cyber bad guys to “hack” the network and wreak havoc, or for the remaining miners to monopolize the mining.

At some point, and we are already seeing the early stages of this, governments and banks will demand to implement more control. They will start to produce their own cryptocurrency. Would you trust these cryptos? What if your bank offered loans in Bitcoin, Ripple or Monero? Would you accept and use this type of loan?

Bitcoin is a limited resource, so what happens when we reach the 21 million bitcoin limit? Unless the protocols change, that event is estimated to happen around 2140. My first response: I don’t think bitcoins will be at the top of my concerns list in 2140.
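As a back-of-the-envelope check on those two figures (my own arithmetic, assuming the published 50 BTC starting reward, 210,000-block halving interval, rewards tracked in whole satoshis, and a ~10-minute average block time):

```python
# Rough check of the 21 million BTC cap and the ~2140 end-of-mining estimate.
# All parameters are the well-known consensus values; the dates are estimates.

HALVING_INTERVAL = 210_000
INITIAL_REWARD_SATS = 50 * 100_000_000  # 50 BTC expressed in satoshis

total_sats = 0
halvings = 0
while (INITIAL_REWARD_SATS >> halvings) > 0:
    total_sats += HALVING_INTERVAL * (INITIAL_REWARD_SATS >> halvings)
    halvings += 1

print(total_sats / 1e8)   # ~20,999,999.98 BTC -- effectively the 21M cap

last_block = halvings * HALVING_INTERVAL          # block where subsidy hits zero
print(2009 + last_block * 10 // (60 * 24 * 365))  # ~2140 at 10 minutes per block
```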

The Interconnected Home

So what does crypto-mining malware, or mineware, have to do with your home? It’s easy enough to notice if your laptop is being overused – the device slows down and the battery runs down quickly. How can you tell if your fridge or toaster is compromised? With your smart home now interconnected, what happens if the cyber bad guys operate there? All a cyber bad guy needs is electricity, internet, and CPU time. Soon your fridge will charge your toaster a bitcoin for bread and butter. How do we protect our unmonitored devices from this mineware? Who is responsible for ensuring the right level of security on your home devices to prevent this?

Smart home vulnerabilities present a clear and present danger. We have already seen baby monitors, robots, and home security products, to name a few, all compromised – most by IOActive researchers. These compromises can introduce many risks to the home, not just around cryptocurrency. Think about how the interconnected home operates. Any device that’s SMART now has the three key ingredients to provide the cyber bad guy with everything he needs: internet access, power, and processing.

Firstly, I can introduce my mineware via a compromised mobile phone and start to exploit the processing power of your home devices to mine bitcoin. How would you detect this? When could you detect this? At the end of the month, when you get an electricity bill. Instead of 50 pounds a month, it’s now 150 pounds. But how do you diagnose the issue? You complain to the power company. They show you the usage. It’s correct. Your home IS consuming that power.
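To see how the numbers might stack up, here is a rough sketch; every wattage and the tariff below are illustrative assumptions, not measurements from the post:

```python
# Back-of-the-envelope estimate of what hidden mining on home devices could
# add to an electricity bill. All figures below are illustrative assumptions.

DEVICE_WATTS = {          # assumed extra draw while mining flat-out
    "smart_tv": 90,
    "set_top_box": 25,
    "nas": 60,
    "games_console": 120,
}
PRICE_PER_KWH_GBP = 0.15  # assumed UK-style tariff
HOURS_PER_MONTH = 24 * 30

extra_kwh = sum(DEVICE_WATTS.values()) / 1000 * HOURS_PER_MONTH
extra_cost = extra_kwh * PRICE_PER_KWH_GBP
print(f"Extra consumption: {extra_kwh:.0f} kWh -> about £{extra_cost:.0f} per month")
```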

Crypto mining is now said to use as much power as a small country, which has a serious impact on the power infrastructure as well as the environment. Ahhhh, you say, I have a smart meter; it can give me a real-time readout of my usage. Yes, it’s a computer. And, if I’m a cyber bad guy, I can make that computer tell me the latest football scores if I want. The key for a corporation under attack is to reduce dwell time: detect and stop the bad guy from playing in your network. There are enterprise tools that can perform these tasks, but do you have the same tools at home? How would you detect and react to a cyber bad guy attacking your smart home?

IOActive has proven these attack vectors over and over. We know this is possible, and we know it is almost impossible to detect. Remember, a cyber bad guy makes several assessments when deciding on an attack – the risk of detection, the reward for the effort, and the penalty for capture. The risk of detection is low, like very low. The reward? Well, you could be mining blocks for months without stopping – that’s tens of thousands of dollars. And the penalty… what’s the penalty for someone hacking your toaster? Yet the impact to the homeowner is measurable. This is real, and who’s to say it isn’t happening already. Ask your fridge!!

What’s the Answer – Avoid Using Smart Home Devices Altogether?

No, we don’t believe the best defense is to avoid adopting this new technology. The smart, interconnected home can offer its users fantastic opportunities. We believe the responsibility rests with the manufacturers to ensure that devices are designed and built in a safe and secure way. And, yes, everything is designed; few things are designed well. IOActive researchers spend 99% of their time trying to identify vulnerabilities in these devices for the safety of everyone, not just corporations. The power is in the hands of the consumer. As soon as consumers start to purchase products based not only on their power efficiency but also on their security rating, we will see a shift to a more secure home.

In the meantime, consider the entry point for most cyber bad guys. Generally, this is your desktop, laptop, or mobile device. Therefore, ensure you have suitable security products running on these devices, make sure they are patched to the correct levels, and be conscious of the websites you visit. If you control the available entry points, you will go a long way toward protecting your home.

RESEARCH | January 11, 2018

SCADA and Mobile Security in the IoT Era

Two years ago, we assessed 20 mobile applications that worked with ICS software and hardware. At that time, mobile technologies were widespread, but Internet of Things (IoT) mania was only starting. Our research concluded the combination of SCADA systems and mobile applications had the potential to be a very dangerous and vulnerable cocktail. In the introduction of our paper, we stated “convenience often wins over security. Nowadays, you can monitor (or even control!) your ICS from a brand-new Android [device].”


Today, no one is surprised by the appearance of the Industrial Internet of Things (IIoT). The idea of putting your logging, monitoring, and even supervisory/control functions in the cloud does not sound as crazy as it did several years ago. If you look at mobile application offerings today, many more ICS-related applications are available than two years ago. Previously, we predicted that the “rapidly growing mobile development environment” would redeem the past sins of SCADA systems.
The purpose of our research is to understand how the landscape has evolved and assess the security posture of SCADA systems and mobile applications in this new IIoT era.

SCADA and Mobile Applications
ICS infrastructures are heterogeneous by nature. They include several layers, each of which is dedicated to specific tasks. Figure 1 illustrates a typical ICS structure.

Figure 1: Modern ICS infrastructure including mobile apps

Mobile applications reside in several ICS segments and can be grouped into two general families: Local (control room) and Remote.


Local Applications

Local applications are installed on devices that connect directly to ICS devices in the field or process layers (over Wi-Fi, Bluetooth, or serial).

Remote Applications
Remote applications allow engineers to connect to ICS servers using remote channels, like the Internet, VPN-over-Internet, and private cell networks. Typically, they only allow monitoring of the industrial process; however, several applications allow the user to control/supervise the process. Applications of this type include remote SCADA clients, MES clients, and remote alert applications. 

In comparison to local applications belonging to the control room group, which usually operate in an isolated environment, remote applications are often installed on smartphones that use Internet connections or even on personal devices in organizations that have a BYOD policy. In other words, remote applications are more exposed and face different threats.

Typical Threats and Attacks

In this section, we discuss the typical threats to this heterogeneous landscape of applications and how attacks could be conducted. We also map the threats to the application types.
 
Threat Types
There are three main possible ICS threat types:
  • Unauthorized physical access to the device or “virtual” access to device data
  • Communication channel compromise (MiTM)
  • Application compromise

Table 1 summarizes the threat types.

Table 1: SCADA mobile client threat list
 
Attack Types
Based on the threats listed above, attacks targeting mobile SCADA applications can be sorted into two groups.
 
Directly/indirectly influencing an industrial process or industrial network infrastructure
This type of attack could be carried out by sending data that would be carried over to the field segment devices. Various methods could be used to achieve this, including bypassing ACL/permission checks, accessing credentials with the required privileges, or bypassing data validation.
 
Compromising a SCADA operator to unwillingly perform a harmful action on the system
The core idea is for the attacker to create environmental circumstances where a SCADA system operator could make incorrect decisions and trigger alarms or otherwise bring the system into a halt state.
 
Testing Approach
Similar to the research we conducted two years ago, our analysis and testing approach was based on the OWASP Mobile Top 10 2016. Each application was tested using the following steps:
  • Perform analysis and fill out the test checklist
  • Perform client and backend fuzzing
  • If needed, perform deep analysis with reverse engineering
We did not alter the fuzzing approach since the last iteration of this research. It was discussed in depth in our previous whitepaper, so its description is omitted for brevity.
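Purely to illustrate the idea of backend fuzzing (this is not the harness used in the research, and the endpoint, parameters, and payloads below are hypothetical), a naive sketch might look like this:

```python
# Illustrative only: a naive mutation fuzzer for a hypothetical SCADA
# mobile-client backend endpoint. The URL, parameters, and payloads are
# made up for the example; this is not the research harness.
import random
import string
import requests

BASE_URL = "http://scada-backend.example/api/tag/write"   # hypothetical
CANNED_PAYLOADS = ["A" * 4096, "' OR '1'='1", "../../etc/passwd",
                   "<script>alert(1)</script>", "\x00\xff\xfe", "-1", "9" * 64]

def random_junk(n=32):
    return "".join(random.choice(string.printable) for _ in range(n))

def fuzz_once(session):
    payload = random.choice(CANNED_PAYLOADS + [random_junk()])
    try:
        r = session.post(BASE_URL,
                         data={"tag": "PUMP_01", "value": payload},
                         timeout=5)
        # A 5xx, a hang, or an unexpected body is worth a closer look.
        if r.status_code >= 500:
            print(f"[!] {r.status_code} on payload {payload!r}")
    except requests.RequestException as exc:
        print(f"[!] transport error on payload {payload!r}: {exc}")

if __name__ == "__main__":
    with requests.Session() as s:
        for _ in range(1000):
            fuzz_once(s)
```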
We improved our test checklist for this assessment. It includes:
  • Application purpose, type, category, and basic information 
  • Permissions
  • Password protection
  • Application intents, exported providers, broadcast services, etc.
  • Native code
  • Code obfuscation
  • Presence of web-based components
  • Methods of authentication used to communicate with the backend
  • Correctness of operations with sessions, cookies, and tokens 
  • SSL/TLS connection configuration
  • XML parser configuration
  • Backend APIs
  • Sensitive data handling
  • HMI project data handling
  • Secure storage
  • Other issues
Reviewed Vendors
We analyzed 34 vendors in our research, randomly selecting SCADA application samples from the Google Play Store. We did, however, favor applications for which we were granted access to the backend hardware or software, so that a wider attack surface could be tested.
 
Additionally, we excluded applications whose most recent update was before June 2015, since they were likely the subject of our previous work. We only retested them if there had been an update during the subsequent two years.
 
Findings
We identified 147 security issues in the applications and their backends. We classified each issue according to the OWASP Top Ten Mobile risks and added one additional category for backend software bugs.
 
Table 4 presents the distribution of findings across categories. The “Number of Issues” column reports the number of issues belonging to each category, while the “% of Apps” column reports how many applications have at least one vulnerability belonging to each category.
Table 4. Vulnerabilities statistics

In our white paper, we provide an in-depth analysis of each category, along with examples of the most significant vulnerabilities we identified. Please download the white paper for a deeper analysis of the findings in each OWASP category.

Remediation And Best Practices
In addition to the well-known recommendations covering the OWASP Top 10 and OWASP Mobile Top 10 2016 risks, there are several actions that could be taken by developers of mobile SCADA clients to further protect their applications and systems.

In the following list, we gathered the most important items to consider when developing a mobile SCADA application:

  • Always keep in mind that your application is a gateway to your ICS systems. This should influence all of your design decisions, including how you handle the inputs you will accept from the application and, more generally, anything that you will accept and send to your ICS system.
  • Avoid all situations that could leave the SCADA operators in the dark or provide them with misleading information, from silent application crashes to full subverting of HMI projects.
  • Follow best practices. Consider covering the OWASP Top 10, OWASP Mobile Top 10 2016, and the 24 Deadly Sins of Software Security.
  • Do not forget to implement unit and functional tests for your application and the backend servers, to cover at a minimum the basic security features, such as authentication and authorization requirements.
  • Enforce password/PIN validation to protect against threats U1-3. In addition, avoid storing any credentials on the device using unsafe mechanisms (such as in cleartext) and leverage robust and safe storing mechanisms already provided by the Android platform.
  • Avoid, at all costs, storing any sensitive data on SD cards or similar partitions without ACLs. Such storage media cannot protect your sensitive data.
  • Provide secrecy and integrity for all HMI project data. This can be achieved by using authenticated encryption and storing the encryption credentials in the secure Android storage, or by deriving the key securely, via a key derivation function (KDF), from the application password (see the sketch after this list).
  • Encrypt all communication using strong protocols, such as TLS 1.2 with elliptic-curve key exchange and signatures and an AEAD encryption scheme. Follow best practices, and keep updating your application as best practices evolve. Attacks always get better, and so should your application.
  • Catch and handle exceptions carefully. If an error cannot be recovered, ensure the application notifies the user and quits gracefully. When logging exceptions, ensure no sensitive information is leaked to log files.
  • If you are using Web Components in the application, think about preventing client-side injections (e.g., encrypt all communications, validate user input, etc.).
  • Limit the permissions your application requires to the strict minimum.
  • Implement obfuscation and anti-tampering protections in your application.
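As an illustration of the HMI-project-data recommendation above (authenticated encryption with a key derived from the application password), here is a minimal sketch in Python using the `cryptography` package; an Android client would use the platform’s equivalents (Keystore, PBKDF2/Argon2, AES-GCM), and none of this is vendor code:

```python
# Minimal sketch (not vendor code) of "authenticated encryption with a key
# derived from the application password" for HMI project data.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Derive a 256-bit AES key from the application password via PBKDF2."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    return kdf.derive(password)

def protect_project(password: bytes, hmi_project: bytes) -> bytes:
    """Encrypt and authenticate HMI project data; returns salt|nonce|ciphertext."""
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(password, salt)
    ciphertext = AESGCM(key).encrypt(nonce, hmi_project, None)
    return salt + nonce + ciphertext

def open_project(password: bytes, blob: bytes) -> bytes:
    """Verify and decrypt; raises InvalidTag if the data was tampered with."""
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    key = derive_key(password, salt)
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```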

Conclusions
Two years have passed since our previous research, and things have continued to evolve. Unfortunately, they have not evolved with robust security in mind, and the landscape is less secure than ever before. In 2015, we found a total of 50 issues in the 20 applications we analyzed; in 2017, we found a staggering 147 issues in the 34 applications we selected. That is an increase from an average of 2.5 issues per application to roughly 4.3.

We therefore conclude that the growth of IoT in the era of “everything is connected” has not led to improved security for mobile SCADA applications. According to our results, more than 20% of the discovered issues allow attackers to directly misinform operators and/or directly or indirectly influence the industrial process.

In 2015, we wrote:

SCADA and ICS come to the mobile world recently, but bring old approaches and weaknesses. Hopefully, due to the rapidly developing nature of mobile software, all these problems will soon be gone.

We now concede that we were too optimistic and acknowledge that our previous statement was wrong.

Over the past few years, the number of incidents involving SCADA systems has increased, and these systems become more interesting to attackers every year. Furthermore, the widespread implementation of the IoT/IIoT connects more and more mobile devices to ICS networks.

Thus, the industry should start to pay attention to the security posture of its SCADA mobile applications, before it is too late.

For the complete analysis, please download our white paper here.

Acknowledgments

Many thanks to Dmitriy Evdokimov, Gabriel Gonzalez, Pau Oliva, Alfredo Pironti, Ruben Santamarta, and Tao Sauvage for their help during our work on this research.
 
About Us
Alexander Bolshev
Alexander Bolshev is a Security Consultant for IOActive. He holds a Ph.D. in computer security and works as an assistant professor at Saint-Petersburg State Electrotechnical University. His research interests lie in distributed systems, as well as mobile, hardware, and industrial protocol security. He is the author of several white papers on topics of heuristic intrusion detection methods, Server Side Request Forgery attacks, OLAP systems, and ICS security. He is a frequent presenter at security conferences around the world, including Black Hat USA/EU/UK, ZeroNights, t2.fi, CONFIdence, and S4.
 
Ivan Yushkevich
Ivan is the information security auditor at Embedi (http://embedi.com). His main area of interest is source code analysis for applications ranging from simple websites to enterprise software. He has vast experience in banking systems and web application penetration testing.
 
IOActive
IOActive is a comprehensive, high-end information security services firm with a long and established pedigree in delivering elite security services to its customers. Our world-renowned consulting and research teams deliver a portfolio of specialist security services, ranging from penetration testing and application code assessment through to semiconductor reverse engineering. Global 500 companies across every industry continue to trust IOActive with their most critical and sensitive security issues. Founded in 1998, IOActive is headquartered in Seattle, USA, with global operations throughout the Americas, EMEA, and Asia Pacific regions. Visit our website for more information, read the IOActive Labs Research Blog, and follow IOActive on Twitter.
 
Embedi
Embedi’s expertise is backed by extensive experience in the security of embedded devices, with a special emphasis on attack and exploit prevention. Years of research are the genesis of the software solutions it creates. Embedi has developed a wide range of security products for various types of embedded/smart devices used in different fields of life and industry, such as wearables, smart homes, retail environments, automotive, smart buildings, ICS, and smart cities. Embedi is headquartered in Berkeley, USA. Visit http://embedi.com for more information and follow Embedi on Twitter.

EDITORIAL | November 14, 2017

Treat the Cause, not the Symptoms!

With the publication of the National Audit Office report on WannaCry fresh off the press, I think it’s important that we revisit what it actually means. There are worrying statements within the various reports around preventative measures that could have been taken. In particular, where the health service talks about treating the cause, not the symptom, you would expect that ethos to cross functions, from the primary caregivers to the primary security services. 

I read that the NHS Digital team carried out onsite cyber assessments of 88 out of 236 Trusts. None passed. Not one. Think about this. These trusts are businesses whose core function is the health and well-being of their customers, the patients. If this were a bank, and someone did an onsite assessment and said, “well, the bank left all the doors open and didn’t lock the vault,” would you put your hard-earned money in there for safekeeping? I don’t think so. More importantly, if the bank said after a theft of all the money, “well, the thieves used masks; we didn’t recognize them; they were very sophisticated,” would you be happy? No. Now imagine what could have been found if someone had carried out an in-depth assessment, thinking like the adversary.


The report acknowledges the existence of a cyber-attack plan. However, the plan hadn’t been communicated, so no one knew who was doing what; it had never been practiced and perfected. The only communication channel the plan provided for, email, was shut down. This meant that primary caregivers ended up communicating from personal devices using WhatsApp, potentially exposing patient medical records on personal mobile phones through a social messaging tool.

The report also states the NHS Digital agency had no power to force the Trusts to “take remedial action even if it [NHS Digital] has concerns about the vulnerability of an organization.” At IOActive, we constantly talk to our customers about what to do when a vulnerability is found. Simply ticking a box without follow-up is a pointless exercise. “My KPI is to perform a security assessment of 50% of the Trusts” – box ticked. That’s like saying, “I will perform triage on 50% of my patients, but won’t treat them.” Really?!

An efficacy assessment of your security practices is not an audit report. It is not a box-ticking exercise. It is a critical function designed specifically to enable you to identify vulnerabilities within your organization’s security posture and empower you to implement appropriate controls to manage risk at a business level. Cyber security and information security are not IT issues; they are business issues. As such, the business should absolutely be focused on having skilled experts providing actionable intelligence, enabling it to make business decisions based on risk, impact, and likelihood. It’s not brain surgery. Or maybe it is.

It’s generally accepted that, if basic IT security steps had been taken, this problem would have been avoided. Treat the cause, not the symptom. We are hearing a lot of evidence that this was an orchestrated attack by a nation-state. However, I’m pretty sure that, given the basic failures of NHS Digital to protect the environment, it wouldn’t have taken a nation-state to launch this destructive attack.

Amyas Morse, head of the NAO, said: “It was a relatively unsophisticated attack and could have been prevented by the NHS following basic IT security best practices. There are more sophisticated cyber-threats out there than WannaCry, so the Department and the NHS need to get their act together to ensure the NHS is better protected against future attacks.” I can absolutely guarantee there are more sophisticated attacks out there.

Eighty-one NHS organizations were impacted. Nineteen thousand five hundred medical appointments were canceled. Six hundred GP surgeries were unable to support patients. Five hospitals diverted ambulances elsewhere. Imagine the human factor. You’re waiting for a lifesaving operation – canceled. You’ve been in a car crash – your ambulance is diverted 40 miles away. All because Windows 7 wasn’t patched. Is that acceptable for an organization trusted with the care and well-being of you and your loved ones? Imagine the damage had this attack been more sophisticated.

Cybersecurity assessments are not audit activities. They are mission-critical assessments for the longevity of your business. The NHS got lucky. There are not many alternatives for health care; it’s not like you can pop down the street and choose the hospital next door. And that means the NHS can’t be complacent about its duty of care. People’s lives are at stake. Treat the cause, not the symptoms.

INSIGHTS | June 28, 2017

WannaCry vs. Petya: Keys to Ransomware Effectiveness

With WannaCry and now Petya, we’re beginning to see how and why new strains of ransomware worms are evolving and growing far more effective than previous versions.

I think there are 3 main factors: Propagation, Payload, and Payment.*

  1. Propagation: You ideally want to be able to spread using as many different types of techniques as you can.
  2. Payload: Once you’ve infected the system you want to have a payload that encrypts properly, doesn’t have any easy bypass to decryption, and clearly indicates to the victim what they should do next.
  3. Payment: You need to be able to take in money efficiently and then actually decrypt the systems of those who pay. This piece is crucial; otherwise, people will quickly learn they can’t get their files back even if they do pay, and they’ll be inclined to just start over.


WannaCry vs. Petya

WannaCry used SMB as its main spreading mechanism, and its payment infrastructure lacked the ability to scale. It also had a kill switch, which was famously triggered and halted further propagation.

Petya, on the other hand, appears to be much more effective at spreading, since it’s using both EternalBlue and credential harvesting/PsExec to infect more systems. This means it can harvest working credentials and spread even if the new targets aren’t vulnerable to an exploit.


[NOTE: This is early analysis so some details could turn out to be different as we learn more.]

What remains to be seen is how effective the payload and payment infrastructures are on this one. It’s one thing to encrypt files, but it’s something else entirely to decrypt them.

The other important unknown at this point is whether Petya is standalone or a component of a more elaborate attack. Is what we’re seeing now intended to be a compelling distraction?
  
There have been some reports indicating these exploits were utilized by a sophisticated threat actor against the same targets prior to WannaCry, so it’s possible that WannaCry was poorly designed on purpose. Either way, we’re advising clients to investigate whether there is any evidence of a more strategic use of these tools in the weeks leading up to Petya hitting.

*Note: I’m sure there are many more thorough ways to analyze the efficacy of worms. These are just three that came to mind while reading about Petya and thinking about it compared to WannaCry.

RESEARCH | March 1, 2017

Hacking Robots Before Skynet

Robots are going mainstream in both the private and public sectors – on military missions, performing surgery, building skyscrapers, assisting customers at stores, as healthcare attendants, as business assistants, and interacting closely with our families in a myriad of ways. Robots are already showing up in many of these roles today, and in the coming years they will become an ever more prominent part of our home and business lives. But as with other new technologies, recent IOActive research has found robotic technologies to be highly insecure in a variety of ways that could pose serious threats to the people and organizations they operate in and around.
 
This blog post provides a brief overview of the full paper we’ve published based on this research, in which we discovered critical cybersecurity issues in several robots from multiple vendors. The goal is to make robots more secure and prevent vulnerabilities from being used maliciously by attackers to cause serious harm to businesses, consumers, and their surroundings. The paper contains more information about the research and findings, and cites the many sources used in compiling the information presented in the paper and this post.
 
Robot Adoption and Cybersecurity
Robots are already showing up in thousands of homes and businesses. As many of these “smart” machines are self-propelled, it is important that they’re secure, well protected, and not easy to hack. If not, instead of helpful resources they could quickly become dangerous tools capable of wreaking havoc and causing substantive harm to their surroundings and the humans they’re designed to serve.
 
We’re already experiencing some of the consequences of substantial cybersecurity problems with Internet of Things (IoT) devices that are impacting the Internet, companies and commerce, and individual consumers alike. Cybersecurity problems in robots could have a much greater impact. When you think of robots as computers with arms, legs, or wheels, they become kinetic IoT devices that, if hacked, can pose new serious threats we have never encountered before.
 
With this in mind, we decided to attempt to hack some of the more popular home, business, and industrial robots currently available on the market. Our goal was to assess the cybersecurity of current robots and determine potential consequences of possible cyberattacks. Our results show how insecure and susceptible current robot technology is to cyberattacks, confirming our initial suspicions.
 
Cybersecurity Problems in Today’s Robots
We used our expertise in hacking computers and embedded devices to build a foundation of practical cyberattacks against robot ecosystems. A robot ecosystem is usually composed of the physical robot, an operating system, firmware, software, mobile/remote control applications, vendor Internet services, cloud services, networks, etc. The full ecosystem presents a huge attack surface with numerous options for cyberattacks.
 
We applied risk assessment and threat modeling tools to robot ecosystems to support our research efforts, allowing us to prioritize the critical and high cybersecurity risks for the robots we tested. We focused on assessing the most accessible components of robot ecosystems, such as mobile applications, operating systems, firmware images, and software. Although we didn’t have all the physical robots, it didn’t impact our research results. We had access to the core components, which provide most of the functionality for the robots; we could say these components “bring them to life.”
 
Our research covered home, business, and industrial robots, as well as the control software used by several other robots. The specific robot vendors evaluated in the research are identified in the published research paper.
 
We found nearly 50 cybersecurity vulnerabilities in the robot ecosystem components, many of which were common problems. While this may seem like a substantial number, it’s important to note that our testing was not even a deep, extensive security audit, as that would have taken a much larger investment of time and resources. The goal for this work was to gain a high level sense of how insecure today’s robots are, which we accomplished. We will continue researching this space and go deeper in future projects.
 
An explanation of each main cybersecurity issue discovered is available in the published research paper, but the following is a high-level (non-technical) list of what we found:
  • Insecure Communications
  • Authentication Issues
  • Missing Authorization
  • Weak Cryptography
  • Privacy Issues
  • Weak Default Configuration
  • Vulnerable Open Source Robot Frameworks and Libraries
 
We observed a broad problem in the robotics community: researchers and enthusiasts use the same – or very similar – tools, software, and design practices worldwide. For example, it is common for robots born as research projects to become commercial products with no additional cybersecurity protections; the security posture of the final product remains the same as the research or prototype robot. This practice results in poor cybersecurity defenses, since research and prototype robots are often designed and built with few or no protections. This lack of cybersecurity in commercial robots was clearly evident in our research.
 
Cyberattacks on Robots

Our research uncovered both critical- and high-risk cybersecurity problems in many robot features. Some of them could be directly abused, and others introduce severe threats. Examples of some of the common robot features identified in the research as potential avenues of attack are as follows:

  • Microphones and Cameras
  • External Services Interaction
  • Remote Control Applications
  • Modular Extensibility
  • Network Advertisement
  • Connection Ports
A full list with descriptions for each is available in the published paper.
 
New technologies are typically prone to security problems, as vendors prioritize time-to-market over security testing. We have seen vendors struggling with a growing number of cybersecurity issues in multiple industries where products are growing more connected, including notably IoT and automotive in recent years. This is usually the result of not considering cybersecurity at the beginning of the product lifecycle; fixing vulnerabilities becomes more complex and expensive after a product is released.
 
The full paper provides an overview of the many implications of insecure robots as they become more prominent in home, business, industry, healthcare, and other applications. We’ve also included many recommendations in the paper for ways to design and build robotic technology more securely based on our findings.
 
Click here for more information on the research and to view the full paper for additional details and descriptions.   

EDITORIAL | October 16, 2015

Five Reasons Why You Should Go To BruCON

BruCON is one of the most important security conferences in Europe, held each October. The ‘Bru’ in ‘BruCON’ refers to Brussels, the capital of Belgium, where it all started. Nowadays, it’s held in the beautiful city of Ghent, just 55 minutes from its origin. I had the chance to attend this year, and here are the five things that make it a great conference, in my opinion.

You can check out BruCON’s promo video here: https://www.youtube.com/watch?v=ySmCRemtMc4.
1. The conference
Great talks presented by international speakers, ranging from deeply technical material to threat intelligence and other high-level stuff. You might run into people and friends from Vegas or another security conference.
A circular and well-illuminated stage. You’d better not be caught taking a nap, unless you want a picture of yourself sleeping on the Internet.

(Shyama Rose talking about BASE jumping and risk)

While paid trainings take place two or three days before the conference, free workshops are available to the public during the two-day conference.

(Beau Woods (@beauwoods) giving a cool workshop named «Escalating Privileges Through Better Communication»)

While at BruCON, I presented my research on security deficiencies in electroencephalography (EEG) technologies. EEG is a non-invasive method of recording electrical brain activity (synaptic activity between neurons) taken from the scalp. EEG has increasingly been adopted across different industries, and I showed how the technology is prone to common network and application attacks. I demonstrated brain signal sniffing and data tampering through man-in-the-middle attacks, as well as denial-of-service bugs in EEG servers. Client-side applications that analyze EEG data are also prone to application flaws, and I showed how trivial fuzzing can uncover many of them. You can find the related material here: slides, demos (videos) [resource no longer available], and my live talk (video).
2. The city
The medieval architecture in Ghent will enchant you. It’s a really cool city everywhere you look, full of restaurants, cafes, and of course, pubs. Students of all ages also give the city plenty of vibe. You can easily spend two more days and enjoy all Ghent has to offer.
All of this is just around the corner from the venue.
3. The b33r
I’m not a b33r connoisseur, but most of the beers I tasted while in Ghent were, to my taste, really good. Belgium is regarded as one of the best beer countries in the world, which speaks for itself. In a nice gesture by the conference organizers, speakers were given a single bottle of a hard-to-find beer, Westvleteren 12, which has been rated “the best beer in the world.” I have no idea how they got them, as the brewer does not produce large amounts of this beer and only sells it to a select few people.
Beer at the venue, beer between talks, and there was even a night where we mixed good beer with ice cream. It was interesting.
4. The venue
The conference is held in the heart of the city, near all the hotels, just two or three blocks from where you’re most likely to stay. The organizers thought out every single detail to help you arrive right on time at the venue.
The main hall is perfect for networking and serves b33r, coffee, and food all day, not only during specific hours.
A renowned hacker DJ dropping some bass and a pianist (a computer scientist who wrote code for Google) took turns making those moments even better.

 

It wouldn’t be a good conference without a Wall of Sheep; it got RickRolled though ;D
(Picture taken by @sehnaoui)
The party venue, just three blocks from the stage, had a good hacker ambience and a great sound atmosphere created by two well-known DJs: @CountNinjula & @KeithMyers.

 

(Picture taken by @wimremes)
5. The old video game consoles
If you’re not interested in a talk, or simply bored, just head upstairs and travel back in time with a whole hall of old consoles. “Here comes a new challenger” matches are accepted… as long as there’s b33r or money involved.

 

 

 
Well, that’s it, another great security conference for you to consider in the future.
Finally, thanks to all this year’s organizers and volunteers. Perhaps you’ll join in next year 😉

 

(Photo taken by @SenseiZeon)
Cheers!

EDITORIAL | March 24, 2015

Lawsuit counterproductive for automotive industry

It came to my attention that there is a lawsuit seeking damages from automakers on the grounds that their cars are hackable.

The lawsuit cites Dr. Charlie Miller’s and my work several times, along with several other researchers who have been involved in automotive security research.

I’d like to be the first to say that I think this lawsuit is unfortunate and subverts the spirit of our research. Charlie and I approached our work with the end goals of determining if technologically advanced cars could be controlled with CAN messages and informing the public of our findings. Obviously, we found this to be true and were surprised at how much could be manipulated with network messages. We learned so much about automobiles, their communications, and their associated physical actions.

Our intent was never to insinuate deliberate negligence on the part of the manufacturers. Instead, like most security researchers, we wanted to push the boundaries of what was thought to be possible and have fun doing it. While I do believe there is risk associated with vehicle connectivity, I think a lawsuit can only be harmful, as it has the potential to take funds away from what is really important: securing the modern vehicle. I think any money automobile manufacturers must spend on legal fees would be more wisely spent on researching and developing automotive intrusion detection/prevention systems.

The automotive industry is not sitting idly by, but constantly working to improve the security of their past, present, and future vehicles. Security isn’t something that changes overnight, especially in the case of automobiles, which take even longer since there are both physical and software elements to be tested. Offensive security researchers will always be ahead of the people trying to formulate defenses, but that does not mean the defenders are not doing anything.

While our goals were public awareness and industry change, we did not want change to stem from the possible exploitation of public fears. Our hope was that by showing what is possible, we could work with the people who make the products we use and love on an everyday basis to improve vehicle security.

– cv