INSIGHTS | October 23, 2017

Embedding Defense in Server-side Applications

Applications always contain security flaws, which is why we rely on multiple layers of defense. Even so, applications still struggle to defend themselves despite exhaustive testing and layered protections. Perhaps we should rethink our approach to application defense, with the goal of introducing defensive methods that cause attackers to give up, or induce them to take incorrect actions based on false premises.

There are a variety of products that provide valuable protection when basic, off-the-shelf defenses are required or the application source code is not available. However, defenses implemented in native code provide a greater degree of protection than can be achieved with third-party solutions. This document aims to:
  • Enumerate a comprehensive list of current and new defensive techniques.
    Multiple defensive techniques have been disclosed in books, papers, and tools. This specification collects those techniques and presents new defensive options to close the opportunity gap that remains open to attackers.

  • Enumerate methods to identify attack tools before they can access functionality.
    The tools used by attackers identify themselves in several ways. Multiple features of a request can reveal that it does not come from a normal user; a tool may abuse security flaws in ways that help reveal which tool an attacker is using; and developers can prepare diversions and traps in advance that a specific tool will trigger automatically.

  • Disclose how to detect attacker techniques within code.
    Certain techniques can identify attacks within the code. Developers may know in advance that certain conditions will only ever be triggered by attackers, and they can also prepare for specific unexpected scenarios; a short sketch after this list illustrates both ideas.

  • Provide a new defensive approach.
    Server-side defense is normally about blocking the malicious IP addresses associated with attackers; however, an attacker’s focus can also be diverted or manipulated. Certain functionalities may be presented to attackers only to better prosecute them if those functionalities are triggered.

  • Provide these protections for multiple programming languages.
    This document will use pseudocode to explain functionalities that can reduce the effectiveness of attackers and expose their actions, and working proof-of-concept code will be released for four programming languages: Java, Python, .NET, and PHP.
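
To make the trap and tool-detection ideas above concrete, here is a minimal sketch in Python, one of the four target languages. It assumes a bare WSGI handler; the User-Agent strings and the trap parameter name are hypothetical illustrations, not anything taken from the released specification.

    from wsgiref.simple_server import make_server

    KNOWN_TOOL_AGENTS = ("sqlmap", "nikto", "dirbuster")  # hypothetical tool signatures

    def application(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "").lower()
        query = environ.get("QUERY_STRING", "")

        # A request can disclose that it does not come from a normal user,
        # for example through the User-Agent header of a well-known tool.
        tool_detected = any(tool in user_agent for tool in KNOWN_TOOL_AGENTS)

        # A trap parameter is never emitted by the application itself, so any
        # request containing it was crafted by an attacker or by an automated
        # tool blindly tampering with every parameter it discovers.
        trap_triggered = "secret_debug=" in query

        if tool_detected or trap_triggered:
            # Rather than simply blocking, the response could divert the
            # attacker with plausible but false data, as described above.
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"Nothing to see here."]

        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello, regular user."]

    if __name__ == "__main__":
        make_server("", 8000, application).serve_forever()

The same checks map directly onto the request objects of the other target languages.
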
Fernando Arnaboldi appeared at Ruxcon to present defensive techniques that can be embedded in server-side applications. The techniques described are included in a specification, along with sample implementations for multiple programming languages.

INSIGHTS | February 8, 2012

I can still see your actions on Google Maps over SSL

A while ago, yours truly gave two talks on SSL traffic analysis: one at 44Con and one at RuxCon. A demonstration of the tool was also given at last year’s BlackHat Arsenal by two of my co-workers. The presented research and tool may not have been as groundbreaking as some of the other talks at those conferences, but attendees seemed to like it, so I figured it might make some good blog content. 

Traffic analysis is definitely not a new field, either in general or as applied to SSL; a lot of great work has been done by reputable research outlets, such as Microsoft Research with researchers like George Danezis. What recent traffic analysis research has shown is that an attacker who can monitor an encrypted communication stream can still extract enormous amounts of useful information from it.

A great example of this can be found in the paper with the slightly cheesy title Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow. The paper discusses some approaches to traffic analysis on SSL-encrypted web applications and applies them to real-world systems. One of the approaches enables an attacker to build a database that contains traffic patterns of the AutoComplete function in drop-down form fields (like Google’s Auto Complete). Another great example is the ability to—for a specific type of stock management web application—reconstruct pie charts in a couple of days and figure out the contents of someone’s stock portfolio.

After discussing these attack types with some of our customers, I noticed that most of them seemed to have some difficulty grasping the potential impact of traffic analysis on their web applications. The research papers I referred them to are quite dry and they’re also written in dense, scientific language that does nothing to ease understanding. So, I decided to just throw some of my dedicated research time out there and come up with a proof of concept tool using a web application that everyone knows and understands: Google Maps.

Since ignorance is bliss, I decided to just jump in and try to build something without even running the numbers on whether it would make any sense to try. I started by running Firefox and Firebug in an effort to make sense of all the JavaScript voodoo going on there. I quickly figured out that Google Maps works by using a grid system in which PNG images (referred to as tiles) are laid out. Latitude and longitude coordinates are converted to x and y values that depend on the selected zoom level; this gives a three-dimensional coordinate system in which each separate (x, y, z)-triplet represents two PNG images. The first image is called the overlay image and contains the town, river, and highway names and so forth; the second image contains the actual satellite data.
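
For readers who want to see what that conversion looks like, here is a minimal sketch using the standard Web Mercator tile math. This is an approximation for illustration; the exact routine inside Google’s JavaScript may differ in its details.

    import math

    def latlon_to_tile(lat, lon, zoom):
        # Standard Web Mercator tile math: longitude maps linearly to x,
        # while latitude is projected before being mapped to y.
        lat_rad = math.radians(lat)
        n = 2 ** zoom  # number of tiles per axis at this zoom level
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    # Example: the (x, y) tile indices for central Paris at zoom level 16.
    print(latlon_to_tile(48.8566, 2.3522, 16))

Every (x, y, z)-triplet produced this way is the key under which the overlay and satellite tiles described above are requested.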

Once I had this figured out, the approach became simple: scrape a lot of satellite tiles and build a database of the image sizes using the tool GMapCatcher. I then built a tool that uses libpcap to approximate the image sizes by monitoring the SSL-encrypted traffic on the wire. The tool tries to match the image sizes to the recorded (x, y, z)-triplets in the database and then tries to cluster the results into a specific region. This is notoriously difficult to do, since a sufficiently large database produces a lot of false positives. Add to this the fact that it is next to impossible to scrape the entire Google Maps database: first, they will ban you for generating that much traffic, and second, you would have to store many petabytes of image data.
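
To make the matching step concrete, here is a toy sketch of the idea rather than the actual GMaps-Trafficker code; the database layout, the byte counts, and the tolerance value are all made-up assumptions for illustration.

    from collections import Counter

    # Toy database: approximate tile transfer size (in bytes) -> the
    # (x, y, z, city) entries whose satellite tile is roughly that large.
    # The numbers below are invented for the sake of the example.
    tile_db = {
        48213: [(33196, 22548, 16, "Paris")],
        51877: [(35205, 21494, 16, "Berlin"), (33891, 22007, 16, "Brussels")],
    }

    TOLERANCE = 64  # bytes of slack to absorb TLS record and header overhead

    def guess_city(observed_sizes):
        # Every database entry whose size is close to an observed transfer
        # casts one vote for its city; the city with the most votes wins.
        votes = Counter()
        for size in observed_sizes:
            for db_size, entries in tile_db.items():
                if abs(size - db_size) <= TOLERANCE:
                    for _, _, _, city in entries:
                        votes[city] += 1
        return votes.most_common(1)[0][0] if votes else None

    # Sizes approximated from the wire would be fed in here.
    print(guess_city([48190, 48250, 51900]))  # -> "Paris" with this toy data

The real tool goes a step further and clusters the matching (x, y, z)-triplets into a geographic region instead of simply counting votes per city, which is exactly where the false-positive problem described above starts to bite.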

With a little bit of cheating—I used a large browser screen so I would have more data to work with—I managed to make the movie Proof of Concept – SSL Traffic Analysis of Google Maps. 

As shown in the movie, the tool has a database that contains city profiles for Paris, Berlin, Amsterdam, Brussels, and Geneva. The tool runs on the right; on the left is the browser accessing Google Maps over SSL. In the first attempt, I load the city of Paris and zoom in a couple of times. In the second attempt, I navigate to Berlin and zoom in a few times. On both occasions the tool manages to correctly guess the locations that the browser is accessing. 

Please note that this is a shoddy proof of concept, but it demonstrates SSL traffic analysis pretty well. It also might be easier for less technically inclined people to understand, as in “An attacker can still figure out what you’re looking at on Google Maps” (with the addendum that it’s never going to be 100% perfect and that my shoddy proof of concept has lots of room for improvement). 

For more specific details on this please refer to the IOActive white paper Traffic Analysis on Google Maps with GMaps-Trafficker or send me a tweet at @santaragolabs.