RESEARCH | September 22, 2020

Uncovering Unencrypted Car Data in BMW Connected App

TL;DR: Modern mobile OSes encrypt data by default; nevertheless, the defense-in-depth paradigm dictates that developers must encrypt sensitive data regardless of the protections offered by the underlying OS. This is yet another case study of data stored unencrypted and, most importantly, a reminder to developers not to leave their apps' data unprotected. In this case study, exfiltrating the data requires physical access to an unlocked phone, a trusted computer, or an unencrypted iPhone backup; the data does not include authentication credentials and cannot be used to control or track the vehicle in any way.

Introduction

“While modern mobile operating systems allow encrypting mobile devices, which users can use to protect themselves, it is ultimately the developer’s responsibility to make sure that their software is thoroughly safeguarded. To this end, developers should provide reliable mobile app data encryption that leaves no user data without protection.” — Dmitriy Smetana.

Physical theft is not the only attack vector that threatens the data stored on a mobile phone. Imagine, for instance, a shared computer at home or in the office where a phone has been authenticated and trusted. While the phone is connected, a malicious actor with access to that computer could extract its apps' data. The likelihood is low in the real world, though.

One day during the pandemic, I wondered whether my car's mobile app was encrypting its data, so I decided to analyze it.

Scope and Tools Used

The following navigation-equipped cars were used for this analysis:

  • X5 xDrive40i (2020)
  • 120i (2020)
  • X1 sDrive20iA X Line (2018)

BMW Connected is a mobile app compatible with 2014 and newer navigation-equipped vehicles (BMW ConnectedDrive). It allows the user to monitor and remotely control some features such as:

  • Lock/Unlock
  • Location tracking
  • Lights
  • Horn
  • Climate control
  • Destinations (navigation system)
  • Doors and windows status
  • Fuel level
  • Mileage

BMW Connected App Demonstration (YouTube)

The latest version of the app available on the Apple App Store was:

  • BMW Connected for iOS v10.6.2.1807

I installed the app on two iPhones, neither of which were jailbroken:

  • iPhone XS Max (iOS 13.4.1)
  • iPhone 8 Plus (iOS 13.3.1)

Then, I found unencrypted data using the following basic tools:

  • Windows 10 Home
  • Ubuntu Linux 19.10
    • strings
    • plistutil
    • base64
    • jq

Analysis and Results

You’ll see how easy it was to extract and decode the stored data.

Data Stored Unencrypted

The cars were added and authenticated within the app:

For both installations, the same behavior was observed: data was stored base64-encoded but unencrypted in .plist files. I used the plistutil command to decode those files, then piped the output through other command-line tools to strip empty lines and spaces.

Once I had the base64 strings, I decoded them with the base64 tool and finally formatted and colorized the JSON output with jq:

  • Favorite locations (FavoritesCollection.plist)

  • Directions sent to the vehicle (TripStorage.plist)

  • Status of doors and windows (VehicleHub.Suite.plist)

  • Mileage and remaining fuel (VehicleHub.Suite.plist)

  • History of remote actions (VehicleHub.Suite.plist)

  • Car color and picture (VehicleHub.Suite.plist)

  • Next maintenance due dates (VehicleHub.Suite.plist)

  • VIN and model

  • Owner’s first and last name and last logged date (group.de.bmw.connected.plist)

Weak Password and PIN Policies

On registration, I noticed that the password policy only required eight characters drawn from at least two of the following three character sets:

  • Letters (case-insensitive: abc = ABC)
  • Numbers
  • Special characters

Such a policy might seem good enough; however, making the password case-insensitive significantly decreases its complexity (see the sketch after this list). During testing, it was possible to log in with any of the following passwords:

  • Qwerty12
  • QWERTY12
  • QwErTy12
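To put a rough number on that reduction, here is a standalone C sketch (not related to the app's code; special characters are ignored for simplicity) comparing the keyspace of an eight-character password drawn from case-sensitive letters and digits against its case-insensitive equivalent:

/* Back-of-the-envelope keyspace comparison for an 8-character password:
 * case-sensitive letters + digits (62 symbols) vs. the case-insensitive
 * equivalent (26 + 10 = 36 symbols). Build with: gcc keyspace.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int length = 8;
    const double case_sensitive   = pow(62.0, length); /* a-z, A-Z, 0-9 */
    const double case_insensitive = pow(36.0, length); /* a-z, 0-9      */

    printf("case-sensitive   : %.3e combinations\n", case_sensitive);
    printf("case-insensitive : %.3e combinations\n", case_insensitive);
    printf("reduction factor : %.0fx\n", case_sensitive / case_insensitive);
    return 0;
}

For this policy, treating letters case-insensitively shrinks the search space by roughly a factor of 77 before special characters are even considered.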

Also, the app permits users to select an easy-to-guess PIN, which is used to unlock the car or access the app if the smartphone does not implement Face ID, Touch ID, or a passcode. The existing PIN code policy allows users to choose weak combinations, such as consecutive digits (e.g. "1234") or a single repeated digit (e.g. "0000").

However, the most commonly used authentication method is either Face ID or Touch ID.

Recommendations

The takeaways are very simple:

  • For end-users:
    • Only authenticate your phone on trusted computers.
    • Avoid connecting your phone to, and trusting, shared workstations.
    • Use complex passwords and PIN codes.
  • For developers:
    • Do not put your complete trust in the operating system.
    • Encrypt sensitive data on your own (a minimal sketch follows below).
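As one illustration of that last point, the sketch below encrypts a small JSON blob with AES-256 before it would be written to a cache or .plist file. It uses CommonCrypto, Apple's C crypto API available to iOS apps, but it is only a sketch and not BMW's implementation: the key and IV here are placeholders, whereas a real app would keep the key in the Keychain (or derive it via the Secure Enclave) and use a random IV per encryption.

/* Sketch: encrypt a blob before persisting it (AES-256-CBC, PKCS#7).
 * Placeholder key/IV only; a real app would load the key from the
 * Keychain / Secure Enclave and generate a random IV per message. */
#include <CommonCrypto/CommonCryptor.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *plaintext = "{\"vin\":\"...\",\"mileage\":12345}"; /* sample cache entry */
    uint8_t key[kCCKeySizeAES256]  = {0};   /* placeholder: load from Keychain */
    uint8_t iv[kCCBlockSizeAES128] = {0};   /* placeholder: use a random IV    */
    uint8_t out[256];
    size_t  written = 0;

    CCCryptorStatus st = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                 kCCOptionPKCS7Padding,
                                 key, sizeof key, iv,
                                 plaintext, strlen(plaintext),
                                 out, sizeof out, &written);

    if (st == kCCSuccess)
        printf("encrypted %zu bytes; persist these instead of the plaintext\n", written);
    else
        fprintf(stderr, "CCCrypt failed: %d\n", st);
    return st == kCCSuccess ? 0 : 1;
}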

Responsible Disclosure

One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure.

The following is the communication timeline with BMW Group:

  • May 2, 2020: IOActive’s assessment of the BMW Connected App started
  • May 15, 2020: IOActive sent a vulnerability report to BMW Group following its guidelines
  • May 20, 2020: BMW Group replied. They internally sent the report to the responsible departments
  • May 26, 2020: IOActive asked BMW Group for any updates or questions and let them know about our intention to write a blog post
  • May 28, 2020: BMW Group asked IOActive to wait for a new app release before publishing a blog post and requested details to include the researcher in BMW Group's Hall of Fame site
  • Aug 07, 2020: BMW Group and IOActive had a call to discuss the technical information that would be published in a blog post
  • Aug 13, 2020: BMW Group sent an email explaining how they would fix the issues
  • Aug 19, 2020: IOActive sent a draft of the blog post to BMW Group for review
  • Aug 24, 2020: BMW Group suggested some modifications
  • Sep 08, 2020: IOActive sent a second version of the blog post draft to BMW Group for review
  • Sep 11, 2020: BMW Group agreed with the final content
  • Sep 22, 2020: IOActive blog published.

The Fix

BMW Group’s security statement:

“Thanks to the notification of Alejandro Hernandez at IOActive via our responsible disclosure channel, we were able to change the way the app’s data cache is handled. Our app development team added an encryption step that makes use of the secure enclave of Apple devices, at which we generate a key that is used for storing the favorites and vehicle metadata that Alejandro was able to extract. We appreciate Alejandro for sharing his research with us and would like to thank him for reaching out to us.”

Acknowledgments

I would like to give special thanks to my friend Juan José Romo, who lent me two brand new cars for testing.

Also, I’d like to thank Richard Wimmer and Hendrik Schweppe of BMW Group for their quick response and cooperation in fixing the issues presented here.

Thanks for reading,
Alejandro @nitr0usmx

RESEARCH | January 6, 2016

Drupal – Insecure Update Process

Just a few days after installing Drupal v7.39, I noticed there was a security update available: Drupal v7.41. This new version fixes an open redirect in the Drupal core. Despite my Drupal update process checking for updates, my local instance reported that everything was up to date:

 

Issue #1: Whenever the Drupal update process fails, Drupal states that everything is up to date instead of giving a warning.

 

The issue was due to some sort of network problem. Apparently, in Drupal 6 there was a warning message in place, but this is not present in Drupal 7 or Drupal 8.
 
Nevertheless, if the scheduled update process fails, it is always possible to check for the latest updates by using the link that says "Check Manually". This link is valuable for an attacker because it can be used to perform a cross-site request forgery (CSRF) attack to force the admin to check for updates whenever the attacker chooses:
 
  • http://yoursite/?q=admin/reports/updates/check
 
Since there is a CSRF vulnerability in the "Check manually" functionality (Drupal 8 is the only version not affected), this could also be used as a server-side request forgery (SSRF) attack against drupal.org. Administrators may unwittingly force their servers to request unlimited amounts of information from updates.drupal.org, consuming network bandwidth.
 
Issue #2: An attacker may force an admin to check for updates due to a CSRF vulnerability in the update functionality
 
An attacker may care about updates because they are sent unencrypted, as the following Wireshark screenshot shows: 

 

 

To exploit unencrypted updates, an attacker must be suitably positioned to eavesdrop on the victim’s network traffic. This scenario typically occurs when a client communicates with the server over an insecure connection, such as public WiFi, or a corporate or home network that is shared with a compromised computer.  
 
The update process downloads a plaintext XML file from http://updates.drupal.org/release-history/drupal/7.x and checks whether the installed version is the latest one. This XML document can point to a backdoored version of Drupal.

 

  1. The current security update (deliberately named "7.41 Backdoored")
  2. A notice that the security update is required, along with a download link button
  3. The URL of the malicious update that will be downloaded
 
However, updating Drupal is a manual process. Another possible attack vector is to offer a backdoored version of any of the modules installed on Drupal. In the following example, a fake “Additional Help Hint” update is offered to the user:

 

 

Offering fake updates is a simple process. Once requests are being intercepted, a fake update response can be constructed for any module. When the administrator clicks the "Download these updates" button, the update process starts.
 
This is how it looks from an attacker’s perspective before and after upgrading the “Additional Help Hint” module. First it checks for the latest version, and then it downloads the latest (malicious) version available. 


 

As part of the update, I included a reverse shell from pentestmonkey (http://pentestmonkey.net/tools/web-shells/php-reverse-shell) that will connect back to me, let me interact with the Linux shell, and finally, allow me to retrieve the Drupal database password:


Issue #3: Drupal security updates are transferred unencrypted and without authenticity checks, which could lead to code execution and database access.
 
You may have heard about such things in the past. Kurt Seifried from Linux Magazine wrote an article entitled "Insecure updates are the rule, not the exception" that mentioned that Drupal (among others) was not checking the authenticity of the software being downloaded. Moreover, Drupal itself has had an open discussion about this issue since April 2012 (https://www.drupal.org/node/1538118). This discussion was reopened after I reported the previous vulnerabilities to the Drupal Security Team on the 11th of November 2015.
 
For now, you probably want to download updates for Drupal and its add-ons manually. At the time of publishing, there are no fixes available.
 

TL;DR – It is possible to achieve code execution and obtain the database credentials when performing a man-in-the-middle attack against the Drupal update process. All Drupal versions are affected.

RESEARCH | November 25, 2015

Privilege Escalation Vulnerabilities Found in Lenovo System Update

Lenovo released a new version of the Lenovo System Update advisory (https://support.lenovo.com/ar/es/product_security/lsu_privilege) about two new privilege escalation vulnerabilities I had reported to Lenovo a couple of weeks ago (CVE-2015-8109, CVE-2015-8110). IOActive and Lenovo have issued advisories on these issues.
 

Before digging into the details, let's go over a high-level overview of how Lenovo System Update launches the GUI application with Administrator privileges.
 
Here is a discussion of the steps depicted above:


1 – The user starts System Update by running the tvsu.exe binary, which runs TvsuCommandLauncher.exe with a specific argument. Previously, Lenovo fixed vulnerabilities that IOActive discovered where an attacker could impersonate a legitimate caller and pass the command to be executed to the SUService service through named pipes to gain a privilege escalation. In the newer version, the argument is a number within the range 1-6 that defines a set of tasks within the DLL TvsuServiceCommon.dll.

 

2 – TvsuCommandLauncher.exe then, as usual, contacts the SUService service, which runs with System privileges, to process the required query with higher privileges.

 

3 – The SUService service then launches the UACSdk.exe binary with System privileges to prepare to execute the binary and run the GUI interface with Administrator privileges.

 

4 – UACSdk.exe checks if the user is a normal unprivileged user or a Vista Administrator with the ability to elevate privileges.

 

5 – Depending on user privileges:

 

    • For a Vista Admin user, the user’s privileges are elevated.
    • For an unprivileged user, UACSdk.exe creates a temporary Administrator account with a random password, which is deleted once the application is closed.

The username for the temporary Administrator account follows the pattern tvsu_tmp_xxxxxXXXXX, where each lowercase x is a randomly generated lowercase letter and each uppercase X is a randomly generated uppercase letter. A random 19-byte password is also generated.


Here is a sample of a randomly created user:    



6 – Through the tvsukernel.exe binary, the main Lenovo System Update GUI application is then run with Administrator privileges.




BUG 1: Lenovo System Update Help Topics Privilege Escalation
The first vulnerability is within the Help system and has two entry points by which a user can open an online help topic that starts an instance of Internet Explorer.

1 – The link in the main application interface 

 
 

2 – By clicking the Help icon at top right and then clicking Settings
 

 

 

 
Since the main application Tvsukernel.exe is running as Administrator, the web browser instance that starts to open a Help URL inherits the parent Administrator privileges.
From there, an unprivileged attacker has many ways to exploit the web browser instance running under Administrator privileges to elevate his or her own privileges to Administrator or SYSTEM.
BUG 2: Lenovo System Weak Cryptography Function Privilege Escalation
The second vulnerability is more technical and directly exploitable: it lies in how the temporary Administrator account is created in the unprivileged-user case of step 5 in the overview.
The interesting function for setting the temporary account is sub_402190 and contains the following important snippets of code:
 
 
The function sub_401810 accepts three arguments and is responsible for generating a random string whose length is given by the third argument.
 
Since sub_401810 generates the pattern using rand(), the seed is initialized from the addition of the current time and rand() values, as follows:
 
 
 
Once the seed is defined, the function generates the random value using a loop with RAND and division/multiplication with specific values.
 
Rather than posting the full code, I’ll note that a portion of those loops looks like the following:
 
 
 
The first call to this function generates the 10-letter suffix of the Administrator username that will be created as "tvsu_tmp_xxxxxXXXXX".
 
Since it is based on rand, the algorithm is actually predictable. It is possible for an attacker to regenerate the same username based on the time the account was created.
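To illustrate why a clock-seeded rand() is dangerous, here is a small self-contained sketch. It is not Lenovo's code (the real seed also mixes in extra rand() values and the generator uses different arithmetic); it simply generates a tvsu_tmp_-style 10-letter suffix from a time-based seed and then shows how an attacker who knows the approximate creation time can recover it by brute-forcing a small window of candidate seeds:

/* Illustrative only: a time-seeded rand() suffix generator and the
 * brute-force recovery an attacker could perform against it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static void make_suffix(unsigned seed, char out[11])
{
    srand(seed);
    for (int i = 0; i < 5; i++)           /* five random lowercase letters */
        out[i] = 'a' + rand() % 26;
    for (int i = 5; i < 10; i++)          /* five random uppercase letters */
        out[i] = 'A' + rand() % 26;
    out[10] = '\0';
}

int main(void)
{
    char observed[11], guess[11];
    time_t creation = time(NULL);         /* when the account was created */

    make_suffix((unsigned)creation, observed);     /* victim side */

    /* Attacker side: try every seed in a window around the creation time. */
    for (int delta = -60; delta <= 60; delta++) {
        make_suffix((unsigned)(creation + delta), guess);
        if (strcmp(guess, observed) == 0) {
            printf("recovered username: tvsu_tmp_%s (seed offset %+d)\n",
                   guess, delta);
            return 0;
        }
    }
    puts("suffix not found in the tested window");
    return 1;
}

The same reasoning applies to the fallback password generation discussed below: any output derived from rand() with a guessable seed can be regenerated offline.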
 
To generate the password (which is more critical), Lenovo has a more secure method: Microsoft Crypto API (Method #1) within the function sub_401BE0. We will not dig into this method, because the vulnerability IOActive discovered is not within this function. Instead, let’s look at how Method #2 generates a password, if Method #1 fails.
 
Let’s return to the snippets of code related to password generation:
 
 
 
We can clearly see that if function sub_401BE0 fails, the execution flow falls back to the custom rand()-based algorithm (defined earlier in function sub_401810) to generate a predictable password for the temporary Administrator account. In other words, an attacker could predict the password created by Method #2.
 

This means an attacker could under certain circumstances predict both the username and password and use them to elevate his or her privileges to Administrator on the machine.
INSIGHTS | May 7, 2013

Bypassing Geo-locked BYOD Applications

In the wake of increasingly lenient BYOD policies within large corporations, there’s been a growing emphasis upon restricting access to business applications (and data) to specific geographic locations. Over the last 18 months more than a dozen start-ups in North America alone have sprung up seeking to offer novel security solutions in this space – essentially looking to provide mechanisms for locking application usage to a specific location or distance from an office, and ensuring that key data or functionality becomes inaccessible outside these prescribed zones.
These “Geo-locking” technologies are in hot demand as organizations try desperately to regain control of their networks, applications and data.

Over the past 9 months I’ve been asked by clients and potential investors alike for advice on the various technologies and the companies behind them. There’s quite a spectrum of available options in the geo-locking space; each start-up has a different take on the situation and has proposed (or developed) a unique way in tackling the problem. Unfortunately, in the race to secure a position in this evolving security market, much of the literature being thrust at potential customers is heavy in FUD and light in technical detail.
It may be because marketing departments are riding roughshod over the technical folks in order to establish these new companies, but in several of the solutions being proposed I've had concerns over the scope of the security element being offered. It's not because the approaches being marketed aren't useful or won't work, it's more because they've defined the problem they're aiming to solve so narrowly that they've developed what I could only describe as tunnel vision toward the spectrum of threats organizations are likely to face in the BYOD realm.
In the meantime I wanted to offer this quick primer on the evolving security space that has become BYOD geo-locking.
Geo-locking BYOD
The general premise behind the current generation of geo-locking technologies is that each BYOD gadget will connect wirelessly to the corporate network and interface with critical applications. When the device is moved away from the location, those applications and data should no longer be accessible.
There are a number of approaches, but the most popular strategies can be categorized as follows:
  1. Thick-client – A full-featured application is downloaded to the BYOD gadget and typically monitors physical location elements using telemetry from GPS or the wireless carrier directly. If the location isn’t “approved” the application prevents access to any data stored locally on the device.
  2. Thin-client – a small application or driver is installed on the BYOD gadget to interface with the operating system and retrieve location information (e.g. GPS position, wireless carrier information, IP address, etc.). This application then incorporates this location information in to requests to access applications or data stored on remote systems – either through another on-device application or over a Web interface.
  3. Share-my-location – Many mobile operating systems include opt-in functionality to “share my location” via their built-in web browser. Embedded within the page request is a short geo-location description.
  4. Signal proximity – The downloaded application or driver will only interface with remote systems and data if the wireless channel being connected to by the device is approved. This is typically tied to WiFi and nanocell routers with unique identifiers and has a maximum range limited to the power of the transmitter (e.g. 50-100 meters).

The critical problem with the first three geo-locking techniques can be summed up simply as “any device can be made to lie about its location”.

The majority of start-ups have simply assumed that the geo-location information coming from the device is correct – and have not included any means of securing the integrity of that device’s location information. A few have even tried to tell customers (and investors) that it’s impossible for a device to lie about its GPS location or a location calculated off cell-tower triangulation. I suppose it should not be a surprise though – we’ve spent two decades trying to educate Web application developers to not trust client-side input validation and yet they still fall for web browser manipulations.
A quick search for “fake location” on the Apple and Android stores will reveal the prevalence and accessibility of GPS fakery. Any other data being reported from the gadget – IP address, network MAC address, cell-tower connectivity, etc. – can similarly be manipulated. In addition to manipulation of the BYOD gadget directly, alternative vectors that make use of private VPNs and local network jump points may be sufficient to bypass thin-client and “share-my-location” geo-locking application approaches.
That doesn't mean that these geo-locking technologies should be considered unicorn pelts, but it does mean that organizations seeking to deploy them need to invest some time in determining the category of threat (and opponent) they're prepared to combat.
If the worst-case scenario is a nurse losing a hospital iPad and an inept thief trying to access patient records from another part of the city, then many of the geo-locking approaches will work quite well. However, if the scenario is a tech-savvy reporter paying the nurse for access to the hospital iPad and prepared to install a few small applications that manipulate the geo-location information in order to remotely access celebrity patient records… well, then you'll need a different class of defense.
Given the rapid evolution of BYOD geo-locking applications and the number of new businesses offering security solutions in this space, my advice is two-fold – determine the worst case scenarios you’re trying to protect against, and thoroughly assess the technology prior to investment. Don’t be surprised if the marketing claims being made by many of these start-ups are a generation or two ahead of what the product is capable of performing today.
Having already assessed or reviewed the approaches of several start-ups in this particular BYOD security realm, I believe some degree of skepticism and caution is warranted.
— Gunter Ollmann, CTO IOActive
INSIGHTS | December 18, 2012

Striking Back GDB and IDA debuggers through malformed ELF executables

Day by day the endless fight between the bad guys and good guys mostly depends on how fast a countermeasure or anti-reversing protection can be broken. These anti-reversing mechanisms can be used by attackers in a number of ways: to create malware, to be used in precompiled zero-day exploits in the black market, to hinder forensic analysis, and so on. But they can also be used by software companies or developers that want to protect the internal logic of their software products (copyright).

The other day I was thinking: why run and hide (implementing anti-reversing techniques such as those mentioned above) instead of standing up straight and giving the debugger a punch in the face (crashing the debugging application)? In the next paragraphs I'll briefly explain how I could implement this anti-reversing technique on ELF binaries using a counterattack approach.

ELF executables are the equivalent of .exe files on Windows, but for UNIX-based systems (such as Linux and *BSD). As an executable file format, there are many documented reversing [1] and anti-reversing techniques for ELF binaries, such as the use of the ptrace() syscall for dynamic anti-debugging [2]:
 
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>

/* Constructor runs before main(); if a debugger is already attached,
 * PTRACE_TRACEME fails and the process exits. */
void anti_debug(void) __attribute__ ((constructor));

void anti_debug(void)
{
     if(ptrace(PTRACE_TRACEME, 0, 0, 0) == -1){
           printf("Debugging not allowed!\n");
           exit(0xdead);
     }
}
 
Trying to debug an ELF executable that contains the above code with the GNU debugger (the most famous and commonly used debugger on UNIX-based systems) results in:

However, as can be seen, even with the anti-debugging technique at runtime, the ELF file was completely loaded and parsed by the debugger.

ELF files contain different data structures, such as section headers, program headers, and debugging information, so the Linux ELF loader and other third-party applications know how to build their layout in memory and execute/analyze them. However, these third-party applications, such as debuggers, sometimes *TRUST* the metadata of the supplied ELF file, and here is where the fun begins.
I found one bug in GNU gdb 7.5.1 and another in IDA Pro 6.3 (the latest versions when this paper was written) using Frixyon, my ELF file format fuzzer, which is still in development. To explain these little bugs that crash the debuggers, we'll use the following code (evil.c):

 

#include <stdio.h>

int main()
{
        printf("It could be a malicious program }:)\n");

        return 0;
}

Crashing GNU gdb 7.5.1
Compiling this with gcc using the -ggdb flag, the resulting ELF file will have section headers with debugging-related information:
 
 

 

After a bit of analysis, I found a bug in the DWARF [3] processor (DWARF is a debugging file format used by many compilers and debuggers to support source-level debugging) that fails when parsing the data within the .debug_line section. This prevents gdb from loading an ELF executable for debugging due to a NULL pointer dereference. Evidently, it could be used to patch malicious executables (such as rootkits, zero-day exploits, and malware) so that they cannot be analyzed with gdb.

In gdb-7.5.1/gdb/dwarf2read.c is the following data structure:

 

 
struct line_header
{
  unsigned int num_include_dirs, include_dirs_size;
  char **include_dirs;
  struct file_entry
  {
    char *name;
    unsigned int dir_index;
    unsigned int mod_time;
    unsigned int length;
  } *file_names;
};
 
The problem occurs when trying to open a malformed ELF that contains file_entry.dir_index > 0 while char **include_dirs points to NULL. To identify the bug, I did something I call inception debugging: debugging gdb with gdb:
 
 
The root cause of the problem is that there is no validation to verify whether include_dirs is different from NULL before dereferencing it.
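Here is a minimal, self-contained sketch of the issue (the field names follow the struct quoted above, but this is illustrative code, not gdb's): with dir_index > 0 and include_dirs == NULL, an unguarded lh->include_dirs[fe->dir_index - 1] dereferences a NULL pointer, whereas a guarded lookup rejects the malformed entry instead of crashing:

/* Illustrative sketch of the missing check; not gdb's actual code. */
#include <stdio.h>

struct line_header {
    unsigned int num_include_dirs;
    char **include_dirs;
};

struct file_entry {
    const char *name;
    unsigned int dir_index;
};

static const char *lookup_dir(const struct line_header *lh,
                              const struct file_entry *fe)
{
    if (fe->dir_index == 0)
        return NULL;                       /* file lives in the compilation dir */

    /* Guarded version: validate before dereferencing. */
    if (lh->include_dirs == NULL || fe->dir_index > lh->num_include_dirs) {
        fprintf(stderr, "malformed .debug_line: dir_index %u out of range\n",
                fe->dir_index);
        return NULL;
    }
    return lh->include_dirs[fe->dir_index - 1];
}

int main(void)
{
    /* Malformed input as described: dir_index > 0, include_dirs == NULL. */
    struct line_header lh = { 0, NULL };
    struct file_entry  fe = { "evil.c", 1 };

    /* An unguarded lh.include_dirs[fe.dir_index - 1] would crash right here. */
    const char *dir = lookup_dir(&lh, &fe);
    printf("directory: %s\n", dir ? dir : "(rejected / none)");
    return 0;
}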
To simplify this process, I’ve developed a tool to patch the ELF executable given as an argument, gdb_751_elf_shield.c:
 
 
After patching a binary with this code, it will still be completely executable, since the operating system's ELF loader only uses the Program Headers (not the Section Headers), but it can no longer be loaded by gdb, as shown below:

 

 

Timeline:
12/11/2012      The bug was found on GNU gdb 7.5.
19/11/2012      The bug was reported through the official GNU gdb bug tracker:
http://sourceware.org/bugzilla/show_bug.cgi?id=14855
10/12/2012      Retested with the latest release (7.5.1), which still has the bug.
12/12/2012      The status on the tracker is still “NEW”.
 

Crashing IDA Pro 6.3
The IDA Pro ELF loader warns you when it finds invalid or malformed headers or fields, and asks if you want to continue with the disassembly process. However, there's a specific combination of fields that makes IDA Pro enter an unrecoverable state and close itself completely, which shouldn't happen.
The fields in question are found in the ELF header: e_shstrndx and e_shnum, where the first is an index into the Section Header Table, which has e_shnum elements. IDA will fail if e_shstrndx > e_shnum because neither value is validated before being used.
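For reference, this is the kind of sanity check a loader can apply before trusting e_shstrndx; it is a standalone sketch using the Linux <elf.h> definitions, not IDA's code, and it ignores the SHN_XINDEX extended-numbering case for brevity:

/* Validate e_shstrndx against e_shnum before using it as an index into
 * the section header table. Sketch only; not IDA's code. */
#include <elf.h>
#include <stdio.h>

static int shstrndx_is_sane(const Elf32_Ehdr *eh)
{
    if (eh->e_shstrndx == SHN_UNDEF)
        return 1;                          /* no section name string table */
    if (eh->e_shstrndx >= eh->e_shnum) {
        fprintf(stderr, "malformed ELF: e_shstrndx %u >= e_shnum %u\n",
                eh->e_shstrndx, eh->e_shnum);
        return 0;
    }
    return 1;
}

int main(void)
{
    Elf32_Ehdr eh = {0};
    eh.e_shnum    = 30;
    eh.e_shstrndx = 0x1337;                /* as a patched binary might have */

    printf("header %s\n", shstrndx_is_sane(&eh) ? "looks sane" : "rejected");
    return 0;
}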
The following screenshot illustrates the unrecoverable error:
 
 

I have also programmed a simple tool (ida_63_elf_shield.c) to patch the ELF executables to make them impossible for IDA Pro to load. This code only generates two random numbers and assigns the bigger one to e_shstrndx:

      srand(time(NULL)); // seed for rand()
 
      new_shnum    = (Elf32_Half) rand() % 0x1337;
      new_shstrndx = (Elf32_Half) 0;
 
      while(new_shstrndx < new_shnum)
            new_shstrndx = (Elf32_Half) rand() % 0xDEAD;
 
      header->e_shnum    = new_shnum;
      header->e_shstrndx = new_shstrndx;
 
After patching a file, IDA will open a pop-up window saying that an error has occurred and after clicking the OK button, IDA will close:

 

 
Timeline:
21/11/2012      The bug was found on IDA Demo 6.3.
22/11/2012      The bug was tested on IDA Pro 6.3.120531 (32-bit).
22/11/2012      The bug was reported through the official Hex-Rays contact emails.
23/11/2012     Hex-Rays replied and agreed that the bug leads to an unrecoverable state and will be fixed in the next release.
A real-life scenario
Finally, to illustrate that neither patching tool corrupts the integrity of the ELF files at execution, I will insert parasite code into an ELF executable using Silvio Cesare's algorithm [4], patching the entry point to a "fork() + portbind(31337) + auth(Password: n33tr0u5)" payload [5], which ends with a jump to the original entry point:
 

As can be seen, the original binary (hostname) works perfectly after executing the parasite code (backdoor on port 31337). Now, let’s see what happens after patching it:

 

It worked perfectly, and evidently it cannot be loaded by gdb!
In conclusion, debuggers have parsing tasks to perform and are software too; therefore, they are also prone to bugs and security flaws. Debugging tools shouldn't blindly trust the data they are given, in this case the metadata of an ELF executable file. Always perform bounds checking before accessing memory areas that might otherwise crash the application.
Thanks for reading.
Alejandro.
Tools
– gdb (GNU debugger) <= 7.5.1 (crash due to a NULL pointer dereference): gdb_751_elf_shield.c ELF anti-debugging/reversing patcher
– IDA Pro 6.3 (crash due to an internal error): ida_63_elf_shield.c ELF anti-debugging/reversing patcher
References
[1] Reverse Engineering under Linux by Diego Bauche Madero
[2] Abusing .CTORS and .DTORS for fun ‘n profit by Itzik Kotler
[3] DWARF
[4] UNIX Viruses by Silvio Cesare
[5] ELF_data_infector.c by Alejandro Hernández
 
INSIGHTS | August 17, 2012

One Mail to Rule Them All

This small research project was conducted over a four-week period a while back, so current methods may differ as password restoration methods change.
While writing this blog post, the Gizmodo writer Mat Honan’s account was hacked with some clever social engineering that ultimately brought numerous small bits and pieces of information together into one big chunk of usable data. The downfall in all this is that different services use different alternative methods to reset passwords: some have you enter the last four digits of your credit card and some would like to know your mother’s maiden name; however, the attacks described here differ a bit, but the implications are just as devastating.
For everything we do online today we need an identity, a way to be contacted. You register on some forum because you need an answer, even if it's just once and just to read that answer. Afterwards, you have an account there, forcing you to trust the service provider. You register on Facebook, LinkedIn, and Twitter; some of you use online banking services, dating sites, and online shopping. There's a saying that all roads lead to Rome. Well, the big knot in this thread is—you guessed it—your email address.

 

Normal working people might have one or two email addresses: a work email that belongs to the company and a private one that belongs to the user. Perhaps the private one is one of the popular web-based email services like Gmail or Hotmail. To break it down a bit, all the sensitive info in your email should be stored in a secure vault, at home, or in a bank, because it's important information that, in an attacker's hands, could turn your life into a nightmare.

 

I live in an EU country where our social security numbers aren't considered information worth protecting and can be obtained by anyone. Yes, I know—it's a huge risk, but in some cases you need some form of identification to pick up a package that has been sent to you. Still, I consider this a huge risk.

 

Physically, I use a paper shredder once I've filed a document and put it in my safe, and I destroy the remnants of important papers I have read. Unfortunately, storing personal data in your email is easy, convenient, and honestly, how often do you DELETE emails anyway? And if you do, are you deleting them from the trash right away? In addition, there's so much disk space that you don't have to care anymore. Lovely.

 

So, you set up your email account at the free hosting service and you have to select a password. Everybody nags nowadays about having a secure and strong password. Let's use 03L1ttl3&bunn13s00!—that's strong, good, and quite easy to remember. Now for the security question. Where was your mother born? What's your pet's name? What's your grandparent's profession? Most people pick one and fill it out.

 

Well, in my profession security is defined by the weakest link; in this case, disregarding human error and focusing on the email alone, this IS the weakest link. How easy can this be? I wanted to dive into how my friends and family have set theirs up, and how easy it is to find this information, either by googling it or through a social engineering attack. This is 2012, people should be smarter…right? So, with mutual agreement obtained between myself, friends, and family, the experiment began.

 

A lot of my friends and former colleagues have had their identities stolen over the past two years, and there’s a huge increase. This has affected some of them to the extent that they can’t take out loans without going through a huge hassle. And it’s not often a case that gets to court, even with a huge amount of evidence including video recordings of the attackers claiming to be them, picking up packages at the local postal offices. 
Why? There's just too much area to cover, and not enough manpower and competence to handle it. The victims need to file a complaint, then use the case number and a copy of the complaint and fax them around to all the places where stuff was ordered in their name. That means blacklisting themselves in those systems, so if they ever want to shop there again, you can imagine the hassle of un-blacklisting yourself and then trying to prove that you are really you this time.

 

A good friend of mine was hiking in Thailand and someone got access to his email, which included all his sensitive data: travel bookings, bus passes, flights, hotel reservations. The attacker even sent a couple of emails and replies, just to be funny; he then canceled the hotel reservations, car transportations, airplane tickets, and some of the hiking guides. A couple days later he was supposed to go on a small jungle hike—just him, his camera, and a guide—the guide never showed up, nor did his transportation to the next location. 
Thanks a lot. Sure, it could have been worse, but imagine being stranded in a jungle somewhere in Thailand with no Internet. He also had to make a couple of very expensive phone calls, ultimately abort his photography travel vacation, and head on home.

 

One of my best friends uses Gmail, like many others. While trying a password restore on that one, I found an old Hotmail address, too. Why? When I asked him about it afterwards, he said he had had his Hotmail account for about eight years, so it's pretty nested in with everything, and his thought was, why remove it? It could be good for going back and finding old funny stuff, and things you might forget. He's not keen on security, and he doesn't remember whether there is a secret question set. So I need that email.
Let's use his Facebook profile as a public attacker would—it came out empty, darn; he must be hiding his email. However, his friends are displayed. Let's make a fake profile based on one of his older friends—the target I chose was a girl he had gone to school with. How do I know that? She was publicly sharing a photo of them in high school. Awesome. Fake profile ready, almost identical to the girl, same photo as hers, et cetera. And Friend Request Sent.
A lot of email vendors and public boards such as Facebook have started to implement phone verification, which is a good thing. Right? So I decided to play a small side experiment with my locked mobile phone.
I choose a dating site that has this feature enabled then set up an account with mobile phone verification and an alternative email. I log out and click Forgot password? I enter my username or email, “IOACasanova2000,” click and two options pop up: mobile phone or alternative email. My phone is locked and lying on the table. I choose phone. Send. My phone vibrates and I take a look at the display:  From “Unnamed Datingsite” “ZUGA22”. That’s all I need to know to reset the password.
Imagine if someone steals or even borrows your phone at a party, or if you're sloppy enough to leave it on a table. I don't need your PIN—at least not for that dating site. What can you do to protect yourself from this? Edit the settings so the preview shows less of the message. My phone shows three lines of every SMS; that's way too much. However, on some brands you can disable SMS notifications from showing up on a locked screen.
From my screen I got an instant: Friend Request Accepted.
I quickly check my friend’s profile and see:
hismain@gmail.com
hishotmail@hotmail.com

 

I had a dog, and his name was BINGO! Hotmail dot com and password reset.
hishotmail@hotmail.com

 

The anti-bot algorithm… done…
And the Secret question is active…
“What’s your mother’s maiden name”…

 

I already know that, but since I need to be an attacker, I quickly check his Facebook, which shows his mother’s maiden name! I type that into Hotmail and click OK….

 

New Password: this1sAsecret!123$

 

I’m half way there….

 

Another old colleague of mine got his Hotmail hacked, and he was using the simple security question "Where was your mother born?" It was the same city she lives in today and that HE lives in: Malmö (a city in Sweden). The attack couldn't have come at a worse time, as he was on his way, in an airplane, bound for the Canary Islands with his wife. After a couple of hours at the airport, his flight, and a taxi ride, he got a "Sorry, you don't have a reservation here, sir" from the clerk. His hotel booking had been canceled.

 

Most major sites are protected with advanced security appliances, and several audits are done before a site is approved for deployment, which makes it more difficult for an attacker to find vulnerabilities using direct attacks aimed at the provided service. On the other hand, a lot of companies forget to train their support personnel, and that leaves small gaps, as does their way of handling password restoration. All these little breadcrumbs add up in the end, especially when combined with information collected from other vendors and their services—primarily because there's no global standard for password retrieval, nor for what should and should not be disclosed over the phone.

 

You can’t rely on the vendor to protect you—YOU need to take precautions yourself. Like destroying physical papers, emails, and vital information. Print out the information and then destroy the email. Make sure you empty the email’s trashcan feature (if your client offers one) before you log out. Then file the printout and put it in your home safety box. Make sure that you minimize your mistakes and the information available about you online. That way, if something should happen with your service provider, at least you know you did all you could. And you have minimized the details an attacker might get.

 

I think you heard this one before, but it bears repeating: Never use the same password twice!
I entered my friend’s email in Gmail’s Forgot Password and answered the anti-bot question.
There we go; I quickly check his Hotmail and find the Gmail password restore link. New password, done.

Now for the gold: his Facebook. Using the same method there, I gained access to his Facebook; he had Flickr as well, set to log in with Facebook. How convenient. I now own his whole online "life". There's an account at an online electronics store; nice, and it's been approved for credit.

An attacker could change the delivery address and buy stuff online. My friend would be knee-deep in trouble. There's also an iTunes account tied to his email, which would allow me to remotely erase his phones and tablets. Lucky for him, I'm not that type of attacker.

 

Why would anyone want my information? Maybe you're not that important, but consider that maybe I want access to your corporate network. I know where you are employed because of that LinkedIn group. Posting stuff in that group with a malicious link from your account is more trustworthy than just a stranger with a URL. Or maybe you're good friends with one of the admins—what if I contact him from your account and email, and ask him to reset your corporate password to something temporary?
I've tried the method on six of my friends and some of my close relatives (with permission, of course). It worked on five of them. The other one had forgotten what she had put as the security question, so the question wasn't answered truthfully. That saved her.
When I had a hard time finding information, I used voice-changing software on my computer, transforming my voice into that of a girl. Girls are seen as gentle and less likely to be running a hoax; that's how the mind works. Then I'd use Skype to dial them, telling them that I worked for the local church's historical department and that the records about their grandfather were a bit hard to read; we were currently adding all this into a computer so people could more easily do ancestry searches, and in this case what I wanted was her grandfather's profession. So I asked a couple of questions and then inserted the real question in the middle, like the magician I am. Mundus vult decipi is Latin for "the world wants to be deceived."
In this case, it was easy.
She wasn't suspicious at all. I thanked her for her trouble and told her I would send two movie tickets as a thank-you. And I did.
Another quick fix you can do today while cleaning your email? Use an email forwarder and make sure you can't log into the email provider with the forwarding address. For example, in my domain there's the address "spam@xxxxxxxxx.se" that is used for registering on forums and other random sites. This email doesn't have a login, which means you can't really log into the email provider with that address, and mail sent to it is forwarded to the real address. An attacker trying to reset that password would not succeed.
Create a new email address such as "imp.mail2@somehost.com" and use THIS one for important stuff, such as online shopping. Don't disclose it on any social sites or use it to email anyone; this is just a temporary container for your online shopping and password resets from the shopping sites. Remember what I said before? Print it, delete it. Make sure you add your mobile number as a password retrieval option to minimize the risk.
It's getting easier and easier to use just one source for authentication, and that means that if any link is weak, you jeopardize all your other accounts as well. You also might pose a risk to your employer.
