RESEARCH | April 20, 2017

Linksys Smart Wi-Fi Vulnerabilities

By Tao Sauvage

Last year I acquired a Linksys Smart Wi-Fi router, more specifically an EA3500 Series model. I chose Linksys (previously owned by Cisco and currently owned by Belkin) because of its popularity; I thought it would be interesting to look at a router heavily marketed outside of Asia, hoping for different results than in my previous research on the BHU Wi-Fi uRouter, which is only distributed in China.

Smart Wi-Fi is the latest family of Linksys routers and includes more than 20 different models that use the latest 802.11n and 802.11ac standards. Even though they can be remotely managed over the Internet using the free Linksys Smart Wi-Fi service, we focused our research on the router itself.

Figure 1: Linksys EA3500 Series UART connection

My friend @xarkes_, a security aficionado, and I decided to analyze the firmware (i.e., the software installed on the router) in order to assess the security of the device. The technical details of our research will be published soon after Linksys releases a patch that addresses the issues we discovered, ensuring that all users with affected devices have enough time to upgrade.

In the meantime, we are providing an overview of our results, as well as key metrics to evaluate the overall impact of the vulnerabilities identified.

Security Vulnerabilities

After reverse engineering the router firmware, we identified a total of 10 security vulnerabilities, ranging from low- to high-risk issues, six of which can be exploited remotely by unauthenticated attackers.

Two of the security issues we identified allow unauthenticated attackers to create a Denial-of-Service (DoS) condition on the router. By sending a few requests or abusing a specific API, an attacker can make the router unresponsive and even force it to reboot. The admin is then unable to access the web admin interface, and users are unable to connect, until the attacker stops the DoS attack.

Attackers can also bypass the authentication protecting the CGI scripts to collect technical and sensitive information about the router, such as the firmware version and Linux kernel version, the list of running processes, the list of connected USB devices, or the WPS pin for the Wi-Fi connection. Unauthenticated attackers can also harvest sensitive information, for instance using a set of APIs to list all connected devices and their respective operating systems, access the firewall configuration, read the FTP configuration settings, or extract the SMB server settings.

Finally, authenticated attackers can inject and execute commands on the operating system of the router with root privileges. One possible action for the attacker is to create backdoor accounts and gain persistent access to the router. Backdoor accounts would not be shown on the web admin interface and could not be removed using the Admin account. It should be noted that we did not find a way to bypass the authentication protecting the vulnerable API; this authentication is different from the one protecting the CGI scripts.

Linksys has provided a list of all affected models:

  • EA2700
  • EA2750
  • EA3500
  • EA4500v3
  • EA6100
  • EA6200
  • EA6300
  • EA6350v2
  • EA6350v3
  • EA6400
  • EA6500
  • EA6700
  • EA6900
  • EA7300
  • EA7400
  • EA7500
  • EA8300
  • EA8500
  • EA9200
  • EA9400
  • EA9500
  • WRT1200AC
  • WRT1900AC
  • WRT1900ACS
  • WRT3200ACM

Cooperative Disclosure
We disclosed the vulnerabilities and shared the technical details with Linksys in January 2017. Since then, we have been in constant communication with the vendor to validate the issues, evaluate the impact, and synchronize our respective disclosures.

We would like to emphasize that Linksys has been exemplary in handling the disclosure and we are happy to say they are taking security very seriously.

We acknowledge the challenge of reaching out to the end-users with security fixes when dealing with embedded devices. This is why Linksys is proactively publishing a security advisory to provide temporary solutions to prevent attackers from exploiting the security vulnerabilities we identified, until a new firmware version is available for all affected models.

Metrics and Impact 

As of now, we can already safely evaluate the impact of such vulnerabilities on Linksys Smart Wi-Fi routers. We used Shodan to identify vulnerable devices currently exposed on the Internet.

Figure 2: Distribution of vulnerable Linksys routers per country

We found about 7,000 vulnerable devices exposed at the time of the search. It should be noted that this number does not take into account vulnerable devices protected by strict firewall rules or running behind another network appliance, which could still be compromised by attackers who have access to the individual or company’s internal network.

The vast majority of the vulnerable devices (~69%) are located in the USA; the remainder are spread across the world, including Canada (~10%), Hong Kong (~1.8%), Chile (~1.5%), and the Netherlands (~1.4%), with Venezuela, Argentina, Russia, Sweden, Norway, China, India, the UK, Australia, and many other countries each representing less than 1%.
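For readers who want to reproduce this kind of breakdown, the tally itself is straightforward. Below is a minimal Python sketch that aggregates per-country percentages from Shodan-style search results; the result-dictionary shape (a `location.country_name` field per match) follows the official `shodan` library, and the sample data is purely illustrative, not our actual scan output.

```python
from collections import Counter

def country_distribution(matches):
    """Tally Shodan-style result dicts into per-country percentages."""
    counts = Counter(
        m.get("location", {}).get("country_name", "Unknown") for m in matches
    )
    total = sum(counts.values())
    # most_common() keeps the output ordered from most to least affected
    return {country: round(100 * n / total, 1)
            for country, n in counts.most_common()}

# Illustrative data only -- not our real scan results.
sample = ([{"location": {"country_name": "USA"}}] * 7 +
          [{"location": {"country_name": "Canada"}}] * 2 +
          [{"location": {"country_name": "Chile"}}])
print(country_distribution(sample))  # USA first, at 70.0%
```

In practice you would feed this the `matches` list returned by a `shodan` API search for the relevant device banners.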

We performed a mass-scan of the ~7,000 devices to identify the affected models. In addition, we tweaked our scan to determine how many devices would be vulnerable to the OS command injection that requires the attacker to be authenticated. We leveraged a router API to determine whether a router was using default credentials without having to actually authenticate.

We found that 11% of the ~7,000 exposed devices were using default credentials and could therefore be rooted by attackers.

Recommendations

We advise Linksys Smart Wi-Fi users to carefully read the security advisory published by Linksys to protect themselves until a new firmware version is available. We also recommend users change the default password of the Admin account to protect the web admin interface.

Timeline Overview

  • January 17, 2017: IOActive sends a vulnerability report to Linksys with findings 
  • January 17, 2017: Linksys acknowledges receipt of the information
  • January 19, 2017: IOActive communicates its obligation to publicly disclose the issue within three months of reporting the vulnerabilities to Linksys, for the security of users
  • January 23, 2017: Linksys acknowledges IOActive’s intent to publish and timeline; requests notification prior to public disclosure
  • March 22, 2017: Linksys proposes release of a customer advisory with recommendations for protection
  • March 23, 2017: IOActive agrees to Linksys proposal
  • March 24, 2017: Linksys confirms the list of vulnerable routers
  • April 20, 2017: Linksys releases an advisory with recommendations and IOActive publishes findings in a public blog post
RESEARCH | March 1, 2017

Hacking Robots Before Skynet

Robots are going mainstream in both private and public sectors – on military missions, performing surgery, building skyscrapers, assisting customers at stores, as healthcare attendants, as business assistants, and interacting closely with our families in a myriad of ways. Robots are already showing up in many of these roles today, and in the coming years they will become an ever more prominent part of our home and business lives. But similar to other new technologies, recent IOActive research has found robotic technologies to be highly insecure in a variety of ways that could pose serious threats to the people and organizations they operate in and around.
 
This blog post is intended to provide a brief overview of the full paper we’ve published based on this research, in which we discovered critical cybersecurity issues in several robots from multiple vendors. The goal is to make robots more secure and prevent vulnerabilities from being used maliciously by attackers to cause serious harm to businesses, consumers, and their surroundings. The paper contains more information about the research, findings, and cites many sources used in compiling the information presented in the paper and this post.
 
Robot Adoption and Cybersecurity
Robots are already showing up in thousands of homes and businesses. As many of these “smart” machines are self-propelled, it is important that they’re secure, well protected, and not easy to hack. If not, instead of helpful resources they could quickly become dangerous tools capable of wreaking havoc and causing substantive harm to their surroundings and the humans they’re designed to serve.
 
We’re already experiencing some of the consequences of substantial cybersecurity problems with Internet of Things (IoT) devices that are impacting the Internet, companies and commerce, and individual consumers alike. Cybersecurity problems in robots could have a much greater impact. When you think of robots as computers with arms, legs, or wheels, they become kinetic IoT devices that, if hacked, can pose new serious threats we have never encountered before.
 
With this in mind, we decided to attempt to hack some of the more popular home, business, and industrial robots currently available on the market. Our goal was to assess the cybersecurity of current robots and determine potential consequences of possible cyberattacks. Our results show how insecure and susceptible current robot technology is to cyberattacks, confirming our initial suspicions.
 
Cybersecurity Problems in Today’s Robots
We used our expertise in hacking computers and embedded devices to build a foundation of practical cyberattacks against robot ecosystems. A robot ecosystem is usually composed of the physical robot, an operating system, firmware, software, mobile/remote control applications, vendor Internet services, cloud services, networks, etc. The full ecosystem presents a huge attack surface with numerous options for cyberattacks.
 
We applied risk assessment and threat modeling tools to robot ecosystems to support our research efforts, allowing us to prioritize the critical and high cybersecurity risks for the robots we tested. We focused on assessing the most accessible components of robot ecosystems, such as mobile applications, operating systems, firmware images, and software. Although we didn’t have all of the physical robots, this didn’t impact our research results. We had access to the core components, which provide most of the functionality for the robots; you could say these components “bring them to life.”
 
Our research covered home, business, and industrial robots, as well as the control software used by several other robots. The specific robot vendors evaluated in the research are identified in the published research paper.
 
We found nearly 50 cybersecurity vulnerabilities in the robot ecosystem components, many of which were common problems. While this may seem like a substantial number, it’s important to note that our testing was not a deep, extensive security audit, as that would have taken a much larger investment of time and resources. The goal of this work was to gain a high-level sense of how insecure today’s robots are, which we accomplished. We will continue researching this space and go deeper in future projects.
 
An explanation of each main cybersecurity issue discovered is available in the published research paper, but the following is a high-level (non-technical) list of what we found:
  • Insecure Communications
  • Authentication Issues
  • Missing Authorization
  • Weak Cryptography
  • Privacy Issues
  • Weak Default Configuration
  • Vulnerable Open Source Robot Frameworks and Libraries
 
We observed a broad problem in the robotics community: researchers and enthusiasts use the same – or very similar – tools, software, and design practices worldwide. For example, it is common for robots born as research projects to become commercial products with no additional cybersecurity protections; the security posture of the final product remains the same as the research or prototype robot. This practice results in poor cybersecurity defenses, since research and prototype robots are often designed and built with few or no protections. This lack of cybersecurity in commercial robots was clearly evident in our research.
 
Cyberattacks on Robots

Our research uncovered both critical- and high-risk cybersecurity problems in many robot features. Some of these could be directly abused, while others introduce severe threats. Examples of common robot features identified in the research as possible attack vectors include:

  • Microphones and Cameras
  • External Services Interaction
  • Remote Control Applications
  • Modular Extensibility
  • Network Advertisement
  • Connection Ports
A full list with descriptions for each is available in the published paper.
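Open connection ports, one of the features listed above, can be enumerated from the local network with a plain TCP connect scan. The sketch below is a generic illustration, not tooling from our research; the host address and port list are placeholders, and you should only probe devices you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder host (TEST-NET address); probe only devices you may test.
print(scan_ports("192.0.2.10", [22, 80, 443, 9090]))
```

Real-world assessments would typically use a dedicated scanner such as nmap, but even this handful of lines is enough to spot a robot advertising unexpected services.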
 
New technologies are typically prone to security problems, as vendors prioritize time-to-market over security testing. We have seen vendors struggling with a growing number of cybersecurity issues in multiple industries where products are growing more connected, including notably IoT and automotive in recent years. This is usually the result of not considering cybersecurity at the beginning of the product lifecycle; fixing vulnerabilities becomes more complex and expensive after a product is released.
 
The full paper provides an overview of the many implications of insecure robots as they become more prominent in home, business, industry, healthcare, and other applications. We’ve also included many recommendations in the paper for ways to design and build robotic technology more securely based on our findings.
 
Click here for more information on the research and to view the full paper for additional details and descriptions.   
EDITORIAL | October 16, 2015

Five Reasons Why You Should Go To BruCON

BruCON is one of the most important security conferences in Europe. Held each October, the ‘Bru’ in ‘BruCON’ refers to Brussels, the capital of Belgium, where it all started. Nowadays, it’s held in the beautiful city of Ghent, just 55 mins from its origin. I had the chance to attend this year, and here are the five things that make it a great conference, in my opinion.

You can check out BruCON’s promo video here: https://www.youtube.com/watch?v=ySmCRemtMc4.
1. The conference
Great talks presented by international speakers, ranging from deeply technical material to threat intelligence and other high-level topics. You might run into people and friends from Vegas or another security conference.
The stage is circular and well illuminated, so you’d better not be caught taking a nap unless you want a picture of yourself sleeping to end up on the Internet.

(Shyama Rose talking about BASE jumping and risk)

While paid trainings take place two or three days before the conference, free workshops are available to the public during the two-day conference.

(Beau Woods (@beauwoods) giving a cool workshop named “Escalating Privileges Through Better Communication”)

While at BruCON, I presented my research on security deficiencies in electroencephalography (EEG) technologies. EEG is a non-invasive method of recording electrical brain activity (synaptic activity between neurons) taken from the scalp. EEG has increasingly been adopted across different industries, and I showed how the technology is prone to common network and application attacks. I demonstrated brain signal sniffing and data tampering through man-in-the-middle attacks, as well as denial-of-service bugs in EEG servers. Client-side applications that analyze EEG data are also prone to application flaws, and I showed how trivial fuzzing can uncover many of them. You can find the related material here: slides, demos (videos) [resource no longer available], and my live talk (video).
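The “trivial fuzzing” mentioned above can be as simple as randomly corrupting bytes in captured packets before replaying them at the client application. The following is a minimal, generic mutation sketch; the sample packet and mutation rate are illustrative and not part of the actual EEG protocol.

```python
import random

def mutate(packet: bytes, rate: float = 0.05, seed=None) -> bytes:
    """Randomly replace roughly `rate` of the bytes in `packet`."""
    rng = random.Random(seed)  # seedable for reproducible crashes
    out = bytearray(packet)
    for i in range(len(out)):
        if rng.random() < rate:
            out[i] = rng.randrange(256)
    return bytes(out)

# Feed many slightly-corrupted variants of a captured sample to the target.
sample = bytes(range(32))  # stand-in for a captured EEG packet
cases = [mutate(sample, seed=n) for n in range(100)]
```

Replaying each mutated case against the parser and watching for crashes is often enough to surface the kind of application flaws described above.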
2. The city
The medieval architecture in Ghent will enchant you. It’s a really cool city, everywhere you look: full of restaurants, cafes, and of course, pubs. Students of all ages also give the city its vibe. You can easily spend two more days enjoying all Ghent has to offer.
All of this is just around the corner from the venue.
3. The b33r
I’m not a b33r connoisseur, but most of the beers I tasted while in Ghent were, to my taste, really good. Belgium is regarded as one of the best beer countries in the world, which speaks for itself. In a nice gesture by the conference organizers, speakers were given a single bottle of a hard-to-find beer, Westvleteren 12, which has been rated as “the best beer in the world.” I have no idea how they got them, as the brewer does not produce large amounts of this beer and only sells it to a select few people.
Beer at the venue, beer between talks, and there was even a night where we mixed good beer with ice cream. It was interesting.
4. The venue
The conference is held in the heart of the city, near all the hotels, just two or three blocks from where you’re most likely to stay. The organizers thought out every single detail to help you arrive right on time at the venue.
The main hall is perfect for networking and serves b33r, coffee, and food all day, not only during specific hours.
A renowned hacker DJ dropping some bass and a pianist (a computer scientist who wrote code for Google) took turns making those moments even better.

 

It wouldn’t be a good conference without a Wall of Sheep; it was RickRolled, though ;D
(Picture taken by @sehnaoui)
The party venue, just three blocks from the stage, had a good hacker ambience and a good sound atmosphere created by two well-known DJs: @CountNinjula & @KeithMyers.

 

(Picture taken by @wimremes)
5. The old video game consoles
If you’re not interested in a talk, or simply bored, just head upstairs and travel back in time in a whole hall of old consoles. “Here comes a new challenger” matches are accepted…as long as there’s b33r or money involved.

Well, that’s it, another great security conference for you to consider in the future.
Finally, thanks to all this year’s organizers and volunteers. Perhaps you’ll join in next year 😉

 

(Photo taken by @SenseiZeon)
Cheers!

EDITORIAL | March 24, 2015

Lawsuit counterproductive for automotive industry

It came to my attention that there is a lawsuit attempting to seek damages against automakers revolving around their cars being hackable.

The lawsuit cites Dr. Charlie Miller’s and my work several times, along with several other researchers who have been involved in automotive security research.

I’d like to be the first to say that I think this lawsuit is unfortunate and subverts the spirit of our research. Charlie and I approached our work with the end goals of determining if technologically advanced cars could be controlled with CAN messages and informing the public of our findings. Obviously, we found this to be true and were surprised at how much could be manipulated with network messages. We learned so much about automobiles, their communications, and their associated physical actions.

Our intent was never to insinuate deliberate negligence on the part of the manufacturers. Instead, like most security researchers, we wanted to push the boundaries of what was thought to be possible and have fun doing it. While I do believe there is risk associated with vehicle connectivity, I think that a lawsuit can only be harmful as it has the potential to take funds away from what is really important:  securing the modern vehicle. I think any money automobile manufacturers must spend on legal fees would be more wisely spent on researching and developing automotive intrusion detection/prevention systems.

The automotive industry is not sitting idly by, but constantly working to improve the security of their past, present, and future vehicles. Security isn’t something that changes overnight, especially in the case of automobiles, which take even longer since there are both physical and software elements to be tested. Offensive security researchers will always be ahead of the people trying to formulate defenses, but that does not mean the defenders are not doing anything.

While our goals were public awareness and industry change, we did not want change to stem from the possible exploitation of public fears. Our hope was that by showing what is possible, we could work with the people who make the products we use and love on an everyday basis to improve vehicle security.

– cv

INSIGHTS | May 7, 2014

Glass Reflections in Pictures + OSINT = More Accurate Location

By Alejandro Hernández – @nitr0usmx

Disclaimer: The aim of this article is to help people be more careful when taking pictures through windows, because those pictures might inadvertently reveal their location. The technique presented here could be used for many different purposes: to track down the location of bad guys, to identify the hotel behind that nice room photo, or, for some, to follow the tracks of a favorite artist.
All of the pictures presented here were posted by their owners on Twitter. The tools and information used to determine the locations where the pictures were taken are all publicly available on the Internet. No illegal actions were performed in the work presented here.

 
 
Introduction
Travelling can be enriching and inspiring, especially if you’re in a place you haven’t been before. Whether on vacation or travelling for business, one of the first things that people usually do, including myself, after arriving in their hotel room, is turn on the lights (even if daylight is still coming through the windows), jump on the bed to feel how comfortable it is, walk to the window, and admire the view. If you like what you see, sometimes you grab your camera and take a picture, regardless of reflections in the window.
Without considering geolocation metadata [1] (if enabled), reflections could be a way to get more accurate information about where a picture was taken. How could one of glass’ optical properties [2], reflection, disclose your location? Continue reading.
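As an aside on that metadata: EXIF stores GPS coordinates as degree/minute/second values plus a hemisphere reference, so turning them into decimal degrees for a map is a small computation. A minimal sketch, using arbitrary example coordinates rather than values from any picture in this article:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to decimal degrees.

    `ref` is the hemisphere letter from GPSLatitudeRef/GPSLongitudeRef;
    'S' and 'W' yield negative coordinates.
    """
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Arbitrary example values, not taken from any picture in this article
lat = dms_to_decimal(25, 46, 26.4, "N")   # 25.774
lon = dms_to_decimal(80, 11, 16.8, "W")   # -80.188
```

Libraries such as Pillow or exiftool can extract the raw rationals; the point is simply that, when geotagging is enabled, no reflection analysis is needed at all.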
Of course pictures taken from windows disclose location information such as the city and/or streets; however, people don’t always disclose the specific name of the place where they’re staying. This is where reflections can be useful.
Sometimes, not all of the time, but sometimes, reflections contain recognizable elements that with a little extra help, such as OSINT (Open Source Intelligence) [3], could reveal a more accurate location. The OSINT elements that I used include:
  • Google Earth 3D Buildings (http://www.google.com/earth/)
  • Google Maps (and Street View) (http://maps.google.com)
  • Emporis (buildings information) (http://www.emporis.com)
  • SkyscraperPage (buildings information) (http://skyscraperpage.com)
  • Foursquare (pictures uploaded by people) (http://foursquare.com)
  • TripAdvisor (pictures uploaded by people) (http://www.tripadvisor.com)
  • Hotels’ websites
  • Google.com
In the following case studies, I’ll present step-by-step instructions for how to get more accurate information about where a picture was taken using reflections.
CASE #1 – Miami, FL
Searching for “hotel view” pictures on Twitter, I found this one from Scott Hoying (a member of Pentatonix, an a cappella group of five vocalists):
 
Looking at his previous tweet:
 
We know he was in Miami, but where exactly? Whether or not you’ve been to Miami, it’s difficult to recognize the buildings outside the window:
 
 So, I went to Wikipedia to get the name of every member of the band:
I looked for them on Twitter and Instagram. Only one other member had published a picture from what appeared to be the same hotel:
It was relatively easy to find that view with Google Earth:
However, from that perspective, there are three hotels:
So, it’s time to focus on the reflection elements present in the picture (same element reflected in different angles and the portraits) as well as the pattern in the bed cover:
Two great resources for reference pictures, in addition to hotels’ websites, are Foursquare and TripAdvisor (some people like to show where they’re staying). So, after a couple of minutes analyzing pictures of the three possible hotels, I finally found our reflected elements in pictures uploaded by people and on the hotel’s website itself:
After some minutes, we can conclude that the band stayed at the Epic Hotel and, perhaps, in a Water View suite:
 
 
CASE #2 – Vancouver, Canada
The following picture was posted by a friend of mine with the comment that she was ready for her talk at #XXXX2014 conference in #Vancouver. The easiest way to get her location would have been to look for the list of partnering hotels for speakers at XXXX2014 conference, but what if the name of the conference hadn’t been available?
Let’s take the longer but more interesting path, starting with only the knowledge that this picture was taken in Vancouver:
The square lamp reflected is pretty clear, isn’t it? ;-). First, let’s find that building in front. For that, you could go straight to Google Earth with the 3D buildings layout enabled, but I preferred to dive into Vancouver’s pictures in Emporis:
We’ve found it’s the Convention Centre (and its exact location evidently). Now, it’s easy to match the perspective from which the picture was taken using Google Earth’s 3D buildings layout:
We see it, but, where are we standing? Another useful OSINT resource I used was the SkyscraperPage of Vancouver, which shows us two options:
 
By clicking on each mark we can see more detailed information. According to this website, the one on the right is used only for offices and retail, but not for lodging: 
However, the other one seems to be our building:
A quick search leads us to the Fairmont Pacific Rim’s Website, where it’s easy to find pictures from inside the hotel:
The virtual tour for the Deluxe Room has exactly the same view:
Turn our virtual head to the left… and voilà, our square lamp:
Now, let’s find out how high up it is to where the picture was taken. Let’s view our hotel from the Convention Center’s perspective and estimate the floor:
From my perspective, it appears to be between the 17th and 20th floor, so I asked the person who took the picture to corroborate:
 
CASE #3 – Des Moines, IA – 1
An easy one: there are not many tall buildings in Des Moines, Iowa, so this one was fairly simple to spot:
It seems that the building in front is a parking garage. The drapes look narrow and are a white/pearl color. The fans on the rooftop were easy to locate on Google Maps:
And we could corroborate using the Street View feature:
We found it was the Des Moines Marriott Downtown. Looking at pictures on TripAdvisor, we found what seems to be the same drapery:
Which floor? Let’s move our head and look towards the window where the picture was taken:
The 3D model also helps:
And… almost!
CASE #4 – Des Moines, IA – 2
Another easy case from the same hotel as the previous case. Look at the detailed reflections: the beds, the portraits, the TV, etc.
These were easy-to-spot elements using Foursquare and TripAdvisor:
Misc. Ideas / Further Research
While brainstorming with my friend Diego Madero about reflections, he suggested going deeper by including image processing to separate the reflections from the original picture. It’s definitely a good idea; however, it takes time to do this (as far as I know).
We also discussed the idea that you could use the information disclosed in reflections to develop a profile of an individual. For example, if the person called room service (plates and bottles reflected), what brand of laptop they are using (logo reflected), or whether they are storing something in the safe (if it’s closed or there’s an indicator like an LED perhaps).
Conclusion
Clear and simple: the reflected images in pictures might disclose information that you wouldn’t be willing to share, such as your location or other personal details. If you don’t want to disclose your location, eliminate reflections by choosing a better angle or simply turning off all of the lights inside the room (including the TV) before taking the picture.
Also, it’s evident that reflections are not only present in windows. While I only considered reflections in windows from different hotels, other things can reflect the surrounding environment:
  • “44 Impressive Examples of Reflection Photography”
Finally, do not forget that a reflection could be your enemy.
Recommendations
Here are some other useful links:
  • “How to Eliminate Reflections in Glasses in Portraits”
  • “How to remove the glare and brightness in an image (Image preprocessing)”
Thanks for reading.
References:
[1] “Geolocation”
[2] “Glass – Optical Properties”
[3] “OSINT – Open Source Intelligence”
INSIGHTS | August 20, 2013

FDA Medical Device Guidance

Last week the US Food and Drug Administration (FDA) finally released a couple of important documents: the first is their guidance on using radio frequency wireless technology in medical devices (replacing a draft from January 3, 2007), and the second is their new draft guidance on premarket submission for management of cybersecurity in medical devices.

The wireless technology guidance document seeks to address many of the risks and vulnerabilities that have been disclosed in medical devices (embedded or otherwise) in recent years – in particular those with embedded RF wireless functionality…

The recommendations in this guidance are intended for RF wireless medical devices including those that are implanted, worn on the body or other external wireless medical devices intended for use in hospitals, homes, clinics, clinical laboratories, and blood establishments.  Both wireless induction-based devices and radiated RF technology device systems are within the scope of this guidance.

The FDA wishes medical device manufacturers to consider the design, testing and use of wireless medical devices…

In the design, testing, and use of wireless medical devices, the correct, timely, and secure transmission of medical data and information is important for the safe and effective use of both wired and wireless medical devices and device systems. This is especially important for medical devices that perform critical functions such as those that are life-supporting or life-sustaining. For wirelessly enabled medical devices, risk management should include considerations for robust RF wireless design, testing, deployment, and maintenance throughout the life cycle of the product.

For most of you reading the IOActive Labs blog, the most important parts of the guidance document are the advice on security and securing “wireless signals and data”. Section 3.d covers this…

Security of RF wireless technology is a means to prevent unauthorized access to patient data or hospital networks and to ensure that information and data received by a device are intended for that device. Authentication and wireless encryption play vital roles in an effective wireless security scheme. While most wireless technologies have encryption schemes available, wireless encryption might need to be enabled and assessed for adequacy for the medical device’s intended use. In addition, the security measures should be well coordinated among the medical device components, accessories, and system, and as needed, with a host wireless network. Security management should also consider that certain wireless technologies incorporate sensing of like technologies and attempt to make automatic connections to quickly assemble and use a network (e.g., a discovery mode such as that available in Bluetooth™ communications). For certain types of wireless medical devices, this kind of discovery mode could pose safety and effectiveness concerns, for example, where automatic connections might allow unintended remote control of the medical device. 

FDA recommends that wireless medical devices utilize wireless protection (e.g., wireless encryption,6 data access controls, secrecy of the “keys” used to secure messages) at a level appropriate for the risks presented by the medical device, its environment of use, the type and probability of the risks to which it is exposed, and the probable risks to patients from a security breach. FDA recommends that the following factors be considered during your device design and development: 

* Protection against unauthorized wireless access to device data and control. This should include protocols that maintain the security of the communications while avoiding known shortcomings of existing older protocols (such as Wired Equivalent Privacy (WEP)). 

* Software protections for control of the wireless data transmission and protection against unauthorized access. 

Use of the latest up-to-date wireless encryption is encouraged. Any potential issues should be addressed either through appropriate justification of the risks based on your device’s intended use or through appropriate design verification and validation.

Based upon the parts I’ve highlighted above, you’ll probably be feeling a little foreboding. From a “guidance” perspective, it’s less useful than a teenager with a CISSP qualification. The instructions are so general as to be useless.

If I was the geek charged with waving the security batton at some medical device manufacturer I wouldn’t be happy at all. Effectively the FDA are saying “there are a number of security risks with wireless technologies, here are some things you could think about doing, hope that helps.” Even if you followed all this advice, the FDA could turn around later during your submission for certification and say you did it wrong…

The second document the FDA released last week (Content of Premarket Submissions for Management of Cybersecurity in Medical Devices – Draft Guidance for Industry and Food and Drug Administration Staff) is a little more helpful – at the very least they’re talking about “cybersecurity” and there’s a little more meat for your CISSP folks to chew upon (in fact parts of it read like they’ve been copy-pasted right out of a CISSP training manual).

This guidance has been developed by the FDA to assist industry by identifying issues related to cybersecurity that manufacturers should consider in preparing premarket submissions for medical devices. The need for effective cybersecurity to assure medical device functionality has become more important with the increasing use of wireless, Internet- and network-connected devices, and the frequent electronic exchange of medical device-related health information.

Again, it doesn’t go in to any real detail of what device manufacturers should or shouldn’t be doing, but it does set the scene for understanding the scope of part of the threat.

If I was an executive at one of the medical device manufacturers confronted with these FDA Guidance documents for the first time, I wouldn’t feel particularly comforted by them – in fact I’d be more worried about the increased exposure I would have in the future. If a future product of mine was to get hacked, regardless of how close I thought I was following the FDA guidance, I’d be pretty sure that the FDA could turn around and say that I wasn’t really in compliance.

With that in mind, let me slip on my IOActive CTO hat and clearly state that I’d recommend any medical device manufacturer that doesn’t want to get bitten in the future for failing to follow this FDA “guidance” reach out to a qualified security consulting company to get advice on (and to assess) the security of current and future product lines prior to release.

Engaging with a bunch of third-party experts isn’t just a CYA proposition for your company. Bringing to bear an external (impartial) security authority would obviously add extra weight to the approval process; proving the companies technical diligence, and working “above and beyond” the security checkbox of the FDA guidelines. Just as importantly though, securing wireless technologies against today’s and tomorrow’s threats isn’t something that can be done by an internal team (or a flock of CISSP’s) – you really do need to call in the experts with a hackers-eye for security… Ideally a company with a pedigree in cutting-edge security research, and I know just who to call…

INSIGHTS | June 20, 2013

FDA Safety Communication for Medical Devices

The US Food and Drug Agency (FDA) released an important safety communication targeted at medical device manufacturers, hospitals, medical device user facilities, health care IT and procurements staff, along with biomedical engineers in which they warn of risk of failure due to cyberattack – such as through malware or unauthorized access to configuration settings in medical devices and hospital networks.
Have you ever been to view a much anticipated movie based upon an exciting book you happened to have read when you were younger, only to be sorely disappointed by what the director finally pulled together on the big screen? Well that’s how I feel when I read this newest alert from the FDA. Actually it’s not even called an alert… it’s a “Safety Communication”… it’s analogous to Peter Jackson deciding that his own interpretation of JRR Tolkien’s ‘The Hobbit’ wasn’t really worthy of the title so to forestall criticism he named the movie ‘Some Dwarves and a Hobbit do Stuff’.
This particular alert (and I’m calling it an alert because I can’t lower myself to call it a safety communication any longer) is a long time coming. Almost a decade ago me and my teams at the time raised the red flag over the woeful security of hospital networks, then back in 2005 my then research teams raised new red flags related to the encroachment of unsecured WiFi in to medical equipment, for the last couple of years IOActive’s research team have been raising new red flags over the absence of security within implantable medical devices, and then on June 13th 2013 the FDA releases a much watered down alert where the primary recommendations and actions section simply states “[m]any medical devices contain configurable embedded computer systems that can be vulnerable to cybersecurity breaches”. It’s as if the hobbit has been interpreted as a midget with hairy feet.
Yes I joke a little, but I am very disappointed with the status of this alert covering an important topic.
The vulnerabilities being uncovered on a daily basis within hospital networks, medical equipment and implantable devices by professional security teams and researchers are generally more serious than what outsiders give credit. Much of the public cybersecurity discussion as it relates to the medical field to date has been about people hacking hospital data systems for patient records and, most recently, the threat of targeted slayings of people who happen to have vulnerable implanted insulin pumps and heart defibrillators. While both are certainly possible, they’re what I would associate with fringe events.
I believe that the biggest and most likely threats lie in non-malicious actors – the tinkerers, the cyber-crooks, and the “in the wrong place at the wrong time” events. These medical systems are so brittle that even the slightest knock or tire-kicking can cause them to fail. I’ll give you some examples:
  • Wireless heart and drug monitoring stations within emergency wards that have open WiFi connections; where anyone with an iPhone searching for an Internet connection can make an unauthenticated connection and have their web browser bring up the admin portal of the station.
  • Remote surgeon support and web camera interfaces used for emergency operations brought down by everyday botnet malware because someone happened to surf the web one day and hit the wrong site.
  • Internet auditing and scanning services run internationally and encountering medical devices connected directly to the Internet through routable IP addresses – being used as drop-boxes for file sharing groups (oblivious to the fact that it’s a medical device under their control).
  • Common WiFi and Bluetooth auditing tools (available for android smartphones and tablets) identifying medical devices during simple “war driving” exercises and leaving the discovered devices in a hung state.
  • Medial staff’s iPads without authentication or GeoIP-locking of hospital applications that “go missing” or are borrowed by kids and have applications (and games) installed from vendor markets that conflict with the use of the authorized applications.
  • NFC from smartphone’s and payment systems that can record, playback and interfere with the communications of implanted medical devices.
These are really just the day-to-day noise of an Internet connected life – but one that much of the medical industry is currently ill prepared to defend against. Against an experienced attacker or someone determined to cause harm – well, it’s as one sided as a lone hobbit versus the combined armies of Middle Earth.
I will give the alert some credit though, that did clarify a rather important point that may have been a stumbling block for many device vendors in the past:
“The FDA typically does not need to review or approve medical device software changes made solely to strengthen cybersecurity.”
IOActive’s experience when dealing with a multitude of vulnerable medical device manufacturers had often been disheartening in the past. A handful of manufacturers have made great strides in securing their devices and controlling software recently – and there has been a change in the hearts and minds over the last 6 months (pun intended) as more publicity has been drawn to the topic. The medical clients we’ve been working most closely with over recent months have made huge leaps in making their latest devices more secure, and their next generation of devices will be setting the standard for the industry for years to come.
In the meantime though, there’s a tremendous amount of work to be done. The FDA’s alert is significant. It is a formal recognition of the poor state of security within the industry – providing some preliminary guidance. It’s just not quite a call to arms I’d have liked to see after so many years – but I guess they don’t want to raise too much fear, nor the ire of vendors that could face long and costly FDA re‑evaluations of their technologies. Gandalf would be disappointed.
(BTW I actually liked Peter Jackson’s rendition of The Hobbit).