INSIGHTS, RESEARCH | December 3, 2024

Building Management Systems: Latent Cybersecurity Risk

Manage the Cybersecurity Risks of your BMS

Building management systems (BMS) and building automation systems (BAS) are great innovations, but present latent cybersecurity and operational risks to organizations. The consequences of a cyberattack on a BMS or BAS could result in operational disruption from the denial of use of the building.

Over the past decade, there have been several examples of attacks on BMS and components. Weaponization and operationalization of BMS vulnerabilities by threat actors using tools such as ransomware are likely to occur in the next three years. BMS owners and managers can act today to reduce the risks and build on established cybersecurity frameworks to protect their assets and operations.

There are unique considerations involved in assessing the cybersecurity of operational technology (OT) like BMS and BAS. Therefore, it is imperative to work with experts and firms with considerable experience in assessing OT systems and the security of your BMS supply chain.

Background

BMS and BAS offer great promise in improved efficiency, comfort, security, and occupant experience. However, these complex cyberphysical systems also expose the building and occupants to new risks and threats, with negative consequences that in the past required physical access to realize. Today it’s possible for a building manager or staff to control these systems from a mobile application or a cloud-hosted application.

Key Concepts and Terminology

Staff responsible for a BMS or BAS may have extensive engineering or facilities experience, but no exposure to cybersecurity beyond general cybersecurity user awareness training. Unfortunately, in today’s threatscape, that leads to the BMS posing a latent risk: a risk that is present but unknown and unmanaged.

A BMS is one type of Operational Technology (OT), digital control systems that manage physical processes. These systems typically consist of the same types of components that you would find in a traditional information technology (IT) or information and communication technology (ICT) environment, such as servers, switches, routers, firewalls, databases, communication protocols, and applications. Generally, these are specialized variants of the components, but over the past decade we have observed a strong trend of convergence between IT and OT components.

Often these types of systems will be referred to as cyberphysical systems and may be abbreviated as CPS. Simply, a cyberphysical system is one in which a digital computer or microcontroller manages equipment that can produce a physical effect. Cyberphysical systems are vulnerable to the same types of cyberattacks and exploitation of vulnerabilities as IT systems.

Cybersecurity vulnerabilities are defects in the environment such as software or hardware, which can be exploited by a threat actor (attacker) to produce a cybersecurity impact. For IT systems, this generally means it impacts the administration or the business by affecting systems like email, web servers, or systems such as customer billing or enterprise resource planning (ERP) systems. For OT systems, the impact can be more consequential since it can shut down the core operations of a business. For example, in the case of a pipeline company, a cyberattack on an OT system, such as a ransomware attack, can shut down the operations of the pipeline with cascading effects to customers and others downstream who rely on the timely delivery of product through the pipeline. This exact type of attack occurred in 2021 against Colonial Pipeline.

Consequences

A compromised BMS would allow the attacker to control the BMS and cause effects that are only limited by the physical constraints on the building management systems. These types of effects could render a building unoccupiable for short- or long-term periods, due to the intentional manipulation of building environments like temperature, air quality, and humidity. Power, fire suppression, and other high-impact building services could be made inoperable or used to cause physical damage to a building. These risks are especially high in facilities that have very tight environmental requirements necessary to support the intended use, such as precision manufacturing.

In the case of an office building, reasonable business and operational continuity can be realized by the organization enabling work from home. However, in the case of a precision manufacturing plant, production would cease and goods in the production process may be damaged or spoiled due to the disruption. Likewise, a hotel or hospital would experience significant operational disruption and considerable costs in reaccommodating guests or patients at other facilities.

Threat Intelligence

Cybersecurity threat intelligence informs stakeholders about the extent to which threat actors are actively researching, developing, or exploiting cybersecurity vulnerabilities, and seeks to identify the approximate identity of those threat actors. For example, as a defender, it’s often helpful to know whether the threat actor is a malicious individual, organized crime group, or nation-state actor.

The following are examples of real-world attacks on BMS devices:

  • In 2015, a cybersecurity expert disclosed to me that a nation state had compromised the home thermostat in the expert’s residence. In this case, the threat actor’s objective was to use the thermostat to ensure they retained a presence or persistence in the target’s home network for surveillance purposes rather than to produce a cyberphysical effect.[1]
  • In 2021, unknown attackers took control of many of the BAS devices that controlled the lighting, motion detection system, and other operations in an office park in Germany. The intruders gained control of the devices and set a password that prevented anyone else from accessing them, and they bricked many of the affected devices. It took third-party security consultants considerable time to remedy the situation, and they were only able to do so after retrieving the key that locked the systems down from the memory of one of the devices.
  • In April 2023, Schneider Electric proactively issued a security advisory disclosing that it had become aware of a publicly available exploit targeting the KNX product. This exploit would have enabled an attacker to access admin functionality without a password through a directory traversal attack, or to use a brute-force attack to access the administration panel.

We assess that within the next three years, threat actors will develop ransomware payloads specific to BMS and BAS deployments. Threat actors have already developed ransomware to specifically target medical devices, another type of OT, in hospitals.

Suggested Course of Action for BMS Managers

Following a mature cybersecurity management framework like the US National Institute of Standards and Technology's Cybersecurity Framework (NIST CSF) is a wonderful place to start. There are recommended levels appropriate to organizations at any level of cybersecurity maturity.

The following is advice contained in most frameworks, distilled to eight high-level recommendations:

  1. Know your suppliers and look upstream. Select BMS suppliers with established product security programs who do not build hardware or develop software in hostile countries like the People’s Republic of China (PRC).
  2. Conduct a risk assessment. Once you have identified partners and suppliers, properly assess each product’s cybersecurity posture so that you know the risks they may pose to your organization. Consider where each device or component was manufactured and who exactly did so. Is there a possible backdoor or counterfeit part, or could more common software quality issues result in a breach?
  3. Utilize third-party testing. Hire a third-party firm to test your system and those of your key suppliers, to provide actionable results on what you need to fix first. Ideally this should be performed in multiple steps through the design and build process, with the final security assessment completed as part of the site acceptance test.
  4. Regularly scan and patch all vulnerable systems. Here it is important to have chosen vendors who provide regular security patches. Be careful with older systems, which may experience a denial of service through improper use of automated scanners.
  5. Use strong passwords and credentials. Teach your employees about the importance of using strong passwords and credentials and not reusing them across accounts.
  6. Use multi-factor authentication (MFA). Ensure that your staff has set up secure MFA everywhere possible. SMS MFA solutions are not adequately secure.
  7. Conduct regular security awareness training. Teach employees how to identify phishing scams, update software, and become more security conscious. This training should focus on those who have privileged access to the BMS.
  8. Harden the security of the devices connected to your networks. Ideally, your suppliers and partners will provide recommendations for device configuration and security best practices.

Key Assessment Considerations

Third-party cybersecurity testing comes with specific risks of production impacts, outages, and other unintended consequences. It is critical to choose experienced, expert team members or third-party assessor organizations. It is absolutely necessary that any assessor or penetration tester working in an OT environment have extensive experience in those environments. OT environments with older programmable logic controllers (PLCs) can experience outages when those devices are pinged or scanned by a tool, due to limited resources or poor TCP/IP software stack implementations. Experienced assessors and OT-aware cybersecurity tools know how to achieve the same testing goals in OT environments using passive analysis methods such as analysis of network packet captures.

Unique OT Considerations

Given the very long useful life of many OT assets, including BMS and BAS deployments, frequently one may find several non-patchable vulnerabilities in that type of environment. Consequently, you should focus on a layered security approach, which makes it very difficult for a threat actor to get to the vulnerable, unpatchable components before they are detected and evicted from the environment. As such, heavy emphasis should be placed on assessing the segmentation, access controls, sensors, and detection capabilities between IT and OT environments.

Supply Chain Considerations

The recent supply chain attacks targeting Hezbollah operatives demonstrate how even organizations with mature counterintelligence capabilities can still fall victim to a supply chain interdiction. This demonstrates the high likelihood of successful attacks against companies and organizations with less defensive capabilities, including BMS users and suppliers. It is critical to buy BMS from trusted suppliers and countries.

Since it is uneconomical to assess every product an organization integrates into its operations, a risk-centric approach should be taken, using the limited resources available to focus on assessing the highest-risk and highest-consequence products and suppliers. Specifically for OT, the focus should be on those suppliers and products that, if compromised, could bring a complete halt to operations.

Asking a key supplier to show some type of summary of their secure development policies, as well as a summary of the results of an independent cybersecurity validation and verification by a qualified third-party assessor, is a great way to know whether your supplier takes product cybersecurity seriously and assesses the risk exposure their products bring to their customers’ organizations.


[1] Proprietary intelligence. Additional details are withheld to protect those involved and are not necessary for the example.

INSIGHTS | October 29, 2024

Inside IOActive’s Innovative Key Fob Badge for DEF CON 2024’s Car Hacking Village – Part 3/3

This is Part-3 of a 3-Part Series. Check out Part-1 here and Part-2 here.

This is the third in a series of three posts in which I break down the creation of a unique key fob badge for the 2024 Car Hacking Village (CHV). Part 1 is an overview of the project and the major components; I recommend that you begin there. In Part 2 I discussed some of the software aspects and the reasoning behind certain decisions.

Background

Before I discuss how to interact with the key fob using a computer, it’s probably a sensible idea to cover some basics of Passive Entry Passive Start (PEPS).

PEPS is a system that allows for keyless entry and ignition in vehicles. Here’s how it operates:

Key Fob Communication:

The key fob communicates with the car using radio-frequency (RF) signals. These signals are typically in the low-frequency (LF) range (125 kHz or 134 kHz) for car-to-key-fob communication, and in the ultra-high-frequency (UHF) range (sub-1 GHz) for key-fob-to-car.

Proximity Detection:

As the key fob approaches the car, it exchanges unique access codes with the vehicle. The car’s system measures the distance between the key fob and the vehicle. The LF communication operates in the near field, meaning the key fob must be close to the car to receive the signal. The key fob then responds using UHF, which operates in the far field, allowing it to communicate back to the car from a greater distance.

Passive Entry:

When the door handle is touched, the car verifies the key’s presence and unlocks the doors. This can be done through a triggered system, where the car scans for the key fob when the handle is touched, or a polling system, where the car continuously scans for the key fob’s presence.

Passive Start:

Once inside the vehicle, the engine can be started by pressing the ignition button. The car’s system ensures the key fob is inside the vehicle before allowing the engine to start, adding an extra layer of security.

That all makes sense, right? I always find a good sequence diagram helpful to reinforce things like this, so I made one. The process for unlock and start are fundamentally the same, so to avoid repeating myself, I will focus here on the unlock process.

As illustrated in the sequence diagram, the process typically starts with either the driver touching the door handle or the driver approaching the car. For simplicity’s sake, I will again narrow the focus to the driver touching the door handle. In simple steps:

  1. Driver touches the door handle
  2. Vehicle sends a wake message to the key fob
  3. Key fob responds
  4. Vehicle sends a cryptographic challenge to the key fob
  5. Key fob responds with the correct cryptographic response, which is computed from the challenge and the pre-shared key using the cryptographic algorithm.
  6. Vehicle unlocks.
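The challenge-response in steps 4 and 5 can be sketched in a few lines of Python. This is a hypothetical illustration only: real fobs use proprietary or AES-based ciphers with keys provisioned at manufacture, so HMAC-SHA256 and the key below are stand-ins of my own choosing.

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared key, provisioned in both the vehicle and the fob.
PSK = bytes.fromhex("000102030405060708090a0b0c0d0e0f")

def vehicle_challenge() -> bytes:
    # Step 4: the vehicle generates a fresh random challenge (nonce).
    return os.urandom(8)

def fob_response(psk: bytes, challenge: bytes) -> bytes:
    # Step 5: response = f(algorithm, challenge, pre-shared key).
    # HMAC-SHA256 stands in for the proprietary cipher a real fob uses.
    return hmac.new(psk, challenge, hashlib.sha256).digest()[:8]

def vehicle_verify(psk: bytes, challenge: bytes, response: bytes) -> bool:
    # Step 6: the vehicle recomputes the expected response and compares.
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(expected, response)

challenge = vehicle_challenge()
response = fob_response(PSK, challenge)
assert vehicle_verify(PSK, challenge, response)  # vehicle unlocks
```

Because the challenge is random on every attempt, simply replaying a previously captured response fails verification.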

Anyone with a software-defined radio can capture the UHF signals from the key fob to the car, but capturing the LF signals from the car to the key fob typically requires a special antenna and a device called an upconverter. This is not intended as a primer on RF or SDR, and if you’d like to learn more, it’s definitely worthwhile to read up on any topics like these that you may be unsure of.

As previously mentioned, the signal from the car to the key fob is LF (125/134 kHz) and the key fob to the car is UHF, and interacting with UHF is trivial when using most software-defined radios. However, the LF aspect can be more challenging.

Figure 1 Communication flow from vehicle to key fob in LF and key fob to vehicle in UHF

Remember, all we want to do is interact with the key fob. In Part 2 I discuss how the key fob can receive LF messages and then transmit them after some basic checks via an obfuscation algorithm. In this sequence diagram (Figure 2), you can see how the wake-up message is followed by the key fob’s acknowledgement. Any short message we send to the key fob via LF will be echoed back after obfuscation using UHF.

Figure 2 Sequence Diagram

The question is, how can we send LF (125/134 kHz) signals to the key fob from a laptop?

Figure 3. Signal conversion chain

Figure 3 shows a laptop with the left side connected to an SDR and the right side connected to a Digital-to-Analog Converter (DAC), an electronic device that converts digital signals into analog signals.

Here’s a more detailed look at how it works and its applications.

Conversion Process

The input to a DAC is a digital signal, typically in the form of binary numbers (0s and 1s).  The output is an analog signal, which can be a continuous voltage or current that varies smoothly over time.

A DAC takes the digital input and converts it into a corresponding analog value. For example, in audio applications, a DAC converts digital audio files into analog signals that can drive speakers to produce sound.

The conversion involves mapping the digital values to specific voltage or current levels. This process is crucial for applications where precise analog output is needed.
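As a rough sketch of that mapping, an ideal unipolar DAC scales each digital code linearly between 0 V and its reference voltage. The 16-bit width and 3.3 V reference below are illustrative assumptions, not a specific device's datasheet values.

```python
def dac_output(code: int, bits: int = 16, vref: float = 3.3) -> float:
    """Map a digital code to the analog voltage of an ideal unipolar DAC."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for this DAC width")
    # Full-scale code maps to vref; zero maps to 0 V; everything else is linear.
    return code / (2 ** bits - 1) * vref
```

For example, code 0 gives 0 V, the full-scale code 65535 gives 3.3 V, and mid-scale lands at roughly half the reference.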

Applications

DACs can convert a variety of digital signals, such as:

  • Audio: DACs are used in music players, smartphones, and other audio devices to convert digital audio files into analog signals that can be heard through speakers or headphones.
  • Video: In televisions and video equipment, DACs convert digital video data into analog signals for display.
  • Instrumentation: DACs are used in various measurement and control systems to generate precise analog signals from digital data.

There are a few key elements to focus on in the above description, besides the tasks a DAC is designed for. The interesting part for any hacker is its use for audio conversion. The LF signals our key fob is sending have quite slow data rates, meaning they actually would be audible if not for the carrier. The carrier frequency is 125 or 134 kHz, so we would need a DAC capable of 250 kHz+ in order to comply with the Nyquist theorem.

Among other things, the theorem states that to accurately reconstruct a continuous signal from its samples, the sampling rate must be at least twice the highest frequency present in the signal. This minimum rate is known as the Nyquist rate.

So, if our carrier frequency is 125 or 134 kHz, we need at least twice that.
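Written out as a trivial sketch, using the two LF carriers from above:

```python
def min_sample_rate(carrier_hz: float) -> float:
    # Nyquist: the sampling rate must be at least twice the highest
    # frequency present in the signal.
    return 2 * carrier_hz

# Both LF carriers are comfortably covered by a 384 kHz audio DAC.
assert min_sample_rate(125_000) == 250_000
assert min_sample_rate(134_000) == 268_000
assert all(384_000 >= min_sample_rate(f) for f in (125_000, 134_000))
```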

If you look at high-resolution audio devices, especially DACs, you can often find a maximum conversion rate of 384 kHz. This is more than twice the carrier frequency, meaning that we can use readily available high-resolution audio DACs such as the device below from iFi.

Figure 4. iFi DAC

The other important aspect for conversion is the coil or antenna. Now that we have an audio device that can generate the signals we need, we need an antenna. In the sentence, “a DAC converts digital audio files into analog signals that can drive speakers to produce sound,” the word “speaker” should jump out.

If you have never dismantled a speaker, it is a relatively easy task, and the result will look something like the following image.

Figure 5. Modified speaker

Now put all this together, and we have something along the lines of Figure 6:

Figure 6. Signal path with modified speaker

The physical aspect is now sorted, so I will move on to the software.

I like to use GNU Radio, which is a fantastic tool and open source as well. It is a powerful signal processing tool, and once you get the hang of it, it’s relatively easy to use to build new flowgraphs – charts representing the blocks through which samples flow.

The following chart encodes a message of 0x550102030405060708, unpacks the 8-bit bytes into a bit stream, then modulates it. I have included both a Wav File Sink, for writing the data to a file for replaying later using a tool like Audacity, and an audio sink, for sending the output directly to a high-resolution audio device like the one mentioned above.

Figure 7. GNU Radio flowgraph

Starting on the left side, we have a message strobe which allows us to send out a message every x milliseconds. I have chosen once per second, or every 1000ms. If you follow the arrows, it should all make sense. The message strobe sends a message to the PDU to Tagged Stream block, where the message is converted into a byte stream. Next, we unpack the bytes and convert them into bits, which is ultimately what we want to send in this case.

The next section of the flow chart repeats the symbol enough times to set our data rate. In this instance we want a data rate of 982 baud, so each bit symbol needs to be repeated 391 times. I have added descriptions under each major block to help you follow along.
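The repeat step is easy to model outside GNU Radio. Assuming the 384 kHz sample rate of the audio DAC discussed earlier, the 391 repeat count falls out directly:

```python
SAMP_RATE = 384_000   # audio DAC sample rate (Hz), as assumed above
BAUD = 982            # target LF data rate

# Samples per bit symbol: 384000 / 982 ≈ 391
repeats = round(SAMP_RATE / BAUD)

def upsample(bits):
    # Equivalent of a GNU Radio Repeat block: hold each bit symbol
    # for `repeats` samples to achieve the target baud rate.
    return [b for b in bits for _ in range(repeats)]

assert repeats == 391
assert len(upsample([1, 0, 1])) == 3 * 391
```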

After converting the byte type into float (where the purple is converted to brown), the flow chart goes through a voltage controlled oscillator (VCO). A VCO is an electronic oscillator whose frequency of oscillation is controlled by an input voltage. The frequency of the output signal varies in direct proportion to the applied voltage.

However, we are not using voltage, and everything here is digital and binary, all represented by ones and zeros. If you think of a 1 as a voltage and 0 as the absence of a voltage, the VCO will output a frequency based on the sensitivity value.

Those are the basics on how we’re using GNU Radio; if you’d like to dive deeper, I’ve made the flowgraph available on the same GitHub location as the software.

Moving along, after the VCO we have a modulated bit stream that represents the original message shown in hex. You may also notice a multiply block. If the VCO receives a 0, it will not change the output, meaning whatever value it was at previously is where it will stay. This will result in a DC offset; that is, a voltage present when the input is 0. To remove this DC offset, which will ultimately upset the audio DAC and potentially the speaker coil, the multiply block takes the output from the VCO and our bit stream. If there is a 0 in our bit stream, the output of the VCO is multiplied by 0, resulting in a 0, so no DC offset. If there is a 1 in our bitstream, the VCO output is multiplied by 1, so the VCO output is maintained.
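A minimal Python model of the VCO-plus-multiply arrangement shows the net effect: while the input bit is 1 the output is the carrier, and while it is 0 the multiply forces the output (including any held DC value) to zero. The 125 kHz carrier and 384 kHz sample rate are the assumed values from earlier; this is a sketch of the combined behavior, not the flowgraph's internals.

```python
import math

SAMP_RATE = 384_000   # assumed audio DAC sample rate (Hz)
CARRIER = 125_000     # assumed LF carrier (Hz)

def vco_with_gate(bits_per_sample):
    """Model the VCO followed by the multiply block.

    Each input element is one already-upsampled bit. The output is the
    carrier sine gated by the bit stream, so zero bits produce exactly
    zero output with no DC offset.
    """
    step = 2 * math.pi * CARRIER / SAMP_RATE
    out = []
    phase = 0.0
    for bit in bits_per_sample:
        phase += step
        out.append(math.sin(phase) * bit)  # multiply block gates the carrier
    return out

samples = vco_with_gate([1] * 100 + [0] * 100)
assert all(s == 0.0 for s in samples[100:])  # no DC offset during 0 bits
```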

When the GNU Radio flowgraph is run, you should see something like the following:

Figure 8. Bit stream modulated and VCO output

The blue signal represents the actual output of the VCO; however, it’s behind the red signal, which is the cleaned output of the VCO. They align perfectly, except for whenever the blue signal has a DC offset, as mentioned earlier.

By enabling the audio sink and connecting the audio DAC, we can now output the LF signals required to energize the LF side of the key fob.

The last piece of the puzzle is to listen to the key fob’s UHF transmissions.

Figure 9. Flowgraph for listening to the fob

A straightforward RF receiver is at work here. The quadrature (I/Q) data gets transformed into amplitude through a complex-to-magnitude block, followed by a low-pass filter. This process results in clear amplitude changes that correspond perfectly with the ones and zeros transmitted by the key fob.
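That receive chain is simple enough to model in plain Python. The sketch below stands in for the complex-to-magnitude block and uses a crude moving average in place of the Low Pass Filter block; the window length is an arbitrary assumption.

```python
import cmath

def demodulate_ook(iq_samples, window=8):
    # Complex-to-magnitude: the amplitude of each I/Q sample.
    mag = [abs(s) for s in iq_samples]
    # Crude low-pass filter: a moving average over `window` samples.
    out = []
    acc = 0.0
    for i, m in enumerate(mag):
        acc += m
        if i >= window:
            acc -= mag[i - window]
        out.append(acc / min(i + 1, window))
    return out

# A burst of carrier followed by silence demodulates to a high
# amplitude followed by a low one: the "one" and "zero" states.
burst = [cmath.exp(2j * cmath.pi * 0.1 * n) for n in range(64)] + [0j] * 64
rssi = demodulate_ook(burst)
assert rssi[32] > 0.9 and rssi[-1] < 0.1
```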

For the RTLSDR Source, change Center Freq (Hz) to the value of freq. Notice how the variable freq is defined as 433.92M. The other change to make in this source really depends on the antenna you are using but for this purpose, try changing RF Gain to 48. Your particular SDR and antenna may warrant either much lower gain or higher gain.

Figure 10. Soapy RTLSDR Source Properties dialog box

Moving to the Low Pass Filter properties, configure Decimation along with Cutoff Freq and Transition Width.

Figure 11. Low Pass Filter Properties dialog box

Finally, configure the GUI Time Sink. There are lots of configuration items to change here so step through them slowly.

First set the Number of Points, then Y min and Y max. These configuration values override the defaults the blocks would use if you drew this flowgraph in GNU Radio yourself. Because some of the items are not visually present on the flowgraph, I have stepped through each block and the changes to the default values required to make this flowgraph work.

Figure 12. QT GUI Time Sink Properties dialog box

Stepping through the items on this tab, change “Number of Points” to 256, “Y min” to -2, and “Y max” to 2. Then click on the Trigger tab and set the Trigger Mode and Trigger Level.

Figure 13. QT GUI Time Sink Trigger Properties dialog box

Once all of those configurations have been set, you should be ready to run the graph.

When the flow chart is run and a button is pressed on the key fob, the trace on screen should look something like the following graph, with the filtered amplitude showing distinct binary states.

Figure 14. Filtered amplitude (RSSI) data showing binary states of signal

You should notice the series of ones and zeros at the start. This is the preamble, as discussed previously. It is followed by two zeros indicating the end of the preamble.
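Locating that preamble in a recovered bit stream is a simple search. The sketch below assumes the 55 55 54 preamble described in Part 2; the function name is mine, not from the badge firmware.

```python
# Preamble from Part 2: hex 55 55 54 as a bit stream, i.e. alternating
# 01 pairs terminated by the two zeros that mark the end of the preamble.
PREAMBLE = [0, 1] * 11 + [0, 0]

def find_payload_start(bits):
    """Return the index just past the preamble, or -1 if it is absent."""
    n = len(PREAMBLE)
    for i in range(len(bits) - n + 1):
        if bits[i:i + n] == PREAMBLE:
            return i + n
    return -1

rx = [1, 0] + PREAMBLE + [1, 1, 0]
assert find_payload_start(rx) == 2 + len(PREAMBLE)
```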

And there you go. I could do many more posts on how to demodulate the key fob data and build Python blocks in GNU Radio to remove the obfuscation, but where would be the fun in that? I’ll instead leave that for those who are interested in having a good time.

Stay tuned for other material we will be publishing on topics like this, including how-to videos on various software-defined radio techniques.

INSIGHTS | October 25, 2024

Inside IOActive’s Innovative Key Fob Badge for DEF CON 2024’s Car Hacking Village – Part 2/3

This is Part-2 of a 3-Part Series. Check out Part-1 here and Part-3 here.

This is the second in a series of three posts in which I break down the creation of a unique key fob badge for the 2024 Car Hacking Village (CHV). Part 1 is an overview of the project and the major components; I recommend you begin there. In this post, I’ll discuss some of the software aspects and the reasoning behind certain decisions.

This blog covers several high-level subjects, including UHF and LF transmissions, receiving signals, modulation schemes like OOK/ASK, basic signal processing within a microcontroller, and the C programming language. While this post isn’t meant to be a tutorial on any specific technology, protocol, or component, if you’re interested, it’s definitely worthwhile exploring the various topics I’ll touch on here.

Let’s dive in. As mentioned in Part 1, the key fob transmits using UHF and receives using LF. It also supports CAN and features two physical touch sensor buttons. In addition to the discussion that follows, there are intentional bugs in the software, so I recommend checking it out for a closer look.

UHF transmissions

The raw format of a typical UHF frame doesn’t play nicely with standard serial communication, which includes start and stop bits. This made it crucial to choose an interface that matched the physical signaling requirements for transmitting UHF packets.

In Part 1, I mentioned using the MICRF113 UHF transmitter, which operates with OOK (on-off keying). The MICRF113 transmits whatever state is presented on its DATA signal, at whatever speed it arrives. This meant I needed to send a simple bit stream from the microcontroller to the MICRF113, without start or stop bits or gaps between bytes. For this I chose a Serial Peripheral Interface (SPI).

SPI is a synchronous serial communication protocol used for short-distance communication, primarily in embedded systems. It uses a controller/peripheral architecture with a single controller. The typical physical signals used in SPI include:

Clock (SCLK): Synchronizes the data transmission.

Data Out, or Peripheral In Controller Out (PICO): The line for sending data from the controller to the peripheral.

Data In, or Peripheral Out Controller In (POCI): The line for sending data from the peripheral to the controller.

Chip Select (CS): Selects the peripheral device.

In my case, I only needed the Data Out line to send the bit stream to the MICRF113. This setup allowed for a continuous stream of data without the interruptions caused by start or stop bits, making it ideal for UHF transmission.

Figure 1, courtesy of SparkFun, illustrates the typical physical signals used in SPI:

For more detailed information on SPI, you can refer to the SparkFun SPI Tutorial.

By choosing SPI, I ensured that the physical aspect of signaling aligned perfectly with how I wanted the UHF packet to be transmitted, resulting in a more efficient and reliable communication setup.

One of the great things about most microcontrollers is the flexibility to assign physical pins to specific functions. In this case, we’re dealing with the signals SCK, OUT, IN, and CS.

I took a bit of a creative approach here. Instead of assigning pins to SCK, IN, or CS, I left them unallocated. This means that whenever I write to the SPI bus, the only physical signal coming from the microcontroller is via the OUT pin. This pin toggles high and low according to the bit stream I send out.

This “hacky” method simplifies the setup and ensures that the data stream is transmitted exactly as needed, without any unnecessary signals getting in the way. It’s a neat trick that leverages the microcontroller’s flexibility to achieve a clean and efficient communication setup.

The frame format of the UHF packet is structured as follows: {pre-amble}{frame counter}{length of payload}{variable length data}.

  • Pre-amble: This is a sequence of bits sent at the beginning of the packet to help the receiver synchronize with the incoming data stream.
  • Frame counter: This is a value that increments with each packet sent, helping to keep track of the sequence of packets.

In the code snippet below, you’ll see the pre-amble being set on line 73 as hex values 55, 55, 54. As a bitstream, this translates to 0101 0101 0101 0101 0101 0100. Notice the two zeros at the end, which are crucial for synchronization.

On line 76, the frame counter is added to the byte stream, although it’s calculated earlier on line 65. This counter ensures each packet is uniquely identifiable. Next, on line 79, the length of the payload is appended to the byte stream. This tells the receiver how much data to expect.

Finally, on line 82 through 86, the payload itself is added byte by byte to the output array. This is where the actual data is transmitted. Or is it? This code represents a simple obfuscation. How would you go about recovering the original payload data if you received this UHF transmission?

Take a close look at the stack-defined buffer in this function. Can you spot the calculation error? It’s a great exercise to understand not only how these elements come together to form a complete UHF packet, but also how easily security vulnerabilities can occur in code.
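Setting the intentional obfuscation aside, the frame assembly itself can be sketched in Python. The function name is mine, the real firmware is C, and the obfuscation step is deliberately omitted here, since recovering it is the exercise described above.

```python
# Pre-amble from the post: hex 55 55 54, ending in two zero bits.
PREAMBLE = bytes([0x55, 0x55, 0x54])

def build_frame(counter: int, payload: bytes) -> bytes:
    """Assemble the UHF frame: {pre-amble}{frame counter}{length}{payload}.

    A single-byte frame counter and length field are assumed for
    illustration; the badge additionally obfuscates the payload
    before transmission, which this sketch omits.
    """
    return PREAMBLE + bytes([counter & 0xFF, len(payload)]) + payload

frame = build_frame(1, b"TESTING")
assert frame[:3] == b"\x55\x55\x54"   # pre-amble
assert frame[3] == 1                  # frame counter
assert frame[4] == 7                  # payload length
assert frame[5:] == b"TESTING"        # payload (unobfuscated here)
```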

Figure 2. UHF send function utilizing SPI

POST (Power On Self Test)

It’s important to remember that this badge was created for the DEFCON 2024 security conference, and the amazing team at the CHV had to program these badges before they were sold. But how could they ensure the badges were working correctly? The simple answer: a Power On Self Test (POST).

Two key functions of the badge are tested in the code snippet below. First, on line 124, the output on Port B5 is turned on. This signal, as mentioned in the previous blog, is directly connected to the LED on the board via a resistor. This LED serves as a general status indicator.

On line 126, there’s a remark indicating that the loop will run 8 times with a delay, transmitting a test message each time. The test message, defined on line 129, is the word “TESTING”. Immediately following this, on line 130 the send function transmits the test packet. Keep in mind, the send function processes the payload, so you won’t see the actual “TESTING” message being directly transmitted.

Lines 133 through 136 introduce a delay. After the delay, on line 139 the Port B5 output is XORed with itself, meaning it will change state with each loop iteration. This results in the LED flashing while the test UHF transmissions are performed. Finally, after the self-test, on line 146 the LED is turned off by setting the state of Port B5 to 0.

This POST confirms that the badge is functioning correctly and allowed the CHV team to test each badge before it reached the hands of DEFCON attendees, providing a reliable and engaging experience for all participants.

Figure 3. POST function
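The POST logic described above can be sketched in portable C. The register name and the delay and send routines below are hypothetical stand-ins for the dsPIC-specific code shown in Figure 3:

```c
#include <stdint.h>

/* Sketch of the POST logic described above, with hypothetical
 * stand-ins for the dsPIC's port latch register and the send/delay
 * routines. Not the badge's actual register names or code. */
static uint16_t latb;                 /* stands in for the port latch   */
#define LED_MASK (1u << 5)            /* Port B5 drives the status LED  */

static int sent;                      /* counts test transmissions      */
static void uhf_send(const char *payload) { (void)payload; sent++; }
static void delay_ms(int ms) { (void)ms; }

void power_on_self_test(void) {
    latb |= LED_MASK;                     /* LED on                      */
    for (int i = 0; i < 8; i++) {         /* 8 test transmissions        */
        uhf_send("TESTING");              /* payload is obfuscated by    */
        delay_ms(250);                    /* the send function itself    */
        latb ^= LED_MASK;                 /* toggle B5: LED flashes      */
    }
    latb &= (uint16_t)~LED_MASK;          /* LED off after the self-test */
}
```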

Main program code

Now we get to the heart of the code. I saved this part for last because it’s more involved (well, nearly last; there is one final aspect, but I will get to that). This isn’t the most elegant piece of code, but as I explained in Part 1, time was a big constraint on the project. The code works, but it was written in a matter of days and definitely has room for improvement.

The main part of the code polls the ADC (line 152), which is connected via an internal op-amp to the LF antenna. The ADC is configured to sample around 250,000 samples per second. Sometimes, due to the nature of the code—like checking button states—the actual sampling rate drops below that, but it’s still high enough to capture the various energized states of the LF antenna.

The LF signal is alternating current (AC), meaning it alternates above and below zero. Components in the schematic shift this offset so the actual received signal at the microcontroller is always within the range of 0V to 3.3V.

On line 157, this DC offset is removed from the sample so that the LF signal is once again measured as a positive and negative amplitude, and the value is stored in lf_coil_val. Figure 4, courtesy of Wikipedia, shows an example AC signal on the left. The middle schematic is a full-wave bridge rectifier, and the image on the right is the rectified signal.

Figure 4. Signal examples (https://en.m.wikipedia.org/wiki/Rectifier)
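Restoring the signed amplitude is a one-line operation. The sketch below assumes a 12-bit ADC (0 to 4095) spanning 0V to 3.3V with the signal biased at mid-rail (code 2048) by the resistor divider; the badge's actual constants may differ:

```c
#include <stdint.h>

/* Sketch only: assumes a 12-bit ADC (0..4095) spanning 0 V..3.3 V
 * with the LF signal biased at mid-rail (1.65 V, roughly code 2048).
 * Subtracting the bias recovers a signed AC amplitude. */
#define ADC_MID 2048

int16_t remove_dc_offset(uint16_t adc_sample) {
    return (int16_t)adc_sample - ADC_MID;
}
```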

This section of the code is crucial for ensuring accurate signal processing, and it demonstrates the complexity and slightly novel approach involved in the project.

How does this apply to our LF signal? If you recall from the previous blog, the LF signal at the antenna looks much like the left image. The reason there’s no bridge rectifier on the PCB is that the signal we’re dealing with is often too small for a bridge rectifier to handle effectively. Instead, I’ve implemented an equivalent in software after amplifying the signal using an onboard op-amp.

A rectifier is an electrical device that converts AC, which periodically reverses direction, to direct current (DC), which flows in only one direction. This process is essential for detecting the amplitude of the LF signal.

To detect the amplitude, which is ultimately what we need, the first step is to rectify the signal on line 158 by multiplying it by itself. This transforms the signal into something like the image on the right side of Figure 4; because the signal is squared rather than simply rectified, the peaks are much higher.

Remember that all I want to do is determine a high state and a low state, which form the basis of receiving 1s and 0s. This software-based approach allows us to accurately process even the smallest signals, ensuring reliable reception.

Figure 5. Main loop monitoring LF coil via ADC, filtering and rectifying
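The software rectification on line 158 amounts to squaring the centered sample, so both half-cycles contribute a positive amplitude:

```c
#include <stdint.h>

/* Software full-wave rectification as described: squaring a signed
 * sample is always non-negative, so the positive and negative
 * half-cycles both contribute to the measured amplitude. */
int32_t rectify(int16_t centered_sample) {
    return (int32_t)centered_sample * centered_sample;
}
```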

Line 161 applies a simple IIR averaging filter to the amplitude data we now have following line 158. The variable ‘delta_lf_coil_val’ now contains a filtered signal strength value, which we must convert into binary. Line 163 compares the variable to our threshold value. If it is higher, we have a 1, so the code counts the number of cycles for which it remains a 1. Each symbol is many cycles wide, so this cycle count is how the code determines how many consecutive 1 symbols are present.

If the filtered signal strength variable ‘delta_lf_coil_val’ is less than the threshold, we have a 0. Again, the code counts the number of cycles for which the value stays below the threshold. On every change of state, from 1 to 0 or from 0 to 1, the code calculates how many symbols the completed run represents. These per-state cycle counts are stored on line 171 in the code above and on line 188 in the code below.

Figure 6. Detect change of state from high to low and recording number of high cycles
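The filter-and-count stage can be sketched as follows. The IIR constant and threshold are illustrative values, not those used in the badge firmware:

```c
#include <stdint.h>

/* Sketch of the filter-and-threshold stage: an exponential (IIR)
 * average smooths the rectified amplitude, and the length of each
 * run of above/below-threshold cycles is reported on a state change.
 * Constants are illustrative, not the badge's actual values. */
#define IIR_DIV   16
#define THRESHOLD 5000

static int32_t filtered;
static int state, run_len;

/* Returns the length of a completed run when the state flips,
 * or 0 while the current run is still growing. */
int process_sample(int32_t rectified) {
    filtered += (rectified - filtered) / IIR_DIV;   /* IIR average */
    int new_state = (filtered > THRESHOLD);         /* 1 or 0      */
    if (new_state != state) {                       /* state flip  */
        int completed = run_len;
        state = new_state;
        run_len = 1;
        return completed;
    }
    run_len++;
    return 0;
}
```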

To detect the end of a frame, the code on line 194 looks for a large number of 0 symbols. If sufficient 0 symbols are present, the code can continue with the checks, form a message, and ultimately send a UHF frame on line 225.

Line 224 builds the payload; however, prior to that, lines 216 through 220 are where the various counts are converted into the actual 1s and 0s.

Figure 7. Detect end of frame and convert into message
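Converting run lengths into symbols is essentially rounded division by the nominal symbol width. The width constant here is an assumption for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative conversion of run lengths into bits: each symbol is
 * many ADC cycles wide, so dividing a run length by the nominal
 * symbol width (with rounding) yields how many identical bits the
 * run represents. The width constant is an assumption. */
#define CYCLES_PER_SYMBOL 50

size_t run_to_bits(int run_cycles, int bit_value,
                   uint8_t *out, size_t pos, size_t max) {
    int symbols = (run_cycles + CYCLES_PER_SYMBOL / 2) / CYCLES_PER_SYMBOL;
    for (int i = 0; i < symbols && pos < max; i++) {
        out[pos++] = (uint8_t)bit_value;   /* emit one bit per symbol */
    }
    return pos;   /* next write position */
}
```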

What this code essentially does is receive an LF frame. If the frame passes some basic checks, the code then retransmits it over UHF, applying an obfuscation algorithm and a secret value. This is a simplified version of how some key fob systems work. It’s designed for some fun capture-the-flag opportunities and as a platform for building cool systems. However, it also demonstrates some basic principles used in passive entry and passive start systems.

Now imagine replacing the obfuscation algorithm with a proper encryption algorithm and a pre-shared key known only to the key fob and the vehicle. You’d have a simple yet effective way to determine which key fob is nearby. Extrapolate this to a real-world scenario, and you can start to see the basics behind these technologies.

This code serves as a great starting point for understanding and experimenting with the fundamental concepts of secure communication in key fob systems. It’s a blend of practical application and a bit of playful exploration, perfect for anyone looking to dive into the world of passive entry and start systems.

Touch Buttons

The last aspect of this code I’ll cover here is the touch buttons. In Part 1 I discussed the touch buttons implemented using two circular PCB traces. A tiny amount of electricity flows across your skin when you touch them, and the microcontroller detects that electricity, turning it into an on or off state.

To keep the main code running as fast as possible, the button testing code runs periodically. When it does run, it checks the AN3 analogue input on line 245, applies a simple filter on line 246, and, if the filtered value is above the dynamic threshold on line 249, runs the transmit code, which builds a packet with the payload “UNLOCK”. This word forms one of the CTF challenges. Remember, the send function obfuscates the payload, so you would first have to work out the obfuscation algorithm and the secret value in order to recover the flags sent by the key fob.

Figure 8. Touch button sensing code and forming message UNLOCK to send via UHF
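The touch-sensing decision can be sketched like this. The filter constant and threshold are illustrative; the real code's threshold is dynamic and adapts to ambient conditions:

```c
#include <stdint.h>

/* Sketch of the touch-button decision described above. The filter
 * constant and threshold are illustrative stand-ins; the badge's
 * actual code uses a dynamic threshold. */
static int32_t btn_filtered;
static int32_t btn_threshold = 600;   /* assumed threshold value */

int touch_pressed(int32_t an3_sample) {
    btn_filtered += (an3_sample - btn_filtered) / 8;   /* simple filter */
    return btn_filtered > btn_threshold;               /* 1 = touched   */
}
```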

I hope this was interesting and informative. What I love about this sort of project is solving a problem using a generic microcontroller. To produce these in volume, the design would have to change, but for the purpose of a fun platform the approach really works, providing a means to interact with systems that are usually not so accessible.

That brings us to the end of the second of our three-part look at IOActive’s CHV key fob badge. In Part 3, I’ll address how to interact with the badge using your computer by means of a software-defined radio.

INSIGHTS | October 23, 2024

Inside IOActive’s Innovative Key Fob Badge for DEF CON 2024’s Car Hacking Village – Part 1/3

This is Part-1 of a 3-Part Series. Check out Part-2 here and Part-3 here.

IOActive recently sponsored the DEF CON 2024 Car Hacking Village (CHV) by designing one of the exclusive badges sold at the event. This took the form of a key fob badge that mirrors the functionality of everyday car key fobs, which support keyless entry and keyless start, also known as Passive Entry Passive Start (PEPS).

This post kicks off a three-part series explaining the creation of this unique device. In this first post we’ll explore the hardware. Part 2 will look at the technologies involved and the corresponding code. Finally, in the third part, we’ll dive into using signal processing tools and a software-defined radio tool to interact with the key fob.

IOActive created a design that allows anyone interested to learn what it does and how it does it, and to modify it to do all sorts of interesting things. For our design, we settled on a few key features that would allow the overall design to progress smoothly.

Key Features of the Badge:

  • UHF Transmitter: Enables communication with the vehicle for keyless entry.
  • Touch Buttons: Provides user interaction similar to standard key fobs.
  • LF Receiver: Supports the keyless start functionality.

The fun part is that CHV and IOActive open-sourced this project to inspire both seasoned hackers and those new to the field.

Overcoming Constraints

As with any project, we faced certain constraints, particularly regarding time and budget. Despite these challenges, we’re proud of the innovative and educational badge we’ve created. We hope it sparked curiosity and creativity among all participants at the Defcon Car Hacking Village 2024.

Constraints and considerations included the following categories:

Budget and Time Constraints:

  • Cost Efficiency: Developing a badge that costs $50 per unit to manufacture would render the project financially impractical for DEFCON attendees.
  • Time Management: Extensive design work had to be balanced with our primary responsibility of supporting our clients.

Single Prototype Design:

  • Limited Prototyping: We were allowed only one prototype PCB design before moving to production.
  • Time Constraints: Unlike commercial OEMs, we had limited time to transition from concept to production.

Design Requirements:

  • UHF Transmitter: Designed to be simple with minimal components, ensuring a low bill of materials (BOM) count.
  • LF Receiver: Features a PCB printed coil for reception and integrates touch buttons for user interaction.
  • Microcontroller: Must support CAN bus communications and provide sufficient analog resources.
  • CAN Bus Support: The key fob acts as an ECU, communicating via the CAN bus.

These constraints and considerations guided our design process, ensuring we created an innovative and educational key fob badge for the CHV. Despite the challenges, we were excited to see how this badge would inspire and educate participants.

Technical Bits

Now that you understand the project and what we wanted to achieve, let’s get into the technical details of how it actually works. Sorry in advance, as this is going to get very techy very quickly.

When it came to selecting the microcontroller for our key fob badge, we had to ensure it met several critical requirements as mentioned earlier:

  • CAN Bus Support: Essential for the key fob to act as an ECU and communicate effectively.
  • Analog Support: Needed for both ADC and OP-AMP functionalities. The OP-AMP amplifies the small signal from the LF coil, while the ADC converts this amplified signal into a digital format for software processing.
  • UHF Transmitter Communication: The UHF transmitter, a Microchip Technology MICRF113, requires a single wire serial input to control the RF output. Using SPI with the CLK and MISO disabled in the microcontroller provides an ideal method for sending messages to the MICRF113.
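The single-wire trick can be illustrated as follows: with SCK and MISO unused, every bit shifted out of MOSI becomes one on/off-keyed (OOK) chip at the MICRF113 data input. This host-testable sketch shows how a data bit could be expanded into repeated SPI chips to set the over-the-air symbol width; the repeat factor is an assumption, not the badge's actual value:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustration of the single-wire trick: with SCK and MISO unused,
 * each bit shifted out of the SPI MOSI pin becomes one OOK chip at
 * the MICRF113 data input. Repeating each data bit stretches it to
 * the desired symbol width. The repeat factor is an assumption. */
#define CHIPS_PER_BIT 4   /* SPI bits per radio symbol (illustrative) */

size_t expand_bit(uint8_t bit, uint8_t *spi_out, size_t pos) {
    for (int i = 0; i < CHIPS_PER_BIT; i++) {
        /* pack repeated chips MSB-first into the SPI byte stream */
        if (bit)
            spi_out[pos / 8] |= (uint8_t)(0x80u >> (pos % 8));
        pos++;
    }
    return pos;   /* next chip position */
}
```

Feeding the resulting byte stream to the SPI peripheral then clocks the waveform out of MOSI with precise, hardware-controlled timing.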

Given my personal experience with Microchip parts, ranging from the latest ARM cores to legacy signal processors, and the fact that I had the necessary development studios, programmers, and tools already installed, I decided to run with what Microchip had to offer. The next step was to use Microchip’s parametric search tool. By inputting our requirements and sorting by cost, I was able to identify a suitable part.

The Chosen Microcontroller

The microcontroller we selected is the Microchip dsPIC33CK32MP502. This automotive-grade part not only supports CAN bus communication but also includes the necessary analog components to handle the LF receiver and touch buttons.

This choice ensures that our key fob badge is both functional and cost-effective, meeting all of its design requirements while staying within budget and time constraints.

Once I had the basic parts selected, it was time to start drawing the schematic:

Figure 1. Microcontroller

Let’s start off by going through the critical sections of the schematic. If you’re following along, look for the names, pin numbers, etc., and it should all make sense.

LF_ANT Net and Buttons

The LF_ANT net connects to the LF coil (LF_COIL) and is crucial for receiving signals. The LF_ANT, BUTTON_1, and BUTTON_2 nets on the left side connect to pins 2, 3, and 4, respectively. On the right side, MB_RX and MB_TX connect to the main badge via the CAN bus, while the DATA net serves as an output connection to the MICRF113 UHF transmitter. Additional components include a 2×3 pin header and a programming connector.

Amplification and Gain Control

Note the 47K resistor connecting pin 28 and pin 1 of the microcontroller. This resistor controls the gain, or amplification, of the LF signal. The gain ratio depends on R3 and R5: doubling R3 doubles the amplifier gain. Choosing a low gain is essential because of the background electromagnetic energy at DEF CON and CHV events; too high a gain would amplify that noise and leave the receive software with a poor signal-to-noise ratio.
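As a back-of-the-envelope check, assuming the op-amp is wired as a standard inverting amplifier where the gain magnitude is R3/R5 (the R5 value in the test below is purely illustrative), the doubling behaviour falls out directly:

```c
/* Back-of-the-envelope gain check, assuming a standard inverting
 * amplifier configuration where |gain| = R3 / R5. Resistor values
 * passed in are illustrative; only the ratio matters. */
double lf_gain(double r3_ohms, double r5_ohms) {
    return r3_ohms / r5_ohms;
}
```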

LF Antenna Implementation

Figure 2. LF antenna design

The LF antenna design is intriguing and adds to the key fob’s functionality:

  • LF_COIL Net: This net represents a track from C1 to C2, with C2 being a 0-ohm resistor chosen after tuning the coil. The actual LF coil is a spiral track on the PCB. Ideally, a dedicated component for the spiral track would be preferable, but time constraints led to this design.
  • C1 and High Pass Filter: C1 forms a high-pass filter, allowing the connection between R1 and R2 to center around half of VDD (1.65V in this case). Analog pins on the microcontroller can handle voltages within the 0V to 3.3V range only.
  • Protection with D1: Component D1 is a rail-to-rail Schottky diode that protects the microcontroller (connected via the LF_ANT net) from strong LF coil signals. When the key fob is close to an LF transmitter in a car, D1 clips the signal to keep it within safe limits. In the simulation below, the green LF_ANT signal oscillates between 0V and 3V, while the blue LF_COIL signal oscillates between +12V and -12V.
Figure 3. LT Spice simulation showing a strong LF signal in blue being clipped by D1 in green
  • Signal Strength Variation: As the key fob moves away from the car, the LF signal weakens and D1 no longer clips it. The green LF_ANT signal now oscillates between 0.8V and 2.4V, while the blue LF_COIL signal oscillates between +0.8V and -0.8V.
Figure 4. LT Spice simulation showing typical LF coil AC signal in blue shifted to DC offset for ADC in green

Conclusion

In summary, our key fob badge design balanced functionality, protection, cost-effectiveness, and practicality within tight time constraints. With the dsPIC33CK32MP502, we achieved CAN bus support, analogue capabilities, and simple communication with the UHF transmitter, keeping the total number of components to a minimum and speeding up the overall development time.

Part 2 of this series will delve into the firmware, including where to find all the resources and how to modify it. The final part of this three-part series will discuss interacting with the key fob using LF, working with LF tools like GNU Radio, capturing UHF transmissions from the fob, and demodulating and interpreting those signals.

INSIGHTS | October 22, 2024

KARMA v1.0 (Key Attribute and Risk Management and Analysis)

KARMA v1.0 (Key Attribute and Risk Management and Analysis) is a risk-rating system developed by IOActive to assess a system’s ability to avoid negative outcomes based on specific key attributes. It uses the expertise of subject matter experts (SMEs) to identify the factors that best predict risks in real-world scenarios. “System” refers to the asset (e.g., application, software, device, or component) evaluated in its likely deployment context.

KARMA has been used for over 20 years and is effective across various security assessments, including web, mobile, infrastructure, embedded systems, code reviews, and design reviews.

KARMA evaluates vulnerabilities based on two factors: likelihood (the probability of an attacker finding and exploiting the vulnerability) and impact (the consequences of exploitation). Ratings are contextualized based on the system’s environment, and the risk score is computed as the product of likelihood and impact, each rated from 1 (informational) to 5 (critical).
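The scoring rule is simple enough to express in a few lines. This is a minimal sketch of the multiplication described above, not IOActive's actual tooling:

```c
/* Minimal sketch of the KARMA scoring rule described above: risk is
 * the product of likelihood and impact, each rated 1 (informational)
 * to 5 (critical), giving a score from 1 to 25. */
int karma_risk(int likelihood, int impact) {
    if (likelihood < 1 || likelihood > 5 || impact < 1 || impact > 5)
        return -1;   /* outside the defined rating range */
    return likelihood * impact;
}
```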

KARMA stands out as a rating system due to its simplicity and elegance. Its design ensures that both technical and non-technical audiences can easily understand and engage with it, making it an accessible tool for a wide range of users.

INSIGHTS, RESEARCH | October 15, 2024

Getting Your SOC SOARing Despite AI

It’s a fact: enterprise security operations centers (SOCs) that are most satisfied with their investments in Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) operate and maintain less than a dozen playbooks. This is something I’ve uncovered in recent years whilst building SIEM+SOAR and autonomous SOC solutions – and it perhaps runs counterintuitive to many security leaders’ visions for SOAR use and value.

SOAR technology is one of those much-touted security silver bullets that have tarnished over time and been subsumed into broader categories of threat detection and incident response (TDIR) solutions, yet it continues to remain a distinct must-have modern SOC capability.

Why do satisfied SOCs run so few playbooks? After all, a core premise of SOAR was that you can automate any (perhaps all) security responses, and your SOC analysts would spend less time on ambient security noise – giving them more time to focus on higher-priority incidents. Surely “success” would include automating as many responses as possible?

Beyond the fact that “security is hard,” the reality is that threat detection and response is as dynamic as the organization you’re trying to protect. New systems, new policies, new business owners, new tools, and a sea of changing security products and API connector updates mean that playbooks must be dynamic and vigilantly maintained, or they become stale, broken, and ineffective.

Every SOC team has at least one playbook covering their phishing response. It’s one of the most common and frequently encountered threats within the enterprise, yet “phishing” covers an amazingly broad range of threats and possible responses, so a playbook-based response to the threat is programmatically very complex and brittle to environmental changes.

From a SOC perspective of automating and orchestrating a response, you would either build a lengthy single if/then/else-style playbook or craft individual playbooks for each permutation of the threat. Smart SOC operators quickly learn that the former is more maintainable and scalable than the latter. A consequence of this is that you need analysts with more experience to maintain and operate the playbook. Any analyst can knock up a playbook for a simple or infrequently encountered threat vector, but it takes business knowledge and vigilance to maintain each playbook’s efficacy beyond the short term.

Surely AI and a sprinkle of LLM magic will save the day though, right?

I can’t count the number of security vendors and startups that have materialized over the last couple of years with AI and LLM SOAR capabilities, features, or solutions – all with pitches that suggest playbooks are dead, dying, replaced, accelerated, automatically maintained, dynamically created, managed, open sourced, etc., so the human SOC analyst does less work going forward. I remain hopeful that 10% of any of that eventually becomes true.

For the immediate future, SOC teams should continue to be wary of any AI stories that emphasize making it easier to create playbooks (or their product-specific equivalent of a playbook). More is NOT better. It’s too easy (and all too common) to fall down the rathole of creating a new playbook for an existing threat because it’s too hard to find and maintain an earlier iteration of that threat’s playbook. Instead, focus on a subset of the most common and time-consuming threats that your SOC already faces daily, and nail them using the smallest number of playbooks you can get away with.

With the Rolling Stones’ “(I Can’t Get No) Satisfaction” playing in the background, perhaps you’ll get some modicum of SIEM+SOAR satisfaction by keeping your playbook playlist under a dozen.

— Gunter Ollmann

INSIGHTS, RESEARCH | October 2, 2024

Potential Integrated Circuit Supply Chain Impacts from Hurricane Helene

The damage caused by Hurricane Helene in Spruce Pine will likely cause disruptions at the start of the microchip and integrated circuit (IC) supply chain by preventing the mining and distribution of high purity quartz until the mines and local transportation networks are fully repaired.

BACKGROUND

Hurricane Helene Impacts

In late September 2024, Hurricane Helene impacted the Caribbean, Florida, Georgia, Tennessee, North Carolina and other southeastern states in the United States[1]. Its impacts varied widely depending on location and the associated exposure to the primary effects – wind, rain, and storm surge – of the hurricane. While Florida was primarily affected by wind and storm surge, and Georgia was impacted by wind, Tennessee and western North Carolina faced torrential rainfall from Helene after significant rainfall from another storm system in the prior days.

These rains produced catastrophic flooding in the mountains of Tennessee[2] and North Carolina[3], which caused tremendous damage to road and rail transportation networks in affected areas. In western North Carolina, the state’s Department of Transportation advised, “All roads in Western NC should be considered closed[4].”

Spruce Pine High-quality Quartz Mine

Located in the Blue Ridge Mountains of western North Carolina[5], Spruce Pine is the source of the world’s highest quality (ultra-pure) quartz. Major mine owners and operators in the Spruce Pine area include Sibelco, a Belgium-based corporation[6], and The Quartz Corp, a Norway-based corporation. The quartz from this area is critical in the manufacture of semiconductors, photovoltaic cells, and optical fiber. As Sibelco explains, high purity quartz (HPQ) sands are used to produce “fused quartz[7] (almost pure amorphous silicon dioxide) crucibles used in the Czochralski process[8], and the production of fused quartz tubing and ingots used to create fabricated quartzware for the semiconductor wafer process[9].”

Sibelco has increased production of HPQ by 60% since 2019 and is investing an additional US$200 million to increase production by a further 200% in 2024 to meet expected increases in market demand[10]. Unfortunately, operations are currently suspended at the mine due to the hurricane’s disruption of the most basic services, including road and rail links[11]. The CSX Transportation rail line into Spruce Pine is severely damaged, with entire bridges missing[12].

Alternatives for HPQ

There are slower, more expensive processes that make use of lower quality quartz inputs. Brazil, India, and the Russian Federation (Russia) are other sources of HPQ, but not the same in quality or amount[13]. Additional sources of varying quantity and quality exist in Madagascar, Norway, and the People’s Republic of China (PRC).

CONSEQUENCES

Why IOActive Cares

This incident involves key areas of interest for IOActive – areas in which we have made significant investments to help our clients protect themselves. Specifically, this incident involves impacts to microchips and integrated circuits (ICs)[14], supply chain risks[15], and multimodal transportation and logistics[16].

Potential Supply Chain Impacts

Predictions and forecasts are only ever certain to be wrong, but they provide useful insight into possible future states, which aid decision makers and stakeholders in managing the risks. The key variable for this event is how long operations at the Spruce Pine mines might be suspended due to local impacts (mine-specific operational issues) or regional issues (such as multimodal transportation network disruption) stemming from the effects of Hurricane Helene.

Temporary repairs to bridges can be made in a matter of days, weeks, or months, depending on the level of damage, while more permanent repairs taking many months or years are planned and completed. Unfortunately, these temporary repairs may limit the weight of crossing vehicles until the permanent repairs are finished. The complete loss of several road and rail bridges and the washed-out sections of roads and rail lines will require several years to fully repair and return to full capacity. The extensive damage to the road and rail networks serving the Spruce Pine area will impact mine operations for some time, but the mines will likely be operating in a reduced, degraded state within a few months, assuming no additional natural disasters.

Cybersecurity Risks

When observing a consequential event such as an accident, storm, or other disaster, it’s helpful to ponder whether those same consequences could be produced by a cyberattack. It can be nearly impossible for anyone to comprehensively understand all the different failure modes of a complex system or system of systems. A convenient shortcut for a cyber threat actor is to look to recreate previous system failure modes rather than search for unique ones. Reviewing the consequences of this incident reveals several vectors that could allow a highly capable, determined threat actor to launch a cyberattack to shut down mining operations in Spruce Pine.

Broadly, attack targets could include locomotives or commercial vehicles operated on the roads, rail line signaling equipment, or mine information technology (IT) and operational technology (OT) systems. This assessment is based on the results of our public research and our confidential commercial work. A successful attack on any of these targets could produce a consequential impact for mining operations, but the duration of the impact is unlikely to be as long as the impacts from Hurricane Helene.

RECOMMENDATIONS

Risk Management

Since the Spruce Pine mines are such an important node in the global supply chain for microchips and ICs, additional all-hazards risk management and mitigation action should be taken at both the state and federal levels to ensure fewer, shorter interruptions and more resilient operations.

Cybersecurity

Strategically important mines such as this should have requirements for strong cybersecurity, including both IT and OT assets, to ensure that there are minimal to no operational disruptions from a cyberattack.

National Security

As the United States confronts the malign activities of the PRC, it should consider restrictions on key inputs to microchips and ICs, including HPQ, in addition to the existing restrictions on high-performance computing resources like GPUs[17] and semiconductor manufacturing equipment like lithography equipment[18].


[1] https://en.wikipedia.org/wiki/Hurricane_Helene
[2] https://www.knoxnews.com/story/weather/2024/09/30/hurricane-helene-deadly-east-tennessee-floods-what-to-know-schools-roads/75447229007/
[3] https://climate.ncsu.edu/blog/2024/09/rapid-reaction-historic-flooding-follows-helene-in-western-nc/
[4] https://x.com/NCDOT/status/1839685402589827554
[5] 7638 South Highway 226, Spruce Pine, NC, 28777, United States
[6] https://www.sibelco.com/en/about-us
[7] https://en.wikipedia.org/wiki/Fused_quartz
[8] https://www.sciencedirect.com/topics/chemistry/czochralski-process
[9] https://www.sibelco.com/en/materials/high-purity-quartz
[10] https://assets-eu-01.kc-usercontent.com/54dbafb3-2008-0172-7e3d-74a0128faac8/64fae543-971f-46f5-9aec-df041f6f50f6/Webcast_H1_2024_Results_final.pdf
[11] https://www.thequartzcorp.com/articles/impact-of-hurricane-helene-on-the-quartz-corp-in-spruce-pine
[12] https://www.freightwaves.com/news/csxs-former-clinchfield-railroad-barely-recognizable-after-historic-flood
[13] http://www.sinosi.com/hotsales/Product/02%20HPQ%20Promotion%20_English%20Version.pdf
[14] https://ioactive.com/service/full-stack-security-assessments/silicon-security/
[15] https://ioactive.com/supply-chain-risks-go-beyond-cyber/
[16] https://ioactive.com/industry/transportation/
[17] https://www.reuters.com/technology/nvidia-may-be-forced-shift-out-some-countries-after-new-us-export-curbs-2023-10-17/
[18] https://www.csis.org/analysis/updated-october-7-semiconductor-export-controls


INSIGHTS, RESEARCH | September 4, 2024

About to Post a Job Opening? Think Again – You May Reveal Sensitive Information Primed for Cybersecurity Attacks

People are always on the move, changing their homes and their workspaces. With increasing frequency, they move from their current jobs to new positions, seeking new challenges, new people and places, and higher salaries.

Time and hard work bring experience and expertise, and these two qualities are what companies look for; they’re looking for skilled workers every single day, on multiple job search and recruiting platforms. However, these job postings might reveal sensitive information about the company that even the most seasoned Human Resources specialists don’t notice.

Job posting websites are a goldmine of information. Inherently, recruiters have to disclose certain data points, such as the technologies used by the company, so that candidates can assess whether they should apply. On the other hand, these data points could be used by malicious actors to profile a specific company and launch more sophisticated targeted attacks against the company and its employees.

To demonstrate this concept, I did research on tens of job postings from the following websites:

Surprisingly, more than 40% of job postings reveal relatively sensitive information, such as the following, which are just a sample of the information obtained from a variety of companies:

As you can see, a variety of information is disclosed inadvertently in these job postings:

  • Exact version of the software used in the backend or by end users
  • Programming languages, frameworks and libraries used
  • Cloud Service Providers where customer data resides
  • Intranet and collaborative software used within the company
  • Antivirus and endpoint security software in use
  • Industry-specific and third-party software used
  • Databases, storage and backup, and recovery platforms used
  • Business relationships with other companies
  • Security controls implemented in the company’s SDLC

Armed with this information, one can simply connect the data dots and infer things like:

  • Whether a company uses proprietary or open-source software, implying the use of other similar proprietary/open-source applications that could be targeted in an attack.
  • Whether a company performs Threat Modeling and follows a secure SDLC, providing an attacker with a vague idea of whether the in-house-developed applications are secure or not.
  • Whether a company has business relationships with other companies, enabling an attacker to target those third parties and use them as a pivot to attack the target company.

In summary, IOActive strongly encourages recruiters not to include sensitive information other than that required by the job position – in attempting to precisely target the exact candidate for a job, the level of detail you use could be costly.

INSIGHTS, RESEARCH | August 20, 2024

Get Strategic About Cyber Risk Management

With global cybercrime damage costs exceeding $11 trillion last year and moving toward an estimated $20 trillion by 2026, robust cybersecurity risk management has never been more imperative.

The interconnected nature of modern technology means that, by default, even small vulnerabilities can lead to catastrophic losses. And it’s not just about finances. Unmitigated risk raises the specter of eroded customer confidence and tainted brand reputation. In this comprehensive guide, we’ll give enterprise defenders a holistic, methodical, checklist-style approach to cybersecurity risk management. We’ll focus on practical applications, best practices, and ready-to-implement strategies designed to mitigate risks and safeguard digital assets against ever-more numerous—and increasingly capable—threats and adversaries.

What is Cybersecurity Risk Management?

This subspecialty of enterprise risk management describes a systematic approach to identifying, analyzing, evaluating, and addressing cyber threats to an organization’s assets and operations. At its core, it involves a continuous cycle of risk assessment, risk decision-making, and the implementation of risk controls intended to minimize the negative impact of cyber incidents.

A proactive cyber risk mitigation approach helps organizations protect critical digital assets and bolster business continuity, legal compliance, and customer trust. By integrating risk management with the organization’s overall strategic planning, cybersecurity teams can prioritize resources efficiently and align their efforts with the business’s risk appetite and objectives.

Why Has Cyber Risk Management Become So Critical?

Getting control over cyber risk is quickly becoming a core requirement for businesses operating in today’s ubiquitously digital environment. The proliferation of digital information and internet connectivity has paved the way for sophisticated cyber threats that can penetrate many of our most robust defenses. With the digital footprint of businesses expanding exponentially, the potential for data breaches, ransomware attacks, and other forms of cybercrime has escalated dramatically.

These incidents can result in devastating financial losses, legal repercussions, and irreparable damage to an organization’s reputation. Furthermore, as regulatory frameworks around data protection become more stringent, failure to comply can lead to significant penalties. Given these conditions, an aggressive and comprehensive approach to managing cybersecurity risks is crucial for safeguarding an organization’s assets, ensuring operational continuity, and maintaining trust with customers and stakeholders.

Effective Cyber Risk Management: A Framework-Based Approach

Adopting a structured, framework-based approach to cybersecurity risk management lets security teams corral the complexity of digital environments with a methodical, strategic mitigation methodology. For most enterprise applications, there’s no need to reinvent the wheel: myriad established frameworks can be modified and customized for effective use in nearly any environment.

Perhaps the best known is the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF), a companion to NIST’s well-tested and widely implemented Cybersecurity Framework (CSF). The NIST RMF offers a structured and systematic approach for integrating security, privacy, and risk management processes into an organization’s system development life cycle.

Such frameworks provide a comprehensive set of guidelines that help identify and assess cyber threats and facilitate the development of effective strategies to mitigate these risks. By standardizing cybersecurity practices, organizations can ensure a consistent and disciplined application of security measures across all departments and operations.

This coherence and uniformity are crucial for effectively addressing vulnerabilities and responding to incidents promptly. Equally important, frameworks incorporate best practices and benchmarks that help guide organizations toward achieving compliance with regulatory requirements, thus minimizing legal risks and enhancing the safeguarding of customer data. In essence, a framework-based approach offers a clear roadmap for managing cyber risk in a way that’s aligned with organizational strategic objectives and industry standards.

What follows is a checklist based on the 7-step RMF process. This is just a starting point. A framework to-do list like this can and should be tweaked to aid in reducing and managing specific cyber risks in your unique enterprise environment.

1. Preparation

In this initial phase, organizations focus on establishing the context and priorities for the Risk Management Framework process. This involves identifying critical assets, defining system boundaries, and codifying a risk management strategy that aligns with the organization’s objectives and resources. This is the foundation upon which a tailored approach to managing cybersecurity risk will ultimately be built throughout the system’s lifecycle.

  • Establish the context for risk management and create a risk management strategy.
  • Define roles and responsibilities across the organization.
  • Develop a taxonomy for categorizing information and information systems.
  • Determine the legal, regulatory, and contractual obligations.
  • Prepare an inventory of system elements, including software and hardware.

2. Systems Categorization

Expanding on the categorization step (above), this phase involves identifying the types of information processed, stored, and transmitted to determine potential impact as measured against the information security CIA triad (confidentiality, integrity, and availability). Organizations can assign appropriate security categories to their systems by leveraging a categorization standard such as the Federal Information Processing Standard (FIPS) 199, ensuring that the protective measures taken are tailored to the specific needs and risks associated with the information being handled. This step is crucial as it lays the groundwork for selecting suitable security controls in the later stages of the risk management process.

  • Identify the types of information processed, stored, and transmitted by the system.
  • Assess the potential impact of loss of Confidentiality, Integrity, and Availability (CIA) associated with each type.
  • Document findings in a formal security categorization statement.
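The high-water-mark logic behind FIPS 199-style categorization can be sketched as follows: the overall system impact is the worst rating across every information type and every CIA objective. The impact levels mirror the standard, but the information types and ratings below are illustrative:

```python
# Ordinal impact levels per FIPS 199 (low < moderate < high).
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(info_types: dict[str, dict[str, str]]) -> str:
    """info_types maps each information type to its C/I/A impact ratings;
    the system categorizes at the highest rating found anywhere."""
    worst = "low"
    for ratings in info_types.values():
        for objective in ("confidentiality", "integrity", "availability"):
            if LEVELS[ratings[objective]] > LEVELS[worst]:
                worst = ratings[objective]
    return worst

# Illustrative information types handled by a hypothetical system.
system = {
    "customer_pii":     {"confidentiality": "high", "integrity": "moderate", "availability": "low"},
    "public_marketing": {"confidentiality": "low",  "integrity": "low",      "availability": "moderate"},
}
print(high_water_mark(system))  # → high
```

A single high-impact information type is enough to pull the whole system up to a high categorization, which is exactly why this step must precede control selection.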

3. Selecting Appropriate Security Controls

This critical step begins the safeguarding of information systems against potential threats and vulnerabilities in earnest. Based on the categorization of the information system, organizations select a baseline of security and privacy controls (NIST Special Publication 800-53 or some equivalent controls standard is a good starting point here), corresponding to the system’s impact level. This baseline acts as the jumping-off point for the security controls, which can be tailored to address the specific risks identified throughout the risk assessment process. Customization involves adding, removing, or modifying controls to ensure a robust defense tailored to the unique requirements and challenges of the organization.

  • Select an appropriate baseline of security controls (NIST SP 800-53 or equivalent).
  • Tailor the baseline controls to address specific organizational needs and identified risks.
  • Document the selected security controls in the system security plan.
  • Develop a strategy for continuously monitoring and maintaining the effectiveness of security controls.
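The select-then-tailor process can be sketched like this. The control identifiers follow NIST SP 800-53 naming, but the baseline contents are illustrative stand-ins, not the actual published baselines:

```python
# Illustrative baselines keyed by system impact level (NOT the real
# NIST SP 800-53 baselines; identifiers follow the 800-53 naming style).
BASELINES = {
    "low":      {"AC-2", "AU-2", "IA-2"},
    "moderate": {"AC-2", "AU-2", "IA-2", "SC-7", "SI-4"},
    "high":     {"AC-2", "AU-2", "IA-2", "SC-7", "SI-4", "AC-6", "IR-4"},
}

def tailor_baseline(impact: str, add: set = frozenset(), remove: set = frozenset()) -> set:
    """Start from the baseline for the impact level, then add or remove
    controls per the risk assessment (each change needs documented rationale)."""
    selected = set(BASELINES[impact])
    selected |= set(add)
    selected -= set(remove)
    return selected

plan = tailor_baseline("moderate", add={"CP-9"}, remove={"SI-4"})
print(sorted(plan))
```

The tailored set, along with the rationale for every addition and removal, is what gets recorded in the system security plan.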

4. Implementing the Selected Controls

Implementing security controls involves the physical and technical application of measures chosen during the previous selection phase. This step requires careful execution to ensure all controls are integrated effectively within the environment, aligning with its architecture and operational practices. Documenting the implementation details is crucial to provide a reference for future assessments and maintenance activities.

  • Implement the security controls as documented in Step 3.
  • Document the implemented security controls and the entities responsible for maintaining them.
  • Test thoroughly to ensure compatibility and uninterrupted functionality.
  • Prepare for security assessment by documenting the implementation details.

5. Assessing Controls Performance

Assessing security controls involves evaluating effectiveness and adherence to the security requirements outlined in the overall security plan. This phase is critical for identifying any control deficiencies or weaknesses that could leave the information system vulnerable. Independent reviewers or auditors typically conduct assessments to ensure objectivity and a comprehensive analysis.

  • Develop and implement a plan to assess the security controls.
  • Perform security control assessments as per the plan.
  • Prepare a Security Assessment Report (SAR) detailing the effectiveness of the controls.
  • Determine if additional controls are needed and amend the security plan accordingly.
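At its core, a SAR is a record of per-control findings that feeds the authorization decision. A minimal sketch, using an invented structure rather than any mandated SAR format:

```python
from dataclasses import dataclass

@dataclass
class ControlFinding:
    """One assessed control: satisfied or not, with assessor notes."""
    control_id: str
    satisfied: bool
    notes: str = ""

def sar_summary(findings: list) -> dict:
    """Condense findings into the deficiency list the authorizer will review."""
    deficiencies = [f.control_id for f in findings if not f.satisfied]
    return {"assessed": len(findings), "deficient": deficiencies}

findings = [
    ControlFinding("AC-2", True),
    ControlFinding("SC-7", False, "firewall ruleset not reviewed"),
]
print(sar_summary(findings))
```

Each deficiency recorded here typically becomes an entry in the Plan of Action and Milestones used in the next step.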

6. Authorizing the Risk Management Program

The authorization phase is a vital decision-making interval where one or more senior executives evaluate the security controls’ assessment results and decide whether the remaining risks to the information systems are acceptable to the organization. Upon acceptance, authorization is granted to operate the mitigation program for a specific time period, during which its compliance and security posture are continuously monitored. This authorization is formalized through the issuance of what is known as an Authorization to Operate (ATO) in some organizations, particularly in the public sector.

  • Compile the required authorization package, including the master plan, the SAR, and the so-called Plan of Action and Milestones (POA&M).
  • Assess the residual risk against the organizational risk tolerance.
  • Document the authorization decision in an Authorization Decision Document.
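The heart of the authorization decision is a comparison of residual risk against organizational risk tolerance; a minimal sketch, with an ordinal risk scale assumed purely for illustration:

```python
# Assumed ordinal scale; real programs often use richer risk matrices.
RISK_SCALE = {"low": 1, "moderate": 2, "high": 3}

def authorization_decision(residual_risk: str, risk_tolerance: str) -> str:
    """Simplified outcome for the Authorization Decision Document: authorize
    only when residual risk is within the organization's tolerance."""
    if RISK_SCALE[residual_risk] <= RISK_SCALE[risk_tolerance]:
        return "authorize to operate"
    return "denial of authorization"

print(authorization_decision("moderate", "moderate"))  # → authorize to operate
print(authorization_decision("high", "moderate"))      # → denial of authorization
```

In practice the decision also weighs the POA&M: an over-tolerance residual risk may still be authorized conditionally if credible remediation milestones exist.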

7. Monitoring and Measuring Against Performance Metrics

The monitoring phase ensures that all implemented security controls remain effective and compliant over time. Continuous surveillance, reporting, and analysis can promptly address any identified vulnerabilities or changes in the operational environment. This ongoing process supports the kind of flexible, adaptive security posture necessary for dealing with evolving threats while steadfastly maintaining the integrity and availability of the information system.

  • Implement the plan for ongoing monitoring of security controls.
  • Report the system’s security state to designated leaders in the organization.
  • Perform ongoing risk assessments and response actions, updating documentation as necessary.
  • Conduct reviews and updates regularly, in accordance with the organizational timelines, or as significant changes occur.
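A monitoring cycle ultimately produces a recurring status report for designated leaders; a minimal sketch, assuming a simple effective/ineffective status per control:

```python
from datetime import date

def monitoring_report(controls: dict, as_of: date) -> dict:
    """Summarize control status for leadership: which controls are failing
    and whether a risk response is required."""
    failing = sorted(name for name, effective in controls.items() if not effective)
    return {
        "as_of": as_of.isoformat(),
        "total": len(controls),
        "failing": failing,
        "action_required": bool(failing),
    }

# Illustrative status snapshot from an assumed monitoring tool.
status = {"AC-2": True, "AU-2": True, "SC-7": False}
report = monitoring_report(status, date(2024, 8, 20))
print(report)
```

Anything in the failing list loops back into the assessment and authorization steps, which is what makes the framework a cycle rather than a one-time project.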

Conclusion: Formalizing Cyber Risk Mitigation

A solid risk management framework provides a comprehensive guide for enhancing the security and resilience of information systems through a structured process of identifying, implementing, and monitoring security controls.

Sticking to a framework checklist helps ensure a successful, systematic adoption. As noted throughout, engaging stakeholders from across the organization, including IT, security, operations, and compliance, is critical to ensuring a truly comprehensive risk management program. Additionally, periodic training and awareness for team members involved in various phases of the risk management project will contribute to the resilience and security of the organization’s digital assets.

Organizations can effectively safeguard their digital assets and mitigate unacceptable risks by following the outlined steps, tailoring the program to fit specific organizational needs, involving stakeholders, conducting regular training, and adapting to the evolving cybersecurity landscape. Ultimately, this kind of formal, structured cyber risk management fosters a culture of continuous improvement and vigilance in an enterprise, contributing to the overall security posture and the success of the organization.

INSIGHTS, RESEARCH | July 25, 2024

5G vs. Wi-Fi: A Comparative Analysis of Security and Throughput Performance

Introduction

In this blog post we compare the security and throughput performance of 5G cellular to that of WiFi. This work is part of the research IOActive published in a recent whitepaper (https://bit.ly/ioa-report-wifi-5g), which was commissioned by Dell. We used a Dell Latitude 7340 laptop as an end-user wireless device, a Panda Wireless® PAU06 as a WiFi access point, and an Ettus Research™ Universal Software Radio Peripheral (USRP™) B210 as a 5G base station to simulate a typical standalone 5G configuration and three typical WiFi network configurations (home, public, and corporate). Testing was performed between January and February 2024, during which we simulated a variety of attacks against these networks and measured performance in a range of real-world environments.

Security Tests

We researched known 5G and WiFi attacks and grouped them according to five different goals: user tracking, sensitive data interception, user impersonation, network impersonation, and denial of service. These goals easily map to the classic Confidentiality, Integrity, Availability (CIA) security triad. We then reproduced the attacks against our controlled test networks to better understand their requirements, characteristics, and impact. The results of these investigations are summarized below:


We noted that, in general, the 5G protocol was designed from the ground up to provide several assurances that the WiFi protocol does not provide. Although mitigations are available to address some of the attacks, WiFi is still unable to match the level of assurance provided by 5G.

For example, 5G protects against user tracking by using concealed identifiers. Attacks to bypass these identifiers are easily detectable and require highly skilled and well-funded attackers. In contrast, WiFi does not attempt to protect against user tracking, since MAC addresses are transmitted in plaintext with every packet. Although modern devices have tried to mitigate this risk by introducing MAC address randomization, passive user tracking is still easy to accomplish due to shortcomings in MAC address randomization and probe request analysis.
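One reason passive tracking remains feasible is that randomized MAC addresses are themselves distinguishable: per the IEEE 802 addressing scheme, software-generated addresses set the locally administered bit (bit 1 of the first octet), so a passive observer can separate them from vendor-assigned, burned-in addresses. A short sketch, using made-up example addresses:

```python
def is_locally_administered(mac: str) -> bool:
    """True when the locally administered bit is set in the first octet,
    which is how randomized (software-assigned) MAC addresses are marked."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

print(is_locally_administered("da:a1:19:12:34:56"))  # → True (likely randomized)
print(is_locally_administered("3c:22:fb:12:34:56"))  # → False (vendor OUI)
```

Combined with probe-request fingerprinting, this simple check lets an observer cluster a device's rotating addresses back into a single trackable identity.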

IOActive also noted that the use of layered security protocols mitigated most sensitive data interception and network impersonation attacks. Although the majority of users do not use a VPN when connecting to the Internet, most websites use TLS, which, when combined with HSTS and browser preload lists, effectively prevents an attacker from intercepting most of the sensitive data a user might access online. However, even multiple layered security protocols cannot protect against vulnerabilities in the underlying radio protocol. For example, a WiFi deauthentication attack would not be affected in any way by the use of TLS or a VPN.

Performance Tests

We conducted performance tests by measuring throughput and latency from a wireless device in a variety of environments, ranging from urban settings with high spectrum noise and many physical obstacles, to rural areas where measurements more closely reflected the attributes of the underlying radio protocol.

Of particular note, we found that a wireless device could maintain a connection to a 5G wireless base over a significant distance, even with substantial interference from buildings and other structures in an urban environment.

In a rural environment, our WiFi testing showed an exponential decay with distance, as was expected, and it was not possible to maintain a connection over the same range as with 5G. We did, however, note significantly higher speeds from WiFi connections at close proximity:
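The falloff we observed is consistent with the standard log-distance path-loss model, in which received signal drops by 10·n·log10(d/d0) dB as distance grows; the reference loss and path-loss exponent below are illustrative values, not our measured parameters:

```python
import math

def path_loss_db(distance_m: float, ref_loss_db: float = 40.0,
                 ref_distance_m: float = 1.0, exponent: float = 2.0) -> float:
    """Log-distance path loss: loss at a reference distance plus
    10·n·log10(d/d0). Exponent n is ~2 in free space, higher indoors/urban."""
    return ref_loss_db + 10 * exponent * math.log10(distance_m / ref_distance_m)

# Illustrative: each 10x increase in distance adds 10·n dB of loss.
for d in (1, 10, 100):
    print(f"{d:>4} m: {path_loss_db(d):.1f} dB")
```

Because WiFi operates at low transmit power with a small link budget, this logarithmic loss exhausts the usable signal far sooner than it does for a 5G base station.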


Surprisingly, we did not see significant changes in latency or error rates during our testing.

Conclusions

The following network security spectrum summarizes our findings:


This spectrum provides a high-level overview of network types, from less secure to more secure, based on the characteristics we observed and documented in our whitepaper. The use of layered security mechanisms moves any network towards the more secure end of the spectrum.

Overall, we found that a typical standalone 5G network is more resilient against attacks than a typical WiFi network and that 5G provided a more reliable connection than WiFi, even over significant distances; however, WiFi provided much higher speeds when the wireless device was in close proximity to the wireless access point.

AUTHORS:
– Ethan Shackelford, IOActive Associate Principal Security Consultant
– James Kulikowski, IOActive Senior Security Consultant
– Vince Marcovecchio, IOActive Senior Security Consultant