INSIGHTS, RESEARCH | December 3, 2024

Building Management Systems: Latent Cybersecurity Risk

Manage the Cybersecurity Risks of your BMS

Building management systems (BMS) and building automation systems (BAS) are great innovations, but they present latent cybersecurity and operational risks to organizations. A cyberattack on a BMS or BAS could cause operational disruption through denial of use of the building.

Over the past decade, there have been several examples of attacks on BMS and their components. Weaponization and operationalization of BMS vulnerabilities by threat actors using tools such as ransomware are likely to occur within the next three years. BMS owners and managers can act today to reduce the risks, building on established cybersecurity frameworks to protect their assets and operations.

There are unique considerations involved in assessing the cybersecurity of operational technology (OT) like BMS and BAS. Therefore, it is imperative to work with experts and firms with considerable experience in assessing OT systems and the security of your BMS supply chain.

Background

BMS and BAS offer great promise in improved efficiency, comfort, security, and occupant experience. However, these complex cyberphysical systems also expose the building and occupants to new risks and threats, with negative consequences that in the past required physical access to realize. Today it’s possible for a building manager or staff to control these systems from a mobile application or a cloud-hosted application.

Key Concepts and Terminology

Staff responsible for a BMS or BAS may have extensive engineering or facilities experience, but no exposure to cybersecurity beyond general cybersecurity user awareness training. Unfortunately, in today’s threatscape, that leads to the BMS posing a latent risk: a risk that is present but unknown and unmanaged.

A BMS is one type of Operational Technology (OT), digital control systems that manage physical processes. These systems typically consist of the same types of components that you would find in a traditional information technology (IT) or information and communication technology (ICT) environment, such as servers, switches, routers, firewalls, databases, communication protocols, and applications. Generally, these are specialized variants of the components, but over the past decade we have observed a strong trend of convergence between IT and OT components.

Often these types of systems will be referred to as cyberphysical systems and may be abbreviated as CPS. Simply, a cyberphysical system is one in which a digital computer or microcontroller manages equipment that can produce a physical effect. Cyberphysical systems are vulnerable to the same types of cyberattacks and exploitation of vulnerabilities as IT systems.

Cybersecurity vulnerabilities are defects in an environment's software or hardware that can be exploited by a threat actor (attacker) to produce a cybersecurity impact. For IT systems, this generally means an impact on administration or the business through systems like email, web servers, customer billing, or enterprise resource planning (ERP). For OT systems, the impact can be more consequential, since it can shut down the core operations of a business. For example, in the case of a pipeline company, a cyberattack on an OT system, such as a ransomware attack, can shut down the operations of the pipeline, with cascading effects on customers and others downstream who rely on the timely delivery of product through the pipeline. This exact type of attack occurred in 2021 against Colonial Pipeline.

Consequences

A compromised BMS would allow the attacker to control the BMS and cause effects that are only limited by the physical constraints on the building management systems. These types of effects could render a building unoccupiable for short- or long-term periods, due to the intentional manipulation of building environments like temperature, air quality, and humidity. Power, fire suppression, and other high-impact building services could be made inoperable or used to cause physical damage to a building. These risks are especially high in facilities that have very tight environmental requirements necessary to support the intended use, such as precision manufacturing.

In the case of an office building, reasonable business and operational continuity can be maintained by enabling staff to work from home. However, in the case of a precision manufacturing plant, production would cease, and goods in process may be damaged or spoiled by the disruption. Likewise, a hotel or hospital would experience significant operational disruption and considerable costs in reaccommodating guests or patients at other facilities.

Threat Intelligence

Cybersecurity threat intelligence informs stakeholders about the extent to which threat actors are actively researching, developing, or exploiting cybersecurity vulnerabilities, and seeks to identify the approximate identity of those threat actors. For example, as a defender, it’s often helpful to know whether the threat actor is a malicious individual, organized crime group, or nation-state actor.

The following are examples of real-world attacks on BMS devices:

  • In 2015, a cybersecurity expert disclosed to me that a nation state had compromised the home thermostat in the expert’s residence. In this case, the threat actor’s objective was to use the thermostat to ensure they retained a presence or persistence in the target’s home network for surveillance purposes rather than to produce a cyberphysical effect.[1]
  • In 2021, unknown attackers took control of many of the BAS devices controlling the lighting, motion detection system, and other operations of an office park in Germany. The intruders locked out legitimate users by setting their own password and bricked many of the affected devices. It took third-party security consultants considerable time to remedy the situation, and they succeeded only after retrieving the key that locked the systems down from the memory of one of the devices.
  • In April 2023, Schneider Electric proactively issued a security advisory disclosing that it had become aware of a publicly available exploit targeting the KNX product. This exploit would have enabled an attacker to access admin functionality without a password through a directory traversal attack, or to use a brute-force attack to access the administration panel.

We assess that within the next three years, threat actors will develop ransomware payloads specific to BMS and BAS deployments. Threat actors have already developed ransomware to specifically target medical devices, another type of OT, in hospitals.

Suggested Course of Action for BMS Managers

Following a mature cybersecurity management framework like the US National Institute of Standards and Technology's Cybersecurity Framework (NIST CSF) is a wonderful place to start. Such frameworks offer recommendations appropriate to organizations at any level of cybersecurity maturity.

The following is advice contained in most frameworks, distilled to eight high-level recommendations:

  1. Know your suppliers and look upstream. Select BMS suppliers with established product security programs who do not build hardware or develop software in hostile countries like the People’s Republic of China (PRC).
  2. Conduct a risk assessment. Once you have identified partners and suppliers, properly assess each product’s cybersecurity posture so that you know the risks they may pose to your organization. Consider where each device or component was manufactured and who exactly did so. Is there a possible backdoor or counterfeit part, or could more common software quality issues result in a breach?
  3. Utilize third-party testing. Hire a third-party firm to test your system and those of your key suppliers, to provide actionable results on what you need to fix first. Ideally this should be performed in multiple steps through the design and build process, with the final security assessment completed as part of the site acceptance test.
  4. Regularly scan and patch all vulnerable systems. Here it is important to have chosen vendors who provide regular security patches. Be careful with older systems, which may experience a denial of service through improper use of automated scanners.
  5. Use strong passwords and credentials. Teach your employees about the importance of using strong passwords and credentials and not reusing them across accounts.
  6. Use multi-factor authentication (MFA). Ensure that your staff has set up secure MFA everywhere possible. SMS-based MFA solutions are not adequately secure.
  7. Conduct regular security awareness training. Teach employees how to identify phishing scams, update software, and become more security conscious. This training should focus on those who have privileged access to the BMS.
  8. Harden the security of the devices connected to your networks. Ideally, your suppliers and partners will provide recommendations for device configuration and security best practices.
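Recommendations 1 and 2 above amount to ranking suppliers and products by risk so that limited assessment resources go to the riskiest first. As a minimal sketch (the fields, weights, and supplier names are illustrative assumptions, not part of any framework), a triage score might look like:

```python
# Hypothetical supplier risk triage sketch. Fields and weights are
# invented for illustration; a real program would derive them from
# the organization's own risk assessment criteria.

from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    has_product_security_program: bool    # rec. 1: established product security
    manufactured_in_trusted_country: bool # rec. 1: supply chain origin
    ships_regular_patches: bool           # rec. 4: patch availability
    operational_criticality: int          # 1 (low) .. 5 (halts operations)

def risk_score(s: Supplier) -> int:
    """Higher score = assess sooner. Weights are arbitrary examples."""
    score = s.operational_criticality * 2
    if not s.has_product_security_program:
        score += 3
    if not s.manufactured_in_trusted_country:
        score += 3
    if not s.ships_regular_patches:
        score += 2
    return score

suppliers = [
    Supplier("HVAC controller vendor", True, True, True, 5),
    Supplier("Lighting gateway vendor", False, False, False, 2),
]
# Assess the highest-scoring (riskiest) suppliers first.
for s in sorted(suppliers, key=risk_score, reverse=True):
    print(f"{s.name}: {risk_score(s)}")
```

Note that even a highly critical system from a trustworthy supplier can rank below a minor device from an opaque supply chain; the point of a scored triage is to surface exactly those judgment calls.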

Key Assessment Considerations

Third-party cybersecurity testing comes with specific risks of production impacts, outages, and other unintended consequences. It is critical to choose experienced, expert team members or third-party assessor organizations. It is absolutely necessary that any assessor or penetration tester working in an OT environment have extensive experience in those environments. OT environments with older programmable logic controllers (PLCs) can experience outages when those devices are pinged or scanned by a tool, due to limited resources or poor TCP/IP software stack implementations. Experienced assessors and OT-aware cybersecurity tools know how to achieve the same testing goals in OT environments using passive analysis methods such as analysis of network packet captures.
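As a rough illustration of the passive approach described above, the following stdlib-only sketch enumerates hosts exchanging BACnet/IP traffic (UDP port 47808, a common BMS protocol) from a classic pcap capture, without sending a single packet to fragile devices. A real assessment would use dedicated OT-aware tooling; the parsing here is deliberately minimal and assumes Ethernet/IPv4 frames:

```python
# Passive OT discovery sketch: list hosts speaking BACnet/IP in a
# previously recorded packet capture. No packets are ever sent, so
# resource-constrained PLCs and controllers are never put at risk.

import struct
import socket

BACNET_PORT = 47808  # standard BACnet/IP UDP port (0xBAC0)

def bacnet_talkers(pcap: bytes) -> set:
    """Return (src_ip, dst_ip) pairs seen exchanging BACnet/IP traffic."""
    assert len(pcap) >= 24, "truncated pcap global header"
    offset, talkers = 24, set()            # skip 24-byte global header
    while offset + 16 <= len(pcap):
        # per-record header: ts_sec, ts_usec, captured_len, original_len
        _, _, incl_len, _ = struct.unpack_from("<IIII", pcap, offset)
        frame = pcap[offset + 16: offset + 16 + incl_len]
        offset += 16 + incl_len
        if len(frame) < 34:                # Ethernet + minimal IPv4 header
            continue
        if frame[12:14] != b"\x08\x00":    # only IPv4 frames
            continue
        ip = frame[14:]
        ihl = (ip[0] & 0x0F) * 4           # IPv4 header length in bytes
        if ip[9] != 17:                    # only UDP (protocol 17)
            continue
        sport, dport = struct.unpack_from("!HH", ip, ihl)
        if BACNET_PORT in (sport, dport):
            src = socket.inet_ntoa(ip[12:16])
            dst = socket.inet_ntoa(ip[16:20])
            talkers.add((src, dst))
    return talkers
```

The same idea extends to other OT protocols (Modbus/TCP on port 502, KNXnet/IP on 3671): the capture is taken from a SPAN port or network tap, and all analysis happens offline.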

Unique OT Considerations

Given the very long useful life of many OT assets, including BMS and BAS deployments, frequently one may find several non-patchable vulnerabilities in that type of environment. Consequently, you should focus on a layered security approach, which makes it very difficult for a threat actor to get to the vulnerable, unpatchable components before they are detected and evicted from the environment. As such, heavy emphasis should be placed on assessing the segmentation, access controls, sensors, and detection capabilities between IT and OT environments.
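The segmentation emphasis above boils down to a default-deny policy between the IT and OT zones: nothing crosses the boundary unless it is explicitly permitted. A hypothetical sketch (the zone hosts, ports, and permitted flows are invented for illustration, not a recommended ruleset):

```python
# Default-deny IT-to-OT segmentation policy expressed as data.
# Host names and ports are hypothetical examples.

ALLOWED_IT_TO_OT = {
    # (source host, destination host, destination port)
    ("historian.corp.example", "bms-server.ot.example", 443),    # HTTPS only
    ("patch-mgmt.corp.example", "bms-server.ot.example", 8443),  # patch delivery
}

def flow_permitted(src: str, dst: str, dport: int) -> bool:
    """Default-deny: only explicitly listed IT-to-OT flows pass."""
    return (src, dst, dport) in ALLOWED_IT_TO_OT

print(flow_permitted("historian.corp.example", "bms-server.ot.example", 443))  # True
print(flow_permitted("laptop.corp.example", "plc-07.ot.example", 502))         # False
```

In practice this policy lives in the boundary firewall, and the assessment question becomes whether the deployed ruleset actually matches the documented allowlist, and whether the denied attempts feed a detection capability.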

Supply Chain Considerations

The recent supply chain attacks targeting Hezbollah operatives demonstrate how even organizations with mature counterintelligence capabilities can fall victim to a supply chain interdiction. This implies a high likelihood of successful attacks against companies and organizations with weaker defensive capabilities, including BMS users and suppliers. It is critical to buy BMS from trusted suppliers and countries.

Since it is uneconomical to assess every product an organization integrates into their operations, a risk-centric approach should be taken to use the limited resources available to focus on assessing the highest-risk and highest-consequence products and suppliers. Specifically for OT, the focus should be on those suppliers and products that if compromised could bring a complete halt to operations.

Asking a key supplier to show some type of summary of their secure development policies, as well as a summary of the results of an independent cybersecurity validation and verification by a qualified third-party assessor, is a great way to know whether your supplier takes product cybersecurity seriously and assesses the risk exposure their products bring to their customers’ organizations.


[1] Proprietary intelligence. Additional details are withheld to protect those involved and are not necessary for the example.

INSIGHTS, RESEARCH | October 15, 2024

Getting Your SOC SOARing Despite AI

It’s a fact: enterprise security operations centers (SOCs) that are most satisfied with their investments in Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) operate and maintain fewer than a dozen playbooks. This is something I’ve uncovered in recent years whilst building SIEM+SOAR and autonomous SOC solutions – and it perhaps runs counter to many security leaders’ visions for SOAR use and value.

SOAR technology is one of those much-touted security silver bullets that have tarnished over time and been subsumed into broader categories of threat detection and incident response (TDIR) solutions, yet it continues to remain a distinct must-have modern SOC capability.

Why do satisfied SOCs run so few playbooks? After all, a core premise of SOAR was that you can automate any (perhaps all) security responses, and your SOC analysts would spend less time on ambient security noise – giving them more time to focus on higher-priority incidents. Surely “success” would include automating as many responses as possible?

Beyond the fact that “security is hard,” the reality is that threat detection and response is as dynamic as the organization you’re trying to protect. New systems, new policies, new business owners, new tools, and a sea of changing security products and API connector updates mean that playbooks must be dynamic and vigilantly maintained, or they become stale, broken, and ineffective.

Every SOC team has at least one playbook covering their phishing response. It’s one of the most common and frequently encountered threats within the enterprise, yet “phishing” covers an amazingly broad range of threats and possible responses, so a playbook-based response to the threat is programmatically very complex and brittle to environmental changes.

From a SOC perspective of automating and orchestrating a response, you would either build a lengthy single if/then/else-style playbook or craft individual playbooks for each permutation of the threat. Smart SOC operators quickly learn that the former is more maintainable and scalable than the latter, although it requires more experienced analysts to maintain and operate. Any analyst can knock up a playbook for a simple or infrequently encountered threat vector, but it takes business knowledge and vigilance to maintain each playbook’s efficacy beyond the short term.
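To make the if/then/else style concrete, here is a heavily simplified hypothetical phishing playbook as a single branching function. The verdict fields and response action names are assumptions for the sketch, not any product's schema:

```python
# Illustrative single if/then/else-style phishing playbook: one branching
# function covers credential-harvesting, malicious-attachment, and spam
# variants, rather than one playbook per variant.

def phishing_playbook(report: dict) -> list:
    """Return ordered response actions for a reported phishing email."""
    actions = ["quarantine_reported_email"]
    if report.get("credential_harvest_url"):
        actions += ["block_url_at_proxy", "check_for_clicks"]
        if report.get("user_submitted_credentials"):
            actions += ["reset_user_password", "revoke_sessions"]
    elif report.get("malicious_attachment"):
        actions += ["detonate_in_sandbox", "hunt_for_hash"]
    else:
        actions.append("close_as_spam")
    if report.get("recipients", 1) > 25:   # broad campaign, not a one-off
        actions.append("purge_from_all_mailboxes")
    return actions

print(phishing_playbook({"credential_harvest_url": True, "recipients": 120}))
```

Every new phishing variant becomes another branch in this one function rather than another playbook to maintain – which is exactly why its upkeep demands analysts who understand both the threat and the business.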

Surely AI and a sprinkle of LLM magic will save the day though, right?

I can’t count the number of security vendors and startups that have materialized over the last couple of years with AI and LLM SOAR capabilities, features, or solutions – all with pitches that suggest playbooks are dead, dying, replaced, accelerated, automatically maintained, dynamically created, managed, open sourced, etc., so the human SOC analyst does less work going forward. I remain hopeful that 10% of any of that eventually becomes true.

For the immediate future, SOC teams should continue to be wary of any AI stories that emphasize making it easier to create playbooks (or their product-specific equivalent of a playbook). More is NOT better. It’s too easy (and all too common) to fall down the rathole of creating a new playbook for an existing threat because it’s too hard to find and maintain an earlier iteration of that threat’s playbook. Instead, focus on a subset of the most common and time-consuming threats that your SOC already faces daily, and nail them using the smallest number of playbooks you can get away with.

With the Rolling Stones’ “(I Can’t Get No) Satisfaction” playing in the background, perhaps you’ll get some modicum of SIEM+SOAR satisfaction by keeping your playbook playlist under a dozen.

— Gunter Ollmann

INSIGHTS, RESEARCH | October 2, 2024

Potential Integrated Circuit Supply Chain Impacts from Hurricane Helene

The damage caused by Hurricane Helene in Spruce Pine will likely cause disruptions at the start of the microchip and integrated circuit (IC) supply chain by preventing the mining and distribution of high purity quartz until the mines and local transportation networks are fully repaired.

BACKGROUND

Hurricane Helene Impacts

In late September 2024, Hurricane Helene impacted the Caribbean, Florida, Georgia, Tennessee, North Carolina and other southeastern states in the United States[1]. Its impacts varied widely depending on location and the associated exposure to the primary effects – wind, rain, and storm surge – of the hurricane. While Florida was primarily affected by wind and storm surge, and Georgia was impacted by wind, Tennessee and western North Carolina faced torrential rainfall from Helene after significant rainfall from another storm system in the prior days.

These rains produced catastrophic flooding in the mountains of Tennessee[2] and North Carolina[3], which caused tremendous damage to road and rail transportation networks in affected areas. In western North Carolina, the state’s Department of Transportation advised, “All roads in Western NC should be considered closed[4].”

Spruce Pine High-quality Quartz Mine

Located in the Blue Ridge Mountains of western North Carolina[5], Spruce Pine is the source of the world’s highest quality (ultra-pure) quartz. Major mine owners and operators in the Spruce Pine area include Sibelco, a Belgium-based corporation[6], and The Quartz Corp, a Norway-based corporation. The quartz from this area is critical in the manufacture of semiconductors, photovoltaic cells, and optical fiber. As Sibelco explains, high purity quartz (HPQ) sands are used to produce “fused quartz[7] (almost pure amorphous silicon dioxide) crucibles used in the Czochralski process[8], and the production of fused quartz tubing and ingots used to create fabricated quartzware for the semiconductor wafer process[9].”

Sibelco has increased production of HPQ by 60% since 2019 and is investing an additional USD 200 million to increase production by a further 200% in 2024 to meet expected increases in market demand[10]. Unfortunately, operations at the mine are currently suspended due to the hurricane’s disruption of the most basic services, including road and rail links[11]. The CSX Transportation rail line into Spruce Pine is severely damaged, with entire bridges missing[12].

Alternatives for HPQ

There are slower, more expensive processes that make use of lower quality quartz inputs. Brazil, India, and the Russian Federation (Russia) are other sources of HPQ, but not the same in quality or amount[13]. Additional sources of varying quantity and quality exist in Madagascar, Norway, and the People’s Republic of China (PRC).

CONSEQUENCES

Why IOActive Cares

This incident involves key areas of interest for IOActive – areas in which we have made significant investments to help our clients protect themselves. Specifically, it involves impacts to microchips and integrated circuits (ICs)[14], supply chain risks[15], and multimodal transportation and logistics[16].

Potential Supply Chain Impacts

Predictions and forecasts are only ever certain to be wrong, but they provide useful insight into possible future states, which aid decision makers and stakeholders in managing the risks. The key variable for this event is how long operations at the Spruce Pine mines might be suspended due to local impacts (mine-specific operational issues) or regional issues (such as multimodal transportation network disruption) stemming from the effects of Hurricane Helene.

Temporary repairs to bridges can be made in a matter of days, weeks, or months, depending on the level of damage, while more permanent repairs taking many months or years are planned and completed. Unfortunately, these temporary repairs may limit the weight of crossing vehicles until permanent repairs are complete. The complete loss of several road and rail bridges and the washed-out sections of roads and rail lines will require several years to fully repair and return to full capacity. The extensive damage to the road and rail networks serving the Spruce Pine area will impact mine operations for some time, but the mines will likely be operating in a reduced, degraded state within a few months, assuming no additional natural disasters.

Cybersecurity Risks

When observing a consequential event such as an accident, storm, or other disaster, it’s helpful to ponder whether the same consequences could be produced by a cyberattack. It can be nearly impossible for anyone to comprehensively understand all the different failure modes of a complex system or system of systems, so a convenient shortcut for a cyber threat actor is to recreate previous system failure modes rather than search for unique ones. Reviewing the consequences of this incident reveals several vectors that could allow a highly capable, determined threat actor to launch a cyberattack to shut down mining operations in Spruce Pine.

Broadly, attack targets could include locomotives or commercial vehicles operating on the roads, rail line signaling equipment, or mine information technology (IT) and operational technology (OT) systems. This assessment is based on the results of our public research and our confidential commercial work. A successful attack on any of these targets could produce a consequential impact on mining operations, but the duration of the impact is unlikely to be as long as that of Hurricane Helene.

RECOMMENDATIONS

Risk Management

Since the Spruce Pine mines are such an important node in the global supply chain for microchips and ICs, additional all-hazards risk management and mitigation action should be taken at both the state and federal levels to ensure fewer, shorter interruptions and more resilient operations.

Cybersecurity

Strategically important mines such as this should have requirements for strong cybersecurity, including both IT and OT assets, to ensure that there are minimal to no operational disruptions from a cyberattack.

National Security

As the United States confronts the malign activities of the PRC, it should consider restrictions on key inputs to microchips and ICs, including HPQ, in addition to the existing restrictions on high-performance computing resources like GPUs[17] and semiconductor manufacturing equipment like lithography equipment[18].


[1] https://en.wikipedia.org/wiki/Hurricane_Helene
[2] https://www.knoxnews.com/story/weather/2024/09/30/hurricane-helene-deadly-east-tennessee-floods-what-to-know-schools-roads/75447229007/
[3] https://climate.ncsu.edu/blog/2024/09/rapid-reaction-historic-flooding-follows-helene-in-western-nc/
[4] https://x.com/NCDOT/status/1839685402589827554
[5] 7638 South Highway 226, Spruce Pine, NC, 28777, United States
[6] https://www.sibelco.com/en/about-us
[7] https://en.wikipedia.org/wiki/Fused_quartz
[8] https://www.sciencedirect.com/topics/chemistry/czochralski-process
[9] https://www.sibelco.com/en/materials/high-purity-quartz
[10] https://assets-eu-01.kc-usercontent.com/54dbafb3-2008-0172-7e3d-74a0128faac8/64fae543-971f-46f5-9aec-df041f6f50f6/Webcast_H1_2024_Results_final.pdf
[11] https://www.thequartzcorp.com/articles/impact-of-hurricane-helene-on-the-quartz-corp-in-spruce-pine
[12] https://www.freightwaves.com/news/csxs-former-clinchfield-railroad-barely-recognizable-after-historic-flood
[13] http://www.sinosi.com/hotsales/Product/02%20HPQ%20Promotion%20_English%20Version.pdf
[14] https://ioactive.com/service/full-stack-security-assessments/silicon-security/
[15] https://ioactive.com/supply-chain-risks-go-beyond-cyber/
[16] https://ioactive.com/industry/transportation/
[17] https://www.reuters.com/technology/nvidia-may-be-forced-shift-out-some-countries-after-new-us-export-curbs-2023-10-17/
[18] https://www.csis.org/analysis/updated-october-7-semiconductor-export-controls


INSIGHTS, RESEARCH | September 4, 2024

About to Post a Job Opening? Think Again – You May Reveal Sensitive Information Primed for Cybersecurity Attacks

People are always on the move, changing their homes and their workspaces. With increasing frequency, they move from their current jobs to new positions, seeking new challenges, new people and places, and higher salaries.

Time and hard work bring experience and expertise, and these two qualities are what companies look for; they’re looking for skilled workers every single day, on multiple job search and recruiting platforms. However, these job postings might reveal sensitive information about the company that even the most seasoned Human Resources specialists don’t notice.

Job posting websites are a goldmine of information. Inherently, recruiters have to disclose certain data points, such as the technologies used by the company, so that candidates can assess whether they should apply. On the other hand, these data points could be used by malicious actors to profile a specific company and launch more sophisticated targeted attacks against the company and its employees.

To demonstrate this concept, I researched dozens of job postings on several popular job search and recruiting websites.

Surprisingly, more than 40% of the job postings revealed relatively sensitive information about the companies behind them.

A variety of information is disclosed inadvertently in these job postings:

  • Exact version of the software used in the backend or by end users
  • Programming languages, frameworks and libraries used
  • Cloud Service Providers where customer data resides
  • Intranet and collaborative software used within the company
  • Antivirus and endpoint security software in use
  • Industry-specific and third-party software used
  • Databases, storage and backup, and recovery platforms used
  • Business relationships with other companies
  • Security controls implemented in the company’s SDLC

Armed with this information, one can simply connect the data dots and infer things like:

  • Whether a company uses proprietary or open-source software, implying the use of other similar proprietary/open-source applications that could be targeted in an attack.
  • Whether a company performs Threat Modeling and follows a secure SDLC, providing an attacker with a rough idea of whether the in-house-developed applications are secure.
  • Whether a company has business relationships with other companies, enabling an attacker to compromise those third parties and use them as a pivot to attack the targeted company.
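To illustrate how mechanically this information can be mined, the following sketch scans posting text for a few disclosure categories. The keyword patterns are a small illustrative sample I chose for the example, not an exhaustive detection list:

```python
# Sketch: flag sensitive technology disclosures in job posting text.
# The categories and patterns are illustrative assumptions only.

import re

DISCLOSURE_PATTERNS = {
    "software version":  r"\b(?:Apache|nginx|Oracle|SAP)[\s/]?\d+(?:\.\d+)*\b",
    "cloud provider":    r"\b(?:AWS|Azure|GCP|Google Cloud)\b",
    "endpoint security": r"\b(?:CrowdStrike|SentinelOne|Defender for Endpoint)\b",
    "database platform": r"\b(?:PostgreSQL|MongoDB|MySQL|MSSQL)\b",
}

def disclosures(posting_text: str) -> dict:
    """Map each disclosure category to the matching strings found."""
    found = {}
    for category, pattern in DISCLOSURE_PATTERNS.items():
        hits = re.findall(pattern, posting_text, flags=re.IGNORECASE)
        if hits:
            found[category] = hits
    return found

sample = "Maintain our nginx 1.18 fleet on AWS; experience with PostgreSQL required."
print(disclosures(sample))
```

The same scan a defender runs over draft postings before publication is, of course, the scan an attacker can run over postings already live.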

In summary, IOActive strongly encourages recruiters not to include sensitive information beyond what the job position requires – in attempting to precisely target the exact candidate for a job, the level of detail you use could be costly.

INSIGHTS, RESEARCH | August 20, 2024

Get Strategic About Cyber Risk Management

With global cybercrime damage costs exceeding $11 trillion last year and moving toward an estimated $20 trillion by 2026, robust cybersecurity risk management has never been more imperative.

The interconnected nature of modern technology means that, by default, even small vulnerabilities can lead to catastrophic losses. And it’s not just about finances. Unmitigated risk raises the specter of eroded customer confidence and tainted brand reputation. In this comprehensive guide, we’ll give enterprise defenders a holistic, methodical, checklist-style approach to cybersecurity risk management. We’ll focus on practical applications, best practices, and ready-to-implement strategies designed to mitigate risks and safeguard digital assets against ever-more numerous—and increasingly capable—threats and adversaries.

What is Cybersecurity Risk Management?

This subspecialty of enterprise risk management describes a systematic approach to identifying, analyzing, evaluating, and addressing cyber threats to an organization’s assets and operations. At its core, it involves a continuous cycle of risk assessment, risk decision-making, and the implementation of risk controls intended to minimize the negative impact of cyber incidents.

A proactive cyber risk mitigation approach helps organizations protect critical digital assets and bolster business continuity, legal compliance, and customer trust. By integrating risk management with the organization’s overall strategic planning, cybersecurity teams can prioritize resources efficiently and align their efforts with the business’s risk appetite and objectives.
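The assessment-and-prioritization cycle described above is often reduced in practice to a likelihood × impact score compared against the business's risk appetite. A minimal sketch, with assumed 1–5 scales, an assumed appetite threshold, and invented example risks:

```python
# Minimal risk-prioritization sketch: score = likelihood x impact,
# then sort so treatment decisions follow the risk appetite.
# Scales, threshold, and risks are illustrative assumptions.

risks = [
    {"name": "ransomware on core servers",  "likelihood": 3, "impact": 5},
    {"name": "phishing of finance staff",   "likelihood": 4, "impact": 3},
    {"name": "lost employee badge",         "likelihood": 2, "impact": 2},
]

RISK_APPETITE = 8   # scores above this require an explicit treatment decision

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = r["likelihood"] * r["impact"]
    action = "treat" if score > RISK_APPETITE else "accept/monitor"
    print(f"{r['name']}: {score} -> {action}")
```

Simple as it is, a scored register like this is what ties the continuous assessment cycle to resource prioritization: the highest scores get controls and budget first, and the loop repeats as likelihoods and impacts change.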

Why Has Cyber Risk Management Become So Critical?

Getting control over cyber risk is quickly becoming a core requirement for businesses operating in today’s digital ubiquity. The proliferation of digital information and internet connectivity have paved the way for sophisticated cyber threats that can penetrate many of our most robust defenses. With the digital footprint of businesses expanding exponentially, the potential for data breaches, ransomware attacks, and other forms of cybercrime has escalated dramatically.

These incidents can result in devastating financial losses, legal repercussions, and irreparable damage to an organization’s reputation. Furthermore, as regulatory frameworks around data protection become more stringent, failure to comply can lead to significant penalties. Given these conditions, an aggressive and comprehensive approach to managing cybersecurity risks is crucial for safeguarding an organization’s assets, ensuring operational continuity, and maintaining trust with customers and stakeholders.

Effective Cyber Risk Management: A Framework-Based Approach

Adopting a structured, framework-based approach to cybersecurity risk management lets security teams corral the complexity of digital environments with a methodical, strategic mitigation methodology. For most enterprise applications, there’s no need to reinvent the wheel. There are a myriad of established frameworks that can be modified and customized for effective use in nearly any environment.

Perhaps the best known is the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF), a companion to NIST’s well-tested and widely implemented Cybersecurity Framework (CSF). The NIST RMF offers a structured and systematic approach for integrating security, privacy, and risk management processes into an organization’s system development life cycle.

Such frameworks provide a comprehensive set of guidelines that help identify and assess cyber threats and facilitate the development of effective strategies to mitigate these risks. By standardizing cybersecurity practices, organizations can ensure a consistent and disciplined application of security measures across all departments and operations.

This coherence and uniformity are crucial for effectively addressing vulnerabilities and responding to incidents promptly. Equally important, frameworks incorporate best practices and benchmarks that help guide organizations toward achieving compliance with regulatory requirements, thus minimizing legal risks and enhancing the safeguarding of customer data. In essence, a framework-based approach offers a clear roadmap for managing cyber risk in a way that’s aligned with organizational strategic objectives and industry standards.

What follows is a checklist based on the 7-step RMF process. This is just a starting point. A framework to-do list like this can and should be tweaked to aid in reducing and managing specific cyber risks in your unique enterprise environment.

1. Preparation

In this initial phase, organizations focus on establishing the context and priorities for the Risk Management Framework process. This involves identifying critical assets, defining the boundaries, and codifying a risk management strategy that aligns with the organization’s objectives and resources. This is the foundation upon which a tailored approach to managing cybersecurity risk will ultimately be built throughout the system’s lifecycle.

  • Establish the context for risk management and create a risk management strategy.
  • Define roles and responsibilities across the organization.
  • Develop a taxonomy for categorizing information and information systems.
  • Determine the legal, regulatory, and contractual obligations.
  • Prepare an inventory of system elements, including software and hardware.
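To make the preparation step concrete, the inventory and obligations bullets can be captured in a lightweight, queryable form from day one. The sketch below is illustrative only; the asset names, types, owners, and obligations are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in the step-1 inventory of system elements."""
    name: str
    asset_type: str    # e.g., "server", "controller", "application"
    owner: str         # accountable role, per the roles-and-responsibilities bullet
    obligations: list = field(default_factory=list)  # legal/regulatory/contractual drivers

# Hypothetical inventory entries
inventory = [
    Asset("bms-controller-01", "controller", "Facilities Engineering", ["local fire code"]),
    Asset("tenant-portal", "application", "IT Operations", ["GDPR"]),
]

# Roll up the distinct obligations the organization must satisfy across the inventory
all_obligations = sorted({o for a in inventory for o in a.obligations})
print(all_obligations)  # ['GDPR', 'local fire code']
```

Even a simple structure like this makes the later categorization and control-selection steps easier, because every system element already carries its owner and its compliance drivers.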

2. Systems Categorization

Expanding on the categorization step (above), this phase involves identifying the types of information processed, stored, and transmitted to determine potential impact as measured against the information security CIA triad (confidentiality, integrity, and availability). Organizations can assign appropriate security categories to their systems by leveraging a categorization standard such as the Federal Information Processing Standard (FIPS) 199, ensuring that the protective measures taken are tailored to the specific needs and risks associated with the information being handled. This step is crucial as it lays the groundwork for selecting suitable security controls in the later stages of the risk management process.

  • Identify the types of information processed, stored, and transmitted by the system.
  • Assess the potential impact of loss of Confidentiality, Integrity, and Availability (CIA) associated with each type.
  • Document findings in a formal security categorization statement.
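The FIPS 199 rule that drives this step reduces to a "high water mark": the overall system impact level is the maximum impact across confidentiality, integrity, and availability. A minimal sketch:

```python
# FIPS 199 expresses a security category as per-objective impact levels;
# the overall system impact level is the high water mark across C, I, and A.
LEVELS = ["LOW", "MODERATE", "HIGH"]

def high_water_mark(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall impact level as the maximum of the three objectives."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)

# Hypothetical categorization for a building management system: availability
# dominates, so the system as a whole is categorized HIGH.
print(high_water_mark("LOW", "MODERATE", "HIGH"))      # HIGH
print(high_water_mark("LOW", "MODERATE", "MODERATE"))  # MODERATE
```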

3. Selecting Appropriate Security Controls

This critical step begins the safeguarding of information systems against potential threats and vulnerabilities in earnest. Based on the categorization of the information system, organizations select a baseline of security and privacy controls (NIST Special Publication 800-53 or some equivalent controls standard is a good starting point here), corresponding to the system’s impact level. This baseline acts as the jumping-off point for the security controls, which can be tailored to address the specific risks identified throughout the risk assessment process. Customization involves adding, removing, or modifying controls to ensure a robust defense tailored to the unique requirements and challenges of the organization.

  • Select an appropriate baseline of security controls (NIST SP 800-53 or equivalent).
  • Tailor the baseline controls to address specific organizational needs and identified risks.
  • Document the selected security controls in the system security plan.
  • Develop a strategy for continuously monitoring and maintaining the effectiveness of security controls.
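Tailoring a baseline is essentially set arithmetic over control identifiers: start from the baseline, then add, remove, or substitute controls and record why. The control IDs below follow the SP 800-53 naming style, but the specific selections and tailoring decisions are hypothetical:

```python
# A hypothetical moderate baseline (SP 800-53-style identifiers)
moderate_baseline = {"AC-2", "AU-6", "CM-8", "IA-2", "SC-7", "SI-4"}

# Tailoring decisions recorded during risk assessment (illustrative)
added = {"PE-3(1)"}        # stricter physical access for the control room
removed = {"AU-6"}         # replaced by a compensating monitoring control
compensating = {"SI-4(2)"} # the compensating control substituted for AU-6

tailored = (moderate_baseline - removed) | added | compensating
print(sorted(tailored))
```

Recording the additions, removals, and compensating controls as explicit data makes the system security plan auditable: every deviation from the baseline has a traceable rationale.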

4. Implementing the Selected Controls

Implementing security controls involves the physical and technical application of measures chosen during the previous selection phase. This step requires careful execution to ensure all controls are integrated effectively within the environment, aligning with its architecture and operational practices. Documenting the implementation details is crucial to provide a reference for future assessments and maintenance activities.

  • Implement the security controls as documented in Step 3.
  • Document the security controls and the responsible entities in place.
  • Test thoroughly to ensure compatibility and uninterrupted functionality.
  • Prepare for security assessment by documenting the implementation details.

5. Assessing Controls Performance

Assessing security controls involves evaluating effectiveness and adherence to the security requirements outlined in the overall security plan. This phase is critical for identifying any control deficiencies or weaknesses that could leave the information system vulnerable. Independent reviewers or auditors typically conduct assessments to ensure objectivity and a comprehensive analysis.

  • Develop and implement a plan to assess the security controls.
  • Perform security control assessments as per the plan.
  • Prepare a Security Assessment Report (SAR) detailing the effectiveness of the controls.
  • Determine if additional controls are needed and amend the master security plan accordingly.

6. Authorizing the Risk Management Program

The authorization phase is a vital decision point where one or more senior executives evaluate the security controls’ assessment results and decide whether the remaining risks to the information systems are acceptable to the organization. Upon acceptance, authorization is granted to operate the mitigation program for a specific time period, during which its compliance and security posture are continuously monitored. This authorization is formalized through the issuance of what is known as an Authorization to Operate (ATO) in some organizations, particularly in the public sector.

  • Compile the required authorization package, including the master plan, the SAR, and the so-called Plan of Action and Milestones (POA&M).
  • Assess the residual risk against the organizational risk tolerance.
  • Document the authorization decision in an Authorization Decision Document.
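The core of the authorization decision, comparing residual risk against documented tolerance, can be sketched as a simple check. The scoring scale, tolerance value, and findings below are all illustrative:

```python
# Hypothetical 1-10 risk scale and an organizational tolerance threshold
RISK_TOLERANCE = 6

# Residual risks carried forward from the SAR and POA&M (illustrative)
residual_risks = {
    "unpatched BMS controller": 8,
    "shared operator accounts": 5,
}

# Anything above tolerance must be remediated or explicitly accepted
# by the authorizing official before an ATO is issued.
exceeds = {name: score for name, score in residual_risks.items()
           if score > RISK_TOLERANCE}
print(exceeds)  # {'unpatched BMS controller': 8}
```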

7. Monitoring and Measuring Against Performance Metrics

The monitoring phase ensures that all implemented security controls remain effective and compliant over time. Continuous surveillance, reporting, and analysis can promptly address any identified vulnerabilities or changes in the operational environment. This ongoing process supports the kind of flexible, adaptive security posture necessary for dealing with evolving threats while steadfastly maintaining the integrity and availability of the information system.

  • Implement the plan for ongoing monitoring of security controls.
  • Report the system’s security state to designated leaders in the organization.
  • Perform ongoing risk assessments and response actions, updating documentation as necessary.
  • Conduct reviews and updates regularly, in accordance with the organizational timelines, or as significant changes occur.

Conclusion: Formalizing Cyber Risk Mitigation

A solid risk management framework provides a comprehensive guide for enhancing the security and resilience of information systems through a structured process of identifying, implementing, and monitoring security controls.

Sticking to a framework checklist helps ensure a successful, systematic adoption. As noted throughout, engaging stakeholders from across the organization, including IT, security, operations, and compliance, is critical to ensuring a truly comprehensive risk management program. Additionally, periodic training and awareness for team members involved in various phases of the risk management project will contribute to the resilience and security of the organization’s digital assets.

Organizations can effectively safeguard their digital assets and mitigate unacceptable risks by following the outlined steps, tailoring the program to fit specific organizational needs, involving stakeholders, conducting regular training, and adapting to the evolving cybersecurity landscape. Ultimately, this kind of formal, structured cyber risk management fosters a culture of continuous improvement and vigilance in an enterprise, contributing to the overall security posture and the success of the organization.

INSIGHTS, RESEARCH | July 25, 2024

5G vs. Wi-Fi: A Comparative Analysis of Security and Throughput Performance

Introduction

In this blog post we compare the security and throughput performance of 5G cellular to that of WiFi. This work is part of the research IOActive published in a recent whitepaper (https://bit.ly/ioa-report-wifi-5g), which was commissioned by Dell. We used a Dell Latitude 7340 laptop as an end-user wireless device, a Panda Wireless® PAU06 as a WiFi access point, and an Ettus Research™ Universal Software Radio Peripheral (USRP™) B210 as a 5G base station to simulate a typical standalone 5G configuration and three typical WiFi network configurations (home, public, and corporate). Testing was performed between January and February 2024, during which we simulated a variety of attacks against these networks and measured performance across several real-world environments.

Security Tests

We researched known 5G and WiFi attacks and grouped them according to five different goals: user tracking, sensitive data interception, user impersonation, network impersonation, and denial of service. These goals easily map to the classic Confidentiality, Integrity, Availability (CIA) security triad. We then reproduced the attacks against our controlled test networks to better understand their requirements, characteristics, and impact. The results of these investigations are summarized below:


We noted that, in general, the 5G protocol was designed from the ground up to provide several assurances that the WiFi protocol does not. Although mitigations are available to address some of the attacks, WiFi is still unable to match the level of assurance provided by 5G.

For example, 5G protects against user tracking by using concealed identifiers. Attacks to bypass these identifiers are easily detectable and require highly skilled and well-funded attackers. In contrast, WiFi does not attempt to protect against user tracking, since MAC addresses are transmitted in plaintext with every packet. Although modern devices have tried to mitigate this risk by introducing MAC address randomization, passive user tracking is still easy to accomplish due to shortcomings in MAC address randomization and probe request analysis.
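One reason randomization alone does not defeat tracking is structural: randomized MAC addresses set the locally administered bit (bit 1 of the first octet), so they are trivially distinguishable from burned-in, globally unique addresses. A minimal check (the example addresses are illustrative):

```python
def is_locally_administered(mac: str) -> bool:
    """True if the MAC sets the locally administered bit, as randomized
    addresses do; burned-in, vendor-assigned addresses leave it clear."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# A vendor-assigned address vs. a typical randomized one (example values)
print(is_locally_administered("00:1a:2b:3c:4d:5e"))  # False
print(is_locally_administered("da:a1:19:00:11:22"))  # True
```

Combined with probe-request fingerprinting, this is why passive tracking of WiFi devices remains practical despite modern randomization schemes.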

IOActive also noted that the use of layered security protocols mitigated most sensitive data interception and network impersonation attacks. Although the majority of users do not use a VPN when connecting to the Internet, most websites use TLS, which, when combined with HSTS and browser preload lists, effectively prevents an attacker from intercepting most of the sensitive data a user might access online. However, even multiple layered security protocols cannot protect against vulnerabilities in the underlying radio protocol. For example, a WiFi deauthentication attack would not be affected in any way by the use of TLS or a VPN.

Performance Tests

We conducted performance tests by measuring throughput and latency from a wireless device in a variety of environments, ranging from urban settings with high spectrum noise and many physical obstacles, to rural areas where measurements more closely reflected the attributes of the underlying radio protocol.

Of particular note, we found that a wireless device could maintain a connection to a 5G wireless base over a significant distance, even with substantial interference from buildings and other structures in an urban environment.

In a rural environment, our WiFi testing showed an exponential decay with distance, as was expected, and it was not possible to maintain a connection over the same range as with 5G. We did, however, note significantly higher speeds from WiFi connections at close proximity:
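The range behavior is consistent with basic link-budget math: free-space path loss grows with both distance and carrier frequency, so a 2.4 GHz WiFi link sheds usable signal faster over distance than a lower-frequency carrier. A quick illustration (the frequencies are chosen for the example, not taken from our test setup):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

print(round(fspl_db(0.1, 2400), 1))  # 2.4 GHz at 100 m -> 80.0 dB
print(round(fspl_db(1.0, 2400), 1))  # 2.4 GHz at 1 km  -> 100.0 dB
print(round(fspl_db(1.0, 600), 1))   # hypothetical 600 MHz carrier at 1 km -> 88.0 dB
```

Real-world measurements deviate from free-space figures due to obstacles and interference, but the relative ordering, higher frequency and greater distance meaning more loss, matches what we observed.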


Surprisingly, we did not see significant changes in latency or error rates during our testing.

Conclusions

The following network security spectrum summarizes our findings:


This spectrum provides a high-level overview of network types, from less secure to more secure, based on the characteristics we observed and documented in our whitepaper. The use of layered security mechanisms moves any network towards the more secure end of the spectrum.

Overall, we found that a typical standalone 5G network is more resilient against attacks than a typical WiFi network and that 5G provided a more reliable connection than WiFi, even over significant distances; however, WiFi provided much higher speeds when the wireless device was in close proximity to the wireless access point.

AUTHORS:
– Ethan Shackelford, IOActive Associate Principal Security Consultant
– James Kulikowski, IOActive Senior Security Consultant
– Vince Marcovecchio, IOActive Senior Security Consultant

INSIGHTS, RESEARCH | July 23, 2024

WiFi and 5G: Security and Performance Characteristics Whitepaper

IOActive compared the security and performance of the WiFi and 5G wireless protocols by simulating several different network types and reproducing attacks from current academic research in a Dell-commissioned study. In total, 536 hours of testing was performed between January and February 2024 comparing each technology’s susceptibility to five categories of attack: user tracking, sensitive data interception, user impersonation, network impersonation, and denial of service.

IOActive concluded that a typical standalone 5G network is more resilient against the five categories of attack than a typical WiFi network. Attacks against a 5G network generally had higher skill, cost, and effort requirements than equivalent attacks against a WiFi network.

Our performance comparison was based on measuring throughput and latency in several different urban and rural settings. We found that although WiFi supported significantly higher speeds than 5G at close proximity, 5G provided a more reliable connection over greater distances.

AUTHORS:
– Ethan Shackelford, IOActive Associate Principal Security Consultant
– James Kulikowski, IOActive Senior Security Consultant
– Vince Marcovecchio, IOActive Senior Security Consultant

INSIGHTS, RESEARCH | May 30, 2024

The Security Imperative in Artificial Intelligence

Artificial Intelligence (AI) is transforming industries and everyday life, driving innovations once relegated to the realm of science fiction into modern reality. As AI technologies grow more integral to complex systems like autonomous vehicles, healthcare diagnostics, and automated financial trading platforms, the imperative for robust security measures increases exponentially.

Securing AI is not only about safeguarding data but also about ensuring the core systems — in particular, the trained models that really put the “intelligence” in AI — function as intended without malicious interference. Historical lessons from earlier technologies offer some guidance and can be used to inform today’s strategies for securing AI systems. Here, we’ll explore the evolution, current state, and future direction of AI security, with a focus on why it’s essential to learn from the past, secure the present, and plan for a resilient future.

AI: The Newest Crown Jewel

Security in the context of AI is paramount precisely because AI systems increasingly handle sensitive data, make important, autonomous decisions, and operate with limited supervision in critical environments where safety and confidentiality are key. As AI technologies burrow further into sectors like healthcare, finance, and national security, the potential for misuse or harmful consequences due to security shortcomings rises to concerning levels. Several factors drive the criticality of AI security:

  • Data Sensitivity: AI systems process and learn from large volumes of data, including personally identifiable information, proprietary business information, and other sensitive data types. Ensuring the security of enterprise training data as it passes to and through AI models is crucial to maintaining privacy, regulatory compliance, and the integrity of intellectual property.

  • System Integrity: The integrity of AI systems themselves must be well defended in order to prevent malicious alterations or tampering that could lead to bogus outputs and incorrect decisions. In autonomous vehicles or medical diagnosis systems, for example, instructions issued by compromised AI platforms could have life-threatening consequences.

  • Operational Reliability: AI is increasingly finding its way into critical infrastructure and essential services. Therefore, ensuring these systems are secure from attacks is vital for maintaining their reliability and functionality in critical operations.

  • Matters of Trust: For AI to be widely adopted, users and stakeholders must trust that the systems are secure and will function as intended without causing unintended harm. Security breaches or failures can undermine public confidence and hinder the broader adoption of emerging AI technologies over the long haul.

  • Adversarial Activity: AI systems are uniquely susceptible to certain attacks, whereby slight manipulations in inputs — sometimes called prompt hacking — can deceive an AI system into making incorrect decisions or spewing malicious output. Understanding the capabilities of malicious actors and building robust defenses against such prompt-based attacks is crucial for the secure deployment of AI technologies.

In short, security in AI isn’t just about protecting data. It’s also about ensuring safe, reliable, and ethical use of AI technologies across all applications. These inexorably nested requirements continue to drive research and ongoing development of advanced security measures tailored to the unique challenges posed by AI.

Looking Back: Historical Security Pitfalls

We don’t have to turn the clock back very far to witness new, vigorously hyped technology solutions wreaking havoc on the global cybersecurity risk register. Consider the peer-to-peer recordkeeping database mechanism known as blockchain. When blockchain exploded into the zeitgeist circa 2008 — alongside the equally disruptive concept of cryptocurrency — its introduction brought great excitement thanks to its potential for both decentralization of data management and the promise of enhanced data security. In short order, however, events such as the DAO hack — an exploitation of smart contract vulnerabilities that led to substantial, if temporary, financial losses — demonstrated the risk of adopting new technologies without diligent security vetting.

As a teaching moment, the DAO incident highlights several issues: the complex interplay of software immutability and coding mistakes, and the disastrous consequences of security oversights in decentralized systems. The case study teaches us that with every innovative leap, a thorough understanding of the new security landscape is crucial, especially as we integrate similar technologies into AI-enabled systems.

Historical analysis of other emerging technology failures over the years reveals other common themes, such as overreliance on untested technologies, misjudgment of the security landscape, and underestimation of cyber threats. These pitfalls are exacerbated by hype-cycle-powered rapid adoption that often outstrips current security capacity and capabilities. For AI, these themes underscore the need for a security-first approach in development phases, continuous vulnerability assessments, and the integration of robust security frameworks from the outset.

Current State of AI Security

With AI solutions now pervasive, each use case introduces unique security challenges. Be it predictive analytics in finance, real-time decision-making systems in manufacturing, or something else entirely, each application requires a tailored security approach that takes into account the specific data types and operational environments involved. It’s a complex landscape where rapid technological advancements run headlong into evolving security concerns. Key features of this challenging infosec environment include:

  • Advanced Threats: AI systems face a range of sophisticated threats, including data poisoning, which can skew an AI’s learning and reinforcement processes, leading to flawed outputs; model theft, in which proprietary intellectual property is exposed; and other adversarial actions that can manipulate AI perceptions and decisions in unexpected and harmful ways. These threats are unique to AI and demand specialized security responses that go beyond traditional cybersecurity controls.

  • Regulatory and Compliance Issues: With statutes such as GDPR in Europe, CCPA in the U.S., and similar data security and privacy mandates worldwide, technology purveyors and end users alike are under increased pressure to prioritize safe data handling and processing. On top of existing privacy rules, the Biden administration in the U.S. issued a comprehensive executive order last October establishing new standards for AI safety and security. In Europe, meanwhile, the EU’s newly adopted Artificial Intelligence Act provides granular guidelines for dealing with AI-related risk. This spate of new rules can often clash with AI-enabled applications that demand more and more access to data without much regard for its origin or sensitivity.

  • Integration Challenges: As AI becomes more integrated into critical systems across a wide swath of vertical industries, ensuring security coherence across different platforms and blended technologies remains a significant challenge. Rapid adoption and integration expose modern AI systems to traditional threats and legacy network vulnerabilities, compounding the risk landscape.

  • Explainability: As adoption grows, the matter of AI explainability — or the ability to understand and interpret the decisions made by AI systems — becomes increasingly important. This concept is crucial in building trust, particularly in sensitive fields like healthcare where decisions can have profound impacts on human lives. Consider an AI system used to diagnose disease from medical imaging. If such a system identifies potential tumors in a scan, clinicians and patients must be able to understand the basis of these conclusions to trust in their reliability and accuracy. Without clear explanations, hesitation to accept the AI’s recommendations ensues, leading to delays in treatment or disregard of useful AI-driven insights. Explainability not only enhances trust, it also ensures AI tools can be effectively integrated into clinical workflows, providing clear guidance that healthcare professionals can evaluate alongside their own expertise.

Addressing such risks requires a deep understanding of AI operations and the development of specialized security techniques such as differential privacy, federated learning, and robust adversarial training methods. The good news here: In response to AI’s risk profile, the field of AI security research and development is on a steady growth trajectory. Over the past 18 months the industry has witnessed increased investment aimed at developing new methods to secure AI systems, such as encryption of AI models, robustness testing, and intrusion detection tailored to AI-specific operations.
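As one concrete example of the techniques named above, differential privacy adds calibrated noise to released statistics so that no individual record's presence can be inferred. A stdlib-only sketch of the Laplace mechanism (the count and epsilon values are illustrative):

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace
    mechanism; smaller epsilon means more noise and stronger privacy."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(7)
print(round(dp_count(1000, epsilon=0.5)))  # near 1000; exact value is randomized
```

The privacy guarantee comes from the noise scale, sensitivity divided by epsilon, which bounds how much any single record can shift the released statistic.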

At the same time, there’s also rising awareness of AI security needs beyond the boundaries of cybersecurity organizations and infosec teams. That’s led to better education and training for application developers and users, for example, on the potential risks and best practices for securing AI-powered systems.

Overall, enterprises at large have made substantial progress in identifying and addressing AI-specific risk, but significant challenges remain, requiring ongoing vigilance, innovation, and adaptation in AI defensive strategies.

Data Classification and AI Security

One area getting a fair bit of attention in the context of safeguarding AI-capable environments is effective data classification. The ability to earmark data (public, proprietary, confidential, etc.) is essential for good AI security practice. Data classification ensures that sensitive information is handled appropriately within AI systems. Proper classification aids in compliance with regulations and prevents sensitive data from being used — intentionally or unintentionally — in training datasets that can be targets for attack and compromise.

The inadvertent inclusion of personally identifiable information (PII) in model training data, for example, is a hallmark of poor data management in an AI environment. A breach in such systems not only compromises privacy but exposes organizations to profound legal and reputational damage as well. Organizations in the business of adopting AI to further their business strategies must be ever aware of the need for stringent data management protocols and advanced data anonymization techniques before data enters the AI processing pipeline.
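A minimal illustration of such a protocol is scrubbing obvious PII patterns before text enters a training pipeline. Production systems use dedicated PII-detection tooling; the regexes below are deliberately simplistic and illustrative only:

```python
import re

# Redaction patterns for two common PII types (illustrative, not exhaustive)
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder before ingestion."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```

The point is architectural rather than the specific patterns: classification and scrubbing must happen upstream of the model, because once PII is baked into trained weights it cannot be cleanly removed.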

The Future of AI Security: Navigating New Horizons

As AI continues to evolve and tunnel its way further into every facet of human existence, securing these systems from potential threats, both current and future, becomes increasingly critical. Peering into AI’s future, it’s clear that any promising new developments in AI capabilities must be accompanied by robust strategies to safeguard systems and data against the sophisticated threats of tomorrow.

The future of AI security will depend heavily on our ability to anticipate potential security issues and tackle them proactively before they escalate. Here are some ways security practitioners can prevent future AI-related security shortcomings:

  • Continuous Learning and Adaptation: AI systems can be designed to learn from past attacks and adapt to prevent similar vulnerabilities in the future. This involves using machine learning algorithms that evolve continuously, enhancing their detection capabilities over time.

  • Enhanced Data Privacy Techniques: As data is the lifeblood of AI, employing advanced and emerging data privacy technologies such as differential privacy and homomorphic encryption will ensure that data can be used for training without exposing sensitive information.

  • Robust Security Protocols: Establishing rigorous security standards and protocols from the initial phases of AI development will be crucial. This includes implementing secure coding practices, regular security audits, and vulnerability assessments throughout the AI lifecycle.

  • Cross-Domain Collaboration: Sharing knowledge and strategies across industries and domains can lead to a more robust understanding of AI threats and mitigation strategies, fostering a community approach to AI security.

Looking Further Ahead

Beyond the immediate horizon, the field of AI security is set to witness several meaningful advancements:

  • Autonomous Security: AI systems capable of self-monitoring and self-defending against potential threats will soon become a reality. These systems will autonomously detect, analyze, and respond to threats in real time, greatly reducing the window for attacks.

  • Predictive Security Models: Leveraging big data and predictive analytics, AI can forecast potential security threats before they manifest. This proactive approach will allow organizations to implement defensive measures in advance.

  • AI in Cybersecurity Operations: AI will increasingly become both weapon and shield. AI is already being used to enhance cybersecurity operations, providing the ability to sift through massive amounts of data for threat detection and response at a speed and accuracy unmatchable by humans. The technology and its underlying methodologies will only get better with time. This ability for AI to remove the so-called “human speed bump” in incident detection and response will take on greater importance as the adversaries themselves increasingly leverage AI to generate malicious attacks that are at once faster, deeper, and potentially more damaging than ever before.

  • Decentralized AI Security Frameworks: With the rise of blockchain technology, decentralized approaches to AI security will likely develop. These frameworks can provide transparent and tamper-proof systems for managing AI operations securely.

  • Ethical AI Development: As part of securing AI, initiatives to ensure that AI systems are developed with ethical considerations in mind are gaining momentum. Preventing bias and ensuring fairness also enhance security by aligning AI operations with human values.

As with any rapidly evolving technology, the journey toward a secure AI-driven future is complex and fraught with challenges. But with concerted effort and prudent innovation, it’s entirely within our grasp to anticipate and mitigate these risks effectively. As we advance, the integration of sophisticated AI security controls will not only protect against potential threats, it will foster trust and promote broader adoption of this transformative technology. The future of AI security is not just about defense but about creating a resilient, reliable foundation for the growth of AI across all sectors.

Charting a Path Forward in AI Security

Few technologies in the past generation have held the promise for world-altering innovation in the way AI has. Few would quibble with AI’s immense potential to disrupt and benefit human pursuits from healthcare to finance, from manufacturing to national security and beyond. Yes, Artificial Intelligence is revolutionary. But it’s not without cost. AI comes with its own inherent collection of vulnerabilities that require vigilant, innovative defenses tailored to their unique operational contexts.

As we’ve discussed, embracing sophisticated, proactive, ethical, collaborative AI security and privacy measures is the only way to ensure we’re not only safeguarding against potential threats but also fostering trust to promote the broader adoption of what most believe is a brilliantly transformative technology.

The journey towards a secure AI-driven future is indeed complex and fraught with obstacles. However, with concerted effort, continuous innovation, and a commitment to ethical practices, successfully navigating these impediments is well within our grasp. As AI continues to evolve, so too must our strategies for defending it. 

INSIGHTS, RESEARCH | May 16, 2024

Field-Programmable Chips (FPGAs) in Critical Applications – What are the Risks?

What is an FPGA?

Field-Programmable Gate Arrays (FPGAs) are a type of Integrated Circuit (IC) that can be programmed or reprogrammed after manufacturing. They consist of an array of logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as telecommunications, automotive, aerospace, and industrial sectors.

FPGAs are often found in products that are low volume or demand short turnaround time because they can be purchased off the shelf and programmed as needed without the setup and manufacturing costs and long lead times associated with Application-Specific Integrated Circuits (ASICs). FPGAs are also popular for military and aerospace applications due to the long lifespan of such hardware as compared to typical consumer electronics. The ability to update deployed systems to meet new mission requirements or implement new cryptographic algorithms—without replacing expensive hardware—is valuable.

These benefits come at a cost, however: additional hardware is required to enable reprogramming. An FPGA-based design will use many more transistors than the same design implemented as an ASIC, increasing power consumption and per-device costs.

Implementing a circuit with an FPGA vs an ASIC can also come with security concerns. FPGA designs are compiled to a “bitstream,” a digital representation of the circuit netlist, which must be loaded into the FPGA for it to function. While bitstream formats are generally undocumented by the manufacturer, several projects are working towards open-source toolchains (e.g., Project X-Ray for the Xilinx 7 series) and have reverse engineered the bitstream formats for various devices.

FPGA bitstreams can be loaded in many ways, depending on the FPGA family and application requirements:

  • Serial or parallel interfaces from an external processor
  • JTAG from a debug cable attached to a PC
  • Serial or parallel interfaces to an external flash memory
  • Separate stacked-die flash memory within the same package as the FPGA
  • Flash or One-Time-Programmable (OTP) memory integrated into the FPGA silicon itself

In this post, we will focus on Xilinx because it was the first company to make a commercially viable FPGA back in 1985 and continues to be a world leader in the field. AMD announced its acquisition of Xilinx in 2020 and completed the roughly $49 billion deal in 2022; today the combined company controls over 50% of the world’s programmable logic chips.

The Spartan™ 6 family of Xilinx FPGAs offers low-cost and low-power solutions for high-volume applications like displays, military/emergency/civil telecommunications equipment, and wireless routers. Spartan 6 was released in 2009, so these chips are relatively low-tech compared with their successors, the Spartan 7 and Xilinx’s higher end solutions (e.g., the Zynq and Versal families).

The Spartan 6 bitstream format is not publicly known, but IOActive is aware of at least one research group with unreleased tooling for it. These devices contain no internal configuration memory, so the bitstream must be supplied on external pins each time power is applied and is thus accessible for an attacker to intercept and potentially reverse engineer.

FPGA vendors are, of course, aware of this risk and provide security features, such as allowing bitstreams to be encrypted on external flash and decrypted on the FPGA. In the case of the Spartan 6 family, the bitstream can be encrypted with AES-256 in CBC mode. The key can be stored in either OTP eFuses or battery-backed Static Random Access Memory (SRAM), which enables a self-destruct function where the FPGA can erase the key if it detects tampering.
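
To make the CBC structure concrete, here is a toy sketch of CBC-mode chaining. A 16-byte XOR "cipher" stands in for AES-256 purely so the example is self-contained and runnable; it has no security value, and the real device of course implements AES in silicon:

```python
# Illustration of CBC chaining as used for bitstream encryption. Each
# plaintext block is XORed with the previous ciphertext block (or the IV)
# before encryption, so identical plaintext blocks produce different
# ciphertext. A toy XOR block "cipher" stands in for AES-256 here.

BLOCK = 16

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, key))

toy_decrypt_block = toy_encrypt_block  # XOR is its own inverse

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    ciphertext = bytearray()
    prev = iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(p ^ q for p, q in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encrypt_block(mixed, key)
        ciphertext += prev
    return bytes(ciphertext)

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    plaintext = bytearray()
    prev = iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        decrypted = toy_decrypt_block(block, key)
        plaintext += bytes(d ^ p for d, p in zip(decrypted, prev))
        prev = block  # chaining: prior ciphertext block feeds the next
    return bytes(plaintext)

key = bytes(range(16))
iv = b"\x01" * 16
msg = b"bitstream block!" * 2  # two 16-byte blocks
assert cbc_decrypt(cbc_encrypt(msg, key, iv), key, iv) == msg
```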

The AES block used in the Spartan 6, however, is vulnerable to power analysis, and a team of German researchers developed a successful attack against it: “On the Portability of Side-Channel Attacks – An Analysis of the Xilinx Virtex 4, Virtex 5, and Spartan 6 Bitstream Encryption Mechanism.”
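
The class of attack involved can be sketched in a few lines. Correlation power analysis (CPA) predicts, for each key guess, the Hamming weight of an intermediate value and correlates those predictions against power measurements; the correct guess correlates best. The traces below are synthetic (leakage model plus noise) and the XOR intermediate is purely illustrative; a real attack measures the FPGA's power rail during bitstream decryption:

```python
# Minimal CPA sketch over synthetic traces. For each key-byte guess we
# model leakage as the Hamming weight of (input XOR key) and correlate
# the model against the "measured" samples.
import random

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

rng = random.Random(1)
SECRET = 0x3C
inputs = [rng.randrange(256) for _ in range(500)]
# Synthetic "power" sample per input: Hamming-weight leakage plus noise.
traces = [hamming_weight(p ^ SECRET) + rng.gauss(0, 0.5) for p in inputs]

best_guess = max(range(256),
                 key=lambda k: pearson([hamming_weight(p ^ k) for p in inputs],
                                       traces))
print(hex(best_guess))  # the correct key byte stands out from all wrong guesses
```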

An example of a Xilinx Spartan 6 application is the Russian military radio R-187-P1 made by Angstrem, so we decided to use this as our test case.

AZART R-187-P1 Product Overview

Since the radio’s release in 2012, researchers have discovered that it provides built-in protocols allowing communication across multiple standards, including older analogue Russian radios, UAV command links, and even TETRA. While the advertised frequency range is 27 to 520 MHz, recent firmware updates have extended coverage down to 100 kHz with AM.

The features of this radio are fairly well known. The following was posted by @SomeGumul, a Polish researcher, on Twitter/X:

The trophy in the form of the Russian radio station “Azart”, announced by the Russians as a “native, Russian” sixth-generation device for conducting encrypted conversations, works on American radio components. The basis of the encryption system is, in particular, the Spartan®-6 FPGA (field-programmable gate array) system.  It is produced by the American company XILINX (AMD) in Taiwan.

Ironically, despite being advertised as the forefront of Russia’s military technical prowess, the heart of this device was revealed by Ukrainian serviceman Serhii Flash (Сергей Флэш) to be the American-made Spartan 6 FPGA. The FPGA is what allows the handheld radio’s capabilities to be redefined by mere software updates, letting its users speak over virtually any required radio standard: past, present, or future. While most of the currently implemented protocols are unencrypted for backward compatibility with older, active-service equipment, communication between two AZART radios supports frequency hopping at up to 20,000 hops per second, which makes eavesdropping and position triangulation challenging. The radio’s implementation of TETRA also supports encryption when a supporting radio trunk is present, a configuration referred to as Trunked Mode Operation (TMO). Otherwise, in Direct Mode Operation (DMO), the radio only supports voice scrambling in the time and frequency domains.

20,000 frequency hops per second is quite a feat for a radio: extremely precise timing is required for two or more radios to stay synchronized across hops and still communicate clearly. That timing reference is obtained wirelessly from GPS and GLONASS, so this advanced feature can be disabled simply by jamming GPS frequencies.
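
The arithmetic behind that claim is simple, and a toy model shows why a shared, precise clock matters. The PRNG-based schedule below is purely illustrative (the real Angstrem hopping algorithm is proprietary and unknown), but it captures why losing the satellite timing reference is so disruptive:

```python
# Illustrative hop-timing model: at 20,000 hops/s each dwell is only 50 µs,
# so a clock error larger than one dwell puts a receiver in the wrong hop
# interval. The seeded-PRNG channel schedule is hypothetical.
import random

HOP_RATE = 20_000                   # hops per second
DWELL_US = 1_000_000 / HOP_RATE     # 50 µs of dwell time per hop

def hop_index(t_us: float) -> int:
    """Which hop interval a given timestamp falls in."""
    return int(t_us // DWELL_US)

def channel_at(seed: int, t_us: float, n_channels: int = 1024) -> int:
    """Channel used at time t by a radio sharing `seed` (illustrative only)."""
    return random.Random(seed * 1_000_003 + hop_index(t_us)).randrange(n_channels)

print(DWELL_US)  # → 50.0
# Radios with the same seed and synchronized clocks agree on the channel:
assert channel_at(42, 1234.0) == channel_at(42, 1234.0)
# A 60 µs clock error exceeds one dwell time and lands in a different hop
# interval, so the receiver is almost certainly tuned to the wrong channel:
assert hop_index(1234.0) != hop_index(1234.0 + 60.0)
```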

While this attack may be sufficient, GPS signals are ubiquitous and neutral sources of precise timing that are often required by both sides of any conflict. So, while jamming the frequencies may work in a pinch, it would be cleaner to find a solution to track this high rate of frequency hopping without having to jam a useful signal. To find this solution, we must investigate the proprietary Angstrem algorithm that drives the pseudo-random frequency hopping. To do this, we begin by looking at the likely driver: the Spartan 6 FPGA.

Chipset Overview

It is currently unknown whether the Spartan 6 on the AZART uses an encrypted bitstream; however, given the radio’s wartime purpose, the possibility cannot be ruled out. While waiting to procure a functioning radio, IOActive began a preliminary investigation into the inner workings of the Spartan 6, with a specific focus on areas related to encryption and decryption of the bitstream.

Mainboard of AZART Highlighting the Spartan 6

At the time of writing this post, the XC6SLX75-3CSG484I sells for around $227 from authorized US distributors; however, it can be obtained for much lower prices in Asian markets, with sellers on AliExpress listing them for as low as $8.47. While counterfeits are prevalent in these supply chains, legitimate parts are not difficult to obtain with a bit of luck.

In addition to the FPGA, one other notable component visible on the board is the Analog Devices TxDAC AD9747, a dual 16-bit 250 Msps Digital-to-Analog Converter (DAC) intended for SDR transmitters. Assuming this is being used to transmit I/Q waveforms, we can conclude that the theoretical maximum instantaneous bandwidth of the radio is 250 MHz, with the actual bandwidth likely being closer to 200 MHz to minimize aliasing artifacts.
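
The reasoning here is that quadrature (I/Q) sampling yields a complex baseband whose theoretical bandwidth equals the sample rate, less a guard band for the reconstruction filter. A quick sanity check of the numbers:

```python
# Back-of-the-envelope bandwidth estimate for a dual 16-bit 250 Msps DAC
# driving I and Q. With complex sampling, theoretical instantaneous
# bandwidth equals the sample rate; keeping ~80% of it is a common rule of
# thumb (an assumption here, not a datasheet figure) to limit aliasing
# artifacts near the band edges.
SAMPLE_RATE_MSPS = 250
theoretical_bw_mhz = SAMPLE_RATE_MSPS        # complex sampling: BW = fs
practical_bw_mhz = 0.8 * SAMPLE_RATE_MSPS    # ≈ 200 MHz usable
print(theoretical_bw_mhz, practical_bw_mhz)  # → 250 200.0
```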

Device Analysis

IOActive procured several Spartan 6 FPGAs from a respected supplier for a preliminary silicon teardown to gain insight into how the chip handles encrypted bitstreams and identify any other interesting features. As a standalone package, the Spartan chip looks like this:

The CSG484 package that contains the Spartan 6 is a wire-bonded Ball-Grid Array (BGA) consisting of the IC die itself (face up), attached by copper ball bonds to a four-layer FR4 PCB substrate and overmolded in an epoxy-glass composite. The substrate has a grid of 22×22 solder balls at 0.8 mm pitch, as can be seen in the following cross section. The solder balls are an RoHS-compliant SAC305 alloy, unlike the “defense-grade” XQ6SLX75, which uses Sn-Pb solder balls. The choice of a consumer-grade FPGA for a military application is interesting, and may have been driven by cost or component availability issues (as the XQ series parts are produced in much lower volume and are not common in overseas supply chains).

Spartan 6 Cross Section Material Analysis

The sample was then imaged in an SEM to gain insight into the metal layers in order to perform refined deprocessing later. The chip logic exists on the surface of the silicon die, as outlined by the red square in the following figure.

Close-up of FPGA Metal Layers on Silicon Die

The XC6SLX75 is manufactured by Samsung on a 45 nm process with nine metal layers. A second FPGA was stripped of its packaging for a top-down analysis, starting with metal layer nine.

Optical Overview Image of Decapsulated Die Xilinx XC6SLX75

Looking at the top layer, not much structure is visible, as the entire device is covered by a dense grid of power/ground routing. Wire bond pads around the perimeter for power/ground and I/O pins can clearly be seen. Four identical regions, two at the top and two at the bottom, have a very different appearance from the rest of the device. These are pairs of multi-gigabit serial transceivers (GTPs in Xilinx terminology) capable of operation at up to 3.2 Gbps. The transceivers are only bonded out in the XC6SLX75T; the non-T version used in the radio does not connect them, so we can ignore them for the purposes of this analysis.

The metal layers were then etched off to expose the silicon substrate layer, which provides better insight into chip layout, as shown in the following figure.

Optical Overview Image of Floorplan of Die Xilinx XC6SLX75

After etching off the metal and dielectric layers, a much clearer view of the device floorplan becomes visible. We can see that the northwest GTP tile has a block of logic just south of it. This is a PCIe gen1 controller, which is not used in the non-T version of the FPGA and can be ignored.

The remainder of the FPGA is roughly structured as columns of identical blocks running north-south, although some columns are shorter due to the presence of the GTPs and PCIe blocks. The non-square logic array of Spartan 6 led to poor performance as user circuitry had to be shoehorned awkwardly around the GTP area. Newer-generation Xilinx parts place the GTPs in a rectangular area along one side of the device, eliminating this issue.

The light-colored column at the center contains clock distribution buffers as well as Phase-Locked Loops (PLLs) and Digital Clock Managers (DCMs) for multiplying or dividing clocks to create different frequencies. Smaller, horizontal clock distribution areas can be seen as light-colored rows throughout the rest of the FPGA.

There are four columns of block RAM containing a total of 172 tiles of 18 Kb each, and three columns of DSP blocks containing a total of 132 tiles, each consisting of an 18×18 integer multiplier and some other logic useful for digital signal processing. The remaining columns contain Configurable Logic Blocks (CLBs), which are general-purpose logic resources.

The entire perimeter of the device contains I/O pins and related logic. Four light-colored regions of standard cell logic can be seen in the I/O area, two on each side. These are the integrated DDR/DDR2/DDR3 Memory Controller Blocks (MCBs).

The bottom right contains two larger regions of standard cell logic, which appear related to the boot and configuration process. We expect the eFuse and Battery-Backed SRAM (BBRAM), which likely contain the secrets required to decrypt the bitstream, to be found in this area. As such, this region was scanned in high resolution with the SEM for later analysis.

SEM Substrate Image of Boot/AES Logic Block 1

Utilizing advanced silicon deprocessing and netlist extraction techniques, IOActive hopes to refine methodologies for extracting the configured AES keys required to decrypt the bitstream that drives the Spartan 6 FPGA.

Once this is complete, there is a high probability that the unencrypted bitstream that configures the AZART can be obtained from a live radio and analyzed to potentially enumerate the secret encryption and frequency hopping algorithms that protect the current generation of AZART communications. We suspect that we could also apply this technique to previous generations of AZART, as well as other FPGA-based SDRs like those commonly in use by law enforcement, emergency services, and military operations around the world.

INSIGHTS, RESEARCH | May 15, 2024

Evolving Cyber Threatscape: What’s Ahead and How to Defend

The digital world is a dangerous place. And by all accounts, it’s not getting a whole lot better.

Damages from cybercrime will top a staggering $8 trillion this year, up from an already troubling $1 trillion just five years ago and rocketing toward $14 trillion by 2028. Supply chains are becoming juicier targets, vulnerabilities are proliferating, and criminals with nation-state support are growing more active and more sophisticated. Ransomware, cryptojacking, cloud compromises, and AI-powered shenanigans are all on a hockey-stick growth trajectory.

Looking ahead, there are few sure things in infosec other than the stone-cold, lead-pipe lock that systems will be hacked, data will be compromised, money will be purloined, and bad actors will keep acting badly.

Your organization needn’t be a victim, however. Forewarned is forearmed, after all.

Here’s what to expect in the evolving cyber threatscape over the next 12 to 18 months along with some steps every security team can take to stay secure in this increasingly hostile world.

The Weaponization of AI

The Threat: The coming year promises to be a big one for exploiting the ability of AI (artificial intelligence) and Large Language Models (LLMs) to spew misinformation, overwhelm authentication controls, automate malicious coding, and spawn intelligent malware that proactively targets vulnerabilities and evades detection. Generative AI promises to empower any attacker — even those with limited experience or modest resources — with malicious abilities previously limited to experienced users of frameworks like Cobalt Strike or Metasploit.

Expect to see at least some of these new, nefarious generative AI tools offered as a service through underground criminal syndicates, broadening the global cabal of troublesome threat actors while expanding both the population and the richness of available targets. The steady increase in Ransomware-as-a-Service is the clearest indicator to date that such criminal collaboratives are already on the rise.

Particularly ripe for AI-enabled abuse are social engineering-based operations like phishing, business email compromise, and so-called “pig butchering” investment, confidence, and romance scams. Generative AI is eerily adept at turning out convincing, persuasive text, audio, and video content with none of the spelling, grammar, or cultural errors that traditionally made such hack attempts easy to spot. Add an LLM’s ability to ingest legitimate business communications for repurposing and translation, and it’s easy to see how AI will soon be helping criminals craft super-effective global attacks on an unprecedented scale.

The Response: On a positive note, AI, as it turns out, can play well on both sides of the ball: offense and defense.

AI is already proving its worth, bolstering intelligent detection, response, and mitigation tools. AI-powered security platforms can analyze, model, learn, adapt, and act with greater speed and capacity than any human corps of security analysts ever could. Security professionals need to skill-up now on the techniques used to develop AI-powered attacks with the goal of creating equally innovative and effective mitigations and controls.

And because this new generation of smart malware will make the targeting of unmitigated vulnerabilities far more efficient, the basic infosec blocking and tackling — diligent asset inventory, configuration management, patching — will be more critical than ever.

Clouds Spawn Emerging Threats

The Threat: Business adoption of cloud computing technology has been on a steady rise for more than a decade. The current macroeconomic climate, with all of its challenges and uncertainty, promises to accelerate that trend for at least the next few years. Today, more than four in ten enterprises say they are increasing their use of cloud-based products and services and about one-third plan to continue migrating from legacy software to cloud-based tools this year. A similar share is moving on-premises workloads in the same direction.

Good for business. However, the cloud transformation revolution is not without its security pitfalls.

The cloud’s key benefits — reduced up-front costs, operational vs. capital expenditure, improved scalability and efficiency, faster deployment, and streamlined management — are counterbalanced by cloud-centric security concerns. The threatscape in the era of cloud is dotted with speed bumps like misconfigurations, poor coding practices, loose identity and access controls, and a pronounced lack of detailed environmental visibility. All of this is compounded by a general dearth of cloud-specific security expertise on most security teams.

One area to watch going forward: more than half of enterprise IT decision-makers now describe their cloud strategy as primarily hybrid cloud or primarily multi-cloud, and three-quarters use multiple cloud vendors. Criminals are taking note. Attacks targeting complex hybrid and multi-cloud environments, with their generous attack surface and multiple points of entry, are poised to spike.

The recent example of a zero-day exploited by Chinese hackers that allowed rogue code execution on guest virtual machines (VMs) shows that attacks in this realm are getting more mature and potentially more damaging. Threat actors are targeting hybrid and multi-cloud infrastructure, looking for ways to capitalize on misconfigurations and lapses in controls in order to move laterally across different cloud systems.

Another area of concern is the increased prevalence of serverless infrastructure in the cloud. The same characteristics that make serverless systems attractive to developers — flexibility, scalability, automated deployment — also make them irresistible to attackers. Already there’s been an uptick in instances of crypto miners surreptitiously deployed on serverless infrastructure. Though serverless generally presents a smaller attack surface than hybrid and multi-cloud infrastructure, giving up visibility and turning over control of the constituent parts of the infrastructure to the cloud service provider (CSP) raises its own set of security problems. Looking ahead, nation-state backed threat actors will almost certainly ramp up their targeting of serverless environments, looking to take advantage of insecure code, broken authentication, misconfigured assets, over-privileged functions, abusable API gateways, and improperly secured endpoints.

The Response: The best advice on securing modern cloud environments in an evolving threatscape begins with diligent adherence to a proven framework like the Center for Internet Security’s Critical Security Controls (CIS Controls v8) and the CIS’s companion Cloud Security Guide. These prioritized safeguards, regularly updated by an engaged cadre of security community members, offer clear guidance on mitigating the most prevalent cyberattacks against cloud-based systems and cloud-resident data. As a bonus, the CIS Controls are judiciously mapped to several other important legal, regulatory, and policy frameworks.

Beyond that fundamental approach, some steps cloud defenders can take to safeguard the emerging iterations of cloud infrastructure include:

  • Embracing the chaos: The big challenge for security teams today is less about current configurations and more about unwinding the sins of the past. Run a network visualization and get your arms around the existing mess of poorly managed connections and policy violations. It’s a critical first step toward addressing the vulnerabilities that put the company and its digital assets at risk.
  • Skilling Up: Most organizations rely on their existing networking and security teams to manage their expanding multi-cloud and hybrid IT environments. It’s a tall order to expect experts in more traditional IT to adapt to the arcana of multi-cloud without specific instruction and ongoing training. The Cloud Security Alliance offers a wealth of vendor-agnostic training in areas ranging from cloud fundamentals to architecture, auditing, and compliance.
  • Taking Your Share of Shared Responsibility: The hierarchy of jurisdiction for security controls in a cloud environment can be ambiguous at best and confusing at worst. Add more clouds to the mix, and the lines of responsibility blur even further. While all major cloud providers deliver some basic default configurations aimed at hardening the environment, that’s pretty much where their burden ends. The client is on the hook for securing their share of the system and its data assets. This is especially true in multi-cloud and hybrid environments where the client organization alone must protect all of the points where platforms from various providers intersect. Most experts agree the best answer is a third-party security platform that offers centralized, consolidated visibility into configurations and performance.
  • Rethinking networking connections: Refactoring can move the needle on security while conserving the performance and capabilities benefits of the cloud. Consider the “minimum viable network” approach, a nod to how cloud can, in practice, turn a packet-switched network into a circuit-switched one. The cloud network only moves packets where users say they can move. Everything else gets dropped. Leveraging this eliminates many security issues like sniffing, ARP cache poisoning, etc. This application-aware schema calls for simply plugging one asset into another, mapping only the communications required for that particular stack, obviating the need for host-based firewalls or network zones.

    Once defenders get comfortable with the minimum viable network concept, they can achieve adequate security in even the most challenging hybrid environments. The trick is to start simple and focus on reducing network connections down to the absolute minimum of virtual wires.
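
As a sketch of the idea, the default-deny "virtual wire" model reduces to a set of explicitly mapped connections (all names below are hypothetical):

```python
# Toy model of a "minimum viable network": packets move only over
# explicitly declared virtual wires; anything unmapped is dropped by
# default, which is what defeats sniffing and lateral movement.
ALLOWED_WIRES = {
    ("web", "app"),   # the web tier may talk to the app tier
    ("app", "db"),    # the app tier may talk to the database
}

def permit(src: str, dst: str) -> bool:
    """Default-deny: forward traffic only on a declared wire."""
    return (src, dst) in ALLOWED_WIRES

print(permit("web", "app"))  # → True
print(permit("web", "db"))   # → False: not a mapped communication, dropped
```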

Supply Chains in the Crosshairs

The Threat: Because they’re such a central component in a wide variety of business operations — and because they feature complex layers of vendor, supplier, and service provider relationships — supply chains remain a tempting target for attackers. And those attackers are poised to grow more prolific and more sophisticated, adding to their prominence in the overall threatscape.

As global businesses become more dependent on interconnected digital supply chains, the compromise of a single, trusted software component in that ecosystem can quickly cascade into mayhem. Credential theft, malicious code injection, and firmware tampering are all part of the evolving supply-chain threat model. Once a trusted third party is compromised, the result is most often data theft, data wiping, or loss of systems availability via ransomware or denial of service attack. Or all of the above.

Prime examples include the 2020 SolarWinds hack, in which government-backed Russian attackers inserted malicious code into a software update for SolarWinds’ popular Orion IT monitoring and management platform. The compromise went undetected for months, during which the backdoored update was downloaded by some 18,000 customers, including private companies and government agencies. Many of those victims saw their data, systems, and networks compromised before the incident was finally detected and mitigated.

More recently, in the summer of 2023, attackers leveraged a flaw in Progress Software’s widely used MOVEit file transfer product, exposing the data of thousands of organizations and nearly 80 million users. In arguably the largest supply-chain hack ever recorded, a Russian ransomware crew known as Clop exploited a zero-day vulnerability in MOVEit to steal data from business and government organizations worldwide. Victims ranged from New York City’s public school system and the state of Maine to a UK-based HR firm serving major clients such as British Airways and the BBC.

The Response: Given the trajectory, it’s reasonable to assume that supply chain and third-party attacks like the MOVEit hack will grow in both frequency and intensity as part of the threatscape’s inexorable evolution. This puts the integrity and resilience of the entire interconnected digital ecosystem at grave and continuing risk.

To fight back, vendor due diligence (especially in the form of formal vendor risk profiles) is key. Defenders will need to take a proactive stance that combines judicious and ongoing assessment of the security posture of all the suppliers and third-party service providers they deal with. Couple that with strong security controls — compensating ones, if necessary — and proven incident detection and response plans focused on parts of the environment most susceptible to third-party compromise.

These types of attacks will happen again. As the examples above illustrate, leveraging relevant threat intelligence on attack vectors, attacker techniques, and emerging threats should feature prominently in any scalable, adaptable supply chain security strategy.

Dishonorable Mentions

The cybersecurity threatscape isn’t limited to a handful of hot-button topics. Watching the threat environment grow and change over time means keeping abreast of many dozens of evolving risk factors, attack vectors, hacker techniques and general digital entropy. Some of the other issues defenders should stay on top of in this dynamic threat environment include:

  • Shifting DDoS targets: Distributed denial-of-service attacks are as old as the internet itself. What’s new is the size and complexity of emerging attacks, which now more frequently target mobile networks, IoT systems, and the Operational Technology/Industrial Control Systems (OT/ICS) at the heart of much critical infrastructure.
  • Disinfo, misinfo and “deep fakes”: A product of the proliferation of AI, bad-faith actors (and bots that model them) will churn out increasing volumes of disingenuous data aimed at everything from election interference to market manipulation.
  • Rising hacktivism: Conflicts in Ukraine and Israel illustrate how hacker collectives with a stated political purpose are ramping up their use of DDoS attacks, web defacements, and data leaks. The more hacktivist cyberattacks proliferate, and the more effective they appear, the more likely nation-states are to jump into the fray to wreak havoc on targets both civilian and military.
  • Modernizing malware code: C/C++ has long been the lingua franca of malware, but that is changing. Looking to harness rich libraries, easier integration, and a more streamlined programming experience, a new breed of malware developers is turning to languages like Rust and Go. Not only can hackers churn out malicious code faster to evade detection and outpace signatures, but the malware they create can also be much more difficult for researchers to reverse engineer.
  • Emerging quantum risk: As the quantum computing revolution inches ever closer, defenders can look forward to some significant improvements to their security toolkit. Like AI, quantum computing promises to deliver unprecedented new capabilities for threat intelligence gathering, vulnerability management, and DFIR. But also like AI, quantum has a dark side. Quantum computers running Shor’s algorithm could break most widely deployed public-key cryptography, while Grover’s algorithm effectively halves the strength of symmetric keys. Present-day encryption and password-based protections are likely to prove woefully ineffective in the face of a mature quantum-powered attack.
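
To put rough numbers on the quantum threat (a standard textbook estimate, not a prediction), Grover's search gives a quadratic speedup against symmetric keys, roughly halving their effective bit strength:

```python
# Grover's algorithm searches N = 2^k keys in on the order of sqrt(N)
# iterations, so a k-bit symmetric key offers only ~k/2 bits of security
# against a quantum adversary.
import math

def grover_effective_bits(key_bits: int) -> int:
    """Effective post-quantum strength of a symmetric key under Grover."""
    return key_bits // 2

def grover_iterations(key_bits: int) -> float:
    # Order-of-magnitude iteration count: sqrt(2^k).
    return float(math.isqrt(2 ** key_bits))

for bits in (128, 256):
    print(bits, "->", grover_effective_bits(bits))  # 128 -> 64, 256 -> 128
```

This is why current guidance favors AES-256 over AES-128 for long-lived secrets: even halved, 128 bits of effective strength remains out of reach.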

Taking Stock of a Changing Threatscape

Yes, the digital world is a dangerous place, and the risks on the horizon of the threatscape can seem daunting. Navigating this challenging terrain forces security leaders to prioritize strong, scalable defenses, the kind that can adapt to emerging technology threats and evolving attack techniques all at once. It’s a multi-pronged approach.

What does it take? Adherence to solid security frameworks, judicious use of threat intelligence, updated response plans, and even tactical efforts like mock drills, penetration tests, and red team exercises can play a role in firming up security posture for an uncertain future.

Perhaps most importantly, fostering a culture of security awareness and training within the organization can be vital for preventing common compromises, from phishing and malware attacks to insider threats, inadvertent data leaks, and more.

Surviving in the evolving cyber threatscape comes down to vigilance, adaptability, and a commitment to constant learning. It’s a daunting task, but with a comprehensive, forward-thinking strategy, it’s possible to stay ahead of the curve.