INSIGHTS | March 20, 2026

The Evolution of AI-Powered Security Consultants

In my fourteen years of security assessments with IOActive, our shared mission has always been defined by a single commitment: stay ahead. Stay ahead of the threats clients face today, and stay ahead of the techniques that will define how we find those threats tomorrow.

That responsibility has driven every meaningful evolution in how our consultants work.

When fuzzing was still a research curiosity, the consultants who built their own frameworks and integrated them into live engagements found entire vulnerability classes that manual reviews missed. When static analysis tools were primitive and noisy, the teams that wrote custom rule sets and built triage pipelines around them separated signal from noise faster than anyone relying on defaults. In each case, the competitive edge went to the consultants who built the infrastructure to use new capabilities correctly before the rest of the industry caught up.

I’ve watched that pattern repeat with every major shift in the offensive landscape. None of those shifts, however, compares to what’s happening now with the capabilities of large language models (LLMs).

What follows are the decisions shaping how security assessments are built today: from the architectural constraints that dictate where AI can and can’t be used, to the strategies that determine whether it actually finds what matters.

The Model as Both Tool and Risk

The question is already coming up in kickoff calls. On recent engagements, clients have asked directly whether our AI assessment tooling sends their information to external infrastructure. One CISO put it clearly: if it leaves the engagement perimeter, it’s a breach of our NDA.

That posture is what’s driving consultants to build local LLM infrastructure. The reason is simple: the AI tools gaining traction today require sending client artifacts, such as source code, firmware, configuration files, and protocol captures, to infrastructure you don’t control.

Hold that thought for a second – there’s an asymmetry there worth mentioning: the codebase and architecture we assess have almost certainly already traveled through cloud AI infrastructure during development. Copilot suggestions, ChatGPT-assisted debugging, Cursor completions, and online forum questions that include function signatures are all common today. Developers share code with online AI constantly, and at scale. That’s the reality of how software is written and architecture designed today.

That asymmetry cuts both ways, but while developers share code to build software, security assessments produce something far more sensitive: a complete map of how that software breaks. Vulnerability findings, exploitation chains, and architectural weaknesses – these are not debugging prompts. Losing control of that output is an entirely different category of risk.

The Model Is the Least Interesting Part

The open-weight code model landscape changes on a quarterly basis (or faster). But in an effective security testing infrastructure, the model should be the most swappable component.

What matters when selecting a model for a new engagement cycle is straightforward:

  • A context window sufficient for the artifact sizes we work with (code, Nmap scan results, etc.)
  • Demonstrated performance on the analysis tasks relevant to the engagement surface
  • Evaluation against the specific requirements the targets demand

Public benchmarks already exist for evaluating LLMs on vulnerability detection. Most of them use a curated set of functions with known vulnerabilities, where the model must identify the bug, locate the exact source, and explain the exploitation path, scored as a binary pass/fail. Under that scheme, a model that finds the bug but misidentifies the root cause scores the same as one that misses it entirely.
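Building a harness that awards partial credit instead is straightforward, and it keeps model selection grounded in the tasks that matter. Below is a minimal Python sketch; the dataset fields, scoring weights, and the model adapter’s analyze() method are all illustrative assumptions, not a published benchmark format:

import json

# Illustrative weights: finding the bug, naming the root cause, and
# explaining the exploitation path are scored separately.
WEIGHTS = {"bug_found": 0.4, "root_cause": 0.4, "exploit_path": 0.2}

def score_response(expected: dict, response: dict) -> float:
    """Partial credit: a model that finds the bug but misattributes the
    root cause should not score the same as one that misses it entirely."""
    score = 0.0
    if response.get("cwe") == expected["cwe"]:
        score += WEIGHTS["bug_found"]
    if response.get("source_line") == expected["source_line"]:
        score += WEIGHTS["root_cause"]
    if expected["sink"] in response.get("exploit_path", ""):
        score += WEIGHTS["exploit_path"]
    return score

def evaluate(model, cases: list[dict]) -> float:
    """model is any adapter exposing analyze(code) -> JSON string."""
    total = sum(score_response(c["expected"], json.loads(model.analyze(c["code"])))
                for c in cases)
    return total / len(cases)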

In addition, the largest model is rarely the best model for a specific security analysis task. A general-purpose frontier model with hundreds of billions of parameters will score well on generic coding benchmarks, but a focused model a fraction of that size, fine-tuned on vulnerability data from domains we actually assess, will outperform it on the task that matters. Domain specialization beats raw scale when the domain is narrow enough and the training data is good enough.

Models rotate on someone else’s schedule, and are shipped, improved, and deprecated by labs we don’t control. That’s fine, as long as the model is the part that’s designed to be swapped.

The Input Changes. The Scaffolding Doesn’t.

If the model is swappable, the scaffolding is the force multiplier across engagement types. What changes between an application security assessment and a firmware audit is the ingestion format, not the pipeline architecture. Source code, decompiled binaries, ICS configurations, protocol captures, Active Directory enumeration output – each is a different artifact, but they all flow through the same stages: ingest, prioritize, analyze, triage. The scaffolding adapts at the ingestion layer. Everything downstream stays the same.
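In code, that separation is just an adapter pattern. A minimal sketch, with hypothetical class and field names – the point is that only the ingestor varies per surface:

import pathlib
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Artifact:
    kind: str       # "source", "firmware", "pcap", "ad_enum", ...
    name: str
    content: bytes
    metadata: dict

class Ingestor(Protocol):
    def ingest(self, path: str) -> list[Artifact]: ...

class SourceTreeIngestor:
    """One adapter per engagement surface; a FirmwareIngestor, PcapIngestor,
    etc. would emit the same Artifact type from very different inputs."""
    def ingest(self, path: str) -> list[Artifact]:
        return [Artifact("source", str(p), p.read_bytes(), {})
                for p in pathlib.Path(path).rglob("*") if p.is_file()]

def prioritize(arts):
    return sorted(arts, key=lambda a: a.metadata.get("risk", 0), reverse=True)

def analyze(arts):
    return []   # model-assisted stage; sketched in the next section

def triage(findings):
    return findings

def run_pipeline(ingestor: Ingestor, path: str) -> list:
    artifacts = ingestor.ingest(path)   # the only surface-specific stage
    ranked = prioritize(artifacts)      # deterministic
    findings = analyze(ranked)          # model-assisted
    return triage(findings)             # deterministic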

The critical architectural decision is where inference begins and where deterministic processing ends. Not every stage in the pipeline benefits from a language model. Parsing an AST to extract function signatures is deterministic. Matching a known CVE pattern against a dependency manifest is a lookup. Normalizing binary sections for analysis is a format transformation. These operations are faster, cheaper, and more reliable as traditional code. Sending them through an LLM adds latency and unpredictability for zero gain.

The model earns its place at the points where human-like reasoning is required: tracing a data path across service boundaries to determine reachability, assessing whether a configuration combination that appears safe creates an exploitable condition in context, or deciding if a code pattern that resembles a known vulnerability class is actually exploitable given the surrounding control flow. Those are judgment calls. Everything else is plumbing, and plumbing should be deterministic.

The scaffolding bridges both sides. It runs the deterministic stages as code, assembles the structured context the model needs to reason effectively, provides tool access so the model can query additional information during analysis, and routes the model’s output back into the deterministic pipeline for triage and reporting. The result is a system where the LLM operates within a well-defined scope, reasoning over prepared context, rather than being asked to do everything, including the work that doesn’t require intelligence.
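As a sketch of that contract, assume an open-weight model served locally behind an OpenAI-compatible endpoint (llama.cpp, vLLM, and similar servers expose one); the function names and finding schema here are illustrative:

import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # local server

def ask_model(context: str, question: str) -> dict:
    """Send prepared context to the local model; nothing leaves the host."""
    payload = {
        "model": "local",
        "temperature": 0.1,
        "messages": [
            {"role": "system",
             "content": "You are a code auditor. Respond in JSON."},
            {"role": "user", "content": f"{context}\n\n{question}"},
        ],
    }
    req = urllib.request.Request(LOCAL_ENDPOINT, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return json.loads(body["choices"][0]["message"]["content"])

def assess_reachability(func_src: str, callers: list[str]) -> dict:
    # Deterministic stages already extracted the function and its callers;
    # only the judgment call goes to the model, and the structured answer
    # is routed straight back into deterministic triage.
    context = ("Function under review:\n" + func_src +
               "\n\nKnown callers:\n" + "\n".join(callers))
    return ask_model(context, "Can attacker-controlled input reach this sink?")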

A recent firmware engagement made this concrete. The deterministic layer extracted and normalized the binary, identified function boundaries, and reconstructed the call graph. Standard reversing infrastructure. The model received the decompiled output with full cross-reference context and flagged a conditional path in the bootloader’s recovery mode that bypassed signature verification before loading an update image. The logic was split across multiple functions; no single function looked wrong in isolation. An analyst would have reached it eventually, but the overnight batch surfaced it before anyone had started manually reversing that code path. That’s the division of labor working as designed: deterministic tooling prepares the context, the model reasons over it, and the analyst’s first morning starts with the right target.

Teaching the Model What We Know

A base model is a generalist. It knows code. It doesn’t know how our team hunts, what we’ve found before, or what the current target looks like from the inside. Closing that gap requires two complementary strategies: baking knowledge into the model’s weights, and feeding it context at inference time.

Fine-Tuning: What Vulnerable Code Looks Like

Base instruction-tuned models are not optimized for security analysis. The gap between a capable general-purpose coder model and one that has internalized what vulnerable code actually looks like across thousands of real examples is measurable in engagement quality.

There are many approaches to closing that gap: full fine-tuning, reinforcement learning from human feedback on security-specific tasks, synthetic data generation, distillation from larger models, and others. LoRA-based fine-tuning on curated real-world CVE data offers the fastest path to measurable improvement with the lowest infrastructure overhead. The critical distinction is what we train on: not just vulnerable code paired with its fix, but the analytical context around how the vulnerability is identified and exploited – root cause, affected data flow, and exploitation conditions. That context-rich approach produces measurably better results than training on code pairs alone.
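A hedged sketch of that setup with Hugging Face transformers and peft – the base model, hyperparameters, and record format below are placeholders, not a prescription:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "deepseek-ai/deepseek-coder-6.7b-base"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Low-rank adapters on the attention projections; only these train.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

def format_example(rec: dict) -> str:
    """Context-rich record: the analysis around the bug, not just code+fix."""
    return (f"### Vulnerable code\n{rec['code']}\n"
            f"### Root cause\n{rec['root_cause']}\n"
            f"### Affected data flow\n{rec['data_flow']}\n"
            f"### Exploitation conditions\n{rec['conditions']}\n"
            f"### Fix\n{rec['fix']}\n")

# Tokenize format_example() over the curated CVE corpus and train with a
# standard transformers.Trainer loop; the base weights stay frozen.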

The Engagement Wiki: What We Know That Training Data Doesn’t

Fine-tuning teaches the model what vulnerable code looks like. What it doesn’t teach is how our team hunts for it, what we’ve found manually, or what the current target looks like from the inside. IOActive has spent years building and refining internal methodologies across every engagement surface we assess. That accumulated knowledge is exactly what a retrieval layer can operationalize.

A structured internal wiki that feeds directly into the LLM’s context window via retrieval-augmented generation can operate at two layers:

Methodology and technique library:

This includes attack trees, enumeration checklists, and exploitation chains organized by engagement surface. When the model analyzes a Modbus configuration, it retrieves our team’s documented approach for ICS protocol assessment, not a generic prompt. When it reviews Android IPC handlers, it pulls the methodology our mobile team has refined over a decade of engagements. The library grows with every assessment cycle.

Per-engagement context:

At the start of every assessment, we populate a target-specific context layer: technology stack, architecture constraints, scope boundaries, threat model assumptions. The model doesn’t analyze in a vacuum; it reasons with the same situational awareness a human analyst builds during the first days of an engagement.
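A minimal sketch of how those two layers meet in the prompt, assuming a local vector store with a search() interface and pre-written engagement notes (both hypothetical):

def build_context(wiki, engagement_notes: str,
                  artifact_kind: str, artifact_text: str) -> str:
    # Layer 1: the team's methodology library, retrieved per artifact type.
    hits = wiki.search(query=f"{artifact_kind} assessment methodology", k=3)
    # Layer 2: target-specific facts captured at engagement start.
    return "\n\n".join([
        "## Engagement context", engagement_notes,
        "## Team methodology", *[h.text for h in hits],
        "## Artifact under analysis", artifact_text,
    ])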

Reporting Is Also a Technical Challenge

Finding a vulnerability is half the job. The other half is communicating it in a way that produces a response proportional to its severity – and the audience for that communication is rarely homogeneous.

A typical engagement report lands in front of at least three different readers with fundamentally different needs. The development team wants precise reproduction steps and code-level context. The CISO wants risk framing, business impact, and remediation priority. The board or executive sponsor wants to understand exposure in terms they can act on without a security background. Until recently, producing documentation that served all three required either multiple document versions (expensive in consultant time) or a dedicated graphics and communications team that most consultancies can’t staff on every engagement.

Before LLM infrastructure, the practical ceiling on report comprehensiveness was analyst time. With a local LLM pipeline producing the first draft of each finding’s documentation, that ceiling lifts. Attack path diagrams that would have required a dedicated graphics pass are generated programmatically. The report becomes a more complete representation of what the assessment actually found, not a triage-filtered subset of it.

The same infrastructure can generate audience-specific derivatives of each finding. An attack path showing how an externally controllable input reaches a deserialization sink three service hops downstream is significantly more persuasive to an engineering leadership team than a paragraph describing the same chain. The diagrams are not just decorative. They’re structural.
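Generating those diagrams from structured findings is mechanical once the pipeline emits them in a consistent schema. An illustrative sketch using the graphviz Python package (the finding format is an assumption):

from graphviz import Digraph

# Hypothetical structured finding emitted by the triage stage.
finding = {
    "title": "Deserialization RCE via upload service",
    "path": [
        ("internet", "api-gateway", "multipart upload"),
        ("api-gateway", "ingest-svc", "queued job, user-controlled blob"),
        ("ingest-svc", "report-svc", "unsafe deserialization of payload"),
    ],
}

g = Digraph(comment=finding["title"])
for src, dst, label in finding["path"]:
    g.edge(src, dst, label=label)
g.render("attack-path", format="svg", cleanup=True)  # one diagram per finding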

This matters because impact is a function of communication, not just discovery. A finding that doesn’t produce a remediation response is a finding that didn’t land. Consultants who can close that gap, and those who can make a complex technical issue comprehensible to the people who control the budget to fix it, deliver more value in the same assessment. AI doesn’t just make reports faster to produce. It makes them more thorough, more detailed, and more actionable than what was previously feasible within the engagement window.

The Commitment

Security assessments are almost always time-boxed. The consultant makes depth tradeoffs every day: which functions to reverse manually, which attack paths to pursue, which custom tooling to build or skip. With LLM-augmented analysis, those tradeoffs change. Binary analysis that previously consumed a full day of manual effort compresses into hours. Custom tooling that would have been out of scope for a two-week engagement becomes feasible to build and deploy within the assessment window. The constraint hasn’t disappeared, but the frontier of what’s reachable within it has moved significantly, enabling you to go deeper on the same clock.

The AI landscape moves faster than any fine-tuning cycle, but security consulting has always rewarded teams that built the infrastructure to use new capabilities correctly rather than waiting for a packaged solution. The teams that figured out fuzzing before it was mainstream, or built custom static analysis pipelines when off-the-shelf options didn’t exist: those teams found more vulnerabilities, found them faster, and became harder to replace.

There’s a dimension to this that goes beyond tooling. Every security consultant brings a distinct set of methodology and analytical instinct to an engagement. A local AI infrastructure should reflect that. Rather than standardizing every analyst’s workflow through a single cloud tool with fixed behavior, a sovereign setup lets each consultant’s accumulated knowledge and methodological preferences shape how the model operates, amplifying individual expertise rather than flattening it.

That’s the commitment: not just to use the latest technology, but to evaluate it seriously, understand its limits, and operationalize it when it earns its place in the workflow. Local LLMs have earned theirs.

INSIGHTS, RESEARCH | February 23, 2026

Reversing the RADIO – AES CCM Link in the nRF family

For the past few weeks, I’ve been working on a research project that includes radio frequency (RF) nodes with a proprietary protocol running on top of Nordic Semiconductor (Nordic)[1] chips, specifically the nRF52840. While it’s been quite challenging (no strings at all and of course no symbols), it’s been interesting and satisfying at the same time. As part of this work, I uncovered the code that handles encryption and decryption of RF packets. I wanted to share my findings in the hope that they will serve as a guide to anyone working on a similar project.

Let’s jump straight to the technical part. These chips use a couple of different crypto peripherals depending on the protocol they implement. In this case, the code used the AESCCM[2][3] engine, which is quite simple. It takes just three main parameters: a key/nonce structure, an input buffer, and an output buffer. Once these are configured, the system can encrypt or decrypt data.

One nice feature of these chips is the ability to chain peripherals using the Programmable Peripheral Interconnect (PPI)[4] subsystem. This allows embedded developers to connect the RADIO peripheral and the crypto engine, causing the AESCCM to immediately trigger encryption or decryption, as can be seen in Figure 1 and Figure 2.

Figure 1. AESCCM Encryption Setup: from https://docs.nordicsemi.com/bundle/ps_nrf52840/page/ccm.html

Figure 2. AESCCM Decryption Setup: https://docs.nordicsemi.com/bundle/ps_nrf52840/page/ccm.html
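For readers following along on their own hardware, here is a rough sketch of the equivalent configuration driven from the host, for example through a debug probe. The register offsets come from the public nRF52840 product specification; write32() stands in for whatever your probe library (pyOCD, J-Link, etc.) provides, and the PPI wiring shown is one plausible arrangement rather than necessarily what this particular firmware does:

CCM_BASE, RADIO_BASE, PPI_BASE = 0x4000F000, 0x40001000, 0x4001F000

def setup_ccm_decrypt(write32, cnf_ptr, scratch_ptr, enc_buf, dec_buf):
    """Configure AESCCM to decrypt RADIO traffic on the fly. cnf_ptr points
    to the 33-byte CCM data structure from the datasheet: 16-byte key,
    8-byte packet counter, 1-byte direction, 8-byte IV."""
    write32(CCM_BASE + 0x500, 2)            # ENABLE = Enabled
    write32(CCM_BASE + 0x504, 1)            # MODE = Decryption
    write32(CCM_BASE + 0x508, cnf_ptr)      # CNFPTR: key/nonce material
    write32(CCM_BASE + 0x50C, enc_buf)      # INPTR: ciphertext in
    write32(CCM_BASE + 0x510, dec_buf)      # OUTPTR: plaintext out
    write32(CCM_BASE + 0x514, scratch_ptr)  # SCRATCHPTR: scratch RAM
    write32(RADIO_BASE + 0x504, enc_buf)    # RADIO PACKETPTR: same buffer
    # Chain the peripherals: RADIO ADDRESS event triggers the CCM CRYPT task.
    write32(PPI_BASE + 0x510, RADIO_BASE + 0x104)  # CH[0].EEP = EVENTS_ADDRESS
    write32(PPI_BASE + 0x514, CCM_BASE + 0x004)    # CH[0].TEP = TASKS_CRYPT
    write32(PPI_BASE + 0x504, 1)                   # CHENSET: enable channel 0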

With this information in mind, let’s jump into the disassembled binary and see what the code looks like. All of the function names, variables, and registers are my best guess based on the reverse-engineered functionality.

Figure 3. Function to Configure AESCCM Engine

The first function, shown in Figure 3, configures the AESCCM registers. As you know, all the peripherals are memory mapped, so each register instructs the peripheral about what needs to be done or configured. Figure 4 shows examples of the registers for this specific peripheral. These register areas are a good starting point for reverse engineering the functionality, since all of the code references in them relate to the actual peripheral and its features.

Figure 4. Datasheet Excerpt with AESCCM registers

The specific function in Figure 3 sets argument a2 to the output buffer, a3 to the input buffer, and a1 to the configuration register that points to the key and nonce. This information will assist in further reverse engineering, since we now know which buffer goes to which register.

Later we will see how, depending on whether the code is decrypting or encrypting, it will be used to hold the ciphertext or the plaintext.

Figure 5. Code to Connect and Handle Incoming RF Packets

As seen in Figure 5, the firmware includes an option where the device does not implement encryption, which is executed in the else statement. This option just sets the buffer to the corresponding RADIO pointer so the peripheral writes the packets as soon as they are demodulated.

However, we are focusing on the first part of the branch where the code deals with the decryption of the RADIO stream. The code sets the encrypted_packet buffer to both the RADIO packet and PACKETPTR (using the set_PACKETPTR function) and, using the function in Figure 3, sets encrypted_packet to INPTR and unencrypted_pkt to OUTPTR, which is the buffer where the AESCCM module will write the decrypted block.

As part of the process, the AESCCM engine needs some additional values (e.g. the key and nonce) in order to correctly decrypt the stream. These are passed as the first argument, as discussed earlier.

Once everything is set, the code instructs the SoC to begin operating using the defined RADIO-AESCCM relationship. This is achieved through further configuration that tells the SoC to make the RADIO and AESCCM peripherals work together using the PPI subsystem.

Similar to the decryption path, the firmware implements the ability to send RADIO packets; Figure 6 shows how this is configured. As with its counterpart, the code can work with or without encryption enabled, which is also handled in the branch.

Figure 6. Code to Connect and Handle Outgoing RF Packets

Focusing on the second part where the key/nonce parameters are set, the output and input buffers are swapped, meaning that the RADIO packet points to the encrypted packet (which corresponds to the OUTPTR) and the plaintext is the INPTR.

The firmware very clearly shows how these features are configured and what kind of patterns we can look for when reverse engineering similar functionality on the same chips or others supporting similar peripherals, since these operations are typically quite similar across different vendors.


[1] https://www.nordicsemi.com/About-us

[2] https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-38c.pdf

[3] https://docs.nordicsemi.com/bundle/ps_nrf52840/page/ccm.html

[4] https://docs.nordicsemi.com/bundle/ps_nrf52840/page/ppi.html

INSIGHTS | March 3, 2025

Preparing for Downstream Attacks on AI Systems

The tech industry must manage AI security threats with the same eagerness it has for adopting the new technologies.

New Technologies Bring Old and New Risks

AI technologies are new and exciting, making new use cases possible. But despite the enthusiasm with which organizations are adopting AI, the supply chain and build pipeline for AI infrastructure are not yet sufficiently secure. Business, IT, and cybersecurity leaders have considerable work to do to identify the issues and resolve them, even as they help their organizations streamline adoption in a complex global environment with conflicting regulatory requirements.

Background

As AI technologies become integrated into critical business operations and systems, they become increasingly attractive targets for malicious threat actors who may discover and exploit the vulnerabilities present in these new technologies. That should reasonably concern any CISO or CIO.

Attackers certainly have plenty of opportunity due to the rapid adoption of AI capabilities. Today businesses use AI to improve their operations, and a Forbes survey notes there is extensive adoption in customer service, customer relationship management (CRM), and inventory management. More use cases are on the way as products mature. In January 2024, global startup data platform Tracxn reported there were 67,199 AI and machine learning startups in the market, joining numerous mature AI companies.

The swift uptick in AI adoption means these new systems have capabilities and vulnerabilities yet to be discovered and managed, which serves as a significant source of latent risk to an organization, particularly when the applications touch so much of an organization’s data.

AI infrastructure encompasses several components and systems that support models’ development, deployment, and operation. These include data sources (such as datasets and data storage), development tools, computational resources (such as GPUs, TPUs, IPUs, cloud services, and APIs), and deployment pipelines. Most organizations source most of the hardware elements from external vendors, who in turn become critical links in the supply chain.

Naturally, anything a business depends on needs to be protected, and security has to be built in. Risks and mitigation options should be identified early across the full stack of the hardware, software, and services supply chain, both to manage those risks and to anticipate and defend against threats.

Also, any new foundational elements in an organization’s infrastructure create new complexities; as Scotty pointed out in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

The Frailties in the AI Build Pipeline

To borrow another apt movie quote, “With great power comes great responsibility.” AI offers tremendous power – accompanied by new security concerns, many yet to be identified.

It shouldn’t only be up to technical staff to uncover the risks associated with integrating AI solutions; both new and familiar steps should be taken to address the risks inherent in AI systems. The build pipeline for AI typically involves several stages, often in iteration, similar to DevOps and CI/CD pipelines. The AI world includes new deployment teams: AIOps, MLOps, LLMOps, and more. While these new teams and processes may have different names, they perform common core functions.

Broadly, attack vectors can be found in four major areas:

  • Development: Data scientists and developers write and test code for model libraries. Data is collected, cleaned, and prepared for training. Some data may come from third parties or be generated for a vertical market. Applications are built based on these models, with the goal of improving data analysis so that people can make better decisions.
  • Training: The AI models are trained using the collected datasets, which depend on complex algorithms and use substantial computational power. The organization or its external provider validates and tests Large Language Models (LLMs) and others to meet quality and performance criteria.
  • Deployment: The organization deploys the application and data models to production environments. This may involve several DevOps practices, such as containerization (such as Docker), orchestration (such as Kubernetes), and application integration schemes (such as APIs and microservices).
  • Monitoring and maintenance: As with any other enterprise system, the software supply chain for AI systems requires performance monitoring and the standard complement of updates and patches. AI systems add more to the list, such as model performance monitoring.

What Could Possibly Go Wrong?

What Couldn’t?

Security professionals are trained to see the weak points in any system, and the AI supply chain and build pipeline are no exception. Attack surface is present at each step in the AI build pipeline, adding to the usual areas of concern in software development and deployment.

Poisoning the Data

The most exposed element is the data itself. “Garbage in, garbage out” is an old tenet of computer science: no amount of processing can turn garbage data into useful information. Worse outcomes follow from an intentional effort to degrade the dataset, especially when that degradation is surreptitious, subtle, and impactful. Malicious data injected into training datasets can corrupt or bias AI models to create misleading outputs, intentionally generating incorrect predictions or decisions. Over time, malicious actors will be motivated to develop more sophisticated techniques to evade detection and to poison larger datasets, including the third-party data on which many IT systems rely.

An attacker who could gain from compromising model integrity might inject corrupted data into a training database or hack the data collection tools to insert biased data intentionally. They may craft malicious prompts to mislead LLMs into producing inaccurate outputs, comparable to the way Twitter bots affect trending topics.

While the term “poisoning” might suggest a deliberate intent to manipulate data and affect the model’s output, much like an intentional backdoor coded into a program, bias can also be introduced by accident, like an unintentional coding error that results in a bug that could be exploited by a threat actor. IOActive previously identified bias resulting from poor training data set composition in facial recognition in commercially available mobile devices. The presence of these unintentional biases makes the detection and response to poisoning more complex and resource intensive.

Many LLMs are trained on massive data oceans culled from the public internet, and there is no realistic way to separate the signal from the noise in those datasets. Scraping the internet is a simple and efficient way to access a large dataset, but it also carries the risk of data poisoning, whether deliberate or incidental.

While unintended poisoning is a known and accepted problem – compounded by the fact that LLMs trained on public datasets are now ingesting their own output, some of which is incorrect or nonsensical – deliberate data poisoning is much more complicated. The use of public datasets enables anyone, including malicious actors, to contribute to them and poison them in any number of ways, and there’s not much that LLM designers can do about it. In some cases, this recursive training with generated data can result in model collapse, which offers an intriguing new attack impact for malicious threat actors.

This will, at minimum, add to the burden of the AI training process. Database, LLM, and application testing needs to expand beyond “Does it work?” and “Is its performance acceptable?” to “Is it safe?” and “How can we be sure of that?”

Example: DeepSeek’s Purposeful Ideological Bias

In some cases, obvious ideological bias is purposefully introduced into models to comply with local regulations that further the ideological and public relations goals of the controlling authority. Companies operating under repressive regimes have no choice but to produce intentionally flawed LLMs that are politically indoctrinated to comply with local legal requirements and worldview.

Many companies and investors experienced shock when news of DeepSeek’s training and inference costs was widely disseminated in January 2025. As numerous people evaluated the DeepSeek model, it became clear that it adhered to People’s Republic of China (PRC) propaganda talking points, which come directly from the carefully cultivated worldview of the Chinese Communist Party (CCP). DeepSeek had no choice but to falsify facts related to events like the Tiananmen Square Massacre, repression of the Uyghurs, the coronavirus pandemic, and the Russian Federation’s invasion of Ukraine.

While these examples of censorship may not seem like an immediate security concern, organizations integrating LLMs into critical workflows should consider the risks of relying on models controlled by entities with heavy-handed content restrictions.

If an AI system’s outputs are influenced by ideological filtering, it could impact decision-making, risk assessments, or even regulatory compliance in unforeseen ways. Dependence on such models introduces a layer of opaque external control, which could become a security or operational risk over time.

Failing to Apply Access Controls

Not every security issue is due to ill intent; ignorance is more common than malice, and it can be just as dangerous.

Imagine a scenario where a global organization builds an internal AI solution that handles confidential data. For instance, the tool might enable staff to query its internal email traffic or summarize incoming emails. Building such a system requires fine-tuned access control. Otherwise, with a bit of clever prompt engineering or random dumb luck, the AI model would cheerfully display inappropriate emails to the wrong people, such as personal employee data or the CEO’s discussion of a possible acquisition.

That’s absolutely a privacy and security vulnerability.
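The mitigation pattern is straightforward to state, even if it takes real work to implement: enforce entitlements in deterministic code before anything reaches the model’s context window. A minimal sketch, with every name and interface hypothetical:

def answer(user, query, index, llm):
    candidates = index.search(query, k=20)                  # retrieval layer
    visible = [d for d in candidates if user.id in d.acl]   # hard ACL filter
    if not visible:
        return "No results you are authorized to see."
    context = "\n---\n".join(d.text for d in visible[:5])
    # The model only ever sees documents this user could read directly.
    return llm.complete(
        f"Answer using only the excerpts below.\n{context}\n\nQ: {query}")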

Risks From AI-enabled Third-party Products

Most application and cloud service products – including many security products – now include some form of AI features, from a simple chatbot on a SaaS platform to an XDR solution backed by deep AI analysis. These features and their associated attack surface are present even if they add zero value and are unwanted by the customer.

While AI-based features potentially offer greater efficiency and insights for security teams, the downside is that customers have little or no insight into the functioning, risks, and impacts from AI systems, LLMs, and the foundational models those products incorporate. The opacity of these products is a new risk factor that enterprises need to be aware of and take into account when assessing whether to implement a given solution.

While quantifying that risk is difficult, if not impossible, it’s vital that enterprise teams perform risk assessments and get as much information from vendors as possible about their AI systems.

Compromising the Deployment Itself

Malicious threat actors can target software dependencies, hardware components, or integration points. Attacks on CI/CD pipelines can introduce malicious code or alter AI models to generate backdoors in production systems. Misconfigured cloud services and insecure APIs can expose AI models and data to unauthorized access or manipulation.

These are largely commonplace issues across any enterprise application system. However, given the newness of AI software libraries, for instance, it is unwise to rely on the sturdiness of any component. There’s a difference between “nobody has broken in yet” and “nobody has tried.” Eventually, someone will try and succeed.

A malicious AI attack could also be achieved through unauthorized API access. It isn’t new for bad actors to exploit API vulnerabilities, and in the context of AI applications, attacks like SQL injection can wreak widespread havoc.

These are just a few of the possibilities. Additional vulnerabilities to consider:

  • Model Extraction[1][2][3][4]
  • Reimplementing an AI model using reverse engineering
  • Conducting a privacy attack to extract sensitive information by analyzing the model outputs and inferring the training data based on those results
  • Embedding a backdoor in the training data, which is triggered after deployment

While it’s difficult to know which attack vectors to worry about most urgently today, unfortunately, bad actors are as innovative as any developers.

Devising an AI Infrastructure Security Plan

To address these potential issues, organizations should focus on understanding and mitigating the attack surfaces, just as they do with any other at-risk assets.

Two major tools for better securing the AI supply chain are MITRE ATLAS and AI Red Teaming. These tools can work in combination with other evolving resources, including the US National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) and supporting resources.

MITRE ATLAS

The non-profit organization MITRE offers an extension of its MITRE ATT&CK framework, the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS). ATLAS includes a comprehensive knowledge base of the tactics, techniques, and procedures (TTPs) that adversaries might use to compromise AI systems, offering guidance in threat modeling, security planning, and training and awareness. The newest version boasts enhanced detection guidance, expanded scope for industrial control system assets, and mobile structured detections.

ATLAS maps out potential attack vectors specific to AI systems like those mentioned in this post, such as data poisoning, model inversion, and adversarial examples. The framework aids in identifying vulnerabilities within AI models, training data, and deployment environments. It’s also an educational tool for security professionals, developers, and business leaders, providing a framework to understand the unique threats AI systems face and how to mitigate them.

ATLAS is also a practical tool for secure development and operations. Its guidance includes securing data pipelines, enhancing model robustness, and ensuring proper deployment environment configuration. It also outlines detection practices and procedures for incident response should an attack occur.

AI Red Teaming

AI Red Team exercises can simulate attacks on AI systems to identify vulnerabilities and weaknesses before malicious actors can exploit them. In their simulations, Red Teams use techniques similar to real attackers’, such as data poisoning, model manipulation, and exploitation of vulnerabilities in deployment pipelines.

These simulated attacks can uncover weaknesses that may not be evident through other testing methods. Thus, AI Red Teaming can enable organizations to strengthen their defenses by implementing better data validation processes, securing CI/CD pipelines, strengthening access controls, and similar measures.

Regular Red Team exercises provide ongoing feedback, allowing organizations to continuously improve their security posture and adapt to evolving threats in the AI landscape. It’s also a valuable training tool for security teams, helping them improve their overall readiness to respond to real incidents.

Facing the Evolving Threat

As AI/ML technology continues to evolve and be used in new applications, new attack vectors, vulnerabilities, and risks will be identified and exploited. Organizations that are directly or indirectly exposed to these threats must expend effort to identify and manage these risks, working to mitigate the potential impact from the exploitation of this new technology.


[1] https://paperswithcode.com/task/model-extraction/codeless

[2] https://dl.acm.org/doi/fullHtml/10.1145/3485832.3485838

[3] https://arxiv.org/pdf/2312.05386

[4] https://people.duke.edu/~zg70/courses/AML/Lecture14.pdf

ADVISORIES | June 21, 2024

IOActive Security Advisory | MásMóvil Comtrend Router – Multiple Vulnerabilities

Affected Products

  • MásMóvil Comtrend Router – Version: ES_WLD71-T1_v2.0.201820
    • HW Version: GRG-4280us
    • FW Version: QR51S404
    • SW Version: MMV-C04_R10

Timeline

  • 2023-08-24: IOActive discovers vulnerability
  • 2023-09-12: IOActive begins vulnerability disclosure with affected parties
  • 2024-06-10: The corresponding CNA released the CVEs to the public domain
  • 2024-06-21: IOActive advisory published

EDITORIAL, INSIGHTS | May 22, 2024

Transportation Electrification Cybersecurity Threatscape

The global push to meet rising EV adoption with sufficient EV smart charger infrastructure is astoundingly challenging. Bloomberg estimates the global charging infrastructure market opportunity to be $1.9T between 2022 and 2050. That opportunity will be seized upon by a host of organizations large and small, public and private. From EV fleet depots to fast charging stations along highways, parking garages, smart chargers for employees, and home chargers, EV supply equipment (EVSE) is already becoming a common sight. The graph below depicts the worldwide cumulative public charging connections:

The worldwide transition to and adoption of EVs is driven by climate and carbon pollution-free electricity sector goals and policies being mandated around the world over the coming years, such as:

  • In the USA, Executive Order 14057[1] restricts all government agencies’ new acquisitions of light-duty vehicles to only EVs by 2027 and mid- and heavy-duty vehicle acquisitions to only EVs by 2035.
  • In California, Executive Order N-79-20[2] ends sales of internal combustion engine (ICE) passenger vehicles and trucks by 2035[3].
  • The EU and UK have banned sales[4] of new combustion engine cars from 2035.

The Battery Electric Vehicle (BEV) and charging infrastructure landscape is rapidly evolving technologically and operationally in a market where cost and time-to-market are prioritized higher than security[5]. Technologies used to build the BEV ecosystem suffer from well-known cybersecurity issues that expose vulnerabilities and create risk. Current charging stations are operated as build-and-forget devices: highly exposed, network connected, and carrying cyber and physical vulnerabilities that pose a great challenge to the ecosystem – including bulk electric and distribution system stability – with limited threat mitigation in place today.

Securing such an advanced, fully connected, and heterogeneous supply grid will take an effort similar to the one the Information and Communication Technology (ICT) sector puts into securing webservers and cloud infrastructure, plus mitigations for the cyberphysical vulnerabilities unique to the BEV ecosystem.

High-Power Charging (HPC) standards for the Megawatt Charging System (MCS) are being developed by CharIN (Charging Interface Initiative e.V.), an international standards organization[6].

Modern electrified transportation vehicles will require an HPC infrastructure. Cybersecurity vulnerabilities in HPC systems operating at very high power levels pose a serious cyberphysical threat not only to the new electric vehicles and supporting infrastructure, but also to the electrical grid (bulk and distribution) that supplies power to the HPC systems. These cyberphysical vulnerabilities will require focused, skillful mitigation.

The potential consequences of a successful, skillful attack on a BEV or EVSE system include remote code execution on BEVs or EVSEs, physical damage to vehicles or chargers, local or regional power outages, and larger coupling effects across countries from induced cascading failures.

IOActive’s Vehicle Cybersecurity Vulnerability Findings

In-vehicle technology is a top selling point for today’s car buyers[7]. What was once simply a “connected vehicle” is now increasingly feature-rich, with software systems like self-driving and driver assist, complex infotainment systems, vehicle-to-everything communication, and integration with external AI. More than ever, all of this exciting technology turns modern vehicles into targets for malicious cyberattacks such as ransomware. It is imperative that automotive manufacturers take additional action now to infuse cybersecurity into their vehicles and mitigate potential threats. Moreover, EVSE manufacturers and utilities need to increase efforts to manage their highly impactful risks.

IOActive’s pioneering vehicle cybersecurity research began with the ground-breaking 2015 Jeep hack[8] that evolved into our ongoing vehicle research that has included commercial trucks, EVSE, and autonomous vehicles.

For over a decade, IOActive has been publishing original research blogs and papers:

  • Remote Exploitation of an Unaltered Passenger Vehicle (2015): This IOActive research paper outlined the research into performing a remote attack against an unaltered 2014 Jeep Cherokee and similar vehicles. IOActive reprogrammed a gateway chip in the head unit to allow it to send arbitrary CAN messages and control physical aspects of the car such as steering and braking. This testing forced a recall of 1.4 million vehicles by FCA and mandated changes to the Sprint carrier network. https://www.ioactive.com/pdfs/IOActive_Remote_Car_Hacking.pdf
  • Uncovering Unencrypted Car Data in Luxury Car Connected App (2020): IOActive conducted research to determine whether a luxury car used encrypted data for its connected apps. Unencrypted data was found in the app that could be used to stalk or track someone’s destination, including identification of the exact vehicle and its location. IOActive used Responsible Disclosure channels and the manufacturer implemented encryption to protect the sensitive data via key management. https://labs.ioactive.com/2020/09/uncovering-unencrypted-car-data-in-bmw.html
  • Commonalities in Vehicle Vulnerabilities (2016, 2018, 2023): As automotive cybersecurity research has grown, IOActive has been on the leading edge, amassing a decade of real-world vulnerability data. The paper describes automotive vulnerability data from 2016 to 2023 that illustrates the general issues and potential solutions to the cybersecurity problems facing today’s vehicles.
    https://www.ioactive.com/ioactive-commonalities-vehicle-vulnerabilities-22update/
  • NFC Relay Attack on the Tesla Model Y (2022): IOActive reverse-engineered the Near Field Communication (NFC) protocol an EV automaker uses between the NFC card and the vehicle, then created custom firmware modifications that allowed a device to relay NFC communications over Bluetooth/WiFi using a BlueShark module. It was possible to perform the attack via Bluetooth from several meters away (as well as via WiFi from much greater distances). https://www.ioactive.com/wp-content/uploads/2025/05/NFC-relay-TESLA_JRodriguez.pdf

EVSE Cybersecurity Incidents Are Increasing

The growing popularity of Electric Vehicles (EVs) attracts not only gas-conscious consumers but also cybercriminals interested in using EV charging stations to conduct large-scale cyberattacks for monetization purposes, espionage attacks, politically motivated attacks, theft of private/sensitive data (e.g., drivers’ data), falsifying EV ranges, and more. EVSEs, whether in a private garage or on a public parking lot, are connected IoT devices, running software that interacts with payment systems, maintenance systems, OEM back-end systems, telecommunications, and the smart grid. Therefore, charging stations pose significant cybersecurity risks.

Early incidents of cyberattacks on charging stations have already been recorded, and EVSE cybersecurity incidents are on the increase. Links to information on several of these hacks, as well as further reading regarding EVSE cybersecurity, are listed at the end of this blog post.

EVSE cybersecurity risk and threat scenarios include a wide variety of potential issues:

  • EVSE malware attacks threatening the integrity of the electric grid/transportation network, leading to widespread disruptions in power supply and electric grid load balancing concerns
  • Ransomware attacks
  • Leakage/manipulation of sensitive data (e.g., PII, credentials, and payment card data)
  • Physical attacks to disable EVSEs, steal power, or infect EVSEs with malware via accessible USB ports
  • Attacks on RFID, NFC, or credit card chip authentication that could deny EVSE charging sessions or enable false billing
  • EVSE or grid Denial of Service attacks, impacting drivers’ ability to recharge during a hurricane or wildfire evacuation
  • Firmware/software update attacks, disrupting access to the cloud services necessary for payment processing
  • Bypassing bootloader protections, which can allow attackers with physical access to gain root access into EVSEs to launch attacks on the backend infrastructure while appearing as a trusted device
  • An EVSE attack through the charging cable could compromise an EV, causing fire or other damage

IOActive’s Electric Vehicle Charging Infrastructure Vulnerability Findings

Over the past five years, IOActive has conducted several EVSE cybersecurity penetration testing engagements for automotive and commercial truck OEMs/suppliers and EVSE vendors. Examples of IOActive’s electrification penetration testing include assessments of Level 2 EVSEs, DC Fast Chargers (DCFCs), Open Charge Point Protocol (OCPP)/cloud services, front-end/back-end web applications, onsite network configuration reviews, and EV vans.

For the past year, IOActive has led an international EVSE standards working group, which has developed a public EVSE Threat Model White Paper that identifies EVSE risks, vulnerabilities, and design flaws. The paper also includes threat scenarios ranked by magnitude, duration, recovery effort, safety, costs, and confidence/reputation damage. This White Paper can be shared with industry members upon request.

IOActive Welcomes Future EVSE Cybersecurity Discussions with Industry

We would like to continue to support the key industries impacted by the transition to electrified vehicles. Much of the most detailed work that we have done cannot be shared publicly. We welcome those with a need to know about the risks of and mitigations for BEVs and EVSEs to engage with us for a briefing on example extant vulnerabilities, technical threat models, threat actors, consequences of operationalized attacks, and other threat intelligence topics, as well as potential mitigations and best practices.

If you are interested in hosting IOActive for a briefing, and/or would like copies of the aforementioned presentations or white paper, please contact us.

EVSE Cybersecurity Incident References:

Suggested Reading:


[1] https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2021/12/08/executive-order-on-catalyzing-clean-energy-industries-and-jobs-through-federal-sustainability/
[2] https://ww2.arb.ca.gov/resources/fact-sheets/governor-newsoms-zero-emission-2035-executive-order-n-79-20
[3] https://www.gov.ca.gov/wp-content/uploads/2020/09/9.23.20-EO-N-79-20-Climate.pdf
[4] https://www.europarl.europa.eu/topics/en/article/20221019STO44572/eu-ban-on-sale-of-new-petrol-and-diesel-cars-from-2035-explained
[5] https://www.iea.org/reports/global-ev-outlook-2023/trends-in-charging-infrastructure
[6] https://www.charin.global/
[7] https://finance.yahoo.com/news/connected-vehicle-technology-becoming-key-140000573.html
[8] https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

INSIGHTS, RESEARCH | May 16, 2024

Field-Programmable Chips (FPGAs) in Critical Applications – What are the Risks?

What is an FPGA?

Field-Programmable Gate Arrays (FPGAs) are a type of Integrated Circuit (IC) that can be programmed or reprogrammed after manufacturing. They consist of an array of logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as telecommunications, automotive, aerospace, and industrial sectors.

FPGAs are often found in products that are low volume or demand short turnaround time because they can be purchased off the shelf and programmed as needed without the setup and manufacturing costs and long lead times associated with Application-Specific Integrated Circuits (ASICs). FPGAs are also popular for military and aerospace applications due to the long lifespan of such hardware as compared to typical consumer electronics. The ability to update deployed systems to meet new mission requirements or implement new cryptographic algorithms—without replacing expensive hardware—is valuable.

These benefits come at a cost, however: additional hardware is required to enable reprogramming. An FPGA-based design will use many more transistors than the same design implemented as an ASIC, increasing power consumption and per-device costs.

Implementing a circuit with an FPGA vs an ASIC can also come with security concerns. FPGA designs are compiled to a “bitstream,” a digital representation of the circuit netlist, which must be loaded into the FPGA for it to function. While bitstream formats are generally undocumented by the manufacturer, several projects are working towards open-source toolchains (e.g., Project X-Ray for the Xilinx 7 series) and have reverse engineered the bitstream formats for various devices.

FPGA bitstreams can be loaded in many ways, depending on the FPGA family and application requirements:

  • Serial or parallel interfaces from an external processor
  • JTAG from a debug cable attached to a PC
  • Serial or parallel interfaces to an external flash memory
  • Separate stacked-die flash memory within the same package as the FPGA
  • Flash or One-Time-Programmable (OTP) memory integrated into the FPGA silicon itself

In this post, we will focus on Xilinx because it was the first company to make a commercially viable FPGA back in 1985 and continues to be a world leader in the field. AMD announced its acquisition of Xilinx in 2020 – an all-stock deal initially valued at roughly $35 billion that closed in 2022 – and today they control over 50% of the world’s programmable logic chips.

The Spartan™ 6 family of Xilinx FPGAs offers low-cost and low-power solutions for high-volume applications like displays, military/emergency/civil telecommunications equipment, and wireless routers. Spartan 6 was released in 2009, so these chips are relatively low-tech compared with their successors, the Spartan 7 and Xilinx’s higher end solutions (e.g., the Zynq and Versal families).

The Spartan 6 bitstream format is not publicly known, but IOActive is aware of at least one research group with unreleased tooling for it. These devices do not contain internal configuration memory, so the bitstream must be provided on external pins each time power is applied and is thus accessible for an attacker to intercept and potentially reverse engineer.
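A first-pass triage of an intercepted dump can be done with very little tooling. The sketch below locates the Xilinx sync word and uses byte entropy as a rough hint of whether the configuration data is plaintext or encrypted (the threshold and interpretation are heuristic):

import collections
import math
import sys

SYNC = bytes.fromhex("AA995566")  # Xilinx bitstream sync word

def entropy(buf: bytes) -> float:
    """Shannon entropy in bits/byte; ~8.0 suggests encrypted or compressed."""
    if not buf:
        return 0.0
    counts = collections.Counter(buf)
    return -sum(n / len(buf) * math.log2(n / len(buf)) for n in counts.values())

data = open(sys.argv[1], "rb").read()
off = data.find(SYNC)
print(f"sync word at {off:#x}" if off >= 0 else "no sync word found")
body = data[off + 4:] if off >= 0 else data
print(f"body entropy: {entropy(body):.2f} bits/byte")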

FPGA vendors are, of course, aware of this risk and provide security features, such as allowing bitstreams to be encrypted on external flash and decrypted on the FPGA. In the case of the Spartan 6 family, the bitstream can be encrypted with AES-256 in CBC mode. The key can be stored in either OTP eFuses or battery-backed Static Random Access Memory (SRAM), which enables a self-destruct function where the FPGA can erase the key if it detects tampering.

The AES block used in the Spartan 6, however, is vulnerable to power analysis, and a team of German researchers developed a successful attack against it: “On the Portability of Side-Channel Attacks – An Analysis of the Xilinx Virtex 4, Virtex 5, and Spartan 6 Bitstream Encryption Mechanism.”

An example of a Xilinx Spartan 6 application is the Russian military radio R-187-P1 made by Angstrem, so we decided to use this as our test case.

AZART R-187-P1 Product Overview

Since its release in 2012, the radio has been found by several researchers to provide built-in protocols allowing communication across multiple standards, including older analogue Russian radios, UAV commands, and even TETRA. While the advertised frequency range is 27 to 520 MHz, recent firmware updates enabled a lower range of frequencies down to 100 kHz with AM.

The features of this radio are fairly well known. The following was posted by @SomeGumul, a Polish researcher, on Twitter/X:

The trophy in the form of the Russian radio station “Azart”, announced by the Russians as a “native, Russian” sixth-generation device for conducting encrypted conversations, works on American radio components. The basis of the encryption system is, in particular, the Spartan®-6 FPGA (field-programmable gate array) system.  It is produced by the American company XILINX (AMD) in Taiwan.

Ironically, despite being advertised as the forefront of Russia’s military technical prowess, the heart of this device was revealed by Ukrainian serviceman Serhii Flash (Сергей Флэш) to be powered by the Spartan 6 FPGA. This FPGA is what enables the handheld radio’s capabilities to be redefined by mere software updates, allowing its users to speak over virtually any radio standard required—past, present, or future. While most of the currently implemented protocols are unencrypted to allow backward compatibility with other older, active service equipment, communication between two AZART radios enables frequency hopping up to 20,000 frequencies per second. This high rate of frequency hopping creates challenges for eavesdropping and position triangulation. The radio’s implementation of TETRA also supports encryption with the inclusion of a supporting radio trunk, where the radio is referred to as being in Trunked Mode Operation (TMO). Otherwise, while in Direct Mode Operation (DMO), the radio only supports voice scrambling in the time and frequency domains.   

20,000 frequency hops per second is quite a feat for a radio. Extremely precise timing is required for two or more radios to sync across hops and still communicate clearly. This timing source is gained wirelessly from GPS and GLONASS. As such, this advanced feature can be disabled simply by jamming GPS frequencies.

While this attack may be sufficient, GPS signals are ubiquitous and neutral sources of precise timing that are often required by both sides of any conflict. So, while jamming the frequencies may work in a pinch, it would be cleaner to find a solution to track this high rate of frequency hopping without having to jam a useful signal. To find this solution, we must investigate the proprietary Angstrem algorithm that drives the pseudo-random frequency hopping. To do this, we begin by looking at the likely driver: the Spartan 6 FPGA.

Chipset Overview

It is currently unknown if the Spartan 6 on the AZART is utilizing an encrypted bitstream; however, due to its wartime purpose, it must not be ruled out. While waiting for the procurement of a functioning radio, IOActive began a preliminary investigation into the functioning of the Spartan 6 with a specific focus on areas related to encryption and decryption of the bitstream.

Mainboard of AZART Highlighting the Spartan 6

At the time of writing this post, the XC6SLX75-3CSG484I sells for around $227 from authorized US distributors; however, it can be obtained for much lower prices in Asian markets, with sellers on AliExpress listing them for as low as $8.47. While counterfeits are prevalent in these supply chains, legitimate parts are not difficult to obtain with a bit of luck.

In addition to the FPGA, one other notable component visible on the board is the Analog Devices TxDAC AD9747, a dual 16-bit 250 Msps Digital-to-Analog Converter (DAC) intended for SDR transmitters. Assuming this is being used to transmit I/Q waveforms, we can conclude that the theoretical maximum instantaneous bandwidth of the radio is 250 MHz, with the actual bandwidth likely being closer to 200 MHz to minimize aliasing artifacts.

Device Analysis

IOActive procured several Spartan 6 FPGAs from a respected supplier for a preliminary silicon teardown to gain insight into how the chip handles encrypted bitstreams and identify any other interesting features. As a standalone package, the Spartan chip looks like this:

The CSG484 package that contains the Spartan 6 is a wire-bonded Ball-Grid Array (BGA) consisting of the IC die itself (face up), attached by copper ball bonds to a four-layer FR4 PCB substrate and overmolded in an epoxy-glass composite. The substrate has a grid of 22×22 solder balls at 0.8 mm pitch, as can be seen in the following cross section. The solder balls are an RoHS-compliant SAC305 alloy, unlike the “defense-grade” XQ6SLX75, which uses Sn-Pb solder balls. The choice of a consumer-grade FPGA for a military application is interesting, and may have been driven by cost or component availability issues (as the XQ series parts are produced in much lower volume and are not common in overseas supply chains).

Spartan 6 Cross Section Material Analysis

The sample was then imaged in an SEM to gain insight into the metal layers in order to perform refined deprocessing later. The chip logic exists on the surface of the silicon die, as outlined by the red square in the following figure.

Close-up of FPGA Metal Layers on Silicon Die

The XC6SLX75 is manufactured by Samsung on a 45 nm process with nine metal layers. A second FPGA was stripped of its packaging for a top-down analysis, starting with metal layer nine.

Optical Overview Image of Decapsulated Die Xilinx XC6SLX75

Looking at the top layer, not much structure is visible, as the entire device is covered by a dense grid of power/ground routing. Wire bond pads around the perimeter for power/ground and I/O pins can clearly be seen. Four identical regions, two at the top and two at the bottom, have a very different appearance from the rest of the device. These are pairs of multi-gigabit serial transceivers (GTPs in Xilinx terminology), capable of operating at up to 3.2 Gbps. The transceivers are only bonded out in the XC6SLX75T; the non-T version used in the radio does not connect them, so we can ignore them for the purposes of this analysis.

The metal layers were then etched off to expose the silicon substrate layer, which provides better insight into chip layout, as shown in the following figure.

Optical Overview Image of Floorplan of Die Xilinx XC6SLX75

After etching off the metal and dielectric layers, a much clearer view of the device floorplan becomes visible. We can see that the northwest GTP tile has a block of logic just south of it. This is a PCIe gen1 controller, which is not used in the non-T version of the FPGA and can be ignored.

The remainder of the FPGA is roughly structured as columns of identical blocks running north-south, although some columns are shorter due to the presence of the GTPs and PCIe blocks. The non-square logic array of Spartan 6 led to poor performance as user circuitry had to be shoehorned awkwardly around the GTP area. Newer-generation Xilinx parts place the GTPs in a rectangular area along one side of the device, eliminating this issue.

The light-colored column at the center contains clock distribution buffers as well as Phase-Locked Loops (PLLs) and Digital Clock Managers (DCMs) for multiplying or dividing clocks to create different frequencies. Smaller, horizontal clock distribution areas can be seen as light-colored rows throughout the rest of the FPGA.

There are four columns of RAM containing a total of 172 tiles of 18 kb, and three columns of DSP blocks containing a total of 132 tiles, each consisting of an 18×18 integer multiplier and some other logic useful for digital signal processing. The remaining columns contain Configurable Logic Blocks (CLBs), which are general purpose logic resources.
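Totting up those counts gives a feel for the signal-processing fabric available to a frequency-hopping SDR (simple arithmetic on the observed tile counts):

```python
# Aggregate on-chip resources from the block counts observed on the die.
bram_tiles, bram_kbit_each = 172, 18
dsp_tiles = 132

total_bram_kbit = bram_tiles * bram_kbit_each     # 3096 kb, roughly 3 Mb
print(f"Block RAM: {total_bram_kbit} kb ({total_bram_kbit / 1024:.2f} Mb)")
print(f"DSP tiles (18x18 multipliers): {dsp_tiles}")
```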

The entire perimeter of the device contains I/O pins and related logic. Four light-colored regions of standard cell logic can be seen in the I/O area, two on each side. These are the integrated DDR/DDR2/DDR3 Memory Controller Blocks (MCBs).

The bottom right contains two larger regions of standard cell logic, which appear related to the boot and configuration process. We expect the eFuse and Battery-Backed SRAM (BBRAM), which likely contain the secrets required to decrypt the bitstream, to be found in this area. As such, this region was scanned in high resolution with the SEM for later analysis.

SEM Substrate Image of Boot/AES Logic Block 1

Utilizing advanced silicon deprocessing and netlist extraction techniques, IOActive hopes to refine methodologies for extracting the configured AES keys required to decrypt the bitstream that drives the Spartan 6 FPGA.

Once this is complete, there is a high probability that the unencrypted bitstream that configures the AZART can be obtained from a live radio and analyzed to potentially enumerate the secret encryption and frequency hopping algorithms that protect the current generation of AZART communications. We suspect that we could also apply this technique to previous generations of AZART, as well as other FPGA-based SDRs like those commonly in use by law enforcement, emergency services, and military operations around the world.

INSIGHTS, RESEARCH | May 15, 2024

Evolving Cyber Threatscape: What’s Ahead and How to Defend

The digital world is a dangerous place. And by all accounts, it’s not getting a whole lot better.

Damages from cybercrime will top a staggering $8 trillion this year, up from an already troubling $1 trillion just five years ago and rocketing toward $14 trillion by 2028. Supply chains are becoming juicier targets, vulnerabilities are proliferating, and criminals with nation-state support are growing more active and more sophisticated. Ransomware, cryptojacking, cloud compromises, and AI-powered shenanigans are all on a hockey-stick growth trajectory.

Looking ahead, there are few sure things in infosec other than the stone-cold, lead-pipe lock that systems will be hacked, data will be compromised, money will be purloined, and bad actors will keep acting badly.

Your organization needn’t be a victim, however. Forewarned is forearmed, after all.

Here’s what to expect in the evolving cyber threatscape over the next 12 to 18 months, along with some steps every security team can take to stay secure in this increasingly hostile world.

The Weaponization of AI

The Threat: The coming year promises to be a big one for exploiting the ability of AI (artificial intelligence) and Large Language Models (LLMs) to spew misinformation, overwhelm authentication controls, automate malicious coding, and spawn intelligent malware that proactively targets vulnerabilities and evades detection. Generative AI promises to empower any attacker — even those with limited experience or modest resources — with malicious abilities previously limited to experienced users of frameworks like Cobalt Strike or Metasploit.

Expect to see at least some of these new, nefarious generative AI tools offered as a service through underground criminal syndicates, broadening the global cabal of troublesome threat actors while expanding both the population and the richness of available targets. The steady increase in Ransomware-as-a-Service is the clearest indicator to date that such criminal collaboratives are already on the rise.

Particularly ripe for AI-enabled abuse are social engineering-based operations like phishing, business email compromise, and so-called “pig butchering” investment, confidence, and romance scams. Generative AI is eerily adept at turning out convincing, persuasive text, audio, and video content with none of the spelling, grammar, or cultural errors that traditionally made such hack attempts easy to spot. Add LLMs’ ability to ingest legitimate business communications for repurposing and translation, and it’s easy to see how AI will soon be helping criminals craft super-effective global attacks on an unprecedented scale.

The Response: On a positive note, AI, as it turns out, can play well on both sides of the ball: offense and defense.

AI is already proving its worth, bolstering intelligent detection, response, and mitigation tools. AI-powered security platforms can analyze, model, learn, adapt, and act with greater speed and capacity than any human corps of security analysts ever could. Security professionals need to skill up now on the techniques used to develop AI-powered attacks, with the goal of creating equally innovative and effective mitigations and controls.

And because this new generation of smart malware will make the targeting of unmitigated vulnerabilities far more efficient, the basic infosec blocking and tackling — diligent asset inventory, configuration management, patching — will be more critical than ever.

Clouds Spawn Emerging Threats

The Threat: Business adoption of cloud computing technology has been on a steady rise for more than a decade. The current macroeconomic climate, with all of its challenges and uncertainty, promises to accelerate that trend for at least the next few years. Today, more than four in ten enterprises say they are increasing their use of cloud-based products and services and about one-third plan to continue migrating from legacy software to cloud-based tools this year. A similar share is moving on-premises workloads in the same direction.

Good for business. However, the cloud transformation revolution is not without its security pitfalls.

The cloud’s key benefits — reduced up-front costs, operational vs. capital expenditure, improved scalability and efficiency, faster deployment, and streamlined management — are counterbalanced by cloud-centric security concerns. The threatscape in the era of cloud is dotted with speed bumps like misconfigurations, poor coding practices, loose identity and access controls, and a pronounced lack of detailed environmental visibility. All of this is compounded by a general dearth of cloud-specific security expertise on most security teams.

One area to watch going forward: more than half of enterprise IT decision-makers now describe their cloud strategy as primarily hybrid cloud or primarily multi-cloud, and three-quarters use multiple cloud vendors. Criminals are taking note. Attacks targeting complex hybrid and multi-cloud environments — with their generous attack surface and multiple points of entry — are poised to spike.

The recent example of a zero-day exploited by Chinese hackers that allowed rogue code execution on guest virtual machines (VMs) shows that attacks in this realm are getting more mature and potentially more damaging. Threat actors are targeting hybrid and multi-cloud infrastructure, looking for ways to capitalize on misconfigurations and lapses in controls in order to move laterally across different cloud systems.

Another area of concern is the increased prevalence of serverless infrastructure in the cloud. The same characteristics that make serverless systems attractive to developers — flexibility, scalability, automated deployment — also make them irresistible to attackers. Already there’s been an uptick in instances of crypto miners surreptitiously deployed on serverless infrastructure. Though serverless generally presents a smaller attack surface than hybrid and multi-cloud infrastructure, giving up visibility and turning over control of the constituent parts of the infrastructure to the cloud service provider (CSP) raises its own set of security problems. Looking ahead, nation-state backed threat actors will almost certainly ramp up their targeting of serverless environments, looking to take advantage of insecure code, broken authentication, misconfigured assets, over-privileged functions, abusable API gateways, and improperly secured endpoints.
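Over-privileged functions are one of the easier items on that list to catch mechanically. Below is a minimal sketch, assuming AWS-style IAM policy JSON; the policy document is a hypothetical example, and a real review would cover far more than wildcard grants:

```python
import json

def overprivileged_statements(policy: dict) -> list:
    """Return Allow statements granting wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        wildcard_action = any(a == "*" or a.endswith(":*") for a in actions)
        if wildcard_action or "*" in resources:
            flagged.append(stmt)
    return flagged

# Hypothetical policy attached to a serverless function: full access to everything.
policy = json.loads('{"Version": "2012-10-17", '
                    '"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}')
print(overprivileged_statements(policy))
```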

The Response: The best advice on securing modern cloud environments in an evolving threatscape begins with diligent adherence to a proven framework like the Center for Internet Security’s Critical Security Controls (CIS Controls v8) and the CIS’s companion Cloud Security Guide. These prioritized safeguards, regularly updated by an engaged cadre of security community members, offer clear guidance on mitigating the most prevalent cyber-attacks against cloud-based systems and cloud-resident data. As a bonus, the CIS Controls are judiciously mapped to several other important legal, regulatory, and policy frameworks.

Beyond that fundamental approach, some steps cloud defenders can take to safeguard the emerging iterations of cloud infrastructure include:

  • Embracing the chaos: The big challenge for security teams today is less about current configurations and more about unwinding the sins of the past. Run a network visualization and get your arms around the existing mess of poorly managed connections and policy violations. It’s a critical first step toward addressing the vulnerabilities that put the company and its digital assets at risk.
  • Skilling up: Most organizations rely on their existing networking and security teams to manage their expanding multi-cloud and hybrid IT environments. It’s a tall order to expect experts in more traditional IT to adapt to the arcana of multi-cloud without specific instruction and ongoing training. The Cloud Security Alliance offers a wealth of vendor-agnostic training in areas ranging from cloud fundamentals to architecture, auditing, and compliance.
  • Taking Your Share of Shared Responsibility: The hierarchy of jurisdiction for security controls in a cloud environment can be ambiguous at best and confusing at worst. Add more clouds to the mix, and the lines of responsibility blur even further. While all major cloud providers deliver some basic default configurations aimed at hardening the environment, that’s pretty much where their burden ends. The client is on the hook for securing their share of the system and its data assets. This is especially true in multi-cloud and hybrid environments where the client organization alone must protect all of the points where platforms from various providers intersect. Most experts agree the best answer is a third-party security platform that offers centralized, consolidated visibility into configurations and performance.
  • Rethinking network connections: Refactoring can move the needle on security while preserving the performance and capability benefits of the cloud. Consider the “minimum viable network” approach, a nod to how the cloud can, in practice, turn a packet-switched network into something closer to a circuit-switched one: the cloud network moves packets only where users say they can move, and everything else gets dropped. Leveraging this eliminates many classic attacks such as sniffing and ARP cache poisoning. This application-aware schema calls for simply plugging one asset into another, mapping only the communications required for that particular stack and obviating the need for host-based firewalls or network zones (see the sketch after this list).

    Once defenders get comfortable with the minimum viable network concept, they can achieve adequate security in even the most challenging hybrid environments. The trick is to start simple and focus on reducing network connections down to the absolute minimum of virtual wires.
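Here is a minimal sketch of the minimum viable network idea (the tier names, ports, and flows are hypothetical): enumerate the virtual wires a stack actually requires, and treat everything else as a default drop.

```python
# Minimum viable network: an explicit allow-list of "virtual wires".
# Every (source, destination, port) not on the list is dropped by default.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
    ("app-tier", "cache", 6379),
}

def permit(src: str, dst: str, port: int) -> bool:
    """Default-deny check: only explicitly mapped flows pass."""
    return (src, dst, port) in ALLOWED_FLOWS

print(permit("web-tier", "app-tier", 8443))  # True: a mapped virtual wire
print(permit("web-tier", "db-tier", 5432))   # False: dropped by default
```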

Supply Chains in the Crosshairs

The Threat: Because they’re such a central component in a wide variety of business operations — and because they feature complex layers of vendor, supplier, and service provider relationships — supply chains remain a tempting target for attackers. And those attackers are poised to grow more prolific and more sophisticated, adding to their prominence in the overall threatscape.

As global businesses become more dependent on interconnected digital supply chains, the compromise of a single, trusted software component in that ecosystem can quickly cascade into mayhem. Credential theft, malicious code injection, and firmware tampering are all part of the evolving supply-chain threat model. Once a trusted third party is compromised, the result is most often data theft, data wiping, or loss of systems availability via ransomware or denial of service attack. Or all of the above.

Prime examples include the 2020 SolarWinds hack, in which government-backed Russian attackers inserted malicious code into a software update for SolarWinds’ popular Orion IT monitoring and management platform. The compromise went undetected for more than a year, even as SolarWinds and its partners continued serving up malicious code to some 30,000 private companies and government agencies. Many of those victims saw their data, systems, and networks compromised by the backdoor buried in the bad update before the incident was finally detected and mitigated.

More recently, in the summer of 2023, attackers leveraged a flaw in Progress Software’s widely used MOVEit file transfer software, exposing the data of thousands of organizations and nearly 80 million users. In arguably the largest supply-chain hack ever recorded, a Russian ransomware crew known as Clop leveraged a zero-day vulnerability in MOVEit to steal data from business and government organizations worldwide. Victims included New York City’s public school system, the state of Maine, and a UK-based HR firm serving major clients such as British Airways and the BBC.

The Response: Given the trajectory, it’s reasonable to assume that supply chain and third-party attacks like the MOVEit hack will grow in both frequency and intensity as part of the threatscape’s inexorable evolution. This puts the integrity and resilience of the entire interconnected digital ecosystem at grave and continuing risk.

To fight back, vendor due diligence (especially in the form of formal vendor risk profiles) is key. Defenders will need to take a proactive stance that combines judicious and ongoing assessment of the security posture of all the suppliers and third-party service providers they deal with. Couple that with strong security controls — compensating ones, if necessary — and proven incident detection and response plans focused on parts of the environment most susceptible to third-party compromise.

These types of attacks will happen again. As the examples above illustrate, leveraging relevant threat intelligence on attack vectors, attacker techniques, and emerging threats should feature prominently in any scalable, adaptable supply chain security strategy.

Dishonorable Mentions

The cybersecurity threatscape isn’t limited to a handful of hot-button topics. Watching the threat environment grow and change over time means keeping abreast of many dozens of evolving risk factors, attack vectors, hacker techniques and general digital entropy. Some of the other issues defenders should stay on top of in this dynamic threat environment include:

  • Shifting DDoS targets: Distributed denial-of-service attacks are as old as the internet itself. What’s new is the size and complexity of emerging attacks, which now more frequently target mobile networks, IoT systems, and the Operational Technology/Industrial Control Systems (OT/ICS) that lie at the heart of much critical infrastructure.
  • Disinfo, misinfo, and “deep fakes”: Fueled by the proliferation of AI, bad-faith actors (and the bots that mimic them) will churn out increasing volumes of disingenuous data aimed at everything from election interference to market manipulation.
  • Rising hacktivism: Conflicts in Ukraine and Israel illustrate how hacker collectives with a stated political purpose are ramping up their use of DDoS attacks, web defacements, and data leaks. The more hacktivist cyberattacks proliferate — and the more effective they appear — the more likely nation-states will jump into the fray to wreak havoc on targets both civilian and military.
  • Modernizing malware code: C/C++ has long been the lingua franca of malware. But that is changing. Looking to harness big libraries, easier integration, and a more streamlined programming experience, the new breed of malware developers is turning to languages like Rust and Go. Not only can hackers churn out malicious code faster to evade detection and outpace signatures, but the malware they create can also be much more difficult for researchers to reverse engineer.
  • Emerging quantum risk: As the quantum computing revolution inches ever closer, defenders can look forward to some significant improvements to their security toolkit. Like AI, quantum computing promises to deliver unprecedented new capabilities for threat intelligence gathering, vulnerability management, and DFIR. But also like AI, quantum has a dark side. Large-scale quantum computers are expected to break today’s public-key algorithms outright and to sharply weaken symmetric ones, leaving present-day encryption and password-based protections woefully ineffective in the face of a quantum-powered attack (a back-of-the-envelope sketch follows this list).
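To make the quantum risk concrete, consider the standard back-of-the-envelope result: Grover’s algorithm reduces an n-bit exhaustive key search from roughly 2^n classical operations to about 2^(n/2) quantum operations, while Shor’s algorithm breaks RSA and elliptic-curve cryptography outright. A quick sketch of what Grover does to symmetric key strength:

```python
# Effective symmetric key strength against a Grover-capable adversary.
def grover_effective_bits(key_bits: int) -> int:
    # Grover's search: ~2**(n/2) quantum operations instead of 2**n classical.
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: ~2^{grover_effective_bits(bits)} quantum operations")
# AES-128: ~2^64 (uncomfortably low); AES-256: ~2^128 (still strong)
```

This is why post-quantum guidance generally favors AES-256 over AES-128 and a migration to quantum-resistant public-key algorithms.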

Taking Stock of a Changing Threatscape

Yes, the digital world is a dangerous place, and the risks on the horizon of the threatscape can seem daunting. Navigating this challenging terrain forces security leaders to prioritize strong, scalable defenses, the kind that can adapt to emerging technology threats and evolving attack techniques all at once. It’s a multi-pronged approach.

What does it take? Adherence to solid security frameworks, judicious use of threat intelligence, updated response plans, and even tactical efforts like mock drills, penetration tests, and red team exercises can play a role in firming up security posture for an uncertain future.

Perhaps most importantly, fostering a culture of security awareness and training within the organization can be vital for preventing common compromises, from phishing and malware attacks to insider threats, inadvertent data leaks, and more.

Surviving in the evolving cyber threatscape comes down to vigilance, adaptability, and a commitment to constant learning. It’s a daunting task, but with a comprehensive, forward-thinking strategy, it’s possible to stay ahead of the curve.

INSIGHTS | May 9, 2024

Always Updated Awards 2024 Blog

We are excited to announce that IOActive has received multiple prestigious award wins this year! Keep this blog bookmarked to stay up to date on the company’s accomplishments throughout 2024.

Last updated September 30, 2024

IOActive was honored for its ability to maximize security investments and enhance clients’ overall security posture and business resilience. Unlike many organizations that default to defensive strategies, we at IOActive go beyond standard penetration testing to provide clients with red and purple team services that exceed typical assessments. We prioritize a comprehensive understanding of cyber adversaries through custom adversary emulation and ethical real-world attack simulations to develop robust, secure frameworks.

“We’re delighted to win awards in multiple categories throughout 2024,” said Jennifer Sunshine Steffens, CEO at IOActive. “These awards emphasize our nearly 30 years of leadership providing unique ‘attacker’s perspective’ methodologies that drive our research-fueled approach to security services trusted by Fortune 1000 companies worldwide.”

IOActive sets itself apart from the competition by bringing a unique attacker’s perspective to every client engagement in order to maximize security investments and improve clients’ overall security posture and business resiliency.

While many organizations focus on defense by ‘default,’ IOActive’s approach encourages secure-by-design practices and policies by introducing businesses to an attacker’s mindset so that they can better understand the threat landscape.

Understanding our adversaries is crucial. Therefore, our penetration teams surpass standard penetration testing to offer our clients a range of red and blue team solutions that go beyond traditional approaches.

Check out our secured awards below:

2024 Corporate Excellence Awards

Best Research-Led Security Services Provider 2024 – USA

IOActive is a proud winner of this year’s ‘Best Research-Led Security Services Provider 2024 – USA’ through the implementation of up-to-date research embedded in the delivery of services. The Corporate Excellence Awards ‘showcase the companies and individuals that are committed to innovation, business growth, and providing the very best products and services to clients across a wide range of industries.’

2024 Cyber Security Excellence Awards

Pen Test Team of the Year

IOActive’s penetration testing team sets itself apart from the competition by bringing a unique attacker’s perspective to every client engagement in order to maximize security investments and improve clients’ overall security posture and business resiliency.

Cybersecurity Team of the Year

At IOActive, our team provides more than traditional penetration testing. We freely share our security expertise through a range of offerings including red and purple team exercises, attack simulations, security consultancy, and our highly specialized technical and programmatic services.

In addition, our leaders and consultants, spearheaded by CEO Jennifer Sunshine Steffens, have served long tenures in the cybersecurity field and are highly skilled in research, strategic security services, risk management, quality assurance, and regulatory requirements.

Cybersecurity Provider of the Year

IOActive is a worldwide leader in research-fueled security services implementing unique “attacker’s perspective” methodologies that form the foundation of the company’s global service offerings.

Whether our customers need guidance, on-the-ground penetration testing, or the assistance of a virtual CISO, we are committed to assuring client satisfaction.

We constantly strive to develop new ways to assist our customers in handling today’s complex threatscape and long component lifecycles. Every client engagement is tailored to maximize security investments and improve overall security postures and business resiliency.

2024 Global Infosec Awards

Trailblazing Cybersecurity Service Provider

Our team has conducted groundbreaking research across a variety of industries, including research into Boeing airplane networks, vehicle vulnerabilities uncovered by hacking into a Jeep, a card shuffler machine, and much more.

Our security services, spanning from silicon- and hardware-level analysis to real-world attack simulations, demonstrate our expertise in helping organizations achieve security resilience.

Just as each cyberattacker and threat is different, we ensure our services are tailored to the needs of our clients – and we take pride in exceeding expectations, every time.

Trailblazing Cybersecurity Research

Our diverse cybersecurity team, with a presence in over 30 countries worldwide, combines decades of experience with cutting-edge research to develop innovative security solutions suitable for a broad range of industries and companies.

We count Fortune 1000 organizations among our customers, and we provide research-backed services across industries including automotive, medical devices, aviation, and satellite communications. Overall, we are deeply committed to offering unrelenting value and support internationally and to all of our customers.

INSIGHTS, RESEARCH | May 2, 2024

Untested Is Untrusted: Penetration Tests and Red Teaming Key to Mature Security Strategy

Organizations need to know how well their defenses can withstand a targeted attack. Red team exercises and penetration tests fit the bill, but which is right for your organization?

Information security at even well-defended enterprises is often a complex mesh of controls, policies, people, and point solutions dispersed across critical systems both inside and outside the corporate perimeter. Managing that murky situation can be challenging for security teams, many of whom are understaffed and forced to simply check as many of the boxes as they can on the organization’s framework of choice and hope for the best.

Even in a known hostile climate replete with ransomware, sophisticated bad actors, and costly data breaches, security teams are often pressured to deploy tools and coordinate with disparate IT teams, then left to stand guard: monitoring, analyzing, patching, responding, and recovering.

This largely reactive posture is table stakes for most defenders, but on its own, it leaves one important question hanging: how well will all these defenses work when the bad guys come calling? Like an orchestra of talented musicians that has never had a dress rehearsal, or a well-conditioned team of athletes that has never scrimmaged, it’s difficult to know just how well the group will perform under real-world conditions. In information security in particular, organizations are often unsure whether their defenses will hold in an increasingly hostile world: a world of endless vulnerabilities, devastating exploits, and evolving attackers with powerful tools and expanding capabilities.

Security’s Testing Imperative

At its heart, effective security infrastructure is a finely engineered system. Optimizing and maintaining that system can benefit greatly from the typical engineer’s inclination to both build and test. From bird feeders to bridges, sewing machines to skyscrapers, no industrial product survives the journey from design to production without being pushed to its limits – and beyond – to see how it will fare in actual use. Tensile strength, compressive parameters, shear forces, thermal capacity, points of failure: every potential weakness is fair game. The concept of stress testing is common in every engineering discipline. Security should be no exception.

Security systems aren’t subjected to blistering heat, abrasive friction, or crushing weight, of course. But the best ones are regularly probed, prodded, and pushed to their technical limits. To accomplish this, organizations turn to one of two core testing methodologies: the traditional penetration test, and the more robust red team exercise. Both penetration testing and red teaming are proven, well-documented approaches for establishing the effectiveness of an organization’s defenses.

Determining which one is best for a particular organization comes down to understanding how penetration tests and red team exercises work and how they differ in practice, core purpose, and scope.

Penetration Testing: Going Beyond Vulnerability Assessment

Penetration Tests (“pentests” for short) are a proactive form of application and infrastructure security evaluation in which an ethical hacker is authorized to scan an organization’s systems to discover weaknesses that could lead to compromise or a data breach. The pentester’s objectives are to identify vulnerabilities in the client environment, exploit them to demonstrate their impact, and document the findings.

Penetration testing is generally considered the next step up from traditional vulnerability assessments. Vulnerability assessments – usually the product of software-driven, automated scanning and reporting – expose many unaddressed weaknesses by cross-referencing the client’s systems and software with public lists of known vulnerabilities. Penetration testing takes the discipline a step further, adding the expert human element in order to recreate the steps a real cybercriminal might take to compromise systems. Techniques such as vulnerability scanning, brute-force password attacks, web app exploitation, and social engineering can be included in the test’s stated parameters.

Penetration tests are more targeted and deliver a more accurate list of vulnerabilities than a vulnerability assessment. Because exploitation is often included, the pentest shows client organizations which vulnerabilities pose the biggest risk of damage, helping to prioritize mitigation efforts. However, penetration tests are usually contracted with strict guidelines for time and scope, and because internal stakeholders are generally aware the pentest is taking place, they provide little value for measuring detection and response and no visibility into the security posture of IT assets outside the scope of the examination.

Penetration Testing in Action

Traditional penetration tests are a go-to approach for organizations that want to immediately address exploitable vulnerabilities and upgrade their approach beyond static vulnerability scanning. Pentests provide valuable benefits in use cases such as:

  • Unearthing hidden risk: Penetration tests identify critical weaknesses in a single system, app or network that automated scanning tools often miss. As a bonus, pentests weed out the false positives from machine scanning that can waste valuable security team resources.
  • Validating security measures: Penetration testing can help validate the effectiveness of security controls, policies, and procedures, ensuring they work as intended.
  • Governance and compliance: Penetration testing allows an organization to check and prove that security policies, regulations and other related mandates are being met, including those that explicitly require regular pentests.
  • Security training: The reported outcome of a penetration test makes for a valuable training tool for both security teams and end users, helping them understand how vulnerabilities can impact their organization.

  • Business continuity planning: Penetration testing also supports the organization’s business continuity plan, identifying potential threats and vulnerabilities that could result in system downtime and data loss.

Red Team Exercises: Laser Focus Attacks, Big-Picture Results

Red Teams take a more holistic — and more aggressive — approach to testing an organization’s overall security under real-world conditions. Groups of expert ethical hackers simulate persistent adversarial attempts to compromise the target’s systems, data, corporate offices, and people.

Red team exercises focus on the same tactics, techniques, and procedures (TTPs) used by real-world adversaries. Where penetration tests aim to uncover a comprehensive list of vulnerabilities, red teams emulate attacks that focus on the damage a real adversary could inflict. Weak spots are leveraged to gain initial access, move laterally, escalate privileges, exfiltrate data, and avoid detection. The red team’s goal is to compromise an organization’s most critical digital assets: its crown jewels. Because the red team’s activities are stealthy and known only to select client executives (and sometimes dedicated “blue team” defenders from the organization’s own security team), the methodology provides far more comprehensive visibility into the organization’s security readiness and its ability to stand up against a real malicious attack. More than simply a roster of vulnerabilities, the result is a detailed report card on defenses, attack detection, and incident response that enterprises can use to make substantive changes to their programs and level up their security maturity.

Red Team Exercises in Action

Red team exercises take security assessments to the next level, challenging more mature organizations to examine both the points of entry within their attack surface that a malicious actor may exploit and their detection and response capabilities. Red teaming proves its mettle through:

  • Real-world attack preparation: Red team exercises emulate attacks that can help organizations prepare for the real thing, exposing flaws in security infrastructure, policy, process and more.
  • Testing incident response: Red team exercises excel at testing a client’s incident response strategies, showing how quickly and effectively the internal team can detect and mitigate the threat.
  • Assessing employee awareness: In addition to grading the security team, red teaming is also used to measure security awareness among employees. Through approaches like spear phishing, business email compromise, and on-site impersonation, red teams highlight areas where additional employee training is needed.
  • Evaluating physical security: Red teams go beyond basic cyberthreats, assessing the effectiveness of physical security measures — locks, card readers, biometrics, access policies, and employee behaviors — at the client’s various locations.

  • Decision support for security budgets: Finally, red team exercises provide solid, quantifiable evidence to support hiring, purchasing, and other security-related budget initiatives aimed at bolstering a client’s security posture and maturity.

Stress Test Shootout: Red Teams and Penetration Tests Compared

When choosing between penetration tests and red team exercises, comparing and contrasting key attributes is helpful in determining which is best for the organization given its current situation and its goals:

  • Objective: Penetration tests identify vulnerabilities en masse and strengthen security; red team exercises simulate real-world attacks and test incident response.
  • Scope: Penetration tests are tightly defined and agreed upon before testing begins; red team exercises are goal-oriented, often encompassing the entire organization’s technical, physical, and human assets.
  • Duration: Penetration tests are typically shorter, ranging from a few days to a few weeks; red team exercises run longer, from several weeks to a few months.
  • Realism: Penetration tests may not faithfully simulate real-world threats; red team exercises are designed to closely mimic real-world attack scenarios.
  • Targets: Penetration tests examine specific systems or applications; red team exercises take on the entire organization, including human, physical, and digital layers.
  • Notification: Penetration test teams are notified and aware the test is taking place; red team exercises are unannounced, to mimic real attacks and test responses.
  • Best for: Penetration tests suit firms just getting started with proactive testing or those that perform limited tests on a regular cycle; red team exercises suit organizations with mature security postures that want to put their defenses to the test.

It’s also instructive to see how each testing methodology might work in a realistic scenario.

Scenario 1: Pentesting a healthcare organization

Hospitals typically feature a web of interconnected systems and devices, from patient records and research databases to Internet-capable smart medical equipment. Failure to secure any aspect can result in data compromise and catastrophic system downtime that violates patient privacy and disrupts vital services. A penetration test helps unearth a broad array of security weak spots, enabling the hospital to maintain systems availability, data integrity, patient confidentiality and regulatory compliance under mandates such as the Health Insurance Portability and Accountability Act (HIPAA).

A pentest for a healthcare org might focus on specific areas of the hospital’s network or on critical applications used to track and treat patients. If there are concerns around network-connected medical equipment and its potential impact on patient care, a hardware pentest can uncover critical vulnerabilities an attacker could exploit to gain access, modify medication dosages, and maintain a network foothold. The results of the pentest help identify high-risk issues and prioritize remediation, but do little to determine whether the organization is ready and able to respond to a breach.

Scenario 2: Red teaming a healthcare organization

While the pentest is more targeted and limited in scope, a red team exercise against the same healthcare organization includes not only all of the networks and applications, but also the employees and physical locations. Here, red team exercises focus on bypassing the hospital’s defenses to provide valuable insights into how the organization might fare against sophisticated, real-world attackers. These exercises expose technical weaknesses, risky employee behaviors, and process shortcomings, helping the hospital continually bolster its resilience.

The red team begins with reconnaissance, profiling the employees, offices, and external attack surface in search of potential avenues for exploitation and initial access. An unmonitored side entrance, someone in scrubs tailgating a nurse into a secure area, a harmless-looking spearphish: the red team will exploit whatever weakness it needs to reach its goals and act on its objectives. That goal may be to access a specific fake patient record and modify the patient’s contact information, or to exfiltrate data to test the hospital’s network monitoring capabilities. In the end, the healthcare organization will have a better understanding of its readiness to withstand a sophisticated attack, where to improve its defenses, and how to respond more effectively.

Simulated Attacks, Authentic Results

In security, as in any other kind of engineered system, without testing there can be no trust. Testing approaches like penetration tests and red team exercises are paramount for modern, digital-centric organizations operating in a hostile cyber environment.

These simulated attack techniques help to identify and rectify technical as well as procedural vulnerabilities, enhancing the client’s overall cybersecurity posture. Taken together, regular penetration tests and red team exercises should be considered integral components of a robust and mature cybersecurity strategy. Most organizations will start with penetration testing to improve the security of specific applications and areas of their network, then graduate to red team exercises that measure the effectiveness of their security defenses along with their detection and response capabilities.

Organizations that prioritize such testing methods will be better equipped to defend against threats, reduce risks, and maintain the trust of their users and customers in today’s challenging digital threatscape.

INSIGHTS | April 19, 2024

Lessons Learned and S.A.F.E. Facts Shared During Lisbon’s OCP Regional Summit

I don’t recall precisely what year the change happened, but at some point, the public cloud became critical infrastructure with corresponding high national security stakes. That reality brought rapid maturity and accompanying regulatory controls for securing and protecting the infrastructure and services of cloud service providers (CSPs).

Next week at the 2024 OCP Regional Summit in Lisbon, teams will be sharing new security success stories and diving deeper into the technical elements and latest learnings in securing current generation cloud infrastructure devices. IOActive will be present throughout the event, delivering new insights related to OCP S.A.F.E. and beyond.

First thing Thursday morning (April 25, 8:50am – 9:10am | Floor 1 – Auditorium IV | Security and Data Protection track), our suave Director of Services, Alfredo Pironti, and rockstar senior security consultant and researcher, Sean Rivera, will present “Recent and Upcoming Security Trends in Cloud Low-level Hardware Devices,” where they dive deep into a new survey of real-world security issues and flaws that IOActive has encountered over recent years.

I’m lucky enough to have had a preview of the talk, and I’m confident it will open attendees’ eyes to the types of systemic vulnerabilities specialized security testing can uncover. Sean and Alfredo share these new insights on the threats associated with NVMe-based SSDs and SR-IOV-enabled cards before covering recommendations on improving secure development processes and proposing new testing scope improvements.

Shortly afterwards on Thursday (April 25, 9:55am – 10:15am | Floor 1 – Auditorium IV | Security and Data Protection track), Alfredo Pironti will be back on stage for a panel session focused on “OCP S.A.F.E. Updates” where he, along with Alex Tzonkov of AMD and Eric Eilertson of Microsoft, will discuss the latest progress and innovations behind the OCP S.A.F.E. program. I think a key component of the panel discussion will be the learnings and takeaways from the firsthand experiences of early adopters. You know how it goes…the difference between theory and practice.

I’m pretty sure both sessions will be recorded, so folks that can’t make it to lovely Lisbon this time round should be able to watch these IOActive stars present and share their knowledge and insights in the days or weeks following the OCP summit. You’ll find more information about OCP S.A.F.E. and how IOActive has been turning theory into practice on our OCP S.A.F.E. Certified Assessor page.

Both Alfredo and Sean, along with a handful of other IOActive folks, will be present throughout the Lisbon summit. Don’t be shy, say hi!