RESEARCH | December 6, 2021

Cracking the Snapcode

Daniel Moder, IOActive Security Consultant, explores the ever-expanding world of barcode formats, specifically by cracking Snapcodes.

Snapcode is a proprietary 2D barcode system that can trigger a variety of actions when scanned in the Snapchat app. Unlike some barcode systems, there is no public documentation about how the Snapcode system works. Daniel delves into the inner workings of Snapcode to answer the following questions:

  1. What data do Snapcodes encode?
  2. How do Snapcodes encode data?
  3. What actions can be triggered when these codes are scanned?


GUEST BLOG | October 6, 2021

The Risk of Cross-Domain Sharing with Google Cloud’s IAM Policies | Chris Cuevas and Erik Gomez, SADA

We’re part of the security team at SADA, a leading Google Cloud Premier Partner. Our backgrounds are notably diverse, and we appreciate the need for visibility into your core access controls.

If you’re involved in securing your enterprise’s Google Cloud Platform (GCP) environment, the organization policy for Domain Restricted Sharing (DRS) is ideally already a well-regarded part of your security toolbox. In the event DRS hasn’t made its way into your arsenal, please take a moment after reading this post to review these docs.

While we’re not covering DRS in depth here, we will be discussing related concepts. We believe it is crucial for an enterprise to maintain full visibility into which identities have access to its GCP resources. DRS is intended to prevent external or non-enterprise-managed identities from obtaining or being provided Identity and Access Management (IAM) role bindings within your GCP environment.

If we take this one step further, we believe an enterprise should also maintain visibility into how its managed identities are used within external GCP environments. This is the basis of this post, in which we’ll raise a number of concerns.

The SADA security team has found a feature of IAM that presents challenges with detection and mitigation. We’ll refer to this IAM feature as Cross-Domain Sharing (XDS).

Introduction to XDS

Today, external parties with GCP environments can provide IAM role bindings to your enterprise’s managed identities. These IAM policies can be set and made effective without your knowledge or awareness, resulting in GCP resources being accessed beyond the boundaries of your enterprise. While we agree there are a number of valid use cases for these XDS IAM policies, we are not comfortable with the lack of enterprise visibility.

Malicious actors are constantly seeking new avenues to gain any type of foothold within a targeted organization. Targeting Cloud DevOps engineers and SREs with social engineering attacks yields high rewards, as these employees have elevated privileges and trusted relationships.

Acknowledging this mindset, let’s consider the following:

Alice (alice@external.org) views internal.org as a prime target for a social engineering campaign combined with her newly discovered XDS bug. She quickly spins up a new GCP project called “Production Secrets” and adds a GCP IAM role binding to it for Bob (bob@internal.org) (see the diagram below).

Alice then initiates a social engineering campaign targeting Bob, informing him of the new “Production Secrets” project. As Alice is not part of the internal.org organization, the “Production Secrets” project appears in Bob’s list of available GCP projects without an organization association. And if Bob searches for “Production Secrets” using the search bar of the GCP Cloud Console, the project will again be presented with no clear indicator that it isn’t actually affiliated with the internal.org GCP organization. Not wanting to miss any team deadlines related to adopting the new project, Bob migrates secrets over and begins creating new ones within “Production Secrets.” Alice rejoices as internal.org’s secrets are now fully disclosed and available for additional attacks.

[Diagram: Cross-Domain Sharing (XDS) example]

If your organization’s identities are being used externally, would you be able to prevent, or even detect, this type of activity? If Bob connects to this external project, what other attacks could he be vulnerable to in this scenario?

Keeping in mind that Google Cloud IAM identities, or “members,” in IAM policies can include users, groups, and domains, bad actors can easily expand their target scope from a single user identity to your entire enterprise. Once the nefarious GCP project “Production Secrets” is in place and accessible by everyone in your enterprise with GCP environment access, the bad actors can wait for unintended or accidental access while developing more advanced phishing ruses.
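To make the mechanism concrete, here is a minimal sketch, with hypothetical project, role, and account names, of how an external project owner could grant such a binding from their side using the Cloud Resource Manager API; it only illustrates the read-modify-write pattern and is not SADA’s tooling.

    # Minimal sketch (hypothetical names): an external org granting an IAM role
    # on its own project to an identity managed by someone else's domain.
    from googleapiclient import discovery  # pip install google-api-python-client

    crm = discovery.build("cloudresourcemanager", "v1")
    project_id = "production-secrets"  # hypothetical external project ID

    # Read-modify-write of the external project's IAM policy.
    policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
    policy.setdefault("bindings", []).append({
        "role": "roles/secretmanager.admin",
        # Members can be a single user, a group, or an entire domain, e.g.:
        #   "user:bob@internal.org", "group:devops@internal.org", "domain:internal.org"
        "members": ["user:bob@internal.org"],
    })
    crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()

Nothing in this flow requires any action, or grants any visibility, on the internal.org side, which is exactly the concern raised above.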

Now, the good news!

The team at Google Cloud has been hard at work and recently released a new GCP Organization Policy constraint specifically to address this concern. Once enabled, the “constraints/resourcemanager.accessBoundaries” constraint removes this concern as a broad phishing vector by no longer presenting external and no-organization GCP projects within the Cloud Console and associated APIs. While this approach does not address all risks related to XDS, it does reduce the effective target scope.

Before you run off and enable this constraint, remember there are valid use cases for XDS. We recommend identifying all XDS projects and assessing whether they are valid or whether they may be adversely affecting your enterprise’s managed identities. This exercise may also help you identify external organizations, such as contractors, vendors, and partners, that should be accounted for in the Organization Policy constraint.
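As a starting point for that inventory, here is a minimal sketch against the Cloud Resource Manager v1 API, with a hypothetical organization ID, that lists the projects visible to a set of credentials and flags any whose parent is not an organization you recognize:

    # Minimal sketch: flag GCP projects visible to the caller's credentials
    # whose parent is missing or outside the organizations we manage.
    from googleapiclient import discovery  # pip install google-api-python-client

    KNOWN_ORG_IDS = {"123456789012"}  # hypothetical: your organization's numeric ID(s)

    crm = discovery.build("cloudresourcemanager", "v1")
    request = crm.projects().list()
    while request is not None:
        response = request.execute()
        for project in response.get("projects", []):
            parent = project.get("parent") or {}
            outside = (parent.get("type") != "organization"
                       or parent.get("id") not in KNOWN_ORG_IDS)
            if outside:
                # No-org projects have no parent at all; projects nested under
                # folders show a folder parent and can be resolved with
                # projects().getAncestry() if needed.
                print(f"Review: {project['projectId']} (parent: {parent or 'none'})")
        request = crm.projects().list_next(previous_request=request,
                                           previous_response=response)

Run under the credentials of the identities you care about (or via Cloud Asset Inventory at scale), this gives you a review list before you turn on the constraint.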

To further reduce the chances of successful exfiltration of your enterprise’s sensitive data from existing GCP resources via XDS abuse, consider also implementing Google Cloud’s VPC Service Controls (VPC-SC).

Is your GCP environment at risk, or do you have security questions about your GCP environment? SADA and IOActive are here to help. Contact SADA for a Cloud Security Assessment and IOActive for a Cloud Penetration Test.

Chris Cuevas, Sr Security Engineer, SADA
Erik Gomez, Associate CTO, SADA


Note: This concern has been responsibly reported to the Google Cloud Security team.

EDITORIAL | August 3, 2021

Counterproliferation: Doing Our Part

IOActive has always done its part in preventing the misuse of our work.

IOActive’s mission is to make the world a safer and more secure place. In the past, we’ve worked to innovate in the responsible disclosure process, with the most visible and memorable example being Dan Kaminsky’s research into DNS.[1] This involved one of the first uses of widespread, multiparty coordinated responsible disclosure, which quickly became the gold standard as referenced in CERT’s Guide to Responsible Disclosure.[2]

We don’t always talk publicly about our non-technical innovations, since they frequently aren’t as interesting as the groundbreaking cybersecurity research our team delivers. However, a couple of recent events have prompted us to speak a bit about some of these less glamorous, but nonetheless extremely important, innovations. First, we were deeply saddened by the passing of Dan Kaminsky, and we would like to share how we’re building upon his legacy of non-technical innovation in vulnerability research. Second, there was a significant disclosure, covered by global media organizations, regarding the misuse of weaponized mobile phone vulnerabilities, packaged with surveillance tools, to target journalists and others for political purposes rather than for lawful purposes consistent with basic human rights.

What We’re Doing

There are three primary elements to our policies that prevent the misuse of the vulnerabilities we discover.

Responsible Disclosure

IOActive has always had a policy of responsible disclosure. We transparently publish our policy on our website for everyone to see.[3] Over time, we’ve taken additional innovative steps to enhance this disclosure process.

We’ve built upon Dan’s innovation in responsible disclosure by sharing our research with impacted industries through multinational Information Sharing and Analysis Centers (ISACs).[4] Likewise, we’ve worked to confidentially disclose more of our pre-release research to our clients when it may impact them. As our consultants and researchers find new and innovative ways to break things, we’ll find new and innovative ways to disclose their work and associated consequences, with the goal of facilitating the best outcomes for all stakeholders.

Policy on the Sale of Vulnerabilities

IOActive is very clear on this simple policy, both publicly and with our clients: we do not sell vulnerabilities.

A well-developed market for vulnerabilities has existed for some time.[5] Unfortunately, other cybersecurity firms do sell vulnerabilities, and may not have the necessary ethical compartmentalization and required policies in place to safeguard the security and other interests of their clients and the public at large.

While we support the bug bounty concept, which can help reduce the likelihood of vulnerability sales and support the independent research community, bug bounties as a commercial service do not adequately address concerns such as personnel vetting or the testing of resources only available onsite at a client.

Contractual Responsible Disclosure Requirement

As a standard practice in our commercial work, we require the ability to report vulnerabilities we discover in third-party products externally only to the affected manufacturers, in addition to the client, to ensure that an identified defect can be properly corrected. IOActive offers to coordinate this disclosure process to the manufacturers on behalf of our clients.

This normally leads to a virtuous cycle of improved security for everyone through our commercial work. Any vulnerability discovery benefits not only the client but the entire ecosystem, and both in turn benefit from the vulnerability discovery work we do for other clients.

Every person reading this post has benefited from better security in the products and services they and their organizations use every day, due to the combination of our fantastic consultants and clients who support doing the right thing for the ecosystem.

Fundamentally, when a vulnerability is corrected, that risk is retired for everyone who updates to the secure version, and the weaponization of the vulnerability is prevented. When those fixes are pushed out through an automated update process, the benefits accrue without any active effort on the part of end users or system maintainers.

How to Help

Make it Easy to Receive Disclosures

As a prolific vulnerability discloser, we see a wide spectrum of maturity in receiving and handling vulnerability disclosures. We must often resort to creative and time-intensive efforts to locate a contact who will respond to our attempts to disclose a vulnerability. Occasionally, we run into a dead end and are unable to make productive contact with organizations.

Here’s a short list of actions that will help make it easy to collect vulnerability information your organization really needs:

  1. Run a Vulnerability Disclosure Program. A vulnerability disclosure management program provides bidirectional, secure communication between the discloser and the impacted organization in a formal, operationalized manner. You can run such a program with internal resources or outsource it to a commercial firm providing managed vulnerability disclosure program services.
  2. Be Easy to Find. It should be simple and effortless for a researcher to find details on the disclosure process for any organization. A good test is to search for “<Your Organization Name> Vulnerability Disclosure” or “<Your Organization Name> Vulnerability Report” in a search engine. Ideally, your public disclosure page should appear in the first page or two of results.

Cesar Cerrudo, CTO of IOActive Labs, has a more in-depth post discussing how to get the best outcomes from working with researchers during the vulnerability disclosure process in his post, 10 Laws of Disclosure.[6]

Working with Security Firms

When you’re selecting a security firm for vulnerability discovery work, you should know what they will do with any vulnerabilities they find. Here are a few core questions for which any firm should have detailed, clear answers:

  • Does the company have a responsible disclosure policy?
  • What is the company’s policy regarding the sale of vulnerabilities?
  • Does the company require responsible disclosure of the vulnerabilities it discovers during client work?
  • How does the company handle third-party responsible disclosure for its clients?

Participate in the Discussion

The global norms around the sale and weaponization of cybersecurity vulnerabilities, as well as their integration into surveillance tools, are being established today. More constructive, thoughtful public debate now can prevent the current deleterious conduct from becoming, through inattention and inaction, a standard of global behavior with its associated dystopic outcomes.


References

[1] https://www.cnet.com/tech/services-and-software/security-bites-107-dan-kaminsky-talks-about-responsible-vulnerability-disclosure/
[2] https://resources.sei.cmu.edu/asset_files/SpecialReport/2017_003_001_503340.pdf
[3] https://ioactive.com/disclosure-policy/
[4] https://www.nationalisacs.org/
[5] https://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR610/RAND_RR610.pdf
[6] https://ioactive.com/10-laws-of-disclosure/

RESEARCH | July 30, 2021

Breaking Protocol (Buffers): Reverse Engineering gRPC Binaries

gRPC is an open-source RPC framework from Google which leverages automatic code generation to allow easy integration with a number of languages. Architecturally, it follows the standard seen in many other RPC frameworks: services are defined which determine the available RPCs. It uses HTTP/2 as its transport and supports plain HTTP as well as HTTPS for secure communication. Services, along with the messages that act as the structures passed to and returned by the defined RPCs, are defined as protocol buffers. Protocol buffers are a common serialization solution, also designed by Google.
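For orientation, here is a minimal sketch of what calling such a service looks like from Python once the protocol buffer definitions have been compiled; the route_pb2 modules and the message and RPC names are hypothetical stand-ins for whatever a target binary actually defines.

    import grpc            # pip install grpcio
    import route_pb2       # hypothetical: messages generated by protoc
    import route_pb2_grpc  # hypothetical: service stubs generated by protoc

    # Plain HTTP/2; use grpc.secure_channel(...) with credentials for TLS.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = route_pb2_grpc.RouteServiceStub(channel)
        request = route_pb2.GetRouteRequest(route_id=42)
        response = stub.GetRoute(request, timeout=5.0)
        print(response)

Recovering those service and message definitions from a compiled binary is exactly what the research digs into.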

INSIGHTS | July 19, 2021

Techspective Podcast – The Value of Red and Purple Team Engagements

Episode 070. Tony Bradley of Techspective chats with John Sawyer, IOActive Director of Services, Red Team, about the wide-ranging effects of alert fatigue, the COVID-19 pandemic, physical security, and more, all of which directly affect cybersecurity resiliency and the efficacy and benefits of red/purple team and pen-testing services.

GUEST BLOG | June 9, 2021

Cybersecurity Alert Fatigue: Why It Happens, Why It Sucks, and What We Can Do About It | Andrew Morris, GreyNoise

Introduction

“Although alert fatigue is blamed for high override rates in contemporary clinical decision support systems, the concept of alert fatigue is poorly defined. We tested hypotheses arising from two possible alert fatigue mechanisms: (A) cognitive overload associated with amount of work, complexity of work, and effort distinguishing informative from uninformative alerts, and (B) desensitization from repeated exposure to the same alert over time.”

Ancker, Jessica S., et al. “Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system.” BMC Medical Informatics and Decision Making, vol. 17, no. 1, 2017.

My name is Andrew Morris, and I’m the founder of GreyNoise, a company devoted to understanding the internet and making security professionals more efficient. I’ve probably had a thousand conversations with Security Operations Center (SOC) analysts over the past five years. These professionals come from many different walks of life and a diverse array of technical backgrounds and experiences, but they all have something in common: they know that false positives are the bane of their jobs, and that alert fatigue sucks.

The excerpt above is from a medical journal focused on drug alerts in a hospital, not a cybersecurity publication. What’s strangely refreshing about seeing these issues in industries outside of cybersecurity is being reminded that alert fatigue has numerous and challenging causes. The reality is that alert fatigue occurs across a broad range of industries and situations, from healthcare facilities to construction sites and manufacturing plants to oil rigs, subway trains, air traffic control towers, and nuclear plants.

I think there may be some lessons we can learn from these other industries. For example, while there are well over 200 warning and caution situations for Boeing aircraft pilots, the company has carefully prioritized their alert system to reduce distraction and keep pilots focused on the most important issues to keep the plane in the air during emergencies.

Many cybersecurity companies cannot say the same. Often these security vendors will oversimplify the issue and claim to solve alert fatigue, but frequently make it worse. The good news is that these false-positive and alert fatigue problems are neither novel nor unique to our industry.

In this article, I’ll cover what I believe are the main contributing factors to alert fatigue for cybersecurity practitioners, why alert fatigue sucks, and what we can do about it.

Contributing Factors

Alarm fatigue or alert fatigue occurs when one is exposed to a large number of frequent alarms (alerts) and consequently becomes desensitized to them. Desensitization can lead to longer response times or missing important alarms.

https://en.wikipedia.org/wiki/Alarm_fatigue

Technical Causes of Alert Fatigue

Overmatched, misleading or outdated indicator telemetry

Low-fidelity alerts are the most obvious and common contributor to alert fatigue. This results in over-alerting on events with a low probability of being malicious, or matching on activity that is actually benign.

One good example of this is low-quality IP block lists – these lists identify “known-bad IP addresses,” which should be blocked by a firewall or other filtering mechanism. Unfortunately, these lists are often under-curated or completely uncurated output from dynamic malware sandboxes.

Here’s an example of how a “known-good” IP address can get onto a “known-bad” list: A malicious binary being detonated in a sandbox attempts to check for an Internet connection by pinging Google’s public DNS server (8.8.8.8). This connection attempt might get mischaracterized as command-and-control communications, with the IP address incorrectly added to the known-bad list. These lists are then bought and sold by security vendors and bundled with security products that incorrectly label traffic to or from these IP addresses as “malicious.”
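A small amount of curation goes a long way here. The following minimal sketch, using a hand-picked allowlist and example addresses, drops well-known benign infrastructure from a raw sandbox-derived feed before it ever reaches a blocklist:

    # Known-benign infrastructure that sandboxed malware commonly touches for
    # connectivity checks; these should never end up on a "known-bad" list.
    KNOWN_BENIGN = {
        "8.8.8.8", "8.8.4.4",  # Google Public DNS
        "1.1.1.1",             # Cloudflare DNS
    }

    def curate(raw_indicators):
        """Return only the indicators that are not obviously benign."""
        return sorted(set(raw_indicators) - KNOWN_BENIGN)

    raw_feed = ["203.0.113.7", "8.8.8.8", "198.51.100.23", "8.8.8.8"]
    print(curate(raw_feed))  # ['198.51.100.23', '203.0.113.7']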

Low-fidelity alerts can also be generated when a reputable source releases technical indicators that can be misleading without additional context. Take, for instance, the data accompanying the otherwise excellent 2016 Grizzly Steppe report from the U.S. Department of Homeland Security (DHS). The CSV/STIX files contained a list of 876 IP addresses, including 44 Tor exit nodes and four Yahoo mail servers, which, if loaded blindly into a security product, would raise alerts every time the organization’s network attempted to route an email to a Yahoo email address. As Kevin Poulsen noted in his Daily Beast article calling out the authors of the report, “Yahoo servers, the Tor network, and other targets of the DHS list generate reams of legitimate traffic, and an alarm system that’s always ringing is no alarm system at all.”

Another type of low-fidelity alert is the overmatched or over-sensitive heuristic, as seen below:

Alert: “Attack detected from remote IP address 1.2.3.4: IP address detected attempting to brute-force RDP service.”
Reality: A user came back from vacation and got their password wrong three times.

Alert: “Ransomware detected on WIN-FILESERVER-01.”
Reality: The file server ran a scheduled backup job.

Alert: “TLS downgrade attack detected by remote IP address: 5.6.7.8.”
Reality: A user with a very old web browser attempted to use the website.

It can be challenging for security engineering teams to construct correlation and alerting rules that accurately identify attacks without triggering false positives due to overly sensitive criteria.
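As a toy illustration of how much the chosen criteria matter, the sketch below contrasts a naive “three failed logins” rule, which fires on the returning vacationer above, with a rule that only fires when a single source hits many accounts in a short window; all thresholds and field names are made up for the example.

    from collections import defaultdict

    def naive_rule(failed_logins):
        """Alert on any account with 3+ failures -- fires on the returning vacationer."""
        per_account = defaultdict(int)
        for event in failed_logins:
            per_account[event["account"]] += 1
        return [acct for acct, count in per_account.items() if count >= 3]

    def tuned_rule(failed_logins, min_accounts=10, window_s=60):
        """Alert only when one source hits many distinct accounts in a short window."""
        per_source_window = defaultdict(set)
        for event in failed_logins:
            key = (event["source_ip"], event["timestamp"] // window_s)
            per_source_window[key].add(event["account"])
        return [src for (src, _), accounts in per_source_window.items()
                if len(accounts) >= min_accounts]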

Legitimate computer programs do weird things

Before I founded GreyNoise, I worked on the research and development team at Endgame, an endpoint security company later acquired by Elastic. One of the most illuminating realizations I had while working on that product was just how many software applications are programmed to do malware-y looking things. I discovered that tons of popular software applications were shipped with unsigned binaries and kernel drivers, or with sketchy-looking software packers and crypters.

These are all examples of a type of supply chain integrity risk, but unlike SolarWinds, which shipped compromised software, these companies are delivering software built using sloppy or negligent software components.

Another discovery I made during my time at Endgame was how common it is for antivirus software to inject code into other processes. In a vacuum, this behavior should (and would) raise all kinds of alerts to a host-based security product. However, upon investigation by an analyst, this was often determined to be expected application behavior: a false positive.

Poor security product UX

For all the talent that security product companies employ in the fields of operating systems, programming, networking, and systems architecture, they often lack skills in user-experience and design. This results in security products often piling on dozens—or even hundreds—of duplicate alert notifications, leaving the user with no choice but to manually click through and dismiss each one. If we think back to the Boeing aviation example at the beginning of this article, security product UIs are often the equivalent of trying to accept 100 alert popup boxes while landing a plane in a strong crosswind at night in a rainstorm. We need to do a better job with human factors and user experience.

Expected network behavior is a moving target

Anomaly detection is a strategy commonly used to identify “badness” in a network. The theory is to establish a baseline of expected network and host behavior, then investigate any unplanned deviations from this baseline. While this strategy makes sense conceptually, corporate networks are filled with users who install all kinds of software products and connect all kinds of devices. Even when hosts are completely locked down and the ability to install software packages is strictly controlled, the IP addresses and domain names with which software regularly communicates fluctuate so frequently that it’s nearly impossible to establish any meaningful or consistent baseline.
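To see why that baseline is so fragile, consider a minimal sketch of the usual approach, a simple mean-plus-a-few-standard-deviations threshold over some per-host metric; every new software package or SaaS dependency shifts the distribution and turns ordinary traffic into “anomalies.”

    import statistics

    def is_anomalous(history, observed, k=3.0):
        """Flag anything beyond mean + k * stddev of the historical metric."""
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid a zero-width baseline
        return observed > mean + k * stdev

    outbound_conns_per_hour = [120, 130, 118, 125, 122, 127]  # last week's "normal"
    print(is_anomalous(outbound_conns_per_hour, 310))         # True -- but is it bad?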

There are entire families of security products that employ anomaly detection-based alerting with the promise of “unmatched insight” but often deliver mixed or poor results. This toil ultimately rolls downhill to the analysts, who either open an investigation for every noisy alert or numb themselves to the alerts generated by these products and ignore them. As a matter of fact, a recent survey by Critical Start found that 49% of analysts turn off high-volume alerting features when there are too many alerts to process.

Home networks are now corporate networks

The pandemic has resulted in a “new normal” of everyone working from home and accessing the corporate network remotely. Before the pandemic, some organizations were able to protect themselves by aggressively inspecting north-south traffic coming in and out of the network, on the assumption that all intra-company traffic was inside the perimeter and “safe.” Today, however, the entire workforce is outside the perimeter, and aggressive inspection tends to generate alert storms and lots of false positives. If this perimeter-only security model wasn’t dead already, the pandemic has certainly killed it.

Cyberattacks are easier to automate

A decade ago, successfully exploiting a computer system involved a lot of work. The attacker had to profile the target computer system, go through a painstaking process to select the appropriate exploit for the system, account for things like software version, operating system, processor architecture and firewall rules, and evade host- and system-based security products.

Today, there are countless automated exploitation and phishing frameworks, both open source and commercial. As a result, exploitation of vulnerable systems is now cheaper, easier, and requires less operator skill.

Activity formerly considered malicious is being executed at internet-wide scale by security companies

“Attack Surface Management” is a cybersecurity sub-industry whose companies identify vulnerabilities in their customers’ internet-facing systems and alert them accordingly. This is a good thing, not a bad thing; the issue is not what these companies do, but how they do it.

Most Attack Surface Management companies constantly scan the entire internet to identify systems with known vulnerabilities and organize the returned data by vulnerability and network owner. In previous years, an unknown remote system checking for vulnerabilities on a network perimeter was a powerful indicator of an oncoming attack. Now, alerts raised from this activity provide less actionable value to analysts and happen more frequently as more of these companies enter the market.

The internet is really noisy

Hundreds of thousands of devices, malicious and benign, are constantly scanning, crawling, probing, and attacking every single routable IP address on the entire internet for various reasons. The more benign use cases include indexing web content for search engines, searching for malware command-and-control infrastructure, the above-mentioned Attack Surface Management activity, and other internet-scale research. The malicious use cases are similar: take a reliable, common, easy-to-exploit vulnerability, attempt to exploit every single vulnerable host on the entire internet, then inspect the successfully compromised hosts to find access to interesting organizations.

At GreyNoise, we refer to the constant barrage of Internet-wide scan and attack traffic that every routable host on the internet sees as “Internet Noise.” This phenomenon causes a significant amount of pointless alerts on internet-facing systems, forcing security analysts to constantly ask “is everyone on the internet seeing this, or just us?” At the end of the day, there’s a lot of this noise: over the past 90 days, GreyNoise has analyzed almost three million IP addresses opportunistically scanning the internet, with 60% identified as benign or unknown, and only 40% identified as malicious.
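One practical answer to that question is to check an alerting IP against a noise-classification service before spending analyst time on it. The sketch below uses GreyNoise’s Community API as an example; the endpoint path and response fields shown are assumptions, so verify them against the current API documentation before relying on this.

    # Minimal sketch (endpoint and fields are assumptions -- check the current
    # GreyNoise API docs): is this IP opportunistically scanning everyone,
    # or is it aimed specifically at us?
    import requests  # pip install requests

    def looks_like_internet_noise(ip, api_key=None):
        headers = {"key": api_key} if api_key else {}
        resp = requests.get(f"https://api.greynoise.io/v3/community/{ip}",
                            headers=headers, timeout=10)
        if resp.status_code != 200:
            return False  # unknown to the service; treat as "just us"
        data = resp.json()
        return bool(data.get("noise") or data.get("riot"))

    if looks_like_internet_noise("203.0.113.7"):
        print("Likely background internet noise -- deprioritize this alert")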

Non-Technical Causes of Alert Fatigue

Fear sells

An unfortunate reality of human psychology is that we fear things that we do not understand, and there is absolutely no shortage of scary things we do not understand in cybersecurity. It could be a recently discovered zero-day threat, or a state-sponsored hacker group operating from the shadows, or the latest zillion-dollar breach that leaked 100 million customer records. It could even be the news article written about the security operations center that protects municipal government computers from millions of cyberattacks each day. Sales and marketing teams working at emerging cybersecurity product companies know that fear is a strong motivator, and they exploit it to sell products that constantly remind users how good of a job they’re doing.

And nothing justifies a million-dollar product renewal quite like security “eye candy,” whether it’s a slick web interface containing a red circle with an ever-incrementing number showing the number of detected and blocked threats, or a 3D rotating globe showing “suspicious” traffic flying in to attack targets from many different geographies. The more red that appears in the UI, the scarier the environment, and the more you need their solution. Despite the fact that these numbers often serve as “vanity metrics” to justify product purchases and renewals, many of these alerts also require further review and investigation by the already overworked and exhausted security operations team.

The stakes are high

Analysts are under enormous pressure to identify cyberattacks targeting their organization, and stop them before they turn into breaches. They know they are the last line of defense against cyber threats, and there are numerous stories about SOC analysts being fired for missing alerts that turn into data breaches.

In this environment, analysts are always worried about what they missed or failed to notice in the logs, or whether they’ve tuned their environment to the point where they can no longer see all of the alerts (yikes!). It’s not surprising that analysts’ fear of missing an incident has increased. A recent survey by FireEye called this “Fear of Missing Incidents” (FOMI). They found that three in four analysts are worried about missing incidents, and one in four worry “a lot” about missing incidents. The same goes for their supervisors: more than six percent of security managers reported losing sleep due to fear of missing incidents.

Is it any wonder that security analysts exhibit serious alert fatigue and burnout, and that SOCs have extremely high turnover rates?

Everything is a single pane of glass

Security product companies love touting a “single pane of glass” for complete situational awareness. This is a noble undertaking, but the problem is that most security products are really only good at a few core use cases and then trend towards mediocrity as they bolt on more features. At some point, when an organization has surpassed twenty “single panes of glass,” the problem has become worse.

More security products are devoted to “preventing the bad thing” than “making the day to day more efficient”

There are countless security products that generate new alerts and few security products that curate, deconflict or reduce existing alerts. There are almost no companies devoted to reducing drag for Security Operations teams. Too many products measure their value by their customers’ ability to alert on or prevent something bad, and not by making existing, day-to-day security operations faster and more efficient.

Product pricing models are attached to alert/event volume

Like any company, security product vendors are profit-driven. Many product companies are heavily investor-backed and have large revenue expectations. As such, Business Development and Sales teams often price products with scaling or tiered pricing models based on usage-oriented metrics like gigabytes of data ingested or number of alerts raised. The idea is that, as customers adopt and find success with these products, they will naturally increase usage, and the vendor will see organic revenue growth as a result.

This pricing strategy is often necessary when the cost of goods sold increases with heavier usage, like when a server needs additional disk storage or processing power to continue providing service to the customer.

But an unfortunate side effect of this pricing approach is that it creates an artificial vested interest in raising as many alerts or storing as much data as possible. And it reduces the incentive to build the capabilities for the customer to filter and reduce this “noisy” data or these tactically useless alerts.

If the vendor’s bottom line depends on as much data being presented to the user as possible, then they have little incentive to create intelligent filtering options. As a result, these products will continue to firehose analysts, further perpetuating alert fatigue.

False positives drive tremendous duplication of effort

Every day, something weird happens on a corporate network and some security product raises an alert to a security analyst. The alert is investigated for some non-zero amount of time, is determined to be a false positive caused by some legitimate application functionality, and is dismissed. The information on the incident is logged somewhere deep within a ticketing system and the analyst moves on.

The implications of this are significant. This single security product (or threat intelligence feed) raises the same time-consuming false-positive alert on every corporate network where it is deployed around the world when it sees this legitimate application functionality. Depending on the application, the duplication of effort could be quite staggering. For example, for a security solution deployed across 1000 organizations, an event generated from unknown network communications that turns out to be a new Office 365 IP address could generate 500 or more false positives. If each takes 5 minutes to resolve, that adds up to a full week of effort.

Nobody collaborates on false positives

Traditional threat intelligence vendors only share information about known malicious software. Intelligence sharing organizations like Information Sharing and Analysis Centers (ISACs), mailing lists, and trust groups have a similar focus. None of these sources of threat intelligence focus on sharing information related to confirmed false-positive results, which would aid others in quickly resolving unnecessary alerts. Put another way: there are entire groups devoted to reducing the effectiveness of a specific piece of malware or threat actor between disparate organizations. However, no group supports identifying cases when a benign piece of software raises a false positive in a security product.

Security products are still chosen by the executive, not the user

This isn’t unusual. It is a vestige of the old days. Technology executives maintain relationships with vendors, resellers and distributors. They go to a new company and buy the products they are used to and with which they’ve had positive experiences.

Technologies like Slack, Dropbox, Datadog, and other user-first technology product companies disrupted and dominated their markets quickly because they allowed enterprise prospects to use their products for free. They won over these prospects with superior usability and functionality, allowing users to be more efficient. While many technology segments have adopted this “product-led” revolution, it hasn’t happened in security yet, so many practitioners are stuck using products they find inefficient and clunky.

Why You Should Care

The pain of alert fatigue can manifest in several ways:

  1. Death (or burnout) by a thousand cuts, leading to stress and high turnover
  2. Lack of financial return to the organization
  3. Compromises or breaches missed by the security team

There is a “death spiral” pattern to the problem of alert fatigue: at its first level, analysts spend more and more time reviewing and investigating alerts that provide diminishing value to the organization. Additional security products or feeds are purchased that generate more “noise” and false positives, increasing the pressure on analysts. The increased volume of alerts from noisy security products causes the SOC to need a larger team, with the SOC manager trying to grow a highly skilled team of experts while many of them are overwhelmed, burned out, and at risk of leaving.

From the financial side of things, analyst hours spent investigating pointless alerts are a complete waste of security budget. The time and money spent on noisy alerts and false positives are often badly needed in other areas of the security organization to support new tools and resources. Security executives face a difficult challenge in cost-justifying the investment when good analysts are being fed bad data.

And worst of all, alert fatigue contributes to missed threats and data breaches. In terms of human factors, alert fatigue can create a negative mindset leading to rushing, frustration, mind not on the task, or complacency. As I noted earlier, almost 50% of analysts who are overwhelmed will simply turn off the noisy alert sources. All of this contributes to an environment where threats are more easily able to sneak through an organization’s defenses.

What can we do about it?

The analyst

Get to “No” faster. To some extent, analysts are victims of the security infrastructure in their SOC. The part of the equation they control is their ability to triage alerts quickly and effectively. So, from a pragmatic viewpoint, find ways to use analyst expertise and time as effectively as possible. In particular, find tools and resources that help you rule out alerts as fast as possible.

The SOC manager

Tune your alerts. There is significant positive ROI value to investing in tuning, diverting, and reducing your alerts. Tune your alerts to reduce over-alerting. Leverage your Purple Team to assist and validate your alert “sensitivity.” Focus on the critical TTPs of threat actors your organization faces, and audit your attack surface and automatically filter out what doesn’t matter. These kinds of actions can take a tremendous load off your analyst teams and help them focus on the things that do matter.

The CISO

More is not always better. Analysts are scarce, valuable resources. They should be used to investigate the toughest, most sophisticated threats, so use the proper criteria for evaluating potential products and intelligence feeds, and make sure you understand the potential negatives (false positives, over-alerting) as well as the positives. Be skeptical when you hear about a single pane of glass. And focus on automation to resolve as many of the “noise” alerts as possible.

Security vendors

Focus on the user experience. Security product companies need to accept the reality that they cannot solve all of their users’ security problems unilaterally, and think about the overall analyst experience. Part of this includes treating integrations as first-class citizens, and deprioritizing dashboards. If everything is a single pane of glass, nothing is a single pane of glass—this is no different than the adage that “if everyone is in charge, then no one is in charge.” Many important lessons can be learned from others who have addressed UI/UX issues associated with alert fatigue, such as healthcare and aviation.

The industry

More innovation is needed. The cybersecurity industry is filled with some of the smartest people in the world, but lately we’ve been bringing a knife to a gunfight. The bad guys are scaling their attacks tremendously via automation, dark marketplaces, and advanced technologies like artificial intelligence and machine learning. The good guys have been working in a painfully fragmented and broken security environment, with all their time focused on identifying the signal and none on reducing the noise. This has left analysts struggling to manually muscle through overwhelming volumes of alerts. We need some of security’s best and brightest to turn their amazing brains to the problem of reducing the noise in the system and drive innovation that helps analysts focus on what matters most.

Conclusion

Primary care clinicians became less likely to accept alerts as they received more of them, particularly as they received more repeated (and therefore probably uninformative) alerts.

–  Ancker, et al.

Our current approach to security alerts, requiring analysts to process ever-growing volumes, just doesn’t scale, and security analysts are paying the price with alert fatigue, burnout, and high turnover. I’ve identified a number of the drivers of this problem, and our next job is to figure out how to solve it. One great area to start is to figure out how other industries have improved their approach, with aviation being a good potential model. With some of these insights in mind, we can figure out how to do better in our security efforts by doing less.

Andrew Morris
Founder of GreyNoise

WHITEPAPER | May 17, 2021

Cross-Platform Feature Comparison

For an Intel-commissioned study, IOActive compared security-related technologies from both the 11th Gen Intel Core vPro mobile processors and the AMD Ryzen PRO 4000 series mobile processors, as well as highlights from current academic research where applicable.

Our comparison was based on a set of objectives bundled into five categories: Below the OS, Platform Update, Trusted Execution, Advanced Threat Protection, and Crypto Extension. Based on IOActive research, we conclude that AMD offers no corresponding technologies in several of those categories where Intel offers features, while Intel and AMD have equivalent capabilities in the Trusted Execution category.

EDITORIAL | April 8, 2021

Trivial Vulnerabilities, Serious Risks

Introduction

The digital transformation brought about by the social distancing and isolation caused by the global COVID-19 pandemic was both extremely rapid and unexpected. From shortening the distance to our loved ones to reengineering entire business models, we’re adopting and scaling new solutions that are as fast-evolving as they are complex. The full impact of the decisions and technological shifts we’ve made in such a short time will take us years to fully comprehend.

Unfortunately, there’s a darker side to this rapid innovation and growth which is often performed to strict deadlines and without sufficient planning or oversight – over the past year, cyberattacks have increased drastically worldwide [1]. Ransomware attacks rose 40% to 199.7 million cases globally in Q3 alone [2], and 2020 became the “worst year on record” for data breaches by the end of Q2 [1].

In 2020, the U.S. government suffered a series of attacks targeting several institutions, including security agencies, Congress, and the judiciary, combining into what was arguably the “worst-ever US government cyberattack” and also affecting major tech companies.

The attacks were reported in detail [3] and drew mass media attention [4]. A recent article by Kari Paul and Lois Beckett in The Guardian stated [5]:

“Key federal agencies, from the Department of Homeland Security to the agency that oversees America’s nuclear weapons arsenal, were reportedly targeted, as were powerful tech and security companies including Microsoft. Investigators are still trying to determine what information the hackers may have stolen, and what they could do with it.”

In November of last year, the Brazilian judicial system faced its own personal chapter of this story. The Superior Court of Justice, the second-highest of Brazil’s courts, had over 1,000 servers taken over and backups destroyed in a ransomware attack [6]. As a result of the ensuing chaos, their infrastructure was down for about a week.

Adding insult to injury, shortly afterward, Brazil’s Superior Electoral Court also suffered a cyberattack that threatened and delayed recent elections [7].

In this post, we will briefly revisit key shifts in cyberattack and defense mechanisms that followed the technological evolution of the past several decades. We will illustrate how, even after a series of innovations and enhancements in the field, simple security issues still pose major threats today and certainly will tomorrow.

We will conclude by presenting a cautionary case study [25] of the trivial vulnerability that could have devastated the Brazilian Judicial System.

The Ever-changing ROI of Cyberattacks

Different forms of intrusion technology have come into and out of vogue with attackers over the decades since security threats have been in the public consciousness.

In the 1980s, default logins and guest accounts gave attackers carte blanche access to systems across the globe. In the 1990s and early 2000s, plentiful pre-authentication buffer overflows could be found everywhere.

Infrastructure was designed flatly then, with little compartmentalization in mind, leaving computers — clients and servers — vastly exposed to the Internet. With no ASLR [8] or DEP/NX [9] in sight, exploiting Internet-shattering vulnerabilities was a matter of a few hours or days of work — access was rarely hard to obtain for those who wanted it.

In the 2000s, things started to change. The rise of the security industry, Bill Gates’ famous 2002 memo [10], and the growing full-disclosure movement leading the charge toward appropriate handling of vulnerabilities brought with them a full stack of security practices covering everything from software design and development to deployment and testing.

By 2010, security assessments, red-teaming exercises, and advanced protection mechanisms were common standards among developed industries. Nevertheless, zero-day exploits were still widely used for both targeted and mass attacks.

Between 2010 and 2015, non-native web applications and virtualized solutions multiplied. Over the following years, as increasing computing power permitted, hardware was built with robust chain-of-trust [11], virtualization [12], and access control capabilities. Software adopted strongly typed languages, with verification and validation [13] as part of code generation and runtime procedures. Network technologies were designed to support a variety of solutions for segregation and orchestration, with fine-grained controls.

From 2015 onwards, applications were increasingly deployed in decentralized infrastructures, along with ubiquitous web applications and services, and the Cloud started to take shape. Distributed multifactor authentication and authorization models were created to support users and components of these platforms.

These technological and cultural shifts conveyed changes to the mechanics of zero-day-based cyberattacks.

Today at IOActive, we frequently find complex, critical security issues in our clients’ products and systems. However, turning many of those bugs into reliable exploits can take massive amounts of effort. Most of the time, a start-to-end compromise would depend on entire chains of vulnerabilities to support a single attack.

In parallel to the past two decades of security advancements, cyberattacks adapted and evolved alongside them in what many observers compare to a “cyber-arms race,” with sights on both the private and government sectors.

While the major players in cyber warfare have virtually unlimited resources, for the majority of mid-tier cyber-attackers the price of such over-engineering simply doesn’t pay for itself. With better windows of opportunity elsewhere, attackers are instead increasingly relying on phishing, data breaches, asset exposures, and other relatively low-tech intrusion methods.

Simple Issues, Serious Threats: Today and Tomorrow

Technologies and Practices Today

While complex software vulnerabilities remain a threat today, increasingly devastating attacks are being leveraged from simple security issues. The reasons for this can vary, but it often results from recently adopted technologies and practices:

  • Cloud services [14] have become sine qua non, making it hard to track assets, content, and controls [15] in an overly agile DevOps lifecycle
  • Third-party chains of trust become weaker as they grow (we’ve recently seen a critical code-dependency attack based on typosquatting) [16]
  • Weak MFA mechanisms based on telephony, SMS, and instant messengers enable identity theft and authentication bypasses
  • Collaborative development via public repositories often leaks API keys and other secrets by mistake (a minimal detection sketch follows this list)
  • Interconnected platforms create an ever-growing supply-chain complex that must be validated across multiple vendors
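As a small illustration of the repository-leak item above, this sketch greps file contents for a couple of well-known credential patterns, the AWS access key ID format and a generic “api_key = …” style assignment; real secret scanners cover far more patterns and add entropy checks, so treat this strictly as a sketch.

    # Minimal sketch: flag lines that look like committed credentials before
    # they reach a public repository.  Patterns are illustrative, not exhaustive.
    import re
    import sys

    PATTERNS = {
        "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "generic API key":   re.compile(r"(?i)\b(api[_-]?key|secret)\b\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    }

    def scan(path):
        with open(path, encoding="utf-8", errors="ignore") as handle:
            for lineno, line in enumerate(handle, 1):
                for name, pattern in PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: possible {name}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)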

New Technologies and Practices Tomorrow

Tomorrow should bring interesting new shades to this watercolor landscape:

[Image: What they didn’t tell you about AI (Thanks @mbsuiche)]

Old Technologies and Practices Today and Tomorrow

There is another factor contributing to the way simple security issues continue to present major threats today and will continue to do so tomorrow. It echoes silently from a past where security scrutiny wasn’t yet common practice.

Large governmental, medical, financial, and industrial control systems all have one thing in common: they’re a large stack of interconnected components. Many of these are either legacy components or making use of ancient technologies that lack minimal security controls.

A series of problems faces overstretched development teams, who often need to be overly agile and develop “full stack” applications: a poor SDLC, regression bugs, a lack of unit tests, and short deadlines all contribute to simple, easily exploitable bugs making it into production environments. Although tracking and sanitizing such systems can be challenging for industries and governments, a minor mistake can cause a real disaster.

Case Study [view the file here]

The Brazilian National Justice Council (CNJ) maintains a Judicial Data Processing System capable of facilitating the procedural activities of magistrates, judges, lawyers, and other participants in the Brazilian legal system with a single platform, making it ubiquitous as a result.

The CNJ Processo Judicial Eletrônico (CNJ PJe) system processes judicial data, with the objective of fulfilling the needs of the organs of the Brazilian Judiciary Power: Superior, Military, Labor, and Electoral courts; the courts of both the Federal Union and the individual states themselves; and the specialized justice systems that handle ordinary law and employment tribunals on both the federal and state level.

The CNJ PJeOffice software allows access to a user’s workspace through digital certificates, where individuals are provided with specific permissions, access controls, and scope of access in accordance with their roles. The primary purpose of this application is to guarantee legal authenticity and integrity to documents and processes through digital signatures.

Read the IOActive case study of the CNJ PJe vulnerabilities, which fully details a scenario that posed serious risks to users of the Brazilian Judicial System.

Conclusion

While Information Security has strongly evolved over the past several decades, creating solid engineering, procedural, and cultural solutions, new directions in the way we depend upon and use technology will come with issues that are not necessarily new or complex.

Despite their simplicity, attacks arising from these issues can have a devastating impact.

How people work and socialize, the way businesses are structured and operated, even ordinary daily activities are changing, and there’s no going back. The post-COVID-19 world is yet to be known.

Apart from the undeniable scars and changes that the year 2020 imposed on our lives, one thing is certain: information security has never been more critical.

References

[4] https://apnews.com/article/coronavirus-pandemic-courts-russia-375942a439bee4f4b25f393224d3d778

[5] https://www.theguardian.com/technology/2020/dec/18/orion-hack-solarwinds-explainer-us-government

[6] https://www.theregister.com/2020/11/06/brazil_court_ransomware/

[7] https://www.tse.jus.br/imprensa/noticias-tse/2020/Novembro/tentativas-de-ataques-de-hackers-ao-sistema-do-tse-nao-afetaram-resultados-das-eleicoes-afirma-barroso

[8] https://en.wikipedia.org/wiki/Address_space_layout_randomization

[9] https://en.wikipedia.org/wiki/Executable_space_protection

[10] https://www.wired.com/2002/01/bill-gates-trustworthy-computing/

[11] https://en.wikipedia.org/wiki/Trusted_Execution_Technology

[12] https://en.wikipedia.org/wiki/Hypervisor

[13] https://en.wikipedia.org/wiki/Software_verification_and_validation

[14] https://cloudsecurityalliance.org/blog/2020/02/18/cloud-security-challenges-in-2020/

[15] https://ioactive.com/guest-blog-docker-hub-scanner-matias-sequeira/

[16] https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610

[17] https://act-on.ioactive.com/acton/attachment/34793/f-87b45f5f-f181-44fc-82a8-8e53c501dc4e/0/-/-/-/-/LoRaWAN%20Networks%20Susceptible%20to%20Hacking.pdf

[18] https://act-on.ioactive.com/acton/fs/blocks/showLandingPage/a/34793/p/p-003e/t/form/fm/0/r//s/?ao_gatedpage=p-003e&ao_gatedasset=f-2c315f60-e9a2-4042-8201-347d9766b936

[19] https://ioactive.com/wp-content/uploads/2018/05/IOActive_HackingCitiesPaper_cyber-security_CesarCerrudo-1.pdf

[20] https://ioactive.com/wp-content/uploads/2018/05/Hacking-Robots-Before-Skynet-Paper_Final.pdf

[21] https://ioactive.com/wp-content/uploads/2018/05/IOActive_Compromising_Industrial_Facilities_from_40_Miles_Away.pdf

[22] https://ioactive.com/pdfs/IOActive_SATCOM_Security_WhitePaper.pdf

[23] https://ioactive.com/wp-content/uploads/2018/08/us-18-Santamarta-Last-Call-For-Satcom-Security-wp.pdf

[24] https://www.belfercenter.org/publication/AttackingAI

[25] https://act-on.ioactive.com/acton/attachment/34793/f-e426e414-e895-4fb0-971f-4fa432e5ad9b/1/-/-/-/-/IOA-casestudy-CNJ-PJe.pdf