EDITORIAL | February 13, 2020

Do You Blindly Trust LoRaWAN Networks for IoT?

Do you blindly trust that your IoT devices are being secured by the encryption methods employed by LoRaWAN? If so, you’re not alone. Long Range Wide Area Networking (LoRaWAN) is a protocol designed to allow low-power devices to communicate with Internet-connected applications over long-range wireless connections. It’s being adopted by major organizations across the world because of its promising capabilities. For example, a single gateway (antenna) can cover an entire city, hundreds of square miles.

With more than 100 million LoRaWAN-connected devices in use across the globe, many cellular carriers are racing to join in by offering LoRa nationwide coverage as a service for a low price: on average, a tenth of LTE-based services. However, neither equipment vendors nor service providers nor the end users who are implementing the technology are paying attention to security pitfalls, and are instead spreading a false sense of security.

Our New Research

In exploring the LoRaWAN protocol, we found major cyber security problems in the adoption of this technology. LoRaWAN is advertised as having “built-in encryption,” which may lead users to believe it is secure by default. When talking about the networks that are used across the globe to transmit data to and from IoT devices in smart cities, industrial settings, smart homes, smart utilities, vehicles, and healthcare, we can’t afford to blindly trust LoRaWAN and ignore cyber security. Last week, IOActive presented these LoRaWAN cyber security problems at The Things Conference in Amsterdam, and it grabbed the attention and interest of conference attendees.

The Root of the Risk

The root of the risk lies in the keys used for encrypting communications between devices, gateways, and network servers, which are often poorly protected and easily obtainable. Basically, the keys are everywhere, making the encryption almost useless. This leaves networks vulnerable to malicious hackers who could compromise the confidentiality and integrity of the data flowing to and from connected devices.
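How easy is "easily obtainable"? Device keys are routinely hardcoded into firmware images, left in source repositories, or printed in device documentation. As a rough illustration (not code from any IOActive tool), the sketch below scans a firmware image for 16-byte windows of high Shannon entropy, a common heuristic for spotting hardcoded AES-128 key material; the threshold is an assumption you would tune per image.

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Bits of entropy per byte for a byte string."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def key_candidates(firmware, window=16, threshold=3.5):
    """Slide a 16-byte window across a firmware image and yield
    (offset, hex) pairs for high-entropy regions, which are likely
    candidates for hardcoded AES-128 key material. The threshold
    is a tunable heuristic, not a hard rule."""
    seen = set()
    for i in range(len(firmware) - window + 1):
        chunk = firmware[i:i + window]
        if chunk in seen:
            continue
        if shannon_entropy(chunk) >= threshold:
            seen.add(chunk)
            yield i, chunk.hex()
```

Against a real image you would also compare candidates across devices: finding the same "unique" AppKey in every unit of a product line is a depressingly common failure.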

For example, if malicious hackers want to launch a Denial of Service attack, once they have the encryption keys, they can access the network and disrupt communications between connected devices and the network server, meaning companies can’t receive any data.

Alternatively, attackers could intercept communications and replace real sensor or meter readings with false data. Hackers could exploit this to damage industrial equipment, potentially halting operations and putting company infrastructure at risk.

These are just two examples of how attackers can leverage LoRaWAN to execute malicious attacks, but the list goes on. From preventing utility firms from taking smart meter readings, to stopping logistics companies from tracking vehicles, to blocking industrial control processes from receiving sensor readings, if we unwittingly trust flawed technology, we will pay the price.

What Now?

Currently there isn’t a way for an organization to know if a LoRaWAN implementation is or has been hacked or if an encryption key has been compromised. Furthermore, there are no tools to audit/penetration test/hack LoRaWAN implementations. Standing in the gap, IOActive has released a set of useful tools, the LoRaWAN Auditing Framework, which allows organizations to audit and penetration test their infrastructure, detect possible attacks, and eliminate or reduce the impact of an attack. Our goal is to ensure LoRaWAN is deployed securely.
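Detection can start simply. For example, in LoRaWAN 1.0.x a join-request carries a random DevNonce, so seeing the same DevNonce twice for the same device (DevEUI) suggests a captured join-request is being replayed. The sketch below is a minimal detector in that spirit; it is an illustration of the idea, not code from the LoRaWAN Auditing Framework.

```python
from collections import defaultdict

class JoinReplayDetector:
    """Toy detector for one classic LoRaWAN attack signal: in
    LoRaWAN 1.0.x, a device's join-request carries a random DevNonce,
    so a repeated (DevEUI, DevNonce) pair suggests that a captured
    join-request is being replayed."""

    def __init__(self):
        self._seen = defaultdict(set)  # DevEUI -> set of observed DevNonces

    def observe(self, dev_eui, dev_nonce):
        """Record a join-request; return True if it looks like a replay."""
        if dev_nonce in self._seen[dev_eui]:
            return True
        self._seen[dev_eui].add(dev_nonce)
        return False
```

A production detector would also persist state across restarts; an attacker who can wait out your reboot window defeats an in-memory check.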

Resources

IOActive LoRaWAN Networks Susceptible to Hacking: Common Cyber Security Problems, How to Detect and Prevent Them (whitepaper)
IOActive LoRaWAN Auditing Framework tools

EDITORIAL | August 1, 2019

Eight Steps to Improving Your Supply Chain Security Program

In this second of a two-part blog series on the supply chain, I’ll discuss how to improve your supply chain security.

Supply chain attacks aren’t anything new, but we’re hearing more about them lately, as threat actors continue to find new ways to breach networks. In fact, the most well-known supply chain attack dates back to 2013, when Target was breached through its HVAC supplier, exposing the credit card data of 110 million customers. In the last two years, NotPetya, Trisis and the more recent Wipro compromise have served as not-so-gentle reminders that supply chain attacks are damaging and costly, and that they present many risks to both businesses and their suppliers.

The fact is: the more secure an organization itself is, the more attractive that organization’s supply chain becomes in the mind of the attacker. An attacker wants to find the easiest pathway to get into the network so oftentimes, it’s the supplier who has an exploitable vulnerability that can get them full access into the original target’s network.

Most threat actors that organizations face today are very smart. They know they don’t actually need a sophisticated, complex supply chain hack to wreak havoc on a network, steal data or intellectual property, or cause catastrophic damage. All they really need to do is look for unpatched servers and systems or send out a simple phishing email. Just look at the recent Wipro breach, where dozens of employees’ emails were compromised through a phishing scam that gave the threat actors access to over 100 Wipro computer systems, which they used to mount attacks on a dozen Wipro customers.

Phishing and the use of stolen credentials are repeat offenders. In fact, the 2019 Verizon Data Breach Investigations Report found that 32 percent of breaches involved phishing and 29 percent involved the use of stolen credentials.

An unsophisticated cyberattack often yields a better outcome for an attacker — saving them time, money and resources while making attribution more difficult, so it’s in their best interest to take the easier path to their goal. We’ve seen many successful breaches where attackers penetrated systems through hardcoded credentials or just poorly patched systems.

That’s why, if you’re not protecting your own network against basic threat actors, doing your due diligence to properly patch, and holding your suppliers accountable for securing their own networks, you have no hope of protecting against nation-states or more capable threat actors. This is where third-party testing comes in handy, letting you trust but verify your suppliers.

Here are a few key steps you can take today to build a supply chain security program:

  1. Know your suppliers and look upstream as well as downstream. Start with your tier-one suppliers and then identify tier twos and others. Take a full inventory of who you do business with so you can identify any weak links.
  2. Conduct a risk assessment. Once you’ve identified all your partners, you need to properly assess each one’s cybersecurity posture so you know the risks they may pose to your organization. You must consider where each device or component was built and who exactly built it. Is there a possible backdoor or counterfeit part? Or is it just the more likely software quality issues that can result in a breach?
  3. Utilize third-party testing. Hire a third-party firm to test your system, and that of your suppliers, to provide actionable results on what you need to fix first.
  4. Regularly scan and patch all vulnerable systems.
  5. Use strong passwords. Teach your employees about the importance of using strong passwords and not recycling them across accounts.
  6. Ensure your staff has set up multi-factor authentication everywhere possible.
  7. Conduct regular security awareness training to teach employees how to identify phishing scams, update software and become more security-conscious.
  8. Harden the security of the devices connected to your networks.
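As a toy illustration of the risk assessment in step 2, the sketch below ranks suppliers by a weighted score. The factor names, weights, and 0–10 scale are illustrative assumptions, not a standard methodology.

```python
def supplier_risk_score(supplier, weights=None):
    """Weighted risk score for one supplier. Each factor is scored
    0 (low risk) to 10 (high risk); weights should sum to 1.0.
    The factor names here are hypothetical."""
    weights = weights or {
        "patch_cadence": 0.3,       # how slowly they patch
        "credential_hygiene": 0.2,  # password reuse, missing MFA
        "past_incidents": 0.3,      # breach history
        "provenance": 0.2,          # unknown component origins
    }
    return sum(w * supplier[factor] for factor, w in weights.items())

def rank_suppliers(suppliers):
    """Return supplier names ordered from riskiest to least risky."""
    return sorted(suppliers,
                  key=lambda name: supplier_risk_score(suppliers[name]),
                  reverse=True)
```

The point of even a crude score is triage: it tells you which suppliers to send third-party testers to first (step 3), not whether any given supplier is "safe."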

Make sure you’re not worrying about low-likelihood events like supply chain attacks if you’re not doing the basics of foundational security at your own organization. It’s really quite simple: you need to crawl before you walk, and walk before you run.

EDITORIAL | July 17, 2019

Supply Chain Risks Go Beyond Cyber: Focus on Operational Resilience

In this first of a two-part blog series on the supply chain, I’ll discuss the security and operational risks in today’s supply chain.

In the past 20 years, we’ve seen the globalization of the supply chain and a significant movement to disperse supply chains outside national borders. With this globalization comes many supply chain risks — risks that go beyond just cyber attacks and demonstrate a need for stronger operational resilience.

Most organizations want to take advantage of tariff treaties and overall cost savings by outsourcing the manufacturing and production of their goods, resulting in greater operational efficiencies. However, much of this supply chain globalization has actually made our supply chain longer, much more complex and less resilient. Nowadays, a product may have to go through multiple countries before it’s complete, offering more opportunities for things to go wrong from a supply chain risk perspective.

In the last two years alone, the global supply chain has experienced major disruptions from natural disasters, weather-related events and factory fires that have put organizations out of business. One of the most notable supply chain disruptions occurred in 2011, when hard disk drive production in Thailand was gravely impacted by severe flooding. The flooding affected the whole logistics chain, including the hardware manufacturers, component suppliers, transportation of the devices, and the manufacturing plants and facilities involved in hard drive production.

Puerto Rico is home to more than 40 drug manufacturing companies so when Hurricane Maria’s tragic landfall in 2017 caused power outages, loss of life and utter devastation, it also disrupted the island’s biggest export: pharmaceutical and medical devices. Even a year after the hurricane, there were still supply chain disruptions involving a major manufacturing plant supplying IV saline bags to U.S. hospitals.

Another, more direct supply chain risk involves the delivery of sub-standard or altered components, where a supplier seeks extra profit by delivering low-cost goods. There are many examples of this over the years, including the 2010 VisionTech scandal, in which the company was charged with selling 59,000 counterfeit microchips to the U.S. Navy. In 2018, the owner of PRB Logics Corporation was arrested and charged with selling counterfeit computer parts: used chips repainted and remarked with counterfeit logos. PRB went a step further to defraud purchasers by falsifying test results when a buyer wanted verification that the components were delivered as specified.

While it’s difficult to predict when disasters, hurricanes or flooding may occur, or to know for certain if a device has been tampered with, there are several steps organizations can take to improve their supply chain management and overall operational resiliency, including:

  1. Don’t just select one risk to manage. Take a holistic view of your entire supply chain and try to identify the weakest links.
  2. Consider all potential disruptions and ways you can build and design your supply chain to keep it operational in the face of any foreseeable and unforeseeable challenges. If the suppliers with whom you deal directly are required to have a supply chain program and they expect the same of their suppliers, this will create a far more resilient supply chain of higher integrity.
  3. Don’t use substandard or modified/altered components and parts to save money. This can result in major issues with supply chain integrity and data integrity down the road.
  4. Trust and verify. Know what’s in your firmware and ensure there are no counterfeit hardware components. Verify what you cannot implicitly trust, including components from third parties, and treat what you cannot verify with caution. Even if you trust a vendor, there’s always the possibility of a compromise further up the supply chain.
  5. Understand higher-order effects within your supply chain. A first-order effect directly impacts a device, whereas a second-order effect is the downstream consequence that follows from that first-order effect.

EDITORIAL | March 1, 2019

RSA Conference Requires Changes

For many years, IOActive has been hosting our IOAsis event as a refuge from the madness of crowds and marketing pitches. This was a hugely successful event and we appreciate everyone’s support and participation over the years to make it a high-quality “hallway con” in an upscale environment. Last year, we noticed a reduction in the quality of attendance at our event even though there was an increase in overall RSA Conference (RSAC) attendance. We discovered in talking to our clients, friends and peers in the industry that many of them no longer felt the ROI was there to attend given the changes in the conference coupled with the significant impact to their companies’ limited travel budgets (with lodging easily averaging $1,000 per night before taxes and fees).

This exorbitant lodging cost has put the conference out of reach of far too many security practitioners and front-line managers and unfortunately further reduces the quality of this conference and the surrounding events. At IOActive, we believe all things are designed and few things are designed well. This applies to the RSAC just as it applies to a poorly designed product. RSAC has moved from being a conference about cybersecurity to a conference about the companies involved in cybersecurity. While RSAC has always had a heavy product focus with a parade of companies offering whatever marketing has determined to be the magic talisman du jour, it has moved to more of a focus on the acquisition and funding of the companies that make these talismans. This only further removes RSAC from a conference for security practitioners and managers.

We have greatly enjoyed all the fantastic discussion and camaraderie at our past IOAsis events. Much as we evolve our services to meet the changing threatscape, we feel we must evolve our events to meet the changing “conferencescape.” While conferences such as RSAC are no longer focused on practitioners, there are numerous regional, vertical, and topical events that have gained traction over recent years and are filling the void to provide the high-quality content and attendees necessary for a valuable event and associated “hallway con.” With this in mind, we are bidding a fond farewell to our IOAsis at RSAC and will be launching a series of IOAsis events aligned with these more focused events.

We spend quite a bit of time pointing out flaws and vulnerabilities, but we also offer constructive recommendations on how to address the identified issues. First, RSAC could find a more affordable venue to host a conference of this scale; the cost of hotel rooms indicates a dramatic lack of supply in San Francisco for a conference of this size. Second, the acquisition and funding focus could be moved to the Wednesday or Thursday of the preceding week. This would allow the handful of executives involved to come in a bit earlier and have a less-rushed interaction with the PE, VC and investment banker communities before they need to focus on their clients and prospects. Lastly, we believe RSAC should focus on cybersecurity rather than simply acting as a showroom for current cybersecurity products. We appreciate that running a conference of this size is not easy, and these observations are meant to help improve its quality and help RSAC stay relevant for many years to come.

The IOActive team will still be present at RSA and speaking at several of the surrounding events throughout the week. We look forward to seeing our clients, prospects, friends, family and peers in San Francisco this year. If you would like to see us, please get in touch.

EDITORIAL | August 15, 2018

Secure Design? Help!

“So, Brook, in your last post you pointed to the necessity of, and underlined a requirement for, ‘secure design.’ But what does that mean, and how do I proceed?”

It’s a fair question that I get asked regularly: How does one get security architecture started? Where can I learn more, and grow towards mastery?

It used to be that the usual teaching method was to “shadow” (follow) a seasoned or master practitioner as she or he went about their daily duties. That’s how I learned (way back in the “Dark Ages” of security architecture) and how I’ve helped dozens of others. That method works, at least some of the time and with some people. But shadowing has its limits.

One of the tasks that I’ve set before myself[i] is to try to figure out techniques for imparting the requisite knowledge set. Obviously, many of my publications are dedicated to this task[ii].

Still, I thought in this post that I’d set down, at a very high level, the set of skills that I find the great practitioners I know bring to bear on problems of secure design. In my next book (Confessions of a Cyber Security Architect) I hope to explain what security architects need to know and to delve into how one might go about gaining that knowledge. The following quote is a peek at a bit of that book:

“One may think of security architecture as the application of information security practices to digital systems (as I proposed in Securing Systems). It is more properly the intersection of a number of disciplines:

Security knowledge:

  • Information security, and especially, the situationally appropriate application of security controls
  • A working knowledge of existing and potential threat actors (“threat agents”)
  • Digital attack types and exploits, which must include the nature of vulnerability and misuse
  • Risk as it relates to software and digital systems
  • Applied cryptography (at least at an architectural level)

Specific areas in software:

  • Software development practices
  • Software design
  • Software and system architectures

General computer science:

  • Computer languages and their compilation and execution
  • Software interactions with underlying hardware
  • Communication and data exchange protocols
  • Data storage mechanisms and protocols
  • Runtime and execution models and environments
  • Operating systems
  • Central processing unit (CPU) execution, and machine language execution in general
  • Random access memory (RAM) usage and misusage”[iii]

As you may see from the list, much of the knowledge set (from computer languages onward in my list) is basic computer science, as any computer science university student, probably anywhere in the world, is likely to know.

The items in my list (above) from the top through “Applied cryptography” are the security specialist’s body of knowledge. The middle section lists those areas of software development that I believe are required in order to integrate this working knowledge into software development. Further, as I propose with developer-centric security, securing development practices as well as helping developers deliver secure designs and implementations requires security people to fit easily and organically into development practices.

In my experience, a security architect must first and foremost be a software and system architect, conversant in software and system structures and structuring, in architecture theory and practice. Understanding security without the requisite architecture and design skills means that the practitioner cannot integrate their knowledge properly into a structure and design as it evolves.

How does one gain the requisite knowledge?

Like many skill-sets that require analytic ability, this one is built through experience, lots of practice, and, indeed, learning from multiple mistakes, which are excellent teachers. Learning takes time and concerted effort. Only the preternaturally gifted will attend a single secure design class and then emerge fully formed, ready and able to confront an organization’s secure design challenges. The rest of us must stumble along attempting to apply what we’ve learned, gathering scars from the inevitable errors along the way.

However, there is an emerging set of knowledge, some of which might help to shorten the learning process.

As a follow-up to my post that pointed out the continuing importance of secure design, I’d like to offer the following collection of resources as pointers that may help to build secure design skills. These have all been made publicly available for your use. Hopefully one or more of the following will help you build or improve your secure design skills and/or programme.

Resources:

Threat modeling:
Avoiding the Top 10 Software Security Design Flaws
https://safecode.org/wp-content/uploads/2017/05/SAFECode_TM_Whitepaper.pdf
https://www.owasp.org/index.php/Application_Threat_Modeling
https://owasp.org/www-pdf-archive/AppSecEU2012_PASTA.pdf

Risk evaluation:
Prioritizing Information Security Risks
Just Good Enough Risk Rating (JGERR): https://brookschoenfield.com/?page_id=271
https://owasp.org/www-community/Threat_Modeling
FAIR standard: https://pubs.opengroup.org/onlinepubs/9699919899/toc.pdf

Attack methods study:
https://mitre.github.io/attack-navigator/mobile/

I hope that this list will be of some use.


[i] Of course, I’m far from the only person who has dedicated themselves to the task of training security architects.
[ii] You’ll have to decide if my attempts are fruitful.
[iii] From Schoenfield, Brook S.E., Confessions of a Cyber Security Architect, CRC Press, expected in 2019
EDITORIAL | July 13, 2018

Secure Design Remains Critical

From time to time, a technically astute person challenges me around some area of secure design. Not too long ago, a distinguished engineer opined that “Threat modeling doesn’t do anything.” A CTO asked why there was any need for security architects, arguing, “We pay for static analysis. That should fix our secure development problems.”

I’m not making these comments up. The people who made them are not clueless idiots, but rather, very bright individuals. These are worthy questions. If we, security architects (that is, those of us trying to create designs that embody and enact security principles), cannot articulate what it is that we do and why it’s important, then we’re literally “out of business”; harried execs aren’t going to pay for something whose value cannot be explained, let alone quantified.

The essential problem with security architecture is that success results in resiliency, in longevity, in plasticity in the face of change (particularly, a changing threat landscape). These are hard things to measure, and they have a time element. It can take a rather long time to demonstrate effectiveness and realize the benefits of investment. It’s what my friend Srikanth Narasimhan, one of Cisco’s Distinguished Engineers, terms “the long tail of architecture.” There may be no immediate results from secure design efforts.

Every once in a while, we see a good example of the art in practice. Chrome’s recent release of “Site Isolation” provides an interesting example of the application of secure design: https://security.googleblog.com/2018/07/mitigating-spectre-with-site-isolation.html

“Site Isolation is a large change to Chrome’s architecture that limits each renderer process to documents from a single site.” Charlie Reis, Google Security Blog, July 11, 2018, https://security.googleblog.com/2018/07/mitigating-spectre-with-site-isolation.html. The highlight is mine.

The so-called “Spectre” technique, if you recall, is a set of methods that misuse speculative execution (the CPU optimization that executes likely upcoming instructions, including multiple potential branches, ahead of time) together with cache side channels to bypass operating system and firmware restrictions. Spectre has been used by real-world attackers. Operating system and CPU vendors have been scrambling to close these avenues since January 2018. At the same time, security researchers have been finding ever more ingenious methods to expand upon the techniques. Meltdown and Spectre continue to be important security issues1.

I’m not going to dig into the technical details of site isolation here. Instead, let’s focus on the secure design improvement to Chrome.

Of course, Chrome Site Isolation is ultimately coded, which means that it may suffer from any of the security horrors that code might contain. That is, what are commonly called, “vulnerabilities.” Presumably, the Chrome team did their best to find and fix any secure coding errors (on a side note, I spent some time studying Chrome’s secure development processes and commitment to security in a past role, and I’ve no doubt that if they’ve followed their stated processes, they’ve done their utmost to write secure code and to test it well).

Secure coding aside, the Chrome team faces a significant problem: It is demonstrable that JavaScript (and potentially other web-loaded, client-side browser code) can be used to exploit Spectre2. Spectre protection cannot rely solely on coding. Because Spectre does not exploit a coding error, it cannot be “fixed” in code. Hence, in response to that CTO I quoted earlier, static analysis cannot solve this problem. Protection against Spectre in a browser must be designed. That is precisely my point.

All the vulnerability scanning tools in the world will not provide a mitigation nor fix Spectre browser exploitation. It has to be designed. That is the art of security architecture.

The threat model for attacking Chrome via Spectre exploitation is mercifully simple, in that we know the attack and its method; we have example code against which to test3.

In order to conceive of Site Isolation, its designers had to understand Spectre thoroughly, especially JavaScript/client-side code exploitation. Then, they had to weigh that knowledge against potential defenses that would be possible to build. At some point, they chose Site Isolation as the best solution from the pool of potentials, and from that point, they had to build and test Site Isolation.

Site Isolation, at a very high level, confines communications and data from each website to a strictly controlled sandbox process dedicated to that site. You can read more here.
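The design's core invariant can be sketched in a few lines. The toy model below assigns every document to a renderer process keyed by its site, so cross-site data never shares an address space with a potential Spectre gadget. Note the simplifying assumption: real Chrome derives the site from the scheme plus eTLD+1 using the public suffix list, while this sketch crudely takes the last two host labels.

```python
import itertools
from urllib.parse import urlparse

class SiteIsolationBroker:
    """Toy model of Site Isolation's invariant: one renderer process
    per site, so cross-site documents never share an address space.
    This is an illustration of the design idea, not Chrome's code."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._process_for_site = {}

    def site_of(self, url):
        p = urlparse(url)
        host = ".".join(p.netloc.split(".")[-2:])  # crude eTLD+1 stand-in
        return "%s://%s" % (p.scheme, host)

    def renderer_for(self, url):
        """Return the (simulated) renderer process id for a URL."""
        site = self.site_of(url)
        if site not in self._process_for_site:
            self._process_for_site[site] = next(self._ids)
        return self._process_for_site[site]
```

The security payoff is structural: even if speculative execution leaks everything mapped into a renderer's memory, a strictly per-site process has nothing cross-site to leak.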

We won’t know the effectiveness of Site Isolation until it faces attacks. How long a real-world challenge will take to emerge depends upon a number of factors: adoption amongst Chrome users, metrics on defended attacks, perceived value of exploitation by real-world attackers, etc.

In the meantime, I suspect that the security research community will examine Site Isolation in great detail against its promised protection. That examination, in and of itself, must be part of a robust secure design process: independent review and the resulting dialog around the design, as threat modeling author Adam Shostack so wisely opines (my response).

In any event, as events unfold for Site Isolation, we can immediately note that this is a potential success for security architecture and secure design practices. I doff my hat to the prudent Chrome folks who’ve tried to defend against Spectre directly, thus, protecting us all and perhaps offering a ray of hope (and codable technique!) to other browser makers.


1When I was at McAfee, I contributed to this early analysis of the techniques: https://www.mcafee.com/blogs/mcafee-labs/decyphering-the-noise-around-meltdown-and-spectre/
2I believe that Alex Ionescu was the first to demonstrate JavaScript Spectre exploitation in January of 2018.
3The usual threat model is not so lucky, in that the credible attacks have to be generated (which is the art and science of threat modeling. Please see my book, Securing Systems, or one of the other books on threat modeling for more information on attack trees and identifying which attacks are relevant.)
EDITORIAL | January 31, 2018

Security Theater and the Watch Effect in Third-party Assessments

Before the facts were in, nearly every journalist and salesperson in infosec was thinking about how to squeeze lemonade from the Equifax breach. Let’s be honest – it was and is a big breach. There are lessons to be learned, but people seemed to have the answers before the facts were available.

It takes time to dissect these situations and early speculation is often wrong. Efforts at attribution and methods take months to understand. So, it’s important to not buy into the hysteria and, instead, seek to gain a clear vision of the actual lessons to be learned. Time and again, these supposed “watershed moments” and “wake-up calls” generate a lot of buzz, but often little long-term effective action to improve operational resilience against cyber threats.


At IOActive we guard against making on-the-spot assumptions. We consider and analyze the actual threats, ever mindful of the “Watch Effect.” The Watch Effect can be simply explained: wear a watch long enough and you can’t even feel it.

I won’t go into what third-party assessments Equifax may or may not have had, because that’s largely speculation. The company has probably been assessed many times, by many groups with extensive experience in the prevention of cyber threats and the implementation of active defense. And it still experienced a deeply damaging cyber incursion.

The industry-wide point here is: Everyone is asking everyone else for proof that they’re secure.

The assumption and Watch Effect come in at the point where company executives think their responses to high-level security questions actually mean something.

Well, sure, they do mean something. In the case of questionnaires, you are asking a company to perform a massive amount of tedious work, and if they return the questionnaire filled in without gross errors and without saying “no” where they should have said “yes,” that probably counts for something.

But the question is how much do we really know about a company’s security by looking at their responses to a security questionnaire?

The answer is, “not much.”

As a company that has been security testing for 20 years now, IOActive has successfully breached even the most advanced cyber defenses across countless companies during penetration tests that were certified backwards and forwards by every group you can imagine. So, the question to ask is, “Do questionnaires help at all? And if so, how much?”
 
Here’s a way to think about that.

At IOActive we conduct full, top-down security reviews of companies that include business risk, crown-jewel defense, and every layer that these pieces touch. Because we know how attackers get in, we measure and test how effective the company is at detecting and responding to cyber events – and use this comprehensive approach to help companies understand how to improve their ability to prevent, detect, and ever so critically, RESPOND to intrusions. Part of that approach includes a series of interviews with everyone from the C-suite to the people watching logs. What we find is frightening.

We are often days or weeks into an assessment before we discover a thread to pull that uncovers a major risk, whether that thread comes from a technical assessment or a person-to-person interview or both.

That’s days—or weeks—of being onsite with full access to the company as an insider.

Here’s where the Watch Effect comes in. Many of the companies have no idea what we’re uncovering or how bad it is because of the Watch Effect. They’re giving us mostly standard answers about their day-to-day, the controls they have in place, etc. It’s not until we pull the thread and start probing technically – as an attacker – that they realize they’re wearing a broken watch.

Then they look down at a set of catastrophic vulnerabilities on their wrist and say, “Oh. That’s a problem.”

So, back to the questionnaire…

If it takes days or weeks for an elite security firm to uncover these vulnerabilities onsite with full cooperation during an INTERNAL assessment, how do you expect to uncover those issues with a form?

You can’t. And you should stop pretending you can. Questionnaires depend far too much upon the capability and knowledge of the person or team filling them out, and they are often completed with incomplete knowledge. How would one know if a firewall rule were improperly updated to “any/any” in the last week if it is not tested and verified?
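That “any/any” case is exactly the kind of thing a script can verify where a questionnaire cannot. Here’s a minimal sketch – the rule format and rule names are hypothetical, since every firewall has its own export format – that flags overly permissive allow rules:

```python
# Flag overly permissive firewall rules in a simple text export.
# Assumed (hypothetical) rule format: "<name> <source> <dest> <action>"

def find_any_any(rules: list[str]) -> list[str]:
    """Return the names of allow rules whose source AND destination are 'any'."""
    flagged = []
    for line in rules:
        parts = line.split()
        if len(parts) >= 4:
            name, src, dst, action = parts[:4]
            if src.lower() == "any" and dst.lower() == "any" and action.lower() == "allow":
                flagged.append(name)
    return flagged

rules = [
    "web-in   10.0.0.0/8  203.0.113.5  allow",
    "oops-1   any         any          allow",   # the "any/any" mistake
    "deny-all any         any          deny",    # a deny-all is fine
]
print(find_any_any(rules))  # -> ['oops-1']
```

A check like this, run daily against the live rule set, answers the question a form never can: what the configuration looks like *right now*.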

To be clear, the problem isn’t that questionnaire-based third-party assessments only give 2/10 in security assessment value. The problem is that executives THINK they’re giving them 6/10, or 9/10.

It’s that disconnect that’s causing the harm.

Eventually, companies will figure this out. In the meantime, the breaches won’t stop.

Until then, we as technical practitioners can do our best to help our clients and prospects understand the value these types of cursory, external glances at a company actually provide: very little. So, let’s prioritize appropriately.

EDITORIAL | January 24, 2018

Cryptocurrency and the Interconnected Home

There are many tiny elements to cryptocurrency that are not getting the awareness time they deserve. To start, the very thing that attracts people to cryptocurrency is also the very thing that is seemingly overlooked as a challenge. Cryptocurrencies are not backed by governments or institutions. The transactions allow the trader or investor to operate with anonymity. We have seen a massive increase in the last year in cyber bad guys hiding behind these inconspicuous transactions – ransomware demanding payment in bitcoin; bitcoin ATMs being used by various dealers to effectively launder money.

Because there are few regulations governing crypto trading, we cannot see if cryptocurrency is being used to fund criminal or terrorist activity. There is an ancient funds-transfer system called Hawala, designed to avoid banks and ledgers. Hawala is believed to be the method by which terrorists move money, anonymously, across borders with no governmental controls. Sound like what’s happening with cryptocurrency? There’s an old saying in law enforcement – follow the money. Good luck with that one.

Many people don’t realize that cryptocurrencies depend on multiple miners. This allows the processing to be spread out and decentralized. Miners validate the integrity of the transactions, and in return they receive a “block reward” for their efforts. These rewards are cut in half every 210,000 blocks. The bitcoin block reward was 50 BTC when mining started in 2009; today it’s 12.5 BTC. There are about 1.5 million bitcoins left to mine before the reward halves again.
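The halving arithmetic is easy to check for yourself. This quick sketch reproduces the reward schedule and shows why the total supply converges just under the 21 million cap:

```python
# Bitcoin's block reward starts at 50 BTC and halves every 210,000 blocks.
HALVING_INTERVAL = 210_000

def reward_at(block_height: int) -> float:
    """Block reward (in BTC) for a given block height."""
    return 50.0 / (2 ** (block_height // HALVING_INTERVAL))

def total_supply(halvings: int = 33) -> float:
    """Sum the geometric series of rewards across reward eras."""
    total = 0.0
    for era in range(halvings):
        total += HALVING_INTERVAL * (50.0 / 2 ** era)
    return total

print(reward_at(0))        # 50.0 - the 2009 reward
print(reward_at(480_000))  # 12.5 - the reward era at the time of writing
print(total_supply())      # just under 21,000,000
```

Because each era contributes half as much as the last, the series 50 + 25 + 12.5 + … never quite doubles the first era’s output – hence the hard cap.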

This limit on total bitcoins leads to an interesting issue – as the reward decreases, miners will switch their attention from bitcoin to other cryptocurrencies. This will reduce the number of miners, thereby making the network more centralized. This centralization creates a greater opportunity for cyber bad guys to “hack” the network and wreak havoc, or for the remaining miners to monopolize the mining.

At some point, and we are already seeing the early stages of this, governments and banks will demand to implement more control. They will start to produce their own cryptocurrency. Would you trust these cryptos? What if your bank offered loans in Bitcoin, Ripple or Monero? Would you accept and use this type of loan?

Because it’s a limited resource, what happens when we reach the 21 million bitcoin limit? Unless the protocols change, this event is estimated to happen around 2140. My first response – I don’t think bitcoins will be at the top of my concerns list in 2140.

The Interconnected Home

So what does crypto-mining malware or mineware have to do with your home? It’s easy enough to notice if your laptop is being overused – the device slows down, the battery runs down quickly. How can you tell if your fridge or toaster is compromised? With your smart home now interconnected, what happens if the cyber bad guys operate there? All a cyber bad guy needs is electricity, internet and CPU time. Soon your fridge will charge your toaster a bitcoin for bread and butter. How do we protect our unmonitored devices from this mineware? Who is responsible for ensuring the right level of security on your home devices to prevent this?

Smart home vulnerabilities present a real and present danger. We have already seen baby monitors, robots, and home security products, to name a few, all compromised – most by IOActive researchers. These compromises introduce many risks to the home, not just around cryptocurrency. Think about how the interconnected home operates. Any device that’s SMART now has the three key ingredients to provide the cyber bad guy with everything he needs – internet access, power and processing.

Firstly, I can introduce my mineware via a compromised mobile phone and start to exploit the processing power of your home devices to mine bitcoin. How would you detect this? When could you detect this? At the end of the month, when you get an electricity bill. Instead of 50 pounds a month, it’s now 150 pounds. But how do you diagnose the issue? You complain to the power company. They show you the usage. It’s correct. Your home IS consuming that power.

They say that crypto mining now uses as much power as a small country. That has a serious impact on the power infrastructure as well as the environment. Ahhhh, you say, I have a smart meter; it can give me a real-time readout of my usage. Yes, it’s a computer. And, if I’m a cyber bad guy, I can make that computer tell me the latest football scores if I want. The key for a corporation when a cyber bad guy is attacking is to reduce dwell time: detect and stop the bad guy from playing in your network. There are enterprise tools that can perform these tasks, but do you have these same tools at home? How would you detect and react to a cyber bad guy attacking your smart home?
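For the curious, here’s what a home version of that “detect” step could look like – a naive sketch, where the readings and threshold are invented for illustration, that flags a day whose smart-meter usage jumps well above the trailing average:

```python
# Naive anomaly check on daily smart-meter readings (kWh).
# A reading far above the trailing average may signal hidden CPU load.

def flag_anomalies(readings: list[float], window: int = 7, threshold: float = 1.5) -> list[int]:
    """Return indices of days whose usage exceeds threshold x the trailing average."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > baseline * threshold:
            flagged.append(i)
    return flagged

week = [9.8, 10.1, 9.9, 10.3, 10.0, 9.7, 10.2]   # normal usage
spike = week + [10.0, 24.5]                       # mineware kicks in
print(flag_anomalies(spike))  # -> [8]: the day usage jumped
```

It’s crude – a mining fridge that ramps up slowly would evade it – but it illustrates the principle: you can’t reduce dwell time without a baseline to compare against.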

IOActive has proven these attack vectors over and over. We know this is possible, and we know it is almost impossible to detect. Remember, a cyber bad guy makes several assessments when deciding on an attack – the risk of detection, the reward for the effort, and the penalty for capture. The risk of detection is low, like very low. The reward? Well, you could be mining blocks for months without stopping; that’s tens of thousands of dollars. And the penalty… what’s the penalty for someone hacking your toaster? The impact is measurable to the homeowner. This is real, and who’s to say it’s not happening already. Ask your fridge!!

What’s the Answer –  Avoid Using Smart Home Devices Altogether?

No, we don’t believe the best defense is to avoid adopting this new technology. The smart and interconnected home can offer its users fantastic opportunities. We believe that the responsibility rests with the manufacturer to ensure that devices are designed and built in a safe and secure way. And, yes, everything is designed; few things are designed well. IOActive researchers spend 99% of their time trying to identify vulnerabilities in these devices for the safety of everyone, not just corporations. The power is in the hands of the consumer. As soon as the consumer starts to purchase products based not only on their power efficiency, but on their security rating as well, we will see a shift to a more secure home.

In the meantime, consider the entry point for most cyber bad guys. Generally, this is your desktop, laptop or mobile device. Therefore, ensure you have suitable security products running on these devices, make sure they are patched to the correct levels, be conscious of the websites you are visiting. If you control the available entry points, you will go a long way to protecting your home.
EDITORIAL | November 14, 2017

Treat the Cause, not the Symptoms!

With the publication of the National Audit Office report on WannaCry fresh off the press, I think it’s important that we revisit what it actually means. There are worrying statements within the various reports around preventative measures that could have been taken. In particular, where the health service talks about treating the cause, not the symptom, you would expect that ethos to cross functions, from the primary caregivers to the primary security services. 

I read that the NHS Digital team carried out an onsite cyber assessment of 88 out of 236 Trusts. None passed. Not one. Think about this. These Trusts are businesses whose core function is the health and well-being of their customers, the patients. If this were a bank, and someone did an onsite assessment and said, “well, the bank left all the doors open and didn’t lock the vault,” would you put your hard-earned money in there for safekeeping? I don’t think so. More importantly, if the bank said after a theft of all the money, “well, the thieves used masks; we didn’t recognize them; they were very sophisticated,” would you be happy? No. Now imagine what could have been found if someone had carried out an in-depth assessment, thinking like the adversary.


The report acknowledges the existence of a cyber-attack plan. However, the plan hadn’t been communicated. So, no one knew who was doing what because the plan hadn’t been practiced and perfected. The only communication channel the plan provided for, email, was shut down. This meant that primary caregivers ended up communicating with personal devices using WhatsApp, potentially exposing Patient Medical Records on personal mobile phones through a social messaging tool. 

The report also states the NHS Digital agency had no power to force the Trusts to “take remedial action even if it [NHS Digital] has concerns about the vulnerability of an organization”. At IOActive, we constantly talk to our customers about what to do in the case of a found vulnerability. Simply ticking a box without follow up is a pointless exercise. “My KPI is to perform a security assessment of 50% of the Trusts” – box ticked. That’s like saying “I will perform triage on 50% of my patients, but won’t treat them”. Really?! 

An efficacy assessment of your security practices is not an audit report. It is not a box-ticking exercise. It is a critical function designed specifically to enable you to identify vulnerabilities within your organization’s security posture and empower you to facilitate appropriate controls to manage risk at a business level. Cyber Security and Information Security are not IT issues; they are business issues. As such, the business should absolutely be focused on having skilled experts providing actionable intelligence, enabling them to make business decisions based on risk, impact and likelihood. It’s not brain surgery, or maybe it is.

It’s generally accepted that, if the NHS had taken basic IT security steps, this problem would have been avoided. Treat the cause, not the symptom. We are hearing a lot of evidence that this was an orchestrated attack from a nation-state. However, I’m pretty sure, given the basic failures of NHS Digital to protect the environment, it wouldn’t have taken a nation-state to launch this destructive attack.

Amyas Morse, head of the NAO, said: “It was a relatively unsophisticated attack and could have been prevented by the NHS following basic IT security best practices. There are more sophisticated cyber-threats out there than WannaCry, so the Department and the NHS need to get their act together to ensure the NHS is better protected against future attacks.” I can absolutely guarantee there are more sophisticated attacks out there.

Eighty-one NHS organizations were impacted. Nineteen thousand five hundred medical appointments were canceled. Six hundred GP surgeries were unable to support patients. Five hospitals diverted ambulances elsewhere. Imagine the human factor. You’re waiting for a lifesaving operation – canceled. You’ve been in a car crash – ambulance diverted 40 miles away. All because Windows 7 wasn’t patched. Is that acceptable for an organization trusted with the care and well-being of you and your loved ones? Imagine the damage had this attack been more sophisticated.

Cybersecurity Assessments are not audit activities. They are mission critical assessments for the longevity of your business. The NHS got lucky. There are not many alternatives for health care. It’s not like you can pop down the street and choose the hospital next door. And that means they can’t be complacent about their duty of care. People’s lives are at stake. Treat the cause not the symptoms.

EDITORIAL | October 3, 2017

[Meta Analysis] Rick and Morty S3E1: The Hacker’s Episode

Hi folks, I’m a huge Rick and Morty fan. Sometimes while watching it, I notice allegories and puns related to security, privacy, physics, psychology, and a wide range of scientific fields. Because of this, I’ve decided to review some Rick and Morty episodes and share my observations with the wonderful folks who work in these fields and those who aspire to 😉 Enjoy!
A machine force feeding a human. Being brutally and utterly dedicated to our whims, the robots show us how perverted our ideas of success, good, and bad are when taken to an extreme – effectively giving us a taste of our own “medicine.”  
 

Before we dig into what this episode means, here’s a quick summary from Wikipedia:

Rick is interrogated via a mind-computer link inside a galactic federal prison. Summer and Morty attempt to rescue him, but they are captured by SEAL Team Ricks, who take them to the Citadel of Ricks and decide to assassinate Rick. Back at the prison, Rick tricks both the federal agents and his aspiring assassins by switching bodies with them. He then teleports the entire Citadel into the federal prison, prompting a massive battle. Amid the confusion, Rick rescues Morty and Summer and uses the Galactic Federation’s mainframe to make their currency worthless. The Federation falls into chaos and collapses as a result, with the aliens leaving Earth. Back at home, Jerry asks Beth to choose between him and Rick, but she chooses Rick. After the new status quo is established, Rick reveals to Morty that his ulterior motive was to become his de facto male influence. This escalates into a nonsensical angry rant, centered around Rick’s desire to find more of the discontinued McDonald’s Szechuan sauce, a promotional product for the 1998 film Mulan.
Let’s dig in…
 

The Brainalyzer

Rick is trapped in something called a “Brainalyzer,” which is effectively a brain-to-computer link. This is a computer science pun in a couple of ways. It obviously references cutting-edge computer science research into literally connecting people’s brains to computers. In academic circles, researchers have devised ways to control a computer with your brain, or to have your brain controlled by a computer.
Rick in the Brainalyzer.

The Brainalyzer also serves as an ironic expression of the false ideology people have of computers: that they are NOT directly connected to our brains.

The software we write for the computer, the hardware we build for the computer, and the perspectives we have of all of these things are entirely inside our heads.

As a hacker, I can tell you that what you do when you try to break someone’s algorithm is basically argue, in your head, with the person who wrote it! One person’s implementation of their idea of what security is competes with yours; it’s all mind games. Coming back to the episode, this is literally what Rick is doing in the scene: inside his head, he argues with the person who is ‘securing him’ in the prison.

The flip side of this Brainalyzer – from a security perspective – is that it is a huge security design mistake. Interfacing the thoughts of the prisoners to the prison computer system (which controls the prison) makes separation failure a huge security risk. Ironically, the prison exists to physically restrain the prisoners because they are bad people who think of doing bad things. The Brainalyzer is implemented in utter reverse to this idea; in a sense, it is a way to control the prison in the literal minds of the people they are imprisoning.

Rick’s Mind Virus(es)
The strategy the interrogators employ is to ask Rick to share the memory of his first successful creation of the Portal Gun. Rick then leads them to the memory, which turns out to be a complete fabrication.
Rick’s exploit being uploaded…

What has happened here is Rick has convinced them that one kind of information in his head is actually another kind. This is strongly analogous to what is called a memory corruption attack. In a memory corruption attack, the adversary convinces the machine that malicious data (one kind of information) is actually executable code (another kind) – that “data” should be treated as literal instructions. This flaw allows the attacker to inject data that is interpreted as code, thereby allowing the attacker’s data to arbitrarily influence the behavior of the machine.
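For readers who haven’t seen one, here’s a toy model of that idea – not real machine code, just a simulated “stack” (the layout and addresses are invented for the example) where an unchecked copy lets attacker data overwrite a return address:

```python
# Toy model of a memory-corruption (stack-smashing) attack.
# "Memory": an 8-byte input buffer followed by a 2-byte "return address".
memory = bytearray(10)
RET_OFFSET = 8
memory[RET_OFFSET:RET_OFFSET + 2] = (0x1000).to_bytes(2, "little")  # legitimate return target

def unchecked_copy(data: bytes) -> None:
    """Copies attacker-controlled data with no bounds check (the bug)."""
    memory[0:len(data)] = data

# Attacker sends 8 bytes of filler plus a new "address" of their choosing.
payload = b"A" * 8 + (0x4141).to_bytes(2, "little")
unchecked_copy(payload)

ret = int.from_bytes(memory[RET_OFFSET:RET_OFFSET + 2], "little")
print(hex(ret))  # -> 0x4141: attacker "data" now decides where execution goes
```

The machine never asked whether those last two bytes were supposed to be data or an address – exactly the confusion Rick exploits when his “memory” turns out to be a payload.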

So now we see Rick has triggered a memory corruption bug! And he does this in order to inject code into the machine that gives them access to his brain. Rick confirms this by referring to the code he gave them as a “virus,” and stating that he did it to install a backdoor that allows him full control of the facility.

Rick literally psychoanalyzing his opponent – “getting inside his head.”
Rick now has full control of the “memory” they are trapped in and reveals that he was fooling them the entire time. What is ironic is that Rick is physically trapped in a machine that gives them direct access to his brain. That is to say, while they were “inside his brain,” because Rick was hustling them the whole time, he was actually inside their heads!

Gotta Go Take a Sh*t…

After escaping the Brainalyzer, Rick suddenly needs to use the bathroom on Level 9. This is obviously a social engineering exploit. Restrooms are often located behind security barriers. However, if there is a kind reception person, they can usually be talked into letting you through to use the bathroom; at which point, you’ve bypassed security and you’re in! It’s a very old trick; Kevin Mitnick would probably giggle to see Rick haphazardly employ this tactic.
Rick social engineering his way to level 9
The allegory becomes a little more obvious after this scene. Starting from a subtle, nuanced example of a security exploit (by the depth of the metaphor), the plot switches to a more obvious and almost sloppy “gotta go take a sh*t,” with Rick literally declaring his intention at the end of the episode (speaking in an encapsulating fashion). This “escalation” is a common cadence to security attacks (starting small, ending big): you always start from a small crack and chip away until you have Domain Admin rights. Every hacker knows that!
Just before Rick can make use of the password, he is interrupted by assassins from the Council of Ricks. Quickly, he escapes in what is a literal instantiation of an authentication replay, or a kind of “session hijacking” attack. Here Rick swaps identities with someone in the group that is trying to catch him, thereby tricking them into killing the wrong person. Rick has now gone from an interrogator in the prison to a member of SEAL Team Rick. One epic privilege escalation attack!

The Privilege Escalation Attack

After killing the entire SEAL team, he makes his way to the citadel as Rick D99. He then pulls off another obvious privilege escalation attack by specifically asking for someone with a certain level of “higher” clearance.
Rick Phone Phreaking/Hardware Hacking his way into the citadel

Assuming this role, he then moves to gain control of the entire domain and jokes about how bad the system design is (another jab at information security engineering). The design flaw here is that no further authentication or oversight is needed to perform an incredibly dangerous function; you just walk up to it and press the right buttons – no executive calls, no approval process… just buttons! He abuses this design as a Citadel employee to teleport the entire Citadel straight into the galactic prison he just escaped from, causing a massive war between the Citadel and the galactic prison.

The person in the center here looks strikingly similar to Rick in the previous image – the armor, the hair. Some might recognize the “Butter Robot”-esque android being built in the background. This is the site banner for DEF CON, the world’s biggest hacking conference :).

The BitFlip Attack

Following this, Rick makes his way to Level 9, finally admitting his entire scheme was an elaborate ploy to, in fact, “get Level 9 access without a password!” He explains his entire chain of exploits as a revenge arc, triggered when the Citadel’s attempted assassination interrupted his imminent access to Level 9. Having gained Level 9 access, Rick uses what could be seen as another very old security attack called “bit flipping.” This term is sometimes used loosely to refer to attacks that can deterministically change something’s state in a way that affects security. Usually, these “states” are represented using simple Boolean values of 0 or 1. For example, Rowhammer has been exploited to flip bits in a table that holds security-relevant information. Effectively, this is what Rick is doing with the currency value: flipping a 1 bit to 0. A small error that eventually topples an entire federation. Start small, end big!
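Bit flipping really is that simple in principle – a single XOR against the right state. A tiny illustration (the currency flag and permission bits are invented for the example):

```python
# A one-bit change with outsized consequences.

def flip_bit(value: int, bit: int) -> int:
    """Flip a single bit using XOR with a mask."""
    return value ^ (1 << bit)

is_valid_currency = 1                   # 1 = galactic credits have value
print(flip_bit(is_valid_currency, 0))   # -> 0: the federation's economy, gone

# The same primitive against a permission bitmask:
READ, WRITE, ADMIN = 0b001, 0b010, 0b100
perms = READ | WRITE                    # 0b011: ordinary user
perms = flip_bit(perms, 2)              # flip the ADMIN bit
print(bin(perms))                       # -> 0b111: privilege escalation in one bit
```

The scary part of attacks like Rowhammer is that the attacker doesn’t need an API at all – physics does the XOR for them.
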
That’s it for now, until I dig up more macabre information security or extra-scientific analogies.
This blog was originally written and posted on September 25, 2017 by Keith Makan and is reproduced here with his express permission.