EDITORIAL | April 13, 2020

Mismatch? CVSS, Vulnerability Management, and Organizational Risk

I’ll never forget a meeting I attended where a security engineer demanded IT remediate each of the 30,000 vulnerabilities he had discovered. I know that he wasn’t just dumping an unvetted pile of vulnerabilities on IT; he’d done his best to weed out false-positive results, other errors, and misses before presenting the findings. These were real issues, ranked using the Common Vulnerability Scoring System (CVSS). There can be no doubt that in that huge (and overwhelming) pile were some serious threats to the organization and its digital assets.

The reaction of the IT attendees did not surprise me, nor did the security engineer’s subsequent reaction. It didn’t go well. Presented with that much work, IT refused, describing their already fully loaded plans and referring the security engineer to the CIO. In other words, “Security, take your vulnerabilities and be gone. We have other work to do.”

I’ve seen this same dynamic play out over and over again. Faced with 72,000 unqualified static analysis findings, the application team addressed none of them. Given 130,000 issues across an organization’s entire (scannable) infrastructure, the operations team’s first reaction was to do nothing.

The foregoing real-world numbers are overwhelming. As one senior architect told me, “It took me an average of 15 minutes to figure out whether each of the five findings was a real issue. None of them was. In order to work through this set of findings, the effort will take about six person-weeks. We don’t have a resource to dedicate to this for six weeks, especially at a high false-positive rate. Much of it will be wasted effort.”

At the same time, we intuitively know that somewhere in those piles of vulnerabilities are issues that will be exploited and whose exploitation will cause serious harm: we know there’s organizational risk in the pile. But how do we find the dangerous needle in the haystack of vulnerability findings?

Knee-jerk security responses don’t help. I cannot count the number of times a security person has flatly stated, “Just patch it.” As though patching is the simplest thing in the world.

Applying a security patch may be simple for a single application; it’s not quite so straightforward when faced with thousands of potential issues across thousands of software components. This is especially true because the potential disruption from unexpected side effects must be considered when introducing new software (patches) into complex systems whose failure might have disastrous consequences.

As far as I’ve been able to see, few organizations cope well with tens of thousands of issues, each demanding a fix. Plus, a continuing flow of new issues discovered each day adds to the work queue.

Ultimately, managing cybersecurity risk is a business decision, just as managing any other organizational or operational risk is. When these issues are viewed from different, sometimes adversarial, technical silos, it is not surprising that a consensus understanding of organizational risk management priorities does not coalesce.

Up until recently, industry practice has been to prioritize issues based upon CVSS base score. However, research from 2014 indicates that using the CVSS base score may be no better than “choosing at random.”[1] Maybe that’s why even organizations with fairly mature security and operations functions continue to be compromised through unpatched vulnerabilities.

Perhaps we’re fixing the wrong issues. Are there attributes that will help find the most likely exploits?

If we rely solely on CVSS, especially published CVSS base scores, then yes, we will prioritize many issues that will rarely, perhaps never, be exploited by real attackers. The ground-breaking 2014 academic analysis by Allodi and Massacci[2] found that CVSS base scores have been a poor predictor of exploitation. Their results have since been validated by some vendor-funded studies.[3] CVSS has certainly proven useful for rating potential severity, but using it as a predictor, or worse, as a risk calculation seems to be a mistake, despite the prevalence of the practice.

If not CVSS, then how can we identify the issues most likely to be exploited? Allodi and Massacci found that the addition of an exploit to an Exploit Kit dramatically increases the likelihood of use “in the wild,” that is, by real-world attackers. Their second strong predictor is Dark Web chatter about a vulnerability and its exploitation. When these two activities happen in tandem, one should fix the issue, and soon. This aligns well with the intuitive insights of today’s security practitioners who focus on threat intelligence and assess their security posture through adversary emulation assessments such as red team exercises.

That should be easy, right? Unfortunately, processing Dark Web chatter proves non-trivial. Commercial products[4] might not provide quite the right information, meaning that users must craft their own searches. Search capabilities in these products vary dramatically from full regular expressions to simple keyword searches. Buyer beware.

However, a recent announcement may signal the path forward. The Cyentia Institute and Kenna Security announced the release of their Exploit Prediction Scoring System (EPSS)[5] and the research from which EPSS was built. Kenna Security is supplying the data on which the EPSS calculator operates. EPSS employs additional predictors beyond the two primary ones named by Allodi and Massacci; please see the EPSS research[6] to learn more. EPSS may be vulnerability management’s better mousetrap.

EPSS incorporates the CVSS severity score, but it offers an entirely different dimension: insight into the potential for active misuse of a vulnerability by real attackers. Don’t mistake CVSS for EPSS. They deliver very different facets of the vulnerability picture. Severity is our best guess as to how bad successful exploitation might be in a normalized, generalized case; CVSS lacks context, and that gap is often glaring. In comparison, EPSS attempts to tell us which vulnerabilities attackers will try, producing a prediction, expressed as a percentage, of how likely exploitation is at the time of calculation.

In the research (please see the endnotes), exploitation of high-severity issues is actually much rarer than misuse of low- and medium-severity issues. That may come as a surprise. One reason for the preference for low and medium issues might be the ease of crafting exploits for them. Attackers also hesitate to use issues that require significant setup and preconditions. Instead, they routinely string together issues that in isolation aren’t all that impactful. But taken as a set of steps, the “kill chain”, several low- and medium-severity issues can lead to full compromise. A quick survey through a few of MITRE’s ATT&CK Threat Groups[7] demonstrates how techniques are strung together to generate a kill chain.

When we rely upon CVSS severity as our priority, we fix the issues that in the most generalized case might cause the most damage, scheduling the lower severities for some later date. This is precisely the problem predictive analysis addresses: identify the issues in which attackers are interested, and prioritize those. It turns out that, quite often, some low- and medium-severity issues are the ones to worry about.

By patching some kill-chain steps, we remove attacker leverage and raise the cost of, or even prevent, chained attacks. But we can only do that if we know which issues, irrespective of their potential severity, attackers are considering. EPSS, and predictive models in general, may offer users a way to sift attacker-preferred issues from the chaff of overwhelming vulnerability queues.
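
To make the contrast concrete, here is a minimal sketch, in Python, of what prediction-driven triage might look like. The CVE identifiers, scores, and the 0.5 risk-tolerance threshold are invented for illustration; they do not come from EPSS, from published CVSS data, or from any vendor product.

```python
# Illustrative only: identifiers, scores, and the threshold below are made up.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str          # vulnerability identifier
    cvss_base: float     # severity: how bad exploitation could be (0.0-10.0)
    exploit_prob: float  # prediction: how likely exploitation is (0.0-1.0)

def prioritize(findings, risk_tolerance=0.5):
    """Keep findings whose predicted exploitation probability exceeds the
    organization's risk tolerance, ordered by probability, then severity."""
    urgent = [f for f in findings if f.exploit_prob >= risk_tolerance]
    return sorted(urgent, key=lambda f: (f.exploit_prob, f.cvss_base), reverse=True)

if __name__ == "__main__":
    queue = [
        Finding("CVE-0000-0001", cvss_base=9.8, exploit_prob=0.02),  # severe, but unlikely to be tried
        Finding("CVE-0000-0002", cvss_base=5.4, exploit_prob=0.81),  # medium severity, actively discussed
        Finding("CVE-0000-0003", cvss_base=6.1, exploit_prob=0.64),
    ]
    for f in prioritize(queue):
        print(f.cve_id, f.exploit_prob, f.cvss_base)
```

Sorting this illustrative queue by CVSS base score alone would put the 9.8 at the top of the work list, even though, in the example, it is the finding attackers are least likely to attempt.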

I must warn readers that there are problems with EPSS. Today, all one can get is a single, point-in-time predictive score through a web browser interface. One-at-a-time scoring isn’t how vulnerability management can work if it is to scale and provide just-in-time information. Unless a score is high enough to act upon when calculated, the quantity to watch is each score’s change over time. Each vulnerability’s score needs to be monitored in order to identify issues that come to exceed the organization’s risk tolerance. Going to a website and checking tens of thousands of issues one at a time isn’t really workable.

If EPSS is going to be of use, there must be some automation for organizations to periodically check scores. The threat landscape is dynamic, so any solution must be equally dynamic. I hope that Cyentia and Kenna Security will provide a service or API through which organizations can monitor predictive score changes over time, and at scale.
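
To sketch the kind of automation I mean: assuming a purely hypothetical scoring endpoint and response format (as noted above, no such public API existed when this was written), a periodic monitoring job might look roughly like the following. The URL, the 0.5 risk-tolerance threshold, and the 0.2 “jump worth reviewing” delta are all placeholders an organization would replace with its own values.

```python
# Sketch only: the endpoint, response format, thresholds, and alerting hook are hypothetical.
import json
import time
import urllib.request

SCORE_URL = "https://scores.example.com/epss/{cve}"  # hypothetical scoring service
RISK_TOLERANCE = 0.5                                 # organization-specific threshold
REVIEW_DELTA = 0.2                                   # a rise this large warrants a look

def fetch_score(cve_id):
    """Fetch the current predictive score for one CVE from the hypothetical service."""
    with urllib.request.urlopen(SCORE_URL.format(cve=cve_id)) as resp:
        return json.load(resp)["score"]

def watch(cve_ids, previous_scores, alert):
    """Re-check each tracked CVE; alert when a score exceeds the risk tolerance
    or has risen sharply since the last check."""
    for cve in cve_ids:
        score = fetch_score(cve)
        old = previous_scores.get(cve, 0.0)
        if score >= RISK_TOLERANCE or score - old >= REVIEW_DELTA:
            alert(cve, old, score)
        previous_scores[cve] = score

if __name__ == "__main__":
    tracked = ["CVE-0000-0001", "CVE-0000-0002"]  # placeholder identifiers
    history = {}
    while True:
        watch(tracked, history, lambda c, o, n: print(f"review {c}: {o:.2f} -> {n:.2f}"))
        time.sleep(24 * 60 * 60)  # re-check daily; the threat landscape is dynamic
```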

EPSS is tightly coupled to the management of vulnerabilities. It would be a major error to apply EPSS, or any vulnerability misuse prediction method, to other aspects of organizational risk management. As always, every organization needs a robust and thorough understanding of its risk tolerances, skilled people dedicated to managing risk, and a rigorous, proven risk-scoring mechanism, for instance, The Open Group standard Factor Analysis of Information Risk (FAIR)[8].

Importantly, EPSS will not supersede human risk analysis. EPSS, and CVSS as well, are adjuncts to human analysis, not replacements. Well-resourced attackers appear to be using more so-called zero-day vulnerabilities[9], that is, vulnerabilities unknown before use and not yet fixed. To confront zero-days we must rely on our threat intelligence gathering and contextual risk analysis. Human threat modeling continues to be one of the best techniques for assessing the potential danger from the unexpected appearance of a possible threat vector.

The Cyentia researchers indicated to me that Kenna Security owns the data used by EPSS. I attempted to contact someone at Kenna Security multiple times for this article, but Kenna Security has, unfortunately, not responded.

IOActive offers a full range of security consulting services, including vulnerability management, risk assessment, software security, and threat modeling.

Hopefully, this post helps your organization deal with its unmitigated vulnerability queue and better translate that queue into definable organizational and operational risks. Effective vulnerability management has the potential to free up resources that can be applied to other aspects of a robust cyber-risk program.

Cheers,
/brook s.e. Schoenfield
Master Security Architect
Director of Advisory Services


[1] Allodi, Luca & Massacci, Fabio. (2014). Comparing Vulnerability Severity and Exploits Using Case-Control Studies. ACM Transactions on Information and System Security. 17. 1-20. 10.1145/2630069. Thanks to Luis Servin (@lfservin) for the reference to this academic paper.

[2] http://seconomicsproject.eu/sites/default/files/seconomics/public/content-files/downloads/Comparing Vulnerabilities and Exploits using case-control studies.pdf

[3] NopSec, Inc.’s 2016 and 2018 State of Vulnerability Risk Management Reports: https://www.nopsec.com/

[4] There is an open-source, public Dark Web search engine, DarkSearch.io. DarkSearch doesn’t offer full regular expressions, but it does offer several keyword and grouping enhancements.

[5] https://www.kennaresearch.com/tools/epss-calculator/

[6] Prioritization to Prediction, Cyentia Institute, and Kenna Security: https://www.kennasecurity.com/prioritization-to-prediction-report/images/Prioritization_to_Prediction.pdf

[7] https://mitre-attack.github.io/attack-navigator/enterprise/

[8] https://www.opengroup.org/forum/security-forum-0/risk-management

[9] Please see https://www.fireeye.com/blog/threat-research/2020/04/zero-day-exploitation-demonstrates-access-to-money-not-skill.html

EDITORIAL | August 1, 2019

Eight Steps to Improving Your Supply Chain Security Program

In this second of a two-part blog series on the supply chain, I’ll discuss how to improve your supply chain security.

Supply chain attacks aren’t anything new, but we’re hearing more about them lately, as threat actors continue to find new ways to breach networks. In fact, the most well-known supply chain attack dates back to 2013, when Target was breached through its HVAC supplier, exposing payment card and personal data belonging to some 110 million customers. In the last two years, NotPetya, Trisis and the more recent Wipro compromise have served as not-so-gentle reminders that supply chain attacks are damaging, costly and present many risks to both businesses and their suppliers.

The fact is, the more secure an organization itself is, the more attractive that organization’s supply chain becomes in the mind of the attacker. An attacker wants to find the easiest pathway into the network, so oftentimes it’s a supplier with an exploitable vulnerability that gives them full access to the original target’s network.

Most threat actors organizations face today are very smart. They know they don’t actually need to leverage a sophisticated, complex supply chain hack to wreak havoc on a network, steal data or intellectual property, or cause catastrophic damage. All they really need to do is look for unpatched servers and systems or send out a simple phishing email. Just look at the recent Wipro breach, where dozens of employees’ email accounts were compromised through a phishing scam, giving the threat actors access to over 100 Wipro computer systems from which they mounted attacks on a dozen Wipro customers.

Phishing and the use of stolen credentials are repeat offenders that come up again and again. In fact, the 2019 Verizon Data Breach Investigations Report found that 32 percent of breaches involved phishing and 29 percent involved the use of stolen credentials.

An unsophisticated cyberattack often yields a better outcome for an attacker, saving them time, money and resources while making attribution more difficult, so it’s in their best interest to take the easier path to their goal. We’ve seen many successful breaches where attackers penetrated systems through hardcoded credentials or simply poorly patched systems.

That’s why, if you’re not protecting your own network against basic threat actors, doing your due diligence to patch properly, and holding your suppliers accountable for securing their own networks, you have no hope of protecting against nation-states or more capable threat actors. This is where third-party testing comes in handy, letting you trust but verify your suppliers.

Here are a few key steps you can take today to build a supply chain security program:

  1. Know your suppliers and look upstream as well as downstream. Start with your tier-one suppliers and then identify tier twos and others. Take a full inventory of who you do business with so you can identify any weak links.
  2. Conduct a risk assessment. Once you’ve identified all your partners, you need to properly assess each one’s cybersecurity posture so you know the risks they may pose to your organization. You must consider where each device or component was built and who exactly built it. Is there a possible backdoor or counterfeit part? Or is it, more likely, an ordinary software quality issue that could result in a breach?
  3. Utilize third-party testing. Hire a third-party firm to test your system, and that of your suppliers, to provide actionable results on what you need to fix first.
  4. Regularly scan and patch all vulnerable systems.
  5. Use strong passwords. Teach your employees about the importance of using strong passwords and not recycling them across accounts.
  6. Ensure your staff has set up multi-factor authentication everywhere possible.
  7. Conduct regular security awareness training to teach employees how to identify phishing scams, update software and become more security-conscious.
  8. Harden the security of the devices connected to your networks.

Make sure you’re not worrying about low-likelihood events like supply chain attacks if you’re not doing the basics of foundational security at your own organization. It’s really quite simple: you need to crawl before you walk, and walk before you run.

EDITORIAL | July 17, 2019

Supply Chain Risks Go Beyond Cyber: Focus on Operational Resilience

In this first of a two-part blog series on the supply chain, I’ll discuss the security and operational risk in today’s supply chain.

In the past 20 years, we’ve seen the globalization of the supply chain and a significant movement to disperse supply chains outside national borders. With this globalization comes many supply chain risks — risks that go beyond just cyber attacks and demonstrate a need for stronger operational resilience.

Most organizations want to take advantage of tariff treaties and overall cost savings by outsourcing the manufacturing and production of their goods, resulting in greater operational efficiencies. However, much of this supply chain globalization has actually made our supply chain longer, much more complex and less resilient. Nowadays, a product may have to go through multiple countries before it’s complete, offering more opportunities for things to go wrong from a supply chain risk perspective.

In the last two years alone, the global supply chain has experienced major disruptions from natural disasters, weather-related events and factory fires that have put organizations out of business. One of the most notable supply chain disruptions occurred in 2011, when the production of hard disk drives in Thailand was gravely impacted by severe flooding in the country. The flooding impacted the whole logistics chain, including the hardware manufacturers, component suppliers, the transportation of the devices, and the manufacturing plants and facilities involved in hard drive production.

Puerto Rico is home to more than 40 drug manufacturing companies, so when Hurricane Maria’s tragic landfall in 2017 caused power outages, loss of life and utter devastation, it also disrupted the island’s biggest export: pharmaceuticals and medical devices. Even a year after the hurricane, there were still supply chain disruptions involving a major manufacturing plant supplying IV saline bags to U.S. hospitals.

Another, more direct supply chain risk involves the delivery of sub-standard or altered components: this occurs when a supplier seeks extra profit by delivering low-cost goods. There are many examples of this over the years, including the 2010 Vision Tech scandal, in which the company was charged with selling 59,000 counterfeit microchips to the U.S. Navy. Driven by the same profit-seeking behavior, in 2018 the owner of PRB Logics Corporation was arrested and charged with selling counterfeit computer parts. The parts were repainted and remarked with counterfeit logos, and PRB went a step further to defraud the purchaser by falsifying test results when the buyer wanted verification that the components were delivered as specified.

While it’s difficult to predict when disasters, hurricanes or flooding may occur, or to know for certain if a device has been tampered with, there are several steps organizations can take to improve their supply chain management and overall operational resiliency, including:

  1. Don’t just select one risk to manage. Take a holistic view of your entire supply chain and try to identify the weakest links.
  2. Consider all potential disruptions and ways you can build and design your supply chain to keep it operational in the face of any foreseeable and unforeseeable challenges. If the suppliers with whom you deal directly are required to have a supply chain program and they expect the same of their suppliers, this will create a far more resilient supply chain of higher integrity.
  3. Don’t use substandard or modified/altered components and parts to save money. This can result in major issues with supply chain integrity and data integrity down the road.
  4. Trust and verify. Know what’s in your firmware and ensure there are no counterfeit hardware components. Verify what you cannot simply trust, including components from a third party, and recognize that whatever you cannot verify, you are taking on trust. Even if you trust a vendor, there’s always the possibility of a compromise further up the supply chain.
  5. Understand high-order effects within your supply chain. A first-order effect directly impacts the device itself, whereas a second-order effect is the downstream consequence that follows from that first effect.

INSIGHTS | February 11, 2013

Your network may not be what it SIEMs

The number of reports of networks that have been ransacked by adversaries is staggering. In the past few weeks alone we’ve seen reports from The New York Times, The Washington Post and Twitter. I would argue that the public reports are just the tip of the iceberg. What about the hacks that were never disclosed? What about the companies that absorbed the blow and just kept on trucking or… perhaps even those companies that never recovered?

When there’s an uptick in media attention over security breaches, the question most often asked – but rarely answered – is “What if this happens to me?”

Today you don’t want to ask that question too loudly – else you’ll find product vendors selling turn-key solutions, and their partners, on your doorstep, closely followed by ‘Managed Security Services’ providers, all ready to solve your problems once you send back their signed purchase order… if you want to believe that.

Most recently they’ve been joined by the “let’s hack the villains back” start-ups. That last one is an interesting evolution but not important for this post today.

I’m not here to provide a side-by-side comparison of service providers or product vendors. I encourage you to have an open conversation with them when you’re ready for it; but what I want to share today is my experience being involved in SIEM projects at scale, and working hands-on with the products as a security analyst. The following lessons were gained through a lot of sweat and tears:

  • SIEM projects are always declared successful, even when delivered over budget, behind schedule, and with only a third of the requirements actually realized.
  • Managed services don’t manage your services; they provide the services they can manage at the price you are willing to pay for them.
  • There is no replacement for knowing your environment, whatever price you are willing to pay.

This raises the obvious question of whether installing a SIEM is worth spending a limited security budget on.

It is my personal opinion that tooling to facilitate Incident Response, including a SIEM to delve through piles and piles of log data, is always an asset. However, it’s also my opinion that buying a tool RIGHT NOW is not your priority if you cannot confidently answer “YES” to the following questions:

  1. I know where my most valuable assets reside on my network, which controls are implemented to protect them and how to obtain security-related data from those controls.
  2. I know the hardware and software components that support those most valuable assets and know how to obtain security-related data from them.
  3. I know, for those most valuable assets, which devices communicate with them, at which rate and at which times. I can gather relevant data about that.
  4. I know, and can verify, which machines can (and should) communicate with external networks (including the internet). I can gather relevant data about that.

In case of a resounding “NO” or a reluctantly uttered “maybe”, I would argue that there are things you should do before acquiring a SIEM product. It is your priority to understand your network and to have control over it, unless you look forward to paying big money for shiny data aggregators.

Advice

With that challenge identified, how do you move ahead and regain control of your network? Here’s some advice:

The most important first step is very much like what forensics investigators call “walking the grid.” You will need to break down your network into logical chunks and scrutinize them. Which components are most important to our business, and what controls are protecting them? Which data sources can tell us about the security health of those components, and how? How frequently? In what detail? Depending on the size and complexity of the network this may seem like a daunting task, but at the same time you’ll have to realize that this is not a one-time exercise. You’ll be doing this for the foreseeable future, so it’s important that it is turned into a repeatable process, one that can be reliably executed by people other than you, with consistent results.

Next, epiphany! Nobody but you can supplement the data delivered by appliances and software distributed across your network with actual knowledge about your own network. Out-of-the-box rulesets don’t know about your network and, however bright they may be, neither do security analysts on follow-the-sun teams. Every event and every alert only makes sense in context, and when that context is removed, you’re no longer doing correlation. You’re swatting flies. While I came up with this analogy on the fly, it makes sense: you don’t swat flies with a Christian Louboutin or a Jimmy Choo, you use a flip-flop.

There are plenty of information security flip-flops out there on the internet. Log aggregators (syslog, syslog-ng), full-blown open-source NIDS (Snort, Bro) or HIDS (OSSEC), and even dedicated distributions (e.g. Security Onion) or open source SIEMs (like OSSIM) can go a long way toward helping you understand what is going on in your network. The most amazing part is that, so far, you’ve spent $0 on software licensing and all of your resources are dedicated to YOUR people learning about YOUR network and what is going on there. While cracking one of the toughest nuts in IT security, you’re literally training your staff to be better security analysts and incident responders.
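
As one small, concrete illustration of items 3 and 4 in the list above, here is a rough Python sketch that summarizes a Bro/Zeek conn.log into “which internal hosts talk to which external addresses.” The log path and internal address range are assumptions you would replace with your own environment’s values; it is a starting point for knowing your network, not a finished tool.

```python
# Rough sketch: the conn.log path and the internal prefix are assumptions.
import ipaddress
from collections import defaultdict

CONN_LOG = "/var/log/bro/current/conn.log"     # adjust to your deployment
INTERNAL = ipaddress.ip_network("10.0.0.0/8")  # your internal address space

def external_talkers(path):
    """Map each internal source host to the set of external hosts it contacted."""
    talkers = defaultdict(set)
    fields = []
    with open(path) as log:
        for line in log:
            if line.startswith("#fields"):
                # Bro/Zeek logs name their tab-separated columns in this header line.
                fields = line.rstrip("\n").split("\t")[1:]
                continue
            if line.startswith("#") or not line.strip() or not fields:
                continue
            row = dict(zip(fields, line.rstrip("\n").split("\t")))
            try:
                src = ipaddress.ip_address(row["id.orig_h"])
                dst = ipaddress.ip_address(row["id.resp_h"])
                if src in INTERNAL and dst not in INTERNAL:
                    talkers[str(src)].add(str(dst))
            except (KeyError, ValueError, TypeError):
                continue  # unset fields or an IPv4/IPv6 mismatch; skip for this sketch
    return talkers

if __name__ == "__main__":
    for host, destinations in sorted(external_talkers(CONN_LOG).items()):
        print(f"{host} talks to {len(destinations)} external hosts")
```

Run against a day of connection logs, the output is a short list you can compare against what you believe should be talking to the internet; the surprises are where the investigation starts.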

The first push-back I generally receive when I talk (passionately, I admit) about open source security tooling comes in the form of concern: the software is not controlled, we can’t buy a support contract for it (not always true, by the way!), our staff doesn’t have the training, we are too small/big for these solutions… It’s impossible to settle the closed vs. open source argument in this blog post.

I believe open source is worth looking at to solve this problem; others may disagree. On the point of training staff, I will say that what those tools largely do is what your staff currently does manually, in an ad hoc fashion. They already understand logging and network traffic; learning how the specific tools work, and how those tools can make their job easier, will take only a fraction of the time they spend implementing them. It is my experience that the enthusiasm of people who get to work with tools (commercial or otherwise) that make their actual job easier compensates for any budget you have to set aside for ‘training’. On the point of size, I have personally deployed open source security tools in SMB environments as well as in 1000+ enterprise UNIX farms. It is my strongest belief that, as security engineers, it is not our job to buy products. It is our task to build solutions for the problems at hand, using the tools that best fit the purpose. Commercial or not.

It makes sense that, as your monitoring capability matures, the free tools might not continue to scale, and you’ll be looking to work with the commercial products or service providers I mentioned above. The biggest gain at that moment is that you understand exactly what you need, which parts of the capability you can delegate to a third party and what your expectations are, and which parts of the problem space you can’t solve without dedicated products. From experience, most of the building blocks will be easily reused and integrated with commercial solutions. Many of those commercial solutions have support for the open source data generators (Snort, Bro, OSSEC, p0f, …).

Let’s be realistic: if you’re as serious about information security as I think you are, you don’t want to be a “buyer of boxes” or a cost center. You want to (re)build networks that allow you to defend your most valuable assets against those adversaries that matter and, maybe as important as anything else, you want to stop running behind the facts on fancy high heels.

*For the purposes of this post, SIEM stands for Security Information and Event Management. It is often referred to as SIM, SEM and a bunch of other acronyms, and we’re OK with those too.

INSIGHTS | January 30, 2013

Energy Security: Less Say, More Do

Due to recent attacks on many forms of energy management technology, ranging from supervisory control and data acquisition (SCADA) networks and automation hardware devices to smart meters and grid network management systems, companies in the energy industry are significantly increasing the amount they spend on security. However, I believe these organizations are still spending money in the wrong areas of security. Why? The illusion of security, driven by over-engineered and over-funded policy and control frameworks, and the mindset that energy security must be regulated before making a start, is preventing, not driving, real-world progress.

Sadly, I don’t see organizations in the oil and gas exploration, utility, and consumer energy management sectors taking more visible and proactive approaches to improving the security of their assets in 2013 any more than they did in 2012.

It’s only January, you protest. But let me ask you: on what areas are your security teams going to focus in 2013?

I’ve had the privilege in the past six months of travelling to Asia, the Middle East, Europe and the U.S. to deliver projects, and I have seen a number of consistent shortcomings in security programs in almost every energy-related organization I have dealt with. Specialized security teams within IT departments are commonplace now, which is great. But these teams have been in place for some time, and even though as an industry we spend millions on security products every year, the number of security incidents also increases every year. I’m sure this trend will continue in 2013. It is clear to me (and this is a global issue in energy security) that the great majority of organizations do not know where or how to correctly spend their security budgets.

Information security teams focus heavily on compliance, policies, controls, and the paper perception of what good security looks like, when in fact there is little or no evidence that this is the case. Energy organizations do very little testing to validate the effectiveness of their security controls, which leaves these companies exposed to attacks and wondering what they are doing wrong.

For example, automated malware has been mentioned many times in the press and is a persistent threat, but companies are living under the misapprehension that having endpoint solutions alone will protect them from this threat. Network architectures are still being poorly designed and communication channels are still operating in the clear, leaving critical infrastructure solutions exposed and vulnerable.
I do not mean to detract from technology vendors who are working hard to keep up with all the new malware challenges; let’s face it, we would be lost without many of their solutions. But organizations purchasing these products need to “trust but verify” by requiring vendors and solution integrators to prove that the security solutions they are selling are in fact secure. The energy industry as a whole needs to focus on proving the existence of controls rather than relying on documents and designs that say how a system should be secure. Policies may make you look good, but how many people read them? And if they did read them, would they follow them? How would you know? And could you place your hand on your heart and swear to the CEO, “I’m confident that our critical systems and data cannot be compromised”?

I say, “Less say, more do in 2013.” Energy companies globally need to stop waiting for regulations or for incidents to happen and must do more to secure their systems and supply. We know we have a problem in the industry, and it won’t go away while we wait for more documents that define how we should improve our security defenses. Make a start. The concepts aren’t new, and it’s better to invest money and effort in improved systems than to churn out more policies and paper controls and hope they make you more secure. And it is hope, because without evidence how can you really be sure the controls you design and plan are in place and effective?

Start by making improvements in the following areas, and your overall security posture will also improve (a lot of this is old news, but sadly it is not being done):

Recognize that compliance doesn’t guarantee security. You must validate it.
  • Use ISA99 for SCADA and ISO27001/2/5 for security risk management and controls.
  • Use compliance to drive budget conversations.
  • Don’t get lost in a policy framework. Instead focus on implementing, then validating.
  • Always validate paper security by testing internal and external controls!

Understand what you have and who might want to attack it.
  • Define critical assets and processes.
  • Create a list of who could affect these assets and how.
  • Create a layered security architecture to protect these assets.
  • Do this work in stages. Create value to the business incrementally.
  • Test the effectiveness of your plans!

Do the basics internally, including:
  • Authentication for logins and machine-to-machine communications.
  • Access control to ensure that permissions for new hires, job changers, and departing employees are managed appropriately.
  • Auditing to log significant events for critical systems.
  • Availability by ensuring redundancy and that the organization can recover from unplanned incidents.
  • Integrity by validating critical values and ensuring that accuracy is always upheld.
  • Confidentiality by securing or encrypting sensitive communications.
  • Education to make staff aware of good security behaviors. Take a Health & Safety approach.

Trust but verify when working with your suppliers:
  • Ask vendors to validate their security, not just tell you “it’s secure.”
  • Ask suppliers what their security posture is. Do they align to any security standards? When was the last time they performed a penetration test on client-related systems? Do they use a Security Development Lifecycle for their products?
  • Test their controls or ask them to provide evidence that they do this themselves!

Work with agencies who are there to assist you and make them part of your response strategy, such as:
  • Computer Emergency Readiness Team (CERT)
  • Centre for the Protection of National Infrastructure (CPNI)
  • North American Electric Reliability Corporation (NERC)

Trevor Niblock, Director, ICS and Smart Grid Services