INSIGHTS | October 11, 2012

SexyDefense Gets Real

As some of you know by now, the recent focus of my research has been defense. After years of dealing almost exclusively with offensive research, I realized that we have been doing ourselves an injustice as professionals. Yes, we eventually get to help organizations protect themselves (on the premise that the best way to learn defense is to study offensive techniques), but when you examine how organizations actually practice defense, you can't shake the feeling that something is missing.
For far too long the practice (and art?) of defense has been entrusted to bureaucrats and reduced to a technical element that is treated as a burden on the organization. We can see it in the way companies position defensive roles: “firewall admin,” “IT security manager,” “incident handler,” and even the famous “CISO.” CISOs have been given less and less responsibility over time, basically watered down to dealing with the network and software elements of the organization’s security. No process, no physical, no human/social. These are all handled by different roles in the company (audit, physical security, and HR, respectively).
This has led to the creation of the marketing term “APT”: Advanced Persistent Threat. The main reason why non-sophisticated attackers are able to deploy an APT is the fact that organizations are focusing on dealing with extremely narrow threat vectors; any threat that encompasses multiple attack vectors that affect different departments in an organization automatically escalates into an APT since it is “hard” to deal with such threats. I call bullshit on that.
As an industry, we have not really been supportive of the defensive front. We have been pushing out products that deal mainly with past threats and are focused on post-mortem detection of attacks. Anti-virus systems, firewalls, IDS, IPS, and DLP – these are all products that are really effective against the attacks of yesteryear. We ignore a large chunk of the defense spectrum nowadays, and attackers are happily using this against us, the defenders.
When we started SexyDefense, the main goal was to open the eyes of defensive practitioners, from the hands-on people to executive management, because this syndrome needs to be fixed throughout the ranks. I already mentioned that the way we deal with security in terms of job titles is wrong. The same is true for the way we approach it on day one. We make sure we have all the products that industry best practices tell us to have (from the same vendors that have been pushing less-than-effective products for years), and then we wait for the alert telling us we have already been compromised for days or weeks.
What we should be doing first is understanding what we are protecting. How much is it worth to the organization? What kinds of processes, people, and technologies “touch” those assets, and how do they affect them? What kinds of controls are in place to protect those assets? And ultimately, what are the vulnerabilities in the processes, people, and technologies related to said assets?
These are tough questions – especially if you are dealing with an “old school” practice of security in a large organization. Now try asking the harder question: who is your threat? No, don’t say hackers! Ask the business line owners, the business development people, sales, marketing, and finance. These are the people who probably know best what the threats to the business are, and who is out there to get it. Now align that information with the asset-related information, and you get a more complete picture of what you are protecting, and from whom. In addition, you can already see which controls are more or less effective against such threats, since it’s relatively easy to figure out each adversary’s capabilities, intent, and accessibility to your assets.
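To make the asset/threat alignment above a bit more concrete, here is a toy sketch of how the exercise might be scored. Every asset, adversary, score, and weight below is a hypothetical illustration, not a prescribed methodology:

```python
# Toy model: align assets with threat communities to prioritize defense.
# All names, scores, and weights here are hypothetical illustrations.

assets = {
    "customer-db": {"value": 9},   # business value to the organization, 0-10
    "public-site": {"value": 4},
}

# Adversary profiles: capability, intent, and access to each asset (0-10).
adversaries = {
    "competitor": {"capability": 6, "intent": 8,
                   "access": {"customer-db": 3, "public-site": 7}},
    "opportunist": {"capability": 3, "intent": 4,
                    "access": {"customer-db": 1, "public-site": 8}},
}

def exposure(asset: str, adversary: str) -> float:
    """Score how exposed an asset is to an adversary (higher = worse)."""
    a = assets[asset]
    t = adversaries[adversary]
    # Simple product model: value at risk, scaled by who can reach it,
    # wants to reach it, and is capable of exploiting it.
    return a["value"] * t["capability"] * t["intent"] * t["access"][asset] / 1000

# Rank every asset/adversary pair so the scarce defensive budget goes
# where capability, intent, and access actually intersect.
ranked = sorted(
    ((exposure(a, t), a, t) for a in assets for t in adversaries),
    reverse=True,
)
for score, asset, adversary in ranked:
    print(f"{asset:12s} vs {adversary:12s} -> {score:.2f}")
```

The exact model matters far less than the exercise: once capabilities, intent, and access are written down per adversary, it becomes much clearer which controls matter for which asset, and from whom you are actually defending it.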
Now, get to work! But don’t open that firewall console or that IPS dashboard. “Work” means gathering intelligence on your threat communities, keeping track of organizational information and changes, and owning your home-field advantage. You control the information and resources used by the organization. Use them to your advantage to thwart threats, to detect intelligence gathering against your organization, to set traps for attackers, and yes, even to go the whole nine yards and deal with counterintelligence. Whatever works within the confines of the law and ethics.
If this sounds logical to you, I invite you to read my whitepaper covering this approach [sexydefense.com] and to participate in one of the SexyDefense talks at a conference near you (or watch the one given at DerbyCon online: [http://www.youtube.com/watch?v=djsdZOY1kLM]).
If you have not yet run away, think about contributing to the community effort to build a framework for this, much like we did for penetration testing with PTES. Call it SDES for now: Strategic Defense Execution Standard. A lot of you have already expressed interest in it, and I’m really excited to see the community coming up with great ideas and initiatives after I have been preaching this notion for only a fairly short time.
Who knows what this will turn into?
INSIGHTS | September 11, 2012

Malware Doesn’t Care About Your Disclosure Policy, But You Better Have One Anyway

All over the world, things are changing in ICS security—we are now in the spotlight and the only way forward is, well, forward. Consequently, I’m doing more reading than ever to keep up with technical issues, global incidents, and frameworks and policies that will ensure the security of our future.

From a security researcher’s perspective, one exciting development is that .gov is starting to understand the need for disclosure in some cases. They have found that when they give companies lead time to implement fixes, they often get stonewalled for months or years. Yes, it sometimes takes years to fix specific ICS security issues, but that is no excuse for failing to keep the researcher and ICS-CERT updated with continually revised timelines. This is well reflected in the document we are about to review.

The Common Industrial Control System Vulnerability Disclosure Framework was published a bit before BlackHat/Defcon/BSidesLV, and I’ve just had some time to read it. The ICSJWG put this together, and I would say that overall it is very informative.

For example, let’s start with the final (and most blogged about) quote of the Executive Summary:

“Inconsistent disclosure policies have also contributed to a public perception of disorganization within the ICS security community.”

I can’t disagree with that—failure to have a policy already has contributed to many late nights for engineers.

On Page 7, we see a clarification of vulnerabilities found during customer audits that is commendable:

“Under standard audit contracts, the results of the audit are confidential to the organization customer and any party that they choose to share those results with. This allows for information to be passed back to the vendor without violating the terms of the audit. The standard contract will also prevent the auditing company from being able to disclose any findings publically. It is important to note however, that it is not required for a customer to pass audit results on to a vendor unless explicitly noted in their contract or software license agreement.”

Is there a vendor who explicitly asks customers to report vulnerabilities in their license agreements? Why/why not?

On Page 9, Section 5 we find a dangerous claim, one that I would like to challenge as firmly and fairly as I can:

“Not disclosing an issue is not discussed; however it remains an option and may be appropriate in some scenarios.”

Very well. I’m a reasonable guy who is even known to support responsible disclosure, despite the fact that it puts handcuffs only on the good guys. Being such a reasonable guy, I’m going to pretend I can accept the idea that a company selling industrial systems or devices might have a genuine reason not to disclose a security flaw to its customers. In the spirit of such a debate, I invite any vendor to comment on this blog post with a hypothetical scenario in which this is justified.

Hypothetically speaking: When is it appropriate to withhold vulnerabilities and not disclose them to your ICS customers?

While we’re at it, we also see the age-old “disclosure always increases risk” trope again, here:

“Public Disclosure does increase risk to customers, as any information disclosed about the vulnerability is available to malicious individuals as well as to legitimate customers. If a vulnerability is disclosed publically prior to a fix being made available, or prior to an available fix being deployed to all customers, malicious parties may be able to use that information to impact customer operations.”

Since I was bold enough to challenge all vendors to answer my question about when it is appropriate to remain silent, it’s only fair to tackle a thorny issue from the document myself. Imagine you have a serious security flaw without a fix. The argument goes that you shouldn’t disclose it publicly since that would increase the risk. However, what if the exploit were tightly constrained and detectable in 100% of cases? It seems clear that in this case, public disclosure gives the best chance for your customers to DETECT exploitation as opposed to waiting for the fix. Wouldn’t that DECREASE risk? Unfortunately, until you can measure both risk and the occurrence of 0-day vulnerabilities in the wild RELIABLY, this is all just conjecture.
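To see why detectability can flip the usual risk calculus, here is a toy expected-loss comparison of the two scenarios above. Every probability and cost below is a made-up illustration, not measured data; as the argument itself concedes, we cannot yet measure these quantities reliably:

```python
# Toy expected-loss model for the disclosure argument above.
# Every probability and cost is a hypothetical illustration.

def expected_loss(p_exploit: float, p_detect: float, impact: float) -> float:
    """Expected loss: chance of exploitation, discounted by detection."""
    return p_exploit * (1 - p_detect) * impact

IMPACT = 1_000_000  # assumed cost of a successful, undetected compromise

# Silence: fewer attackers know about the bug, but nobody can detect its use.
silent = expected_loss(p_exploit=0.05, p_detect=0.0, impact=IMPACT)

# Disclosure: more attackers know, but the exploit is tightly constrained
# and (per the hypothetical in the text) detectable in 100% of cases.
disclosed = expected_loss(p_exploit=0.30, p_detect=1.0, impact=IMPACT)

print(f"silent:    {silent:,.0f}")
print(f"disclosed: {disclosed:,.0f}")
```

Under these made-up numbers, silence carries the higher expected loss precisely because nothing can be detected; lower the detection rate and the comparison tightens. The point is that “disclosure always increases risk” is an empirical claim about these quantities, not a law.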

There exists a common misconception in vulnerability management that only the vendor can protect the customer by fixing an issue, and that public disclosure always increases risk. With public disclosure, you widen the circle of critical and innovative eyes, and a third party might be able to mitigate where the vendor cannot—for example, by using one of their own proprietary technologies.

Say, for example, that a couple of ICS vendors had partnered with an intrusion detection and prevention system company that is a known defender of industrial systems. They could then focus their early vulnerability analysis efforts on reliably detecting and mitigating exploits on the wire before they’re even fixed. This would reduce the number of days after day zero during which the exploit can’t be detected, and to my thinking, that reduces the risk. I’m disappointed that—in the post-Stuxnet era—we continue to have ICS disclosure debates, because the malware authors ultimately don’t even care. I can’t help but notice that recent ICS malware authors weren’t consulted about their “disclosure policies” and didn’t choose to offer them either.

As much as I love a lively debate, I wanted to commend the ICSJWG for having the patience to explain disclosure when the rest of us get tired.

INSIGHTS | June 28, 2012

Thoughts on FIRST Conference 2012

I recently had the opportunity to attend the FIRST Conference in Malta and meet Computer Emergency Response Teams from around the world. Some of these teams and I have been working together to reduce the internet exposure of Industrial Control Systems, and I met new teams who are interested in the data I share. For those of you who do not work with CERTs, FIRST is the glue that holds together the international collaborative efforts of these teams—it is the organization that both makes trusted introductions and vets new teams or researchers (such as myself).

It was quite an honor to present a talk to this audience of 500 people from strong technical teams around the world. However, the purpose of this post is not my presentation, but rather to focus on all of the other great content that can be found in such forums. While it is impossible to mention all the presentations I saw in one blog post, I’d like to highlight a few.
A session from ENISA and RAND focused on the technical and legal barriers to international collaboration between national CERTs in Europe. I’m interested in this because, in the process of sharing my research with various CERTs, I have come to understand that they aren’t equal: they’re interested in different types of information, and they operate within different legal frameworks. For example, in some European countries an IP address is considered private information and will not be accepted in incident reports from other teams. Dr. Silvia Portesi and Neil Robinson covered a wealth of this material in their presentation and report, which can be found at the following location:
In the United Kingdom, this problem has been analyzed by Andrew Cormack, Chief Regulatory Advisor at Janet. If I recall correctly, our privacy model is far more usable in this respect, and Andrew explained it to me like this:
If an organization cannot handle private data to help protect privacy (which is part of its mission), then we are inhibiting the mission of the organization with our interpretation of the law.
This is relevant to any security researcher who works within incident response frameworks in Europe and who takes a global view of security problems.
Unfortunately, by attending this talk—which was directly relevant to my work—I had to miss a talk by Eldar Lillevik and Marie Moe of the NorCERT team. I had wanted to meet with them regarding some data I shared months ago while working in Norway. Luckily, I bumped into them later and they kindly shared the details I had missed; they also spent some of their valuable time helping me improve my own reporting capabilities for CERTs and correcting some of my misunderstandings. They are incredibly knowledgeable people, and I thank them for both their time and their patience with my questions.
Of course, I also met with the usual suspects in ICS/Smart Grid/SCADA security: ICS-CERT and Siemens. ICS-CERT was there to present on what has been an extraordinary year in ICS incident response. Of note, Siemens operates the only corporate incident response team in the ICS arena that’s devoted to their own products. We collectively shared information and renewed commitments to progress the ICS agenda in Incident Response by continuing international collaboration and research. I understand that GE-CIRT was there too, and apparently they presented on models of Incident Response.
Google Incident Response gave some excellent presentations on detecting and preventing data exfiltration, and network defense. This team impressed me greatly: they presented as technically-savvy, capable defenders who are actively pursuing new forensic techniques. They demonstrated clearly their operational maturity: no longer playing with “models,” they are committed to holistic operational security and aggressive defense.
Austrian CERT delivered a very good presentation on handling Critical Infrastructure Information Protection that focused on the Incident Response approach to critical infrastructure. This is a difficult area to work in because standard forensic approaches in some countries—such as seizing a server used in a crime—aren’t appropriate in control system environments. We met later to talk over dinner and I look forward to working with them again.
Finally, I performed a simple but important function of my own work: meeting people face-to-face and verifying their identities. This includes mutually signing crypto-keys, which allows us to find and identify other trusted researchers in case of an emergency. Now that SCADA security is a global problem, I believe it’s incredibly important (and useful) to have contacts around the world with whom IOActive already shares a secure channel.
INSIGHTS | April 25, 2012

Thoughts on AppSecDC 2012

The first week of April brought another edition of AppSecDC to Washington, D.C., but this year people from two different worlds came to the same conference: web security and Industrial Control Systems security. Of course, at the device level this convergence happened a long time ago, if we take into account that almost every modern PLC includes at least a web server, among other things.

 
I was presenting Real-world Backdoors in Industrial Devices on the Critical Infrastructure track, which included really exciting topics from well-known researchers including:
• Pentesting Smart Grid Web Apps from Justin Searle
• Vulnerabilities in Industrial Control Systems from ICS-CERT
• AMI Security from John Sawyer and Don Weber
• Project Basecamp: News from Camp 4 from Reid Wightman
• Denial of Service from Eireann Leverett
• Securing Critical Infrastructure from Francis Cianfrocca
I found it remarkable that most of the talks were basically about offensive security. I think that’s because ICS researchers are still at the point of squeezing out all the potential attack vectors, an approach that will eventually provide the intelligence necessary to protect critical infrastructure in the best way possible. We would do well to remember that it has taken many years for the IT sector to reach a point where some defensive technologies are solid enough to stop complex attacks.
 
The best thing about the CI track was that it introduced different perspectives, and the technical talks highlighted two issues that should be addressed ASAP: backdoors/unauthenticated protocols and exposure. Amazingly, a large number of industrial devices still rely on unauthenticated protocols and backdoors to implement their functionality. PLCs, smart meters, HVAC… during the talks we saw real-world examples that would let attackers control facilities, even remotely!
 
The talk from ICS-CERT was pretty interesting since it brought another point of view to the track: what happens on the other side? What happens when vendors realize their products contain vulnerabilities, and how are real incidents handled? Yes, there have been real attacks against industrial facilities. The scary thing is that, according to the data presented by ICS-CERT, these attacks are not isolated but represent a trend.
 
The number of published SCADA vulnerabilities has dramatically increased, and societies (as well as the security industry and researchers) are slowly becoming more aware of and concerned about the importance of securing critical infrastructures. Even so, there are still a lot of things waiting to be discovered, so we should expect exciting findings in this area.
 
In summary, security conferences are great places to learn about and meet brilliant people, so if you have the chance to attend some, don’t hesitate! It was a pleasure to attend and speak at AppSecDC, so I would like to thank OWASP and IOActive for giving me this opportunity.
 
See you at the next one!