INSIGHTS | September 11, 2012

Malware Doesn’t Care About Your Disclosure Policy, But You Better Have One Anyway

All over the world, things are changing in ICS security—we are now in the spotlight and the only way forward is, well, forward. Consequently, I’m doing more reading than ever to keep up with technical issues, global incidents, and frameworks and policies that will ensure the security of our future.

From a security researcher’s perspective, one exciting development is that .gov is starting to understand the need for disclosure in some cases. Experience has shown that giving companies lead time to implement fixes often means being stonewalled for months or years. Yes, it sometimes takes years to fix specific ICS security issues, but that is no excuse for failing to keep the researcher and ICS-CERT informed with continually updated timelines. This is well reflected in the document we are about to review.

The Common Industrial Control System Vulnerability Disclosure Framework was published a bit before BlackHat/Defcon/BSidesLV, and I’ve just had some time to read it. The ICSJWG put this together, and I would say that overall it is very informative.

For example, let’s start with the final (and most blogged-about) quote of the Executive Summary:

“Inconsistent disclosure policies have also contributed to a public perception of disorganization within the ICS security community.”

I can’t disagree with that; failure to have a policy has already contributed to many late nights for engineers.

On Page 7, we see a commendable clarification regarding vulnerabilities found during customer audits:

“Under standard audit contracts, the results of the audit are confidential to the organization customer and any party that they choose to share those results with. This allows for information to be passed back to the vendor without violating the terms of the audit. The standard contract will also prevent the auditing company from being able to disclose any findings publically. It is important to note however, that it is not required for a customer to pass audit results on to a vendor unless explicitly noted in their contract or software license agreement.”

Is there a vendor who explicitly asks customers to report vulnerabilities in their license agreements? Why/why not?

On Page 9, Section 5, we find a dangerous claim, one that I would like to challenge as firmly and fairly as I can:

“Not disclosing an issue is not discussed; however it remains an option and may be appropriate in some scenarios.”

Very well. I’m a reasonable guy, one who’s even known to support responsible disclosure despite the fact that it puts handcuffs only on the good guys. Being such a reasonable guy, I’m going to pretend I can accept the idea that a company selling industrial systems or devices might have a genuine reason not to disclose a security flaw to its customers. In the spirit of such a debate, I invite any vendor to comment on this blog post with a hypothetical scenario in which this is justified.

Hypothetically speaking: When is it appropriate to withhold vulnerabilities and not disclose them to your ICS customers?

While we’re at it, we also see the age-old “disclosure always increases risk” trope again, here:

“Public Disclosure does increase risk to customers, as any information disclosed about the vulnerability is available to malicious individuals as well as to legitimate customers. If a vulnerability is disclosed publically prior to a fix being made available, or prior to an available fix being deployed to all customers, malicious parties may be able to use that information to impact customer operations.”

Since I was bold enough to challenge all vendors to answer my question about when it is appropriate to remain silent, it’s only fair to tackle a thorny issue from the document myself. Imagine you have a serious security flaw without a fix. The argument goes that you shouldn’t disclose it publicly, since that would increase the risk. But what if the exploit were tightly constrained and detectable in 100% of cases? In that case, public disclosure gives your customers the best chance to DETECT exploitation rather than waiting blind for the fix. Wouldn’t that DECREASE risk? Unfortunately, until you can measure both risk and the occurrence of 0-day vulnerabilities in the wild RELIABLY, this is all just conjecture.
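To show why detection can flip the sign of the risk argument, here is a deliberately toy model, a minimal sketch in Python. Every parameter below (the patch lead time, the daily exploitation probability, the cost figures, even the threefold jump in attacker attention after disclosure) is an invented assumption, which is exactly the point: we argue about disclosure precisely because nobody can measure these numbers reliably.

```python
# Toy model: expected loss under "silent wait for patch" vs.
# "public disclosure with reliable detection". All parameters are
# invented for illustration only.

DAYS_TO_FIX = 180            # assumed patch lead time
P_EXPLOIT_PER_DAY = 0.002    # assumed daily chance of in-the-wild exploitation
IMPACT_UNDETECTED = 100.0    # arbitrary cost units for a missed intrusion
IMPACT_DETECTED = 10.0       # cost when the intrusion is caught and contained

def expected_loss(days, p_exploit_per_day, detection_rate):
    """Expected loss over the exposure window, counting detected
    incidents as far cheaper than undetected ones."""
    expected_incidents = days * p_exploit_per_day
    per_incident = (detection_rate * IMPACT_DETECTED
                    + (1 - detection_rate) * IMPACT_UNDETECTED)
    return expected_incidents * per_incident

# Silent: nobody outside the vendor is watching for this exploit.
silent = expected_loss(DAYS_TO_FIX, P_EXPLOIT_PER_DAY, detection_rate=0.0)

# Disclosed: attackers learn the details too (assume 3x exploitation),
# but the hypothetical "detectable in 100% of cases" now applies.
disclosed = expected_loss(DAYS_TO_FIX, P_EXPLOIT_PER_DAY * 3, detection_rate=1.0)

print(f"silent wait: expected loss {silent:.1f}")   # 36.0
print(f"disclosed:   expected loss {disclosed:.1f}") # 10.8
```

Under these made-up numbers, disclosure wins because detection cuts the per-incident cost more than the extra attacker attention raises the incident count; flip the assumptions and silence wins. The model settles nothing, it just makes the conjecture explicit.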

A common misconception in vulnerability management is that only the vendor can protect the customer by fixing an issue, and that public disclosure always increases risk. Public disclosure widens the circle of critical and innovative eyes, and a third party may be able to mitigate where the vendor cannot, for example by using one of its own proprietary technologies.

Say, for example, that a couple of ICS vendors partnered with an intrusion detection and prevention company that is a known defender of industrial systems. They could then focus their early vulnerability analysis efforts on reliably detecting and mitigating exploits on the wire before those exploits are even fixed, along the lines of the sketch below. This would shrink the window after day zero during which the exploit cannot be detected, and to my thinking that reduces the risk. I’m disappointed that, in the post-Stuxnet era, we continue to have ICS disclosure debates, because the malware authors ultimately don’t even care. I can’t help but notice that recent ICS malware authors weren’t consulted about their “disclosure policies” and didn’t volunteer them either.
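To make “detecting exploits on the wire” concrete, here is a minimal sketch of the kind of signature such a partner could ship before any patch exists. The vulnerable device, the vendor-reserved Modbus function code 0x5A, and the length bound are all hypothetical inventions for this sketch; only the Modbus/TCP framing (a 7-byte MBAP header followed by a function code) is real.

```python
import struct

# Hypothetical wire signature for an unpatched ICS flaw: an imagined
# device crashes on Modbus/TCP requests that use vendor-reserved
# function code 0x5A with an oversized payload. The point: a third
# party can ship detection from vulnerability details alone.

VULN_FUNCTION_CODE = 0x5A   # assumed vendor-specific code, not from any real advisory
MAX_SAFE_LENGTH = 64        # assumed safe payload bound for this sketch

def is_suspected_exploit(frame: bytes) -> bool:
    """Return True if a Modbus/TCP frame matches the hypothetical signature."""
    if len(frame) < 8:
        return False
    # MBAP header: transaction id, protocol id, length, unit id
    _, proto_id, length, _ = struct.unpack(">HHHB", frame[:7])
    function_code = frame[7]
    if proto_id != 0:       # Modbus/TCP protocol id is always 0
        return False
    return function_code == VULN_FUNCTION_CODE and length > MAX_SAFE_LENGTH

# Example: a crafted oversized frame that the monitor would flag.
crafted = (struct.pack(">HHHB", 1, 0, 200, 1)
           + bytes([VULN_FUNCTION_CODE]) + b"\x00" * 198)
print(is_suspected_exploit(crafted))   # True
```

A production monitor would tap mirrored traffic on TCP/502 and run each reassembled frame through a check like this; the design point is that detection requires only the vulnerability details, not the vendor’s fix.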

As much as I love a lively debate, I wanted to commend the ICSJWG for having the patience to explain disclosure when the rest of us get tired.