INSIGHTS | May 11, 2015

Vulnerability disclosure: the good and the ugly

I can’t believe I continue to write about disclosure problems. More than a decade ago, I started disclosing vulnerabilities to vendors and working with them to develop fixes. Since then, I have reported hundreds of vulnerabilities. I often think I have seen it all, yet I continue to be surprised again and again. I wrote a related blog post a year and a half ago (Vulnerability bureaucracy: Unchanged after 12 years), and I will keep writing about disclosure problems until doing so is no longer needed.

Everything is becoming digital. Vendors are producing software for the first time or with very little experience, and many have no security knowledge. As a result, insecure software is being deployed worldwide. The Internet of Things (IoT), industrial devices and control systems (SCADA/ICS), Smart City technology, automobile systems, and so on are insecure and getting worse instead of better.

Besides lacking security knowledge, many vendors do not know how to deal with vulnerability reports. They don’t know what to do when an individual researcher or company privately discloses a vulnerability to them, how to properly communicate the problem, or how to fix it. Many vendors haven’t planned for security patches; basically, they never considered the possibility of a latent security flaw. This creates many of the problems the research community commonly faces.

When IOActive recently disclosed vulnerabilities in CyberLock products, we ran into problems, including threats from CyberLock’s lawyers citing the Digital Millennium Copyright Act (DMCA). CyberLock’s response is a good example of a vendor that does not know how to properly deal with vulnerability reports.

On the other hand, we had a completely different experience when we recently reported vulnerabilities to Lenovo. Lenovo’s response was professional and collaborative, and the company even publicly acknowledged our collaboration:

“Lenovo’s development and security teams worked directly with IOActive regarding their System Update vulnerability findings, and we value their expertise in identifying and responsibly reporting them.”

Source: http://www.absolutegeeks.com/2015/05/06/round-2-lenovo-is-back-in-the-news-for-a-new-security-risk-in-their-computers (no longer active)

IOActive approached both cases in the same way but received two completely different reactions, with two completely different results.

We always try to contact the affected vendor through a variety of channels and offer our collaboration to ensure a fix is in place before we disclose our research to the public. We invest a lot of time and resources in helping vendors understand the vulnerabilities we find. We have calls with developers and managers, test their fixes, and so on, all for free and without expecting anything in return. We neither propose nor discuss business opportunities; our only motive is to see that the vulnerabilities get fixed. We have a great track record: we’ve reported dozens of vulnerabilities and collaborated with many vendors and CERTs.

When a vendor is unresponsive, we feel that the best solution is usually to disclose the vulnerability to the public. We do this only as a last resort, since no vendor patch or workaround will be available in such a case, but we do not want to be complicit in hiding a flaw, and letting people know can force the vendor to address the vulnerability.

Dealing with vulnerability reports shouldn’t be this difficult. Based on my experience, here is some advice to help companies avoid vulnerability disclosure problems and improve the security of their products:

  • Clearly display a contact email address for vulnerability reports on the company/product website
  • Continuously monitor that email address and promptly acknowledge every vulnerability report you receive
  • Agree on response procedures, including regular timeframes for status updates after a report is received
  • Always be collaborative with the reporting researcher or company, even if you don’t like the reporter
  • Always thank the researcher or company for the report, even if you don’t like the reporter
  • Ask the reporter for help if needed, and work together to find the best solutions
  • Agree on a date for releasing a fix
  • Agree on a date for publicly disclosing the vulnerability
  • Release the fix on time and alert customers

That’s it! Not so difficult. Any company that produces software should follow these simple guidelines at a minimum.

It comes down to this: If you produce software, consider the possibility that your product will have security vulnerabilities and plan accordingly. You will do yourself a big favor, save money, and possibly save your company’s reputation too.

INSIGHTS | January 21, 2014

Scientifically Protecting Data

This is not “yet another Snapchat Pwnage blog post,” nor do I want to focus on the advantages and disadvantages of vulnerability disclosure. A vulnerability has been made public, and somebody has abused it by publishing 4.6 million records. Tough luck! Perhaps the most interesting article in the whole Snapchat debacle was the one published at www.diyevil.com [1], which explains how data correlation can yield interesting results in targeted attacks. The question then becomes, “How can I protect against this?”

Stored personal data is always vulnerable to attackers who can trace it back to its original owner. Because skilled attackers can sometimes gain access to metadata as well, there is very little you can do to protect your data completely, aside from not storing it at all. Anonymity and privacy are not new concepts; industries such as healthcare have used them for decades, if not centuries, and protecting patient data remains one of that industry’s most challenging problems. Where does the balance tip between protecting privacy (not disclosing that person X is a diabetic) and protecting health (giving EMTs information about allergies and existing conditions)? It’s no surprise that those who research data anonymity and privacy often use healthcare information for their test cases. In this post, I want to focus on two key principles relating to this.

k-Anonymity [2] 
In 2000, Latanya Sweeney used US Census data to show that 87% of US citizens are uniquely identifiable by their birth date, gender, and ZIP code [3]. That isn’t surprising from a mathematical point of view: there are approximately 310 million Americans and roughly 2 billion possible combinations of the {birth date, gender, ZIP code} tuple, so with far more possible tuples than people, most individuals end up with a unique combination. You can easily find out how unique you really are through an online application that uses the latest US Census data [4]. Although it is not a good idea to store unique identifiers such as names, usernames, or social security numbers, avoiding them entirely is not at all practical. Assuming that data storage is a requirement, k-Anonymity comes into play. By using data suppression, where data is replaced by an *, and data generalization, where, for example, a specific age is replaced by an age range, companies can anonymize a dataset to a level where each row is identical, with respect to the identifying fields, to at least k-1 other rows in the dataset. Who would have thought an anonymity level could actually be mathematically proven?
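
To make this concrete, here is a minimal Python sketch (the records, field choices, and thresholds are entirely hypothetical) of how generalization and suppression raise a dataset’s k, where k is the size of the smallest group of rows sharing the same quasi-identifiers:

```python
from collections import Counter

# Hypothetical records: (age, gender, zip_code, condition)
records = [
    (34, "F", "60623", "diabetes"),
    (36, "F", "60621", "asthma"),
    (52, "M", "10013", "diabetes"),
    (57, "M", "10011", "flu"),
]

def generalize(record):
    """Generalize quasi-identifiers: bucket age into decades and
    suppress the last two digits of the ZIP code with '*'."""
    age, gender, zip_code, condition = record
    decade = age // 10 * 10
    return (f"{decade}-{decade + 9}", gender, zip_code[:3] + "**", condition)

def k_of(dataset, quasi_ids=(0, 1, 2)):
    """k = size of the smallest equivalence class over the quasi-identifiers."""
    classes = Counter(tuple(row[i] for i in quasi_ids) for row in dataset)
    return min(classes.values())

anonymized = [generalize(r) for r in records]
print(k_of(records))     # 1: every raw row is unique
print(k_of(anonymized))  # 2: each row now matches at least one other
```

Generalizing further (wider age ranges, shorter ZIP prefixes) increases k at the cost of data utility; that trade-off is what a minimum-k requirement, discussed below, is meant to govern.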

k-Anonymity has known weaknesses. Imagine that you know that the data of your Person of Interest (POI) is among four possible records in an anonymized dataset. If these four records share a common trait, such as “disease = diabetes,” you know that your POI suffers from this disease without knowing which specific record is theirs. With sufficient metadata about the POI, another concept comes into play. Coincidentally, this is also where we find a possible solution for preventing correlation attacks against breached databases.

l-Diversity [5]
One thing companies cannot control is how much knowledge about a POI an adversary has. This does not, however, divorce us from our responsibility to protect user data. This is where l-Diversity comes into play. Rather than focusing on the fields attackers can use to identify a person, this concept focuses on the sensitive information in the dataset: a dataset is l-diverse if every group of rows sharing the same quasi-identifiers contains at least l distinct values for the sensitive attribute. By applying the l-Diversity principle to a dataset, companies can make it notably more expensive for attackers to correlate information, because more independent data points are required.
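
Continuing the sketch above (again, a hypothetical illustration rather than a production implementation), measuring l reuses the same equivalence classes. An l of 1 is exactly the homogeneity weakness described earlier: every record in a class shares the same sensitive value.

```python
from collections import defaultdict

def l_of(dataset, quasi_ids=(0, 1, 2), sensitive=3):
    """l = number of distinct sensitive values in the least-diverse
    equivalence class (the 'distinct l-diversity' measure)."""
    classes = defaultdict(set)
    for row in dataset:
        key = tuple(row[i] for i in quasi_ids)
        classes[key].add(row[sensitive])
    return min(len(values) for values in classes.values())

# Run together with the previous sketch: the (30-39, F, 606**) class holds
# {diabetes, asthma} and the (50-59, M, 100**) class holds {diabetes, flu},
# so the dataset is 2-diverse as well as 2-anonymous. If both rows of a
# class carried "diabetes", l would drop to 1 and the homogeneity attack
# would succeed.
print(l_of(anonymized))  # 2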

Solving Problems 
All of this sounds very academic, and the question remains whether we can apply it in real-life scenarios to better protect user data. In my opinion, we definitely can.

Social application developers should become familiar with the principles of k-Anonymity and l-Diversity. It is also a good idea to define KPIs that can be measured against these principles: if personal data is involved, organizations should agree on minimum acceptable values for k and l.

More and more applications allow a user’s email address to double as the associated username, which directly impacts the dataset’s l-Diversity score. Organizations should let users select their own usernames and should also support auto-generated usernames. Both tactics have drawbacks, but from a security point of view they make sense.

Users should have some control as well. This becomes clear when analyzing the common data points that every application requires:

  • Email address:
    • Do not use your corporate email address for online services unless it is absolutely necessary
    • If you have multiple email addresses, randomize which address you use for each registration
    • If your email provider allows you to append random strings to your email address, such as name+random@gmail.com, use this randomization, especially if your email address is also your username for the service (see the sketch after this list)
  • Username:
    • If you can select your username, make it unique
    • If your email address is also your username, see my previous comments on randomization
  • Password:
    • Select a unique password for each service
    • Where a phone number is requested for 2FA or other purposes, remember that it is yet another data point that can be correlated
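
As a minimal illustration of the plus-addressing tip (Gmail, for example, delivers name+anything@gmail.com to name@gmail.com), here is a hypothetical Python helper that generates a unique registration address per service:

```python
import secrets

def registration_email(local: str, domain: str = "gmail.com") -> str:
    """Return a plus-addressed email with a random tag, so that each
    service sees (and can leak) a different, unique address."""
    tag = secrets.token_hex(4)  # 8 hex characters, e.g. '9f3a1c2b'
    return f"{local}+{tag}@{domain}"

print(registration_email("name"))  # e.g. name+9f3a1c2b@gmail.com
```

A side benefit: if a plus-addressed alias starts receiving spam or shows up in a breach dump, you know exactly which service leaked it. Be aware that some registration forms reject the + character, which is itself a telling sign.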

By understanding the concepts of k-Anonymity and l-Diversity, we now know that this statement in the www.diyevil.com article is incorrect:

“While the techniques described above depend on a bit of luck and may be of limited usefulness, they are yet another tool in the pen tester’s toolset for open source intelligence gathering and user attack vectors.”

The success of the techniques discussed in this post depends not on luck but on science and math. And where science and math are in play, solutions can be devised. I can only hope that the droves of “data scientists” currently being recruited also understand the principles I have described. I also hope that we will eventually evolve into a world where not only big data matters, but where anonymity and privacy are no longer empty words.

[1] http://www.diyevil.com/using-snapchat-to-compromise-users/
[2] /wp-content/uploads/2014/01/K-Anonymity.pdf
[3] /wp-content/uploads/2014/01/paper1.pdf
[4] http://aboutmyinfo.org/index.html
[5] /wp-content/uploads/2014/01/ldiversityTKDDdraft.pdf

INSIGHTS | October 21, 2013

NCSAM – Eireann Leverett on why magic is crucial

Late last week I had the pleasure of interviewing IOActive Labs CTO Cesar Cerrudo on how he got into IT security. Today I am fortunate enough to interview Eireann Leverett, a senior researcher at IOActive, about how he got into this field and how magic played a part.

IOActive: How did you get into security?

Eireann: Actually, I was very slow to get security as an official job title; that only happened in the last few years. However, I always knew that was how my mind was bent.

For example, everything I know about software security I learned from card tricks at 14. I know it seems ridiculous, but it’s true. Predicting session IDs from bad PRNGs is just like shuffle-tracking and card counting. Losing a card in the deck and finding it again is like controlling a pointer. If you understand the difference between a riffle shuffle and an overhand shuffle, you understand list manipulation. Privilege escalation is not unlike using peeks and forces in mentalism to corrupt assumptions about state. Cards led me to light maths, light crypto, and zero-knowledge proofs.

From there I studied formally in Artificial Intelligence and Software Engineering in Edinburgh. The latter took me into 5+ years of Quality Assurance and Automated Testing, which I still believe is a very important breeding ground for security professionals. After that I did my Master’s at Cambridge and accepted a great offer from IOActive mixing research and penetration testing: enough practicality that I’m not irrelevant to real-world applications, and enough theory and time to look out to the horizon.

IOActive: What do you find most exciting about security?

Eireann: The diversity of subjects. I will never know it all, and I love that it continually evolves. It is exciting to pit yourself against the designs of others, and to work against malicious and deceptive people.

There are not many jobs in life where you get to pretend to be a bad guy. Deceptionists are not usually well regarded by society; in this industry, however, having that mindset is very much rewarded.

There’s a quote from Houdini I once shared with our President and Founder, and we smile about it often:

“Magic is the right way to do wrong.”

That’s what being an IOActive pirate is: the right way to do wrong. We make invisible badness visible and explain it to those who need to understand the business and the process instead of worrying about the details of the technology.

IOActive: What do you like to research, and why?

Eireann: Anything with a physical consequence. I don’t want to work in banking, protecting other people’s money. My blood gets flowing when there are valves to open, current flowing, or bay doors to shut. In this sense, I’m kind of a redneck engineer.

There’s a more academic side to me and my research passions as well, though: I like incident response and global co-operation. I took something Team Cymru and Dragon Research said to heart:

“Security is more about protecting your neighbours.”

If you don’t understand that your company can be compromised by the poor security of your support connections, you’ve missed the point. That’s why we have dynamic information sharing and great projects like OpenIOC, BGP Ranking, and honeynets, and organisations like FIRST. It doesn’t do you any good to secure only yourself. We must share to win, as a global society.

IOActive: What advice would you give to someone who would like to become a pentester/researcher?

Eireann: Curiosity and autodidacticism are the only way.

The root of hacking is understanding. To hack is to understand something better than it understands itself, and then nudge it to alter its outputs. You won’t understand anything just by listening to other people tell you about it. So go do things, either on your own or in formal education. Ultimately it doesn’t matter which, as long as you learn and understand, and can articulate what you learned to others.