Year: 2014
An Equity Investor’s Due Diligence
Information technology companies constitute the core of many investment portfolios nowadays. With so many new startups popping up, and some highly visible IPOs and acquisitions by public companies egging things on, many investors are clamoring for a piece of the action and looking for new ways to rapidly qualify or disqualify an investment; particularly so when it comes to the hottest of hot investment areas – information security companies.
Over the years I’ve found myself working with a number of private equity investment firms – helping them to review the technical merits and implications of products being brought to market by new security startups. In most cases it’s not until the B or C investment rounds that the money being sought by the fledgling company starts to get serious for the investors I know. If you’re going to be handing over money in the five to twenty million dollar range, you’re going to want to do your homework on both the company and the product opportunity.
Over the last few years I’ve noted that a sizable number of private equity investment firms have built into their portfolio review the kind of technical due diligence traditionally associated with the formal acquisition processes of Fortune 500 technology companies. It would seem that the $20,000 to $50,000 price tag for a quick-turnaround technical due diligence report is proving to be a valuable investment within a somewhat larger investment strategy.
When it comes to performing technical due diligence on a startup (whether it’s a security or a social media company, for example), the process tends to require a mix of technical review and tapping past experience if it’s to be useful, let alone actionable, to the potential investor. Here are some of the due diligence phases I recommend, and why:
- Vocabulary Distillation – For some peculiar reason, new companies go out of their way to invent their own vocabulary to describe their value proposition, or they go to great lengths to disguise the underlying processes of their technology with what can best be described as word soup. For example, a “next-generation big-data derived heuristic determination engine” can more than adequately be summed up as “signature-based detection”. Apparently using the word “signature” in your technology description is frowned upon, so the product management folks avoid using the word (however applicable it may be). Distilling the word soup is a key component of being able to compare apples with apples.
- Overlapping Technology Review – Everyone wants to portray their technology as unique, ground-breaking, or next generation. Unfortunately, when it comes to the world of security, next year’s technology is almost certainly a progression of the last decade’s worth of invention. This isn’t necessarily bad, but it is important to determine the DNA and hereditary path of the “new” technology (and the subcomponents of the product the start-up is bringing to market). Being able to filter through the word soup of the first phase and determine whether the start-up’s approach duplicates functionality from IDS, AV, DLP, NAC, etc. is critical. I’ve found that many start-ups position their technology (i.e. their advancements) against antiquated and idealized versions of these prior technologies – for example, simplifying desktop antivirus products down to signature engines while neglecting things such as heuristic engines, local-host virtualized sandboxes, and dynamic cloud analysis.
- Code Language Review – It’s important to look at the languages the company has employed in developing its product. Popular rapid prototyping technologies like Ruby on Rails or Python are likely acceptable for back-end systems (as employed within a private cloud), but are potential deal killers to future acquirers that will want to integrate the technology with their existing product portfolio (i.e. they’re not going to want to rewrite the product). Similarly, a C or C++ implementation may not offer the flexibility needed for rapid evolution or integration into scalable public cloud platforms. Knowing which development technology has been used where, and for what purpose, can rapidly qualify or disqualify the strength of the company’s product management and engineering teams – and help orient an investor on future acquisition or IPO paths.
- Security Code Review – Depending upon the size of the application and the due diligence period allowed, a partial code review can yield insight into a number of increasingly critical areas – such as the stability and scalability of the code base (and, consequently, the maturity of the development processes and engineering team), the number and nature of vulnerabilities (i.e. security flaws that could derail the company publicly), and the effort required to integrate the product or proprietary technology with existing major platforms.
- Does it do what it says on the tin? – I hate to say it, but there’s a lot of snake oil being peddled nowadays. This is especially so for new enterprise protection technologies. In a nutshell, this phase focuses on the claims made by the marketing literature and product management teams, and tests both the viability and the technical merits of each of them. Test harnesses are usually created to monitor how well the technology performs in the face of real threats – ranging from the company’s own user acceptance testing (UAT) samples (i.e. the stuff they guarantee they can handle), through common hacking tools and tactics, and on to a skilled adversary with key domain knowledge.
- Product Penetration Test – Conducting a detailed penetration test against the start-up’s technology, product, or service delivery platform is always highly recommended. These tests tend to unveil important information about the lifecycle maturity of the product and the potential exposure to negative media attention due to exploitable flaws. This is particularly important for consumer-focused products and services, because they are the most likely to be uncovered and exposed by external security researchers and hackers, and any public exploitation can easily set the start-up back a year or more in brand equity alone. For enterprise products (e.g. appliances and cloud services) the hacker threat is different; the focus should be more upon what vulnerabilities could be introduced into the customer’s environment and how much effort would be required to re-engineer the product to meet security standards.
Obviously there’s a lot of variety in the technical capabilities of the various private equity investment firms (and private investors). Some have people capable of sifting through the marketing hype and discerning the actual intellectual property powering the start-up’s technology – but many do not. Regardless, in working with these investment firms and performing the technical due diligence on their potential investments, I’ve yet to encounter a situation where they didn’t “win” in some way or another. A particular favorite of mine is when, following a code review and penetration test that unveiled numerous serious vulnerabilities, the private equity firm was still intent on investing in the start-up, but was able to use the report to negotiate much better buy-in terms with the existing investors – gaining a larger percentage of the start-up for the same amount.
Scientifically Protecting Data
This is not “yet another Snapchat Pwnage blog post”, nor do I want to focus on discussions about the advantages and disadvantages of vulnerability disclosure. A vulnerability has been made public, and somebody has abused it by publishing 4.6 million records. Tough luck! Maybe the most interesting article in the whole Snapchat debacle was the one published at www.diyevil.com [1], which explains how data correlation can yield interesting results in targeted attacks. The question then becomes, “How can I protect against this?”
Stored personal data is always at risk of being traced back to its original owner. Because skilled attackers can sometimes gain access to metadata, there is very little you can do to protect your data aside from not storing it at all. Anonymity and privacy are not new concepts. Industries, such as healthcare, have used these concepts for decades, if not centuries. For the healthcare industry, protecting patient data remains one of the most challenging problems. Where does the balance tip between protecting privacy by not disclosing that person X is a diabetic, and protecting health by giving EMTs information about allergies and existing conditions? It’s no surprise that those who research data anonymity and privacy often use healthcare information for their test cases. In this blog, I want to focus on two key principles relating to this.
k-Anonymity [2]
In 2000, Latanya Sweeney used US Census data to prove that 87% of US citizens are uniquely identifiable by their birth date, gender, and zip code [3]. That isn’t surprising from a mathematical point of view: there are approximately 310 million Americans and roughly 2 billion possible combinations of the {birth date, gender, zip code} tuple. You can easily find out how unique you really are through an online application using the latest US Census data [4]. Although it is a bad idea to store unique identifiers like names, usernames, or social security numbers, avoiding them altogether is not at all practical. Assuming that data storage is a requirement, k-Anonymity comes into play. By using data suppression, where data is replaced by an *, and data generalization, where – as an example – a specific age is replaced by an age range, companies can anonymize a data set to a level where each row is identical to at least k-1 other rows in the dataset. Who would have thought an anonymity level could actually be mathematically proven?
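To make the mechanics concrete, here is a minimal sketch in Python; the toy records, field names, and choice of k are illustrative assumptions of mine, not from any real dataset. It generalizes ages into decade ranges, suppresses the last two zip digits, and then verifies that every row shares its quasi-identifiers with at least k-1 other rows:

```python
from collections import Counter

K = 2  # minimum equivalence-class size we want to guarantee

records = [
    {"age": 34, "gender": "F", "zip": "02139", "disease": "diabetes"},
    {"age": 36, "gender": "F", "zip": "02141", "disease": "flu"},
    {"age": 52, "gender": "M", "zip": "90210", "disease": "diabetes"},
    {"age": 57, "gender": "M", "zip": "90212", "disease": "diabetes"},
]

def generalize(row):
    """Generalize age to a decade range; suppress the zip's last two digits."""
    low = (row["age"] // 10) * 10
    return {
        "age": f"{low}-{low + 9}",      # generalization
        "gender": row["gender"],
        "zip": row["zip"][:3] + "**",   # suppression
        "disease": row["disease"],
    }

def quasi(row):
    """The quasi-identifier tuple an attacker could match against."""
    return (row["age"], row["gender"], row["zip"])

anonymized = [generalize(r) for r in records]

# k-Anonymity holds if every quasi-identifier tuple occurs at least K times.
counts = Counter(quasi(r) for r in anonymized)
assert all(c >= K for c in counts.values()), "dataset is not k-anonymous"
print(f"dataset is {min(counts.values())}-anonymous")  # prints: 2-anonymous
```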
k-Anonymity has known weaknesses. Imagine that you know your Person of Interest (POI) is one of four possible records in an anonymized dataset. If those four records share a common trait, such as “disease = diabetes”, you know that your POI suffers from this disease without knowing which specific record is theirs (the 902** group in the sketch above has exactly this problem). With sufficient metadata about the POI, another concept comes into play. Coincidentally, this is also where we find a possible solution for preventing correlation attacks against breached databases.
l-Diversity [5]
One thing companies cannot control is how much knowledge about a POI an adversary has. This does not, however, divorce us from our responsibility to protect user data. This is where l-Diversity comes into play. This concept does not focus on the fields that attackers can use to identify a person; instead, it focuses on the sensitive information in the dataset, requiring that every group of rows sharing the same quasi-identifiers contains at least l distinct values of the sensitive field. By applying the l-Diversity principle to a dataset, companies can make it notably more expensive for attackers to correlate information by increasing the number of required data points.
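Again a minimal sketch, reusing the `anonymized` records and field names from the toy example above; the score it computes is the smallest number of distinct sensitive values in any equivalence class, and it doubles as the kind of measurable KPI discussed below:

```python
from collections import defaultdict

def l_diversity(rows, quasi_fields, sensitive_field):
    """Return the minimum number of distinct sensitive values found in
    any equivalence class (rows sharing the same quasi-identifiers)."""
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[f] for f in quasi_fields)
        groups[key].add(row[sensitive_field])
    return min(len(values) for values in groups.values())

score = l_diversity(anonymized, ["age", "gender", "zip"], "disease")
print(f"dataset is {score}-diverse")
# The toy dataset is 2-anonymous yet only 1-diverse: the 902** group is
# homogeneous, so it leaks "diabetes" despite satisfying k-Anonymity.
```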
Solving Problems
All of this sounds very academic, and the question remains whether or not we can apply this in real-life scenarios to better protect user data. In my opinion, we definitely can.
Social application developers should become familiar with the principles of k-Anonymity and l-Diversity. It’s also a good idea to build KPIs that can be measured against. If personal data is involved, organizations should agree on minimum values for k and l.
More and more applications allow a user’s email address to double as the associated username. This directly impacts the dataset’s l-Diversity score. Organizations should allow users to select their own username and should also offer auto-generated usernames. Both of these tactics have drawbacks, but from a security point of view, they make sense.
Users should have some control. This becomes clear when analyzing the common data points that every application requires.
- Email address:
  - Do not use your corporate email address for online services, unless it is absolutely necessary
  - If you have multiple email addresses, randomize which address you use for each registration
  - If your email provider allows you to append random strings to your email address, such as name+random@gmail.com, use this randomization – especially if your email address is also your username for the service (a quick sketch follows this list)
- Username:
  - If you can select your username, make it unique
  - If your email address is also your username, see my previous comments on this
- Password:
  - Select a unique password for each service
  - In some cases, you may also need to select a phone number for 2FA or other purposes
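As a trivial illustration of the plus-addressing tip above (a sketch only; the helper name is mine, and it assumes your provider, like Gmail, delivers name+anything@domain to name@domain):

```python
import secrets

def plus_address(local_part: str, domain: str = "gmail.com") -> str:
    """Build a randomized registration address via plus-addressing.

    Providers that support it deliver local+anything@domain to
    local@domain, so each service gets its own hard-to-correlate address.
    """
    return f"{local_part}+{secrets.token_hex(3)}@{domain}"

print(plus_address("name"))  # e.g. name+a3f91c@gmail.com
```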
By understanding the concepts of k-Anonymity and l-Diversity, we now know that this statement in the www.diyevil.com article is incorrect:
“While the techniques described above depend on a bit of luck and may be of limited usefulness, they are yet another tool in the pen tester’s toolset for open source intelligence gathering and user attack vectors.”
The success of the techniques discussed in this blog depends on science and math, and where science and math are in play, solutions can be devised. I can only hope that the troves of “data scientists” currently being recruited also understand the principles I have described. I also hope that we will eventually evolve into a world where not only big data matters, but where anonymity and privacy are no longer empty words.
[1] http://www.diyevil.com/using-snapchat-to-compromise-users/
[2] https://ioactive.com/wp-content/uploads/2014/01/K-Anonymity.pdf
[3] https://ioactive.com/wp-content/uploads/2014/01/paper1.pdf
[4] http://aboutmyinfo.org/index.html
[5] https://ioactive.com/wp-content/uploads/2014/01/ldiversityTKDDdraft.pdf