[Table: color legend for die layers – P doping, N doping, Polysilicon, Via, Metal 1, Metal 2, Metal 3, Metal 4 – the color swatches did not survive extraction]
On average I receive a postal letter from a bank or retailer every two months telling me that I’ve become the unfortunate victim of a data theft or that my credit card is being reissued to protect against future fraud. When I quiz my friends and colleagues on the topic, it would seem that they too suffer the same fate on a recurring schedule. It may not be that surprising to some folks: 2013 saw over 822 million private records exposed, according to the folks over at DatalossDB – and that’s just the ones that were disclosed publicly.
It’s clear to me that something is broken and it’s only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or ill-prepared) to protect it. In fact, I’d question why they’re collecting it in the first place. All too often these organizations – of which I’m supposedly a customer – are collecting personal data about “my experience” doing business with them and are hoping to figure out how to use it to their profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they’d be diagnosed with a hoarding disorder. For example, consider what criteria the DSM-5 diagnostic manual uses to identify the disorder:
Whether or not the organizations hoarding personal data know how to profit from it, it’s clear that even the biggest of them are increasingly inept at protecting it. The criminals that are pilfering the data certainly know what they’re doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Black Hat in 2010.
We can moan all we like about the state of the situation now, but we’ll be crying in the not-too-distant future when, statistically, we progress from being victims of data loss to being victims of (unrecoverable) fraud.
The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the “what data are you collecting and why?” questions, and incentivize corporations to take much more care protecting the personal data they’ve been entrusted with.
I feel that the data hoarding problem can be dealt with fairly easily. At the end of the day it’s about transparency and the ability to “opt out”. If I were to choose a role model for making a sizable fraction of this threat go away, I’d look to the basic components of the UK’s Data Protection Act as the cornerstone of a solution – especially here in the US. I believe the key components of personal data collection should encompass the following:
If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You’d hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the “big stick” approach needs to be reinforced.
A few months ago I delved into how the fines being levied against organizations that had been remiss in doing all they could to protect their customers’ personal data should be bigger and divvied up differently. Essentially I’d argue that half of the fine should be pumped back into the breached organization and used for increasing their security posture.
Looking at the fines being imposed upon the larger organizations (that could have easily invested more in protecting their customers’ data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised if (and when) it happens again? I’ve become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000 personal records? If the fine were something like two percent of valuation, I can tell you that the leadership of both companies would focus more firmly on the task of keeping your data and mine much safer than they do today.
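A quick back-of-the-envelope sketch of that two-percent proposal, using the two hypothetical company valuations above (the rate and figures are illustrative, not real fines):

```python
def fine(valuation, rate=0.02):
    """Fine computed as a fixed percentage of company valuation."""
    return valuation * rate

# The two hypothetical companies from the argument above
print(fine(20_000_000_000))  # twenty-billion-dollar business -> 400,000,000.0
print(fine(10_000_000))      # ten-million-dollar business    -> 200,000.0
```

Under a flat-fine regime both companies would pay the same amount; scaling by valuation makes the pain proportional to the balance sheet.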
2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.
3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.
The idea of this post is to raise awareness. I want to show how vulnerable some industrial, oil, and gas installations currently are and how easy it is to attack them. Another goal is to pressure vendors to produce more secure devices and to speed up the patching process once vulnerabilities are reported.
The number of reports of networks ravaged by adversaries is staggering. In the past few weeks alone we’ve seen reports from The New York Times, The Washington Post, and Twitter. I would argue that the public reports are just the tip of the iceberg. What about the hacks that were never reported? What about the companies that absorbed the blow and just kept on trucking, or perhaps even those companies that never recovered?
When there’s an uptick in media attention over security breaches, the question most often asked – but rarely answered – is “What if this happens to me?”
Today you don’t want to ask that question too loudly – else you’ll find product vendors selling turn-key solutions and their partners on your doorstep, closely followed by ‘Managed Security Services’ providers. All ready to solve your problems once you send back their signed purchase order… if you want to believe that.
Most recently they’ve been joined by the “let’s hack the villains back” start-ups. That last one is an interesting evolution but not important for this post today.
I’m not here to provide a side-by-side comparison of service providers or product vendors. I encourage you to have an open conversation with them when you’re ready for it; but what I want to share today is my experience being involved in SIEM projects at scale, and working hands-on with the products as a security analyst. The following lessons were gained through a lot of sweat and tears:
This obviously raises the question of whether installing a SIEM is worth spending a limited security budget on.
It is my personal opinion that tooling to facilitate incident response, including a SIEM to delve through piles and piles of log data, is always an asset. However, it’s also my opinion that buying a tool RIGHT NOW is not your priority if you cannot confidently answer “YES” to the following questions:
In case of a resounding “NO” or a reluctantly uttered “maybe”, I would argue that there are things you should do before acquiring a SIEM product. It is your priority to understand your network and to have control over it, unless you look forward to paying big money for shiny data aggregators.
Advice
With that challenge identified, how do you move ahead and regain control of your network? Here’s some advice:
The most important first step is very much like what forensic investigators call “walking the grid”. You will need to break your network down into logical chunks and scrutinize them. Which components are most important to our business, and what controls protect them? Which data sources can tell us about the security health of those components, and how? How frequently? In what detail? Depending on the size and complexity of the network this may seem like a daunting task, but at the same time you’ll have to realize that this is not a one-time exercise. You’ll be doing this for the foreseeable future, so it’s important that it is turned into a repeatable process – a process that can be reliably executed by people other than you, with consistent results.
Next, epiphany! Nobody but you can supplement the data delivered by appliances and software distributed across your network with actual knowledge of your own network. Out-of-the-box rulesets don’t know about your network and, however bright they may be, neither do security analysts on follow-the-sun teams. Every event and every alert only makes sense in context, and when that context is removed, you’re no longer doing correlation. You’re swatting flies. While I came up with this analogy on the fly, it makes sense: you don’t swat flies with a Christian Louboutin or a Jimmy Choo, you use a flip-flop.

There are plenty of information security flip-flops out there on the internet. Log aggregators (syslog, syslog-ng), full-blown open-source NIDS (Snort, Bro) or HIDS (OSSEC), and even dedicated distributions (e.g. Security Onion) or open source SIEMs (like OSSIM) can go a long way in helping you understand what is going on in your network. The most amazing part is that, up to this point, you’ve spent $0 on software licensing and all of your resources are dedicated to YOUR people learning about YOUR network and what is going on there. While cracking one of the toughest nuts in IT security, you’re literally training your staff to be better security analysts and incident responders.
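To make the correlation point concrete, here is a toy sketch of the kind of local-context enrichment that no out-of-the-box ruleset can do for you. The asset inventory and alert fields are entirely hypothetical:

```python
# Hypothetical local asset knowledge that no vendor ruleset ships with
ASSETS = {
    "10.0.1.5":  {"role": "payroll-db", "criticality": "high"},
    "10.0.9.77": {"role": "guest-wifi", "criticality": "low"},
}

def enrich(alert):
    """Attach local context so an analyst can prioritize instead of swatting flies."""
    context = ASSETS.get(alert["src_ip"],
                         {"role": "unknown", "criticality": "unknown"})
    return {**alert, **context}

# The same signature means very different things on different assets:
high = enrich({"signature": "SSH brute force", "src_ip": "10.0.1.5"})
low = enrich({"signature": "SSH brute force", "src_ip": "10.0.9.77"})
```

A brute-force attempt against the payroll database outranks the identical alert from the guest wifi; only your own inventory can tell the two apart.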
The first push-back I generally receive when I talk (passionately, I admit) about open source security tooling comes in the form of concern: the software is not controlled, we can’t buy a support contract for it (not always true, by the way!), our staff doesn’t have the training, we are too small/big for these solutions… It’s impossible to settle the closed vs. open source debate in this blog post.
I believe it’s worth looking at to solve this problem; others may disagree. To the point of training staff, I will say that what those tools largely do is what your staff currently does manually in an ad hoc fashion. They already understand logging and network traffic; learning how the specific tools work, and how they can make their job easier, will take only a fraction of the time they spend implementing them. In my experience, the enthusiasm of people who get to work with tools – commercial or otherwise – that make their actual job easier compensates for any budget you have to set aside for ‘training’. To the point of size, I have personally deployed open source security tools in SMB environments as well as in 1000+ enterprise UNIX farms. It is my strongest belief that, as security engineers, it is not our job to buy products. It is our task to build solutions for the problems at hand, using the tools that best fit the purpose. Commercial or not.
It makes sense that, as you further mature in your monitoring capability, the free tools might not continue to scale and you’ll be looking to work with the commercial products or service providers I mentioned above. The biggest gain at that moment is that you understand exactly what you need: which parts of the capability you can delegate to a third party (and what your expectations are), and which parts of the problem space you can’t solve without dedicated products. From experience, most of the building blocks will be easily reused and integrated with commercial solutions. Many of those commercial solutions have support for the open source data generators (Snort, Bro, OSSEC, p0f, …).
Let’s be realistic: if you’re as serious about information security as I think you are, you don’t want to be a “buyer of boxes” or a cost center. You want to (re)build networks that allow you to defend your most valuable assets against those adversaries that matter and, maybe as important as anything else, you want to stop running behind the facts on fancy high heels.
*For the purpose of this post SIEM stands for Security Information and Event Management. It is often referred to as SIM, SEM and a bunch of other acronyms and we’re ok with those too.
Throughout the second half of 2012 many security folks have been asking “how much is a zero-day vulnerability worth?” and it’s often been hard to believe the numbers that have been (and continue to be) thrown around. For the sake of clarity though, I do believe that it’s the wrong question… the correct question should be “how much do people pay for working exploits against zero-day vulnerabilities?”
The answer in the majority of cases tends to be “it depends on who’s buying and what the vulnerability is”, regardless of the question’s particular phrasing.
On the topic of exploit development, last month I wrote an article for DarkReading covering the business of commercial exploit development, and in that article you’ll probably note that I didn’t discuss the prices of what the exploits are retailing for. That’s because of my elusive answer above… I know of some researchers with their own private repository of zero-day remote exploits for popular operating systems seeking $250,000 per exploit, and I’ve overheard hushed bar conversations that certain US government agencies will beat any foreign bid by four-times the value.
But that’s only the thin-edge of the wedge. The bulk of zero-day (or nearly zero-day) exploit purchases are for popular consumer-level applications – many of which are region-specific. For example, a reliable exploit against Tencent QQ (the most popular instant messenger program in China) may be more valuable than an exploit in Windows 8 to certain US, Taiwanese, Japanese, etc. clandestine government agencies.
More recently some of the conversations about exploit sales and purchases by government agencies have focused in upon the cyberwar angle – in particular, that some governments are trying to build a “cyber weapon” cache and that unlike kinetic weapons these could expire at any time, and that it’s all a waste of effort and resources.
I must admit, up until a month ago I was leaning a little towards that same opinion. My perspective was that it’s a lot of money to be spending for something that’ll most likely sit on a shelf and expire into uselessness before it could ever be used. And then I happened to visit the National Museum of Nuclear Science & History on a business trip to Albuquerque.
[Museum photos: Polaris missile; Minuteman missile part]
For those of you that have never heard of the place, it’s a museum that plots out the history of the nuclear age and the evolution of nuclear weapon technology (and I encourage you to visit!).
Anyhow, as I strolled from one (decommissioned) nuclear missile to another – each lying on its side, rusting and corroding away, having never been used – it finally hit me: governments have been doing the same thing for the longest time, and cyber weapons really are no different!
Perhaps it’s the physical realization of “it’s better to have it and not need it, than to need it and not have it”, but as you trace the billions (if not trillions) of dollars that have been spent by the US government over the years developing each new nuclear weapon delivery platform, deploying it, manning it, eventually decommissioning it, and replacing it with a new and more efficient system… well, it makes sense and (frankly) it’s laughable how little money is actually being spent in the cyber-attack realm.
So what if those zero-day exploits purchased for measly 6-figure wads of cash curdle like last month’s milk? That price wouldn’t even cover the cost of painting the inside of a decommissioned missile silo.
No, the reality of the situation is that governments are getting a bargain when it comes to constructing and filling their cyber weapon caches. And, more to the point, the expiry of those zero-day exploits is a well understood aspect of managing an arsenal – conventional or otherwise.
— Gunter Ollmann, CTO – IOActive, Inc.
Day by day the endless fight between the bad guys and good guys mostly depends on how fast a countermeasure or anti-reversing protection can be broken. These anti-reversing mechanisms can be used by attackers in a number of ways: to create malware, to be used in precompiled zero-day exploits in the black market, to hinder forensic analysis, and so on. But they can also be used by software companies or developers that want to protect the internal logic of their software products (copyright).
The other day I was thinking: why run and hide (implementing anti-reversing techniques such as the aforementioned) instead of standing up straight and giving the debugger a punch in the face (crashing the debugging application)? In the next paragraphs I’ll explain briefly how I could implement this anti-reversing technique on ELF binaries using a counterattack approach.
However, as can be seen, even with the anti-debugging technique at runtime, the ELF file was completely loaded and parsed by the debugger.
After a bit of analysis, I found a bug in the DWARF [3] (a debugging file format used by many compilers and debuggers to support source-level debugging) processor that fails when parsing the data within the .debug_line section. This prevents gdb from loading an ELF executable for debugging due to a NULL pointer dereference. Evidently it could be used to patch malicious executables (such as rootkits, zero-day exploits, and malware) that wouldn’t be able to be analyzed by gdb.
I have also programmed a simple tool (ida_63_elf_shield.c) to patch the ELF executables to make them impossible for IDA Pro to load. This code only generates two random numbers and assigns the bigger one to e_shstrndx:
As can be seen, the original binary (hostname) works perfectly after executing the parasite code (backdoor on port 31337). Now, let’s see what happens after patching it: