As a penetration tester, I encounter interesting problems with network devices and software. The most common problems that I notice in my work are configuration issues. In today’s security environment, we can accept that a zero-day exploit results in system compromise because details of the vulnerability were unknown earlier. But, what about security issues and problems that have been around for a long time and can’t seem to be eradicated completely? I believe the existence of these types of issues shows that too many administrators and developers are not paying serious attention to the general principles of computer security. I am not saying everyone is at fault, but many people continue to make common security mistakes. There are many reasons for this, but the major ones are:
Here are a number of examples to support this discussion:
The majority of network devices can be accessed using Simple Network Management Protocol (SNMP) and community strings that typically act as passwords to extract sensitive information from target systems. In fact, weak community strings are used in too many devices across the Internet. Tools such as snmpwalk and snmpenum make the process of SNMP enumeration easy for attackers. How difficult is it for administrators to change default SNMP community strings and configure more complex ones? Seriously, it isn’t that hard.
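To make the point concrete, here is a minimal Python sketch of the kind of check an auditor might run, assuming the net-snmp snmpwalk binary is installed and the engagement is authorized; the target address and the candidate community strings are illustrative placeholders, not values from this post.

```python
# Minimal sketch: try a few common default SNMP community strings against one host.
# Assumes the net-snmp "snmpwalk" command-line tool is installed and in PATH.
import subprocess

TARGET = "192.0.2.10"                          # hypothetical, authorized target
CANDIDATES = ["public", "private", "manager"]  # common default community strings

for community in CANDIDATES:
    # Query sysDescr (OID 1.3.6.1.2.1.1.1.0) with a 2-second timeout and one retry.
    result = subprocess.run(
        ["snmpwalk", "-v2c", "-c", community, "-t", "2", "-r", "1",
         TARGET, "1.3.6.1.2.1.1.1.0"],
        capture_output=True, text=True,
    )
    if result.returncode == 0 and result.stdout.strip():
        print(f"[+] Community string '{community}' accepted:")
        print(result.stdout.strip())
    else:
        print(f"[-] Community string '{community}' rejected or no response")
```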
A number of devices, such as routers, use the Network Time Protocol (NTP) over packet-switched data networks to synchronize time services between systems. The basic problem with the default configuration of NTP services is that NTP is enabled on all active interfaces. If any interface exposed to the Internet is listening on UDP port 123, a remote user can easily execute NTP readvar queries using ntpq to gather potentially sensitive information about the target server, including the NTP software version, operating system, peers, and so on.
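As a quick illustration, a readvar query like the one mentioned above can be scripted in a few lines of Python; this is only a sketch that shells out to the standard ntpq utility, and the target address is a placeholder.

```python
# Minimal sketch: ask a remote NTP daemon for its system variables ("readvar"),
# which often reveal the ntpd version, operating system, and peer details.
# Assumes the "ntpq" utility is installed and the target may be queried legitimately.
import subprocess

TARGET = "192.0.2.20"  # hypothetical NTP server reachable on UDP/123

result = subprocess.run(
    ["ntpq", "-c", "readvar", TARGET],
    capture_output=True, text=True, timeout=15,
)
print(result.stdout or result.stderr)
```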
Unauthorized access to multiple components and software configuration files is another big problem. This is not only a device and software configuration problem, but also a byproduct of design chaos, i.e., the way different network devices are designed and managed in their default state. Too many devices on the Internet allow guests to obtain potentially sensitive information through anonymous or direct access. In addition, one has to wonder how well administrators understand and care about security if they simply allow configuration files to be downloaded from target servers. For example, I recently conducted a small test to verify the presence of the file Filezilla.xml, which sets the properties of FileZilla FTP servers, and found that a number of target servers allow this configuration file to be downloaded without any authentication or authorization controls. This gives attackers direct access to these FileZilla FTP servers because passwords are embedded in the configuration files.
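A check along these lines could look like the following Python sketch. It assumes the third-party requests library, a hypothetical server, and illustrative file paths (the post does not specify where the file was exposed); it simply asks for the configuration file and reports whether anything FileZilla-like comes back.

```python
# Minimal sketch: check whether a FileZilla configuration file can be fetched
# anonymously over HTTP. Host and paths are illustrative placeholders.
import requests

TARGET = "http://ftp.example.com"                  # hypothetical server
CANDIDATE_PATHS = ["/Filezilla.xml", "/filezilla.xml"]

for path in CANDIDATE_PATHS:
    url = TARGET + path
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"[-] {url}: {exc}")
        continue
    # A 200 response containing FileZilla markup suggests the config file is
    # downloadable without any authentication or authorization controls.
    if resp.status_code == 200 and "FileZilla" in resp.text:
        print(f"[+] Exposed configuration file: {url}")
    else:
        print(f"[-] Not found or protected: {url} (HTTP {resp.status_code})")
```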
It is really hard to imagine that anyone would deploy sensitive files containing usernames and passwords on remote servers that are exposed on the network. What results do you expect when you find these types of configuration issues?
Cisco Catalyst – Insecure Interface
How can we forget about backend databases? Default and weak passwords for the MS SQL system administrator (sa) account can still be found in the internal networks of many organizations. For example, network proxies that require backend databases are often configured with default or weak passwords. In many cases administrators assume that internal network devices are more secure than external ones. But what happens if an attacker finds a way to penetrate the internal network?
The simple answer is: game over. Integrated web applications with backend databases using default configurations are an “easy walk in the park” for attackers. For example, the following insecure XAMPP installation would be devastating.
XAMPP Security – Web Page
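To illustrate the weak “sa” password problem described above, here is a rough Python sketch that simply tries a few default credentials against a single MS SQL host. It assumes the third-party pymssql driver, a hypothetical internal address, an illustrative password list, and an authorized engagement.

```python
# Minimal sketch: test a handful of default/weak "sa" passwords on one MS SQL host.
# Assumes the third-party pymssql driver and explicit authorization to test.
import pymssql

TARGET = "10.0.0.5"                         # hypothetical internal SQL Server
CANDIDATES = ["", "sa", "password", "sql"]  # illustrative weak/default passwords

for pwd in CANDIDATES:
    try:
        conn = pymssql.connect(server=TARGET, user="sa", password=pwd,
                               database="master", login_timeout=5)
    except pymssql.Error:
        print(f"[-] 'sa' / {pwd!r} rejected")
        continue
    print(f"[+] Logged in as 'sa' with password {pwd!r} -- game over")
    conn.close()
    break
```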
The security community recently encountered an interesting configuration issue in GitHub repositories (http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code), which resulted in the disclosure of SSH keys on the Internet. The point is that when you own an account or repository on a public server, all administrative responsibility falls on your shoulders. You cannot blame the service provider (GitHub, in this case). If users upload their private SSH keys to a repository, whom can you blame? This is a big problem, and developers continue to expose sensitive information in their public repositories, even though GitHub’s documentation shows how to secure and manage sensitive data.
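One cheap safeguard is to refuse any commit that stages what looks like a private key. The following Python sketch could be wired in as a pre-commit hook; the marker strings and the hook idea itself are illustrative suggestions, not something prescribed by GitHub.

```python
# Minimal sketch of a pre-commit style check: abort if any staged file appears
# to contain private key material. Intended to be called from .git/hooks/pre-commit.
import subprocess
import sys

MARKERS = (
    "-----BEGIN RSA PRIVATE KEY-----",
    "-----BEGIN DSA PRIVATE KEY-----",
    "-----BEGIN OPENSSH PRIVATE KEY-----",
)

# List the files staged for the next commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for path in staged:
    try:
        with open(path, "r", errors="ignore") as fh:
            content = fh.read()
    except OSError:
        continue  # deleted or unreadable file; nothing to scan
    if any(marker in content for marker in MARKERS):
        print(f"Refusing to commit: {path} appears to contain a private key")
        sys.exit(1)

sys.exit(0)
```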
Another interesting finding shows that with Google Dorks (targeted search queries), the Google search engine exposed thousands of HP printers on the Internet (http://port3000.co.uk/google-has-indexed-thousands-of-publicly-acce). The search string “inurl:hp/device/this.LCDispatcher?nav=hp.Print” is all you need to gain access to these HP printers. As I asked at the beginning of this post, “How long does it take to secure a printer?” Although this issue is not new, the amazing part is that it still persists.
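For administrators who want to check their own kit, the exposure is easy to test for. The sketch below assumes the third-party requests library and a placeholder hostname, and simply requests the same path that the dork above surfaces.

```python
# Minimal sketch: check whether an HP printer's web interface path is reachable
# without authentication. Hostname is a placeholder; only scan devices you own.
import requests

HOST = "printer.example.com"  # hypothetical printer
URL = f"http://{HOST}/hp/device/this.LCDispatcher?nav=hp.Print"

try:
    resp = requests.get(URL, timeout=5)
except requests.RequestException as exc:
    print(f"[-] {URL}: {exc}")
else:
    if resp.status_code == 200:
        print(f"[!] Printer management page reachable without authentication: {URL}")
    else:
        print(f"[-] Got HTTP {resp.status_code} from {URL}")
```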
The existence of vulnerabilities and patches is a completely different issue than the security issues that result from default configurations and poor administration. Either we know the security posture of the devices and software on our networks but are being careless in deploying them securely, or we do not understand security at all. Insecure and default configurations make the hacking process easier. Many people realize this after they have been hacked and by then the damage has already been done to both reputations and businesses.
This post is just a glimpse into security consciousness, or the lack of it. We have to be more security conscious today because attempts to hack into our networks are almost inevitable. I think we need to be a bit more paranoid and much more vigilant about security.
Throughout the second half of 2012, many security folks have been asking “how much is a zero-day vulnerability worth?” and it’s often been hard to believe the numbers that have been (and continue to be) thrown around. For the sake of clarity, though, I believe that it’s the wrong question… the correct question should be “how much do people pay for working exploits against zero-day vulnerabilities?”
The answer in the majority of cases tends to be “it depends on who’s buying and what the vulnerability is,” regardless of the question’s particular phrasing.
On the topic of exploit development, last month I wrote an article for DarkReading covering the business of commercial exploit development, and in that article you’ll probably note that I didn’t discuss the prices the exploits are retailing for. That’s because of my elusive answer above… I know of some researchers with their own private repositories of zero-day remote exploits for popular operating systems seeking $250,000 per exploit, and I’ve overheard hushed bar conversations that certain US government agencies will beat any foreign bid by four times the value.
But that’s only the thin edge of the wedge. The bulk of zero-day (or nearly zero-day) exploit purchases are for popular consumer-level applications – many of which are region-specific. For example, a reliable exploit against Tencent QQ (the most popular instant messenger program in China) may be more valuable to certain US, Taiwanese, Japanese, and other clandestine government agencies than an exploit against Windows 8.
More recently, some of the conversations about exploit sales and purchases by government agencies have focused on the cyberwar angle – in particular, the idea that some governments are trying to build a “cyber weapon” cache, that unlike kinetic weapons these could expire at any time, and that it’s all a waste of effort and resources.
I must admit, up until a month ago I was leaning a little towards that same opinion. My perspective was that it’s a lot of money to spend on something that will most likely sit on the shelf and expire into uselessness before it could ever be used. And then I happened to visit the National Museum of Nuclear Science & History on a business trip to Albuquerque.
Museum: Polaris Missile
Museum: Minuteman missile part?
For those of you that have never heard of the place, it’s a museum that plots out the history of the nuclear age and the evolution of nuclear weapon technology (and I encourage you to visit!).
Anyhow, as I literally strolled from one (decommissioned) nuclear missile to another – each lying on its side, rusting and corroding away, having never been used – it finally hit me: governments have been doing the same thing for the longest time, and cyber weapons really are no different!
Perhaps it’s the physical realization of “it’s better to have it and not need it, than to need it and not have it”, but as you trace the billions (if not trillions) of dollars that have been spent by the US government over the years developing each new nuclear weapon delivery platform, deploying it, manning it, eventually decommissioning it, and replacing it with a new and more efficient system… well, it makes sense and (frankly) it’s laughable how little money is actually being spent in the cyber-attack realm.
So what if those zero-day exploits purchased for measly six-figure wads of cash curdle like last month’s milk? That price wouldn’t even cover the cost of painting the inside of a decommissioned missile silo.
No, the reality of the situation is that governments are getting a bargain when it comes to constructing and filling their cyber weapon caches. And, more to the point, the expiry of those zero-day exploits is a well understood aspect of managing an arsenal – conventional or otherwise.
— Gunter Ollmann, CTO – IOActive, Inc.
My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.
Before the assignment, I had a fellow security nut give me some tips on this project to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.
All over the world, things are changing in ICS security—we are now in the spotlight and the only way forward is, well, forward. Consequently, I’m doing more reading than ever to keep up with technical issues, global incidents, and frameworks and policies that will ensure the security of our future.
From a security researcher’s perspective, one exciting development is that .gov is starting to understand the need for disclosure in some cases. They have found that by giving companies lead time to implement fixes, they often get stonewalled for months or years. Yes, it sometimes takes years to fix specific ICS security issues, but that is no excuse for failing to contact the researcher and ICS-CERT with continually-updated timelines. This is well reflected in the document we are about to review.
The Common Industrial Control System Vulnerability Disclosure Framework was published a bit before BlackHat/Defcon/BSidesLV, and I’ve just had some time to read it. The ICSJWG put this together, and I would say that overall it is very informative.
For example, let’s start with the final (and most blogged about) quote of the Executive Summary:
“Inconsistent disclosure policies have also contributed to a public perception of disorganization within the ICS security community.”
I can’t disagree with that—failure to have a policy has already contributed to many late nights for engineers.
On Page 7, we see a commendable clarification regarding vulnerabilities found during customer audits:
“Under standard audit contracts, the results of the audit are confidential to the organization customer and any party that they choose to share those results with. This allows for information to be passed back to the vendor without violating the terms of the audit. The standard contract will also prevent the auditing company from being able to disclose any findings publically. It is important to note however, that it is not required for a customer to pass audit results on to a vendor unless explicitly noted in their contract or software license agreement.”
Is there a vendor who explicitly asks customers to report vulnerabilities in their license agreements? Why/why not?
On Page 9, Section 5 we find a dangerous claim, one that I would like to challenge as firmly and fairly as I can:
“Not disclosing an issue is not discussed; however it remains an option and may be appropriate in some scenarios.”
Very well. I’m a reasonable guy who’s even known to support responsible disclosure, despite the fact that it puts handcuffs only on the good guys. Being such a reasonable guy, I’m going to pretend I can accept the idea that a company selling industrial systems or devices might have a genuine reason not to disclose a security flaw to its customers. In the spirit of such a debate, I invite any vendor to comment on this blog post with a hypothetical scenario in which this is justified.
Hypothetically speaking: When is it appropriate to withhold vulnerabilities and not disclose them to your ICS customers?
While we’re at it, we also see the age-old “disclosure always increases risk” trope again, here:
“Public Disclosure does increase risk to customers, as any information disclosed about the vulnerability is available to malicious individuals as well as to legitimate customers. If a vulnerability is disclosed publically prior to a fix being made available, or prior to an available fix being deployed to all customers, malicious parties may be able to use that information to impact customer operations.”
Since I was bold enough to challenge all vendors to answer my question about when it is appropriate to remain silent, it’s only fair to tackle a thorny issue from the document myself. Imagine you have a serious security flaw without a fix. The argument goes that you shouldn’t disclose it publicly since that would increase the risk. However, what if the exploit were tightly constrained and detectable in 100% of cases? It seems clear that in this case, public disclosure gives the best chance for your customers to DETECT exploitation as opposed to waiting for the fix. Wouldn’t that DECREASE risk? Unfortunately, until you can measure both risk and the occurrence of 0-day vulnerabilities in the wild RELIABLY, this is all just conjecture.
There exists a common misconception in vulnerability management that only the vendor can protect the customer by fixing an issue, and that public disclosure always increases risk. With public disclosure, you widen the circle of critical and innovative eyes, and a third party might be able to mitigate where the vendor cannot—for example, by using one of their own proprietary technologies.
Say, for example, that a couple of ICS vendors had partnered with an intrusion detection and prevention system company that is a known defender of industrial systems. They could then focus their early vulnerability analysis efforts on reliably detecting and mitigating exploits on the wire before fixes are even available. This would reduce the number of days after day zero during which the exploit cannot be detected and, to my thinking, that reduces the risk. I’m disappointed that—in the post-Stuxnet era—we continue to have ICS disclosure debates, because the malware authors ultimately don’t even care. I can’t help but notice that recent ICS malware authors weren’t consulted about their “disclosure policies” and also didn’t choose to offer them.
As much as I love a lively debate, I wanted to commend the ICSJWG for having the patience to explain disclosure when the rest of us get tired.
All of this can be done easily and can be completely automated with a couple of scripts.
The end result: while Internet use increases, privacy decreases and the chance that we’ll be attacked also increases.
C-level executives should use their corporate email address for email only. Therefore, companies should implement special security programs and policies to protect executives.