RESEARCH | March 9, 2018

Robots Want Bitcoins too!

Ransomware attacks have boomed during the last few years, becoming a preferred method for cybercriminals to profit by encrypting a victim's information and demanding a ransom to return it. The primary ransomware target has always been information: when victims have no backup of that information, they panic and feel forced to pay for its return.

EDITORIAL | January 31, 2018

Security Theater and the Watch Effect in Third-party Assessments

Before the facts were in, nearly every journalist and salesperson in infosec was thinking about how to squeeze lemonade from the Equifax breach. Let’s be honest – it was and is a big breach. There are lessons to be learned, but people seemed to have the answers before the facts were available.

It takes time to dissect these situations and early speculation is often wrong. Efforts at attribution and methods take months to understand. So, it’s important to not buy into the hysteria and, instead, seek to gain a clear vision of the actual lessons to be learned. Time and again, these supposed “watershed moments” and “wake-up calls” generate a lot of buzz, but often little long-term effective action to improve operational resilience against cyber threats.


At IOActive we guard against making on-the-spot assumptions. We consider and analyze the actual threats, ever mindful of the "Watch Effect." The Watch Effect is simple: wear a watch long enough and you stop feeling it on your wrist.

I won't go into what third-party assessments Equifax may or may not have had, because that's largely speculation. The company has probably been assessed many times, by many groups with extensive experience in the prevention of cyber threats and the implementation of active defense. And they still experienced a deep-impact cyber incursion.

The industry-wide point here is: Everyone is asking everyone else for proof that they’re secure.

The assumption and Watch Effect come in at the point where company executives think their responses to high-level security questions actually mean something.

Well, sure, they do mean something. In the case of questionnaires, you are asking a company to perform a massive amount of tedious work, and if they return those questions filled in, without gross errors and without saying "no" where they should have said "yes," that probably counts for something.

But the question is how much do we really know about a company’s security by looking at their responses to a security questionnaire?

The answer is, “not much.”

As a company that has been security testing for 20 years now, IOActive has successfully breached even the most advanced cyber defenses during penetration tests at countless companies that had been certified backwards and forwards by every group you can imagine. So, the question to ask is, "Do questionnaires help at all? And if so, how much?"
 
Here’s a way to think about that.

At IOActive we conduct full, top-down security reviews of companies that include business risk, crown-jewel defense, and every layer that these pieces touch. Because we know how attackers get in, we measure and test how effective the company is at detecting and responding to cyber events – and use this comprehensive approach to help companies understand how to improve their ability to prevent, detect, and ever so critically, RESPOND to intrusions. Part of that approach includes a series of interviews with everyone from the C-suite to the people watching logs. What we find is frightening.

We are often days or weeks into an assessment before we discover a thread to pull that uncovers a major risk, whether that thread comes from a technical assessment or a person-to-person interview or both.

That’s days—or weeks—of being onsite with full access to the company as an insider.

Here’s where the Watch Effect comes in. Many of the companies have no idea what we’re uncovering or how bad it is because of the Watch Effect. They’re giving us mostly standard answers about their day-to-day, the controls they have in place, etc. It’s not until we pull the thread and start probing technically – as an attacker – that they realize they’re wearing a broken watch.

Then they look down at a set of catastrophic vulnerabilities on their wrist and say, “Oh. That’s a problem.”

So, back to the questionnaire…

If it takes days or weeks for an elite security firm to uncover these vulnerabilities onsite with full cooperation during an INTERNAL assessment, how do you expect to uncover those issues with a form?

You can't. And you should stop pretending you can. Questionnaires depend far too much on the capability and knowledge of the person or team filling them out, and they are often completed with only partial knowledge. How would one know if a firewall rule had been improperly changed to "any/any" in the last week if it is never tested and verified?
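To make "tested and verified" concrete, here is a minimal sketch of the kind of check an assessment actually runs and a questionnaire never will. It assumes a Linux host using iptables, and the matching heuristic is deliberately simplistic and illustrative only:

import subprocess

# Illustrative sketch: flag ACCEPT rules that restrict neither source nor
# destination -- the kind of "any/any" change a questionnaire answer from
# last quarter would never reveal. Assumes a Linux host with iptables.
def overly_permissive_rules():
    rules = subprocess.run(
        ["iptables", "-S"], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    return [
        rule for rule in rules
        if "-j ACCEPT" in rule and "-s " not in rule and "-d " not in rule
    ]

if __name__ == "__main__":
    for rule in overly_permissive_rules():
        print("Review this rule:", rule)

A questionnaire can only record what a rule set looked like the last time someone checked; a test like this records what it looks like right now.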

To be clear, the problem isn't that third-party assessments provide only 2/10 of the possible security assessment value. The problem is that executives THINK they're getting 6/10, or 9/10.

It’s that disconnect that’s causing the harm.

Eventually, companies will figure this out. In the meantime, the breaches won’t stop.

Until then, we as technical practitioners can do our best to help our clients and prospects understand how much value these types of cursory, external glances at a company actually provide: very little. So, let's prioritize appropriately.

INSIGHTS | June 28, 2017

WannaCry vs. Petya: Keys to Ransomware Effectiveness

With WannaCry and now Petya we're beginning to see how and why this new strain of ransomware worms is evolving and growing far more effective than previous versions.

I think there are 3 main factors: Propagation, Payload, and Payment.*

  1. Propagation: You ideally want to be able to spread using as many different types of techniques as you can.
  2. Payload: Once you’ve infected the system you want to have a payload that encrypts properly, doesn’t have any easy bypass to decryption, and clearly indicates to the victim what they should do next.
  3. Payment: You need to be able to take in money efficiently and then actually decrypt the systems of those who pay. This piece is crucial; otherwise people will quickly learn they can't get their files back even if they do pay, and they'll be inclined to just start over.


WannaCry vs. Petya

WannaCry used SMB as its main spreading mechanism, and its payment infrastructure lacked the ability to scale. It also had a kill switch, which was famously triggered and halted further propagation.
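For context, the kill-switch logic itself was trivial: the worm tried to reach a domain that shouldn't exist and only kept spreading if that request failed. The sketch below is an illustration of that behavior, not WannaCry's actual code; the URL is a placeholder for the real hard-coded domain:

import requests

# Illustrative sketch of the WannaCry-style kill-switch check (placeholder URL,
# not the real domain). Once researchers registered the real domain, this check
# started succeeding everywhere and further propagation halted.
KILL_SWITCH_URL = "http://killswitch-placeholder.invalid/"

def kill_switch_tripped():
    try:
        requests.get(KILL_SWITCH_URL, timeout=5)
        return True   # domain answered: stand down
    except requests.RequestException:
        return False  # domain unreachable: the worm would keep spreading

if __name__ == "__main__":
    print("Kill switch tripped:", kill_switch_tripped())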

Petya, on the other hand, appears to be much more effective at spreading since it uses both EternalBlue and credential harvesting with PSEXEC to infect more systems. This means it can harvest working credentials and spread even if the new targets aren't vulnerable to an exploit.


[NOTE: This is early analysis so some details could turn out to be different as we learn more.]

What remains to be seen is how effective the payload and payment infrastructures are on this one. It’s one thing to encrypt files, but it’s something else entirely to decrypt them.

The other important unknown at this point is if Petya is standalone or a component of a more elaborate attack. Is what we’re seeing now intended to be a compelling distraction?
  
There have been some reports indicating these exploits were utilized by a sophisticated threat actor against the same targets prior to WannaCry. So it's possible that WannaCry was poorly designed on purpose. Either way, we're advising clients to investigate whether there is any evidence of a more strategic use of these tools in the weeks leading up to Petya hitting.

*Note: I’m sure there are many more thorough ways to analyze the efficacy of worms. These are just three that came to mind while reading about Petya and thinking about it compared to WannaCry.

RESEARCH | April 17, 2014

A Wake-up Call for SATCOM Security

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted.
 
We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have the upper hand.
 
Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:
  • Aerospace
  • Maritime
  • Military and governments
  • Emergency services
  • Industrial (oil rigs, gas, electricity)
  • Media
It is important to mention that certain international safety regulations, such as GMDSS for ships or ACARS for aircraft, rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysia Airlines MH370, Inmarsat engineers were able to determine the approximate position of where the plane crashed.
 
IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. 
 
Thus, in the last quarter of 2013 I decided to research a series of devices that, although widely deployed, had not received the attention they actually deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals.
 
In previous blog posts I've explained the common approach when researching complex devices that are not physically accessible. In that respect, this research is not much different from previous work: in most cases the analysis was performed by statically reverse engineering the firmware.
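As a rough illustration of that static approach, the sketch below scans a firmware image for printable strings that hint at hard-coded credentials or undocumented services. This is not the tooling used in this research; the keyword list and length threshold are arbitrary examples:

import re
import sys

# Illustrative sketch only: pull printable strings out of a firmware image and
# flag ones that hint at hard-coded credentials, backdoor accounts, or
# undocumented services. The keyword list is an arbitrary example.
HINTS = re.compile(rb"(passw|login|admin|root|telnet|backdoor|secret)", re.I)

def printable_strings(blob, min_len=6):
    return re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)

def scan(path):
    with open(path, "rb") as f:
        blob = f.read()
    for match in printable_strings(blob):
        s = match.group()
        if HINTS.search(s):
            print("0x%08x  %s" % (match.start(), s.decode("ascii", "replace")))

if __name__ == "__main__":
    scan(sys.argv[1])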

 
What about the results? 
 
Insecure and undocumented protocols, backdoors, hard-coded credentials…mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors.
 
Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities.
 
I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned.
The following white paper comprehensively explains all aspects of this research: IOActive_SATCOM_Security_WhitePaper

INSIGHTS | December 4, 2013

Practical and cheap cyberwar (cyber-warfare): Part II

Disclaimer: I did not perform any illegal attacks on the mentioned websites in order to get the information I present here. No vulnerability was exploited on the websites, and they are not known to be vulnerable.
 
Given that we live in an age of information leakage, where government surveillance and espionage abound, I decided in this second part to focus on a simple technique for gathering information on human targets. If an attacker is targeting a specific country, members of the military and defense contractors would make good human targets. When targeting members of the military, an attacker would probably focus on high-ranking officers; for defense contractors, an attacker would focus on key decision makers, C-level executives, etc. Why? Because if an attacker compromises these people, they could gain access to valuable information related to their work, their personal lives, and their families. Data related to their work could help an attacker strategically by enabling them to build better defenses, steal intellectual property, and craft specific attacks. Data related to a target's personal life could help attackers blackmail the target or attack the target's family.
 
 
There is no need to work for the NSA or have a huge budget in order to get juicy information. Everything is just one click away; attackers only need to find ways to easily milk the web. One easy way to gather information about people is to get their email addresses, as I described last year here: http://blog.ioactive.com/2012/08/the-leaky-web-owning-your-favorite-ceos.html. Basically, you use a website's registration form and/or forgotten-password functionality to find out whether an email address is already registered on that website. With a list of email addresses, attackers can easily enumerate the websites/services where people have accounts. Given the ever-increasing number of online accounts one person usually has, this can provide a lot of useful information. For instance, it could make phishing attacks and social engineering easier (see http://www.infosecurity-magazine.com/view/35048/hackers-target-mandiant-ceo-via-limo-service/). Also, if one of the sites where the target has an account is vulnerable, that website could be hacked in order to get the target's account password. Due to password reuse, attackers can compromise all of the target's accounts most of the time.
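The core of the idea fits in a few lines. The sketch below is only an illustration, not the tool used for this research: the endpoint and response marker are hypothetical placeholders, every real site uses different URLs and messages, and many sites now deliberately return the same answer whether or not an address is registered:

import requests

# Rough illustration of account enumeration via "forgotten password" responses.
# Every value below is a hypothetical placeholder; real sites differ, and many
# return identical responses whether or not the address is registered.
SITES = {
    "example-site": {
        "url": "https://www.example.com/password/forgot",      # hypothetical
        "registered_marker": "we have sent you a reset link",  # hypothetical
    },
}

def enumerate_accounts(email):
    hits = []
    for name, site in SITES.items():
        body = requests.post(site["url"], data={"email": email}, timeout=10).text.lower()
        if site["registered_marker"] in body:
            hits.append(name)  # the site admitted the address is registered
    return hits

if __name__ == "__main__":
    print(enumerate_accounts("someone@example.org"))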
 
 
This is intended to be easy and practical, so let's get hands-on. I did this research about a year ago. First, I looked for US Army email addresses. After some Google.com searches, I ended up with some PDF files containing a lot of information about military people and defense contractors.

I extracted some emails and made a list. I ended up with:
 
1784 total email addresses: military (active and retired), civilians, and defense contractors.
 
I could have gotten more email addresses, but that was enough for testing purposes. I wasn’t planning on doing a real attack.
 
I had a very simple Python tool (about 50 LoC; thanks to my colleague Ariel Sanchez for quickly building the original version!) that can be fed a list of websites and email addresses. I had already built the website database with 20 or so common, well-known websites (I should have used military-related ones too for better results, but it still worked well). I loaded the list of email addresses I had found and ran the tool. A few minutes later I ended up with a long list of email addresses and the websites where those addresses were used (meaning where the owners of those email addresses have accounts):
 
Site                          Accounts    %
Facebook                      308         17.26457
Google                        229         12.83632
Orbitz                        182         10.20179
WashingtonPost                149         8.352018
Twitter                       108         6.053812
Plaxo                         93          5.213004
LinkedIn                      65          3.643498
Garmin                        45          2.522422
MySpace                       44          2.466368
Dropbox                       44          2.466368
NYTimes                       36          2.017937
NikePlus                      23          1.289238
Skype                         16          0.896861
Hulu                          13          0.7287
Economist                     11          0.616592
Sony Entertainment Network    9           0.504484
Ask                           3           0.168161
Gartner                       3           0.168161
Travelers                     2           0.112108
Naymz                         2           0.112108
Posterous                     1           0.056054
 
Interesting to find accounts on the Sony Entertainment Network website; who says the military can't play PlayStation? 🙂
 
I decided that I should focus on something juicier, not just random .mil email addresses. So, I did a little research about high ranking US Army officers, and as usual, Google and Wikipedia ended up being very useful.
 
Let's start with US Army Generals. Since this was done in 2012, some of them could be retired now. I also found some retired ones that may now be working for defense contractors or as trusted advisors.

 
Active US Army Generals seem not to use their .mil email addresses on common websites; however, we can see a pattern: almost all of them use orbitz.com. For the retired ones, since we got personal (not .mil) email addresses, we can see they use them on many websites.

After US Army Generals, I looked at Lieutenant Generals (we could say future Generals). Maybe because they are younger, they seem to use their .mil email addresses on several common websites, including Facebook.com. What's more, most of their Facebook information is publicly available! I was thinking about publishing the related Facebook information, but I will leave it up to you to explore their Facebook profiles.

I also looked for US Army Major Generals and found at least 15 of them:
 
Robert Abrams
Email: robert.abrams@nullus.army.mil
Found accounts on: orbitz.com, washingtonpost.com

James Boozer
Email: james.boozer@nullus.army.mil
Found accounts on: orbitz.com, facebook.com

Vincent Brooks
Email: vincent.brooks@nullus.army.mil
Found accounts on: facebook.com, linkedin.com

James Eggleton
Email: james.eggleton@nullus.army.mil
Found accounts on: plaxo.com

Reuben Jones
Email: reuben.jones@nullus.army.mil
Found accounts on: plaxo.com, washingtonpost.com

David Quantock
Email: david-quantock@nullus.army.mil
Found accounts on: twitter.com, orbitz.com, plaxo.com

Dave Halverson
Email: dave.halverson@nullconus.army.mil
Found accounts on: linkedin.com

Jo Bourque
Email: jo.bourque@nullus.army.mil
Found accounts on: washingtonpost.com

Kev Leonard
Email: kev-leonard@nullus.army.mil
Found accounts on: facebook.com

James Rogers
Email: james.rogers@nullus.army.mil
Found accounts on: plaxo.com

William Crosby
Email: william.crosby@nullus.army.mil
Found accounts on: linkedin.com

Anthony Cucolo
Email: anthony.cucolo@nullus.army.mil
Found accounts on: twitter.com, orbitz.com, skype.com, plaxo.com, washingtonpost.com, linkedin.com

Genaro Dellarocco
Email: genaro.dellarocco@nullmsl.army.mil
Found accounts on: linkedin.com

Stephen Lanza
Email: stephen.lanza@nullus.army.mil
Found accounts on: skype.com, plaxo.com, nytimes.com

Kurt Stein
Email: kurt-stein@nullus.army.mil
Found accounts on: orbitz.com, skype.com
 
 
Later I found about 7 US Army Brigadier General and 120 US Army Colonel email addresses, but I didn’t have time to properly filter the results. These email addresses were associated with many website accounts.
 
Basically, the 1784 total email addresses included a broad list of ranking officers from the US Army.
 
Doing a quick analysis of the gathered information we could infer: 
  • Many have Facebook accounts that publicly expose family and friend relationships, which could be targeted by attackers.
  • Most of them read, and are probably subscribed to, The Washington Post (makes sense, no?). This could be an interesting avenue for attacks such as phishing and watering hole attacks.
  • Many of them use orbitz.com, probably for car rentals. Hacking this site could give attackers a lot of information about how they move, when they travel, etc.
  • Many of them have accounts on google.com, probably meaning they have Android devices (smartphones, tablets, etc.). This could allow attackers to compromise the devices remotely (by email, for instance) with known or 0-day exploits, since these devices are not usually patched and not very secure.
  • And last but not least, many of them, including Generals, use garmin.com or nikeplus.com. Those websites are related to GPS devices, including running watches, and they allow users to upload GPS information, making them very valuable to attackers for tracking purposes. An attacker could learn in which area a person usually runs, travels, etc.

 

 
As we can see, it’s very cheap and easy to get information about ranking members of the US Army. People serving in the US Army should take extra precautions. They should not make their email addresses public and should only use them for “business” related issues, not personal activities.
INSIGHTS | November 11, 2013

Practical and cheap cyberwar (cyber-warfare): Part I

Every day we hear about a new vulnerability or a new attack technique, but most of the time it’s difficult to imagine the real impact. The current emphasis on cyberwar (cyber-warfare if you prefer) leads to myths and nonsense being discussed. I wanted to show real life examples of large scale attacks with big impacts on critical infrastructure, people, companies, etc.
 

The idea of this post is to raise awareness. I want to show how vulnerable some industrial, oil, and gas installations currently are and how easy it is to attack them. Another goal is to pressure vendors to produce more secure devices and to speed up the patching process once vulnerabilities are reported.


The attack in this post is based on research done by my fellow pirates, Lucas Apa and Carlos Penagos. They found critical vulnerabilities in wireless industrial control devices. This research was first presented at BH USA 2013. You can find their full presentation here https://www.blackhat.com/us-13/archives.html#Apa
 
A common information leak occurs when vendors highlight how they helped Company X with their services or products. This information is very useful for supply chain attacks. If you are targeting Company X, it’s good to look at their service and product providers. It’s also useful to know what software/devices/technology they use.

 

In this case, one of the vendors that sells vulnerable wireless industrial control devices is happy to announce in a press release that Company X has acquired its wireless sensors and is using them in the Haynesville Shale fields. So, as an attacker, we now know that Company X is using vulnerable wireless sensors at the Haynesville Shale fields. Haynesville Shale fields, what's that? Interesting; a quick Google search turns up shale well locations.
 
 
 
How does Google know about shale well locations? It's simple: publicly available information. You can display wells by name, organization, etc.
 
 
 
 
 
Even interactive maps are available.
 
 
 
You can find all of Company X’s wells along with their exact location (geographical coordinates). You know almost exactly where the vulnerable wireless sensors are installed.
 
Since the wells are in a remote location, exploiting the wireless sensor vulnerabilities becomes an interesting challenge. Enter drones: unmanned aerial vehicles (UAVs). Commercially available drones range from a couple hundred dollars to tens of thousands of dollars, depending on range, endurance, functionality, etc. You can even build your own and save some money. The idea is to put the attack payload in a drone, send it to the wells' location, and launch the attack. This isn't difficult to do, since drones can be programmed to fly to x,y coordinates and execute the payload while flying around the target coordinates (no need to return).
 
Depending on your budget, you can launch an attack from a nearby location or very far away. Depending on the drone’s endurance, you can be X miles away from the target. You can extend the drone’s range depending on the radio and antenna used. 
 
The types of exploits you could launch from the drone range from bricking all of the wireless devices to causing physical harm to the shale gas installations. Manipulating device firmware or injecting fake data into radio packets could make the control systems believe readings such as temperature or pressure are wrong. Just bricking the devices could cost Company X significant money, since the devices would need to be reconfigured/reflashed. The exploits could interfere with shale gas extraction and even halt production. The consequences of an attack could be even more catastrophic depending on how the vulnerable devices are being used.
 
Attacks could be expanded to target more than just one vendor’s device. Drones could do reconnaissance first, scan and identify devices from different vendors, and then launch attacks targeting all of the specific devices.
 
In order to highlight the attack possibilities and potential consequences, I extracted the following from http://www.onworld.com/news/newsoilandgas.html (the companies mentioned in this article are not necessarily vulnerable; this is just for illustrative purposes):
 
“…Pipelines & Corrosion Monitoring
Wireless flow, pressure, level, temperature and valve position monitoring are used to streamline pipeline operation and storage while increasing safety and regulatory compliance. In addition, wireless sensing solutions are targeted at the billions of dollars per year that is spent managing pipeline corrosion. While corrosion is a growing problem for the aging pipeline infrastructure it can also lead to leaks, emissions and even deadly explosions in production facilities and refineries….”
 
Leaks and deadly explosions can have sad consequences.
 
Cyber criminals, terrorists, state actors, etc. can launch big-impact attacks with relatively small budgets. Their attacks could produce economic losses, physical damage, and even explosions.
 
While isolated attacks have a small impact when put in the context of cyberwar, they can cause panic among populations, political crises, or geopolitical problems if combined with other, larger-impact attacks.

In a future post I will probably describe more of these kinds of large-scale, big-impact attacks.

INSIGHTS | January 21, 2013

When a Choice is a Fingerprint

We frequently hear the phrase "Attribution is hard." And yes, if the adversary exercises perfect tradecraft, attribution can be hard to the point of impossible. But we rarely mention the opposite side of that coin: how hard it is to maintain that level of tradecraft over the lifetime of an extended operation. How many times, out of muscle memory, have you absent-mindedly entered one of your passwords in the wrong application? The consequences of this are typically nonexistent if you're entering your personal credentials into your work client, but they can matter much more if you're entering your personal password while trying to log into the pwned mail server of Country X's Ministry of Foreign Affairs. People make mistakes, and the longer the timeframe, the more opportunities they have to do so.

This leads me to the recent release from Kaspersky Lab about a malware campaign referred to as "Red October," which they have attributed to Russian hackers. There are a number of indications pointing to Russian origination, including Russian words in the source code, a trojan dropper that enables Cyrillic before installation, and targets concentrated in Russia's sphere of influence. Although Kaspersky has avoided naming the sponsor of the campaign as the Russian government, the targets of the malware are strongly suggestive of government sponsorship. The campaign seems to have selectively targeted governments, diplomatic facilities, defense, research institutes, etc. These are targets consistent with sponsors seeking geopolitical intelligence, not criminals seeking profit. Kaspersky hypothesizes that the perpetrators may have collected the data in order to sell it, but I would argue that this is fallacious. The customer for this information would be a government, and if a government is paying criminals for the information, I would argue that's state sponsorship.
With that context, the one data point that was most interesting to me in Kaspersky's release was the inclusion of the word "zakladka." As Kaspersky mentions in their report, "zakladka" is a Russian word that can mean "bookmark." In slang it can also mean "undocumented feature," or a brick with an embedded microphone, like the kind you would sneak into an adversary nation's embassy.
It's delightfully poetic, then, that in a piece of malware apparently intended to target embassies, someone (presumably Russian) would choose to name a module "zakladka." The United States and Russia have a rich history of attempting to bug each other's diplomatic facilities. As early as 1945 the Soviet Union infiltrated an ingenious listening device into the office of the US ambassador to Moscow, hiding it in a wooden US Seal presented as a gift [1]. By 1964 the Soviets were able to collect extensive classified information from the US embassy through hidden microphones [2]. In 1985, construction work stopped on a new US Embassy building in Moscow after it was determined that the building was so riddled with microphones, which had been integrated into the construction, that it could never be considered secure [3].
Presumably in homage to this history, a programmer decided to name his module of code "zakladka," which would be included in malware that is effectively the evolution of a microphone hidden in drywall. Zakladka is an appropriate name, but the very elegance with which its name matches its function undermines the deniability of the malware. In this case, it was a choice made by a programmer years ago, and it has repercussions as forensic experts attempt to unravel the source of the malware today.
It’s a reminder of how humans make mistakes. Defenders often talk about the difficulty of attribution, but as the offense we seldom talk about the challenge in gaining and maintaining network access on a target system while remaining totally unnoticed. We take it for granted.
Seemingly innocuous decisions made days, weeks, or months ago can unravel what was otherwise sound tradecraft. In this case, I just found it fascinating that the choice of name for a module–an elegantly appropriate choice–can be as strong a fingerprint for attribution as anything else.