RESEARCH | November 25, 2015

Privilege Escalation Vulnerabilities Found in Lenovo System Update

Lenovo has released a new version of its Lenovo System Update advisory (https://support.lenovo.com/ar/es/product_security/lsu_privilege) covering two new privilege escalation vulnerabilities I reported to Lenovo a couple of weeks ago (CVE-2015-8109, CVE-2015-8110). Both IOActive and Lenovo have issued advisories on these issues.
 

Before digging into the details, let's go over a high-level overview of how Lenovo System Update launches its GUI application with Administrator privileges.
 
Here is a discussion of the steps depicted above:


1 – The user starts System Update by running the tvsu.exe binary, which runs TvsuCommandLauncher.exe with a specific argument. Previously, Lenovo fixed vulnerabilities IOActive discovered in which an attacker could impersonate a legitimate caller and pass a command to be executed to the SUService service through named pipes in order to gain privilege escalation. In the newer version, the argument is a number in the range 1-6 that selects a set of tasks defined in the DLL TvsuServiceCommon.dll.

 

2 – TvsuCommandLauncher.exe then, as usual, contacts the SUService service, which runs with System privileges, to process the required query with higher privileges.

 

3 – The SUService service then launches the UACSdk.exe binary with System privileges to prepare for executing the binary that runs the GUI interface with Administrator privileges.

 

4 – UACSdk.exe checks if the user is a normal unprivileged user or a Vista Administrator with the ability to elevate privileges.

 

5 – Depending on user privileges:

 

    • For a Vista Admin user, the user’s privileges are elevated.
    • For an unprivileged user, UACSdk.exe creates a temporary Administrator account with a random password, which is deleted once the application is closed.

The username for the temporary Administrator account follows the pattern tvsu_tmp_xxxxxXXXXX, where each lowercase x is a randomly generated lowercase letter and each uppercase X is a randomly generated uppercase letter. A random 19-byte password is also generated.


Here is a sample of a randomly created user:    



6 – Through the tvsukernel.exe binary, the main Lenovo System Update GUI application is then run with Administrator privileges.




BUG 1 : Lenovo System Update Help Topics Privilege Escalation
The first vulnerability is within the Help system and has two entry points by which a user can open an online help topic that starts an instance of Internet Explorer.

1 – The link in the main application interface 

 
 

2 – By clicking on the Help icon at top right and then clicking Settings
Since the main application Tvsukernel.exe is running as Administrator, the web browser instance started to open a Help URL inherits the parent's Administrator privileges.
From there, an unprivileged attacker has many ways to exploit the web browser instance running under Administrator privileges and elevate his or her own privileges to Administrator or SYSTEM.
BUG 2 : Lenovo System Update Weak Cryptography Function Privilege Escalation
The second, more technical and exploitable vulnerability lies in how the temporary Administrator account is created in the circumstances described in step 5b of the overview (the unprivileged-user case).
The interesting function for setting the temporary account is sub_402190 and contains the following important snippets of code:
 
 
The function sub_401810 accepts three arguments and is responsible for generating a random string pattern whose length is given by the third argument.
 
Since sub_401810 generates the pattern using RAND, the seed is initialized from the sum of the current time and rand values, defined as follows:
 
 
 
Once the seed is defined, the function generates the random value using a loop with RAND and division/multiplication with specific values.
 
Rather than posting the full code, I’ll note that a portion of those loops looks like the following:
 
 
 
The first call to this function is used to generate the 10-letter suffix of the Administrator username, which will be created as "tvsu_tmp_xxxxxXXXXX".
 
Since it is based on rand, the algorithm is predictable: an attacker can regenerate the same username given the time the account was created.
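To see why, here is a minimal sketch (in Python, not Lenovo's actual code) of how such a replay works. It assumes the Microsoft C runtime's rand() LCG and a purely illustrative letter mapping; Lenovo's sub_401810 mixes rand() with divisions and multiplications, so a real attack would reuse the exact constants lifted from the binary:

    import string
    import time

    def msvc_rand(seed):
        """Generator emulating the Microsoft C runtime's rand()."""
        state = seed & 0xFFFFFFFF
        while True:
            state = (state * 214013 + 2531011) & 0xFFFFFFFF
            yield (state >> 16) & 0x7FFF

    def candidate_username(seed):
        """Hypothetical reconstruction of the tvsu_tmp_xxxxxXXXXX pattern."""
        rng = msvc_rand(seed)
        lower = "".join(string.ascii_lowercase[next(rng) % 26] for _ in range(5))
        upper = "".join(string.ascii_uppercase[next(rng) % 26] for _ in range(5))
        return "tvsu_tmp_" + lower + upper

    # Knowing roughly when the account was created, an attacker simply
    # tries every nearby seed until a generated name matches the account.
    now = int(time.time())
    for delta in range(-10, 11):
        print(candidate_username(now + delta))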
 
To generate the password (which is more critical), Lenovo uses a more secure method: the Microsoft Crypto API (Method #1), within the function sub_401BE0. We will not dig into this method, because the vulnerability IOActive discovered is not within this function. Instead, let's look at how Method #2 generates a password if Method #1 fails.
 
Let’s return to the snippets of code related to password generation:
 
 
 
We can clearly see that if the function sub_401BE0 fails, execution falls back to the custom RAND-based algorithm (defined earlier in function sub_401810) to generate a predictable password for the temporary Administrator account. In other words, an attacker could predict the password created by Method #2.
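This is the classic anti-pattern of a strong generator with a weak fallback. A minimal sketch of the flaw (the names, alphabet, and failure condition are illustrative, not Lenovo's):

    import os
    import random
    import string
    import time

    ALPHABET = string.ascii_letters + string.digits

    def generate_password(length=19):
        try:
            # Method #1: a cryptographically secure source
            # (the CryptoAPI equivalent in this sketch).
            data = os.urandom(length)
            return "".join(ALPHABET[b % len(ALPHABET)] for b in data)
        except NotImplementedError:
            # Method #2: the fallback. Seeding a deterministic PRNG with
            # the current time makes the "random" password reproducible
            # by anyone who can guess the creation time.
            rng = random.Random(int(time.time()))
            return "".join(rng.choice(ALPHABET) for _ in range(length))

A safe implementation would simply abort account creation when the secure source fails, rather than silently degrading to a predictable one.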
 

This means an attacker could under certain circumstances predict both the username and password and use them to elevate his or her privileges to Administrator on the machine.
RESEARCH | September 22, 2015

Is Stegomalware in Google Play a Real Threat?

For several decades, the science of steganography has been used to hide malicious code (useful in intrusions) or to create covert channels (useful in information leakage). Nowadays, steganography can be applied to almost any logical or physical medium (file formats, images, audio, video, text, protocols, programming languages, file systems, BIOS, etc.). If the steganographic algorithms are well designed, the hidden information is really difficult to detect. Detecting hidden information, malicious or not, is so complex that the study of steganalytic algorithms (detection) has been growing. You can see the growth in scientific publications (source: Google Scholar) and in research investment by governments and institutions.
In fact, since the attacks of September 11, 2001, there has been a lot of discussion about the possibility of terrorists using this technology.
In this post, I would like to illustrate steganography’s ability to hide data in Android applications. In this experiment, I focus on Android applications published in Google Play, leaving aside alternative markets with lower security measures, where it is easier to introduce malicious code.
 
 
Is it possible to hide information on Google Play or in the Android apps released in it?
 
The answer is easy: YES! Simple techniques have been documented, from hiding malware by renaming the file extension (Android/tr DroidCoupon.A – 2011, Android/tr SmsZombie.A – 2012, Android/tr Gamex.A – 2013) to more sophisticated procedures (AngeCryption – Black Hat Europe, October 2014).
 
Let me show some examples in more depth:
 
 
Google Play Web (https://play.google.com)
 
Google Play includes a webpage for each app with information such as a title, images, and a text description. Each piece of information could conceal data using steganography (linguistic steganography, image steganography, etc.). In fact, I am going to “work” with digital images and demonstrate how Google “works” when there is hidden information inside of files.
 
To do this, I will use two known steganographic techniques: adding information to the end of file (EOF) and hiding information in the least significant bit (LSB) of each pixel of the image.
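As a rough illustration of both techniques, here is a minimal Python sketch (real tools such as OpenStego add headers, encryption, and error correction on top of this idea):

    def hide_eof(image_bytes: bytes, secret: bytes) -> bytes:
        """EOF technique: append data after the image's end marker.
        Viewers stop parsing at the marker (e.g. PNG's IEND chunk), so
        the appended bytes are invisible but travel with the file."""
        return image_bytes + secret

    def hide_lsb(pixels: bytearray, secret: bytes) -> bytearray:
        """LSB technique: store one secret bit per decoded pixel byte."""
        bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError("image too small for this payload")
        for i, bit in enumerate(bits):
            pixels[i] = (pixels[i] & 0xFE) | bit  # rewrite only the lowest bit
        return pixels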
      
 
PNG Images
 
You can upload PNG images to play.google.com that hide information using EOF or LSB techniques. Google does not remove this information.
 
For example, I created a sample app (automatically generated – https://play.google.com/store/apps/details?id=com.wMyfirstbaskeballgame) and uploaded several images (which you can see on the web) with hidden messages. In one case, I used the OpenStego steganographic tool (http://www.openstego.com/) and in another, I added the information at the end of an image with a hex editor.
 
The results can be seen by performing the following steps (analyzing the current images “released” on the website):
Example 1: PNG with EOF
 
 
Step 2: Look at the end of the file 🙂
Example 2: PNG with LSB
 
 
Step 2: Recover the hidden information using OpenStego (key=alfonso)
 
JPEG Images
If you try to upload a steganographic JPEG image (EOF or LSB) to Google Play, the hidden information will be removed. Google reprocesses the image before publishing it. This does not necessarily mean that it is not possible to hide information in this format. In fact, in social networks such as Facebook, we can “avoid” a similar problem with Secret Book or similar browser extensions. I’m working on it…
 
https://chrome.google.com/webstore/detail/secretbook/plglafijddgpenmohgiemalpcfgjjbph?hl=en-GB
 
In summary, based on the previous proofs, I can say that Google Play allows information to be hidden in the images of each app. Is this useful? It could be used to exchange hidden information (a covert channel using Google Play). The main question is whether an attacker could use masked information for some evil purpose. Perhaps they could use images to encode executable code that "will exploit" when you visit the web (using, for example, polyglots + stego exploits), or another idea. Time will tell…
 
 
 
APK Steganography
 
Applications uploaded to Google Play are not modified by the market. That is, an attacker can use any of the existing resources in an APK to hide information, and Google does not remove that information: machine code (DEX), PNG, JPEG, XML, and so on.
 
 
Could it be useful to hide information on those resources?
 
An attacker might want to conceal malicious code on these resources and hinder automatic detection (static and dynamic analysis) that focuses on the code (DEX). A simple example would be an application that hides a specific phone number in an image (APT?).
 
The app verifies the phone number and, after a few clicks on a specific screen on the mobile phone, checks whether the number is equal to the one stored in the picture. If the numbers match, the app can start leaking information (depending on the permissions granted to the application).
 
I want to demonstrate the potential of Android stegomalware with a PoC. Instead of developing one, I will analyze an active sample that has been on Google Play since June 9, 2014. This stegomalware was developed by researchers at Universidad Carlos III de Madrid, Spain (http://www.uc3m.es). The PoC hides a DEX file (executable code) in an image (a resource) of the main app. When the app is running and the user performs a series of actions, the "new" DEX file is recovered from the image. This code then runs and connects to a URL with a payload (in this case, harmless). The "bad" behavior of this application can only be detected if we analyze the app's resources in detail or simulate the interaction the app uses to trigger the connection to the URL.
 
Let me show how this app works (static manual analysis):
Step 1: Download the APK to our local store. This requires a tool, such as an APK downloader extension, or a specific website such as http://apps.evozi.com/apk-downloader/
Step 2. Unzip the APK (es.uc3m.cosec.likeimage.apk)
Step 3. Using the Stegdetect steganalytic tool (https://github.com/abeluck/stegdetect) we can detect hidden information in the image “likeimage.jpg”. The author used the F5 steganographic tool (https://code.google.com/p/f5-steganography/).
 
es.uc3m.cosec.likeimage/res/drawable-hdpi/likeimage.jpg
likeimage.jpg : f5(***)
Step 4. To analyze (reverse engineer) what the app is doing with this image, I use the dex2jar and jd tools.
Step 5. Analyzing the code, we can observe the key used to hide information in the image. We can recover the hidden content to a file (bicho.dex).
java -jar f5.jar x -p cosec -e bicho.dex likeimage.jpg
 
Step 6. Analyzing the new file (bicho.dex), we can observe the connection to http://cosec-uc3m.appspot.com/likeimage for downloading a payload.
 
Step 7. Analyzing the code and payload, we can demonstrate that it is inoffensive.
ZGV4CjAzNQDyUt1DKdvkkcxqN4zxwc7ERfT4LxRA695kAgAAcAAAAHhWNBIAAAAAAAAAANwBAAAKAAAAcAAAAAQAAACYAAAAAgAAAKgAAAAAAAAAAAAAAAMAAADAAAAAAQAAANgAAABsAQAA+AAAACgBAAAwAQAAMwEAAD0BAABRAQAAZQEAAKUBAACyAQAAtQEAALsBAAACAAAAAwAAAAQAAAAHAAAAAQAAAAIAAAAAAAAABwAAAAMAAAAAAAAAAAABAAAAAAAAAAAACAAAAAEAAQAAAAAAAAAAAAEAAAABAAAAAAAAAAYAAAAAAAAAywEAAAAAAAABAAEAAQAAAMEBAAAEAAAAcBACAAAADgACAAEAAAAAAMYBAAADAAAAGgAFABEAAAAGPGluaXQ+AAFMAAhMTUNsYXNzOwASTGphdmEvbGFuZy9PYmplY3Q7ABJMamF2YS9sYW5nL1N0cmluZzsAPk1BTElDSU9VUyBQQVlMT0FEIEZST00gVEhFIE5FVDogVGhpcyBpcyBhIHByb29mIG9mIGNvbmNlcHQuLi4gAAtNQ2xhc3MuamF2YQABVgAEZ2V0UAAEdGhpcwACAAcOAAQABw4AAAABAQCBgAT4AQEBkAIAAAALAAAAAAAAAAEAAAAAAAAAAQAAAAoAAABwAAAAAgAAAAQAAACYAAAAAwAAAAIAAACoAAAABQAAAAMAAADAAAAABgAAAAEAAADYAAAAASAAAAIAAAD4AAAAAiAAAAoAAAAoAQAAAyAAAAIAAADBAQAAACAAAAEAAADLAQAAABAAAAEAAADcAQAA
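You can verify this yourself by decoding the blob above and looking at what it contains; a minimal sketch (the decoded bytes start with the DEX magic and hold only a proof-of-concept string):

    import base64
    import re

    # payload.b64 holds the base64 blob shown above.
    with open("payload.b64") as f:
        text = f.read().strip()
    blob = base64.b64decode(text + "=" * (-len(text) % 4))  # repad defensively
    print(blob[:8])  # b'dex\n035\x00' -- the DEX file magic
    for s in re.findall(rb"[ -~]{10,}", blob):
        print(s.decode())  # includes "MALICIOUS PAYLOAD FROM THE NET: ..."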
 
The code that runs the payload:
 
Is Google detecting these “stegomalware”?
 
Well, I don’t have the answer. Clearly, steganalysis science has its limitations, but there are other ways to monitor strange behaviors in each app. Does Google do it? It is difficult to know, especially if we focus on “mutant” applications. Mutant applications are applications whose behavior could easily change. Detection would require continuous monitoring by the market. For example, for a few months I have analyzed a special application, including its different versions and the modifications that have been published, to observe if Google does anything with it. I will show the details:
Step 1. The mutant app is “Holy Quran video and MP3” (tr.com.holy.quran.free.apk). Currently at https://play.google.com/store/apps/details?id=tr.com.holy.quran.free

Step 2. Analyzing the current and previous versions of this app, I discovered connections to specific URLs (image files). Are these truly images? Not all of them.

Step 3. Two URLs that the app connects to are very interesting. In fact, they aren’t images but SQLite databases (with messages in Turkish). This is the trivial steganography technique of simply renaming the file extension. The author changed the content of these files:

Step 4. If we analyze these databases, it is possible to find curious messages. For example, recipes with drugs.
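Renamed-extension tricks like the one in Step 3 are trivial to triage by comparing a file's magic bytes against its extension; a minimal sketch:

    MAGIC = {
        b"\x89PNG\r\n\x1a\n": "png",
        b"\xff\xd8\xff": "jpeg",
        b"SQLite format 3\x00": "sqlite3",
    }

    def sniff(path: str) -> str:
        """Identify a file by its magic bytes, ignoring its extension."""
        with open(path, "rb") as f:
            header = f.read(16)
        for magic, kind in MAGIC.items():
            if header.startswith(magic):
                return kind
        return "unknown"

    # A "PNG" that is really a database gives itself away immediately:
    print(sniff("io.png"))  # -> "sqlite3"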

Is Google aware of the information exchanged using their applications? This example is little more than a curiosity, but such procedures might violate the publication policies for certain applications on the market, or worse.
 
Figure: Recipes inside the file io.png (SQLite database)

In summary, this small experiment shows that we can hide information on Google Play and Android apps in general. This feature can be used to conceal data or implement specific actions, malicious or not. Only the imagination of an attacker will determine how this feature will be used…


Disclaimer: part of this research is based on previous research by the author at ElevenPaths.
INSIGHTS | May 11, 2015

Vulnerability disclosure: the good and the ugly

I can’t believe I continue to write about disclosure problems. More than a decade ago, I started disclosing vulnerabilities to vendors and working with them to develop fixes. Since then, I have reported hundreds of vulnerabilities. I often think I have seen everything, and yet, I continue to be surprised over and over again. I wrote a related blog post a year and a half ago (Vulnerability bureaucracy: Unchanged after 12 years), and I will continue to write about disclosure problems until it’s no longer needed.

 

Everything is becoming digital. Vendors are producing software for the first time or with very little experience, and many have no security knowledge. As a result, insecure software is being deployed worldwide. The Internet of Things (IoT), industrial devices and industrial systems (SCADA/ICS), Smart City technology, automobile systems, and so on are insecure and getting worse instead of better.

 

Besides lacking security knowledge, many vendors do not know how to deal with vulnerability reports. They don't know what to do when an individual researcher or company privately discloses a vulnerability to them, how to properly communicate the problem, or how to fix it. Many vendors haven't planned for security patches. Basically, they never considered the possibility of a latent security flaw. This creates many of the problems the research community commonly faces.

 

When IOActive recently disclosed vulnerabilities in CyberLock products, we faced problems, including threats from CyberLock’s lawyers related to the Digital Millennium Copyright Act (DMCA). CyberLock’s response is a very good example of a vendor that does not know how to properly deal with vulnerability reports.

 

On the other hand, we had a completely different experience when we recently reported vulnerabilities to Lenovo. Lenovo’s response was very professional and collaborative. They even publicly acknowledged our collaboration:

 

“Lenovo’s development and security teams worked directly with IOActive regarding their System Update vulnerability findings, and we value their expertise in identifying and responsibly reporting them.”

Source: http://www.absolutegeeks.com/2015/05/06/round-2-lenovo-is-back-in-the-news-for-a-new-security-risk-in-their-computers (no longer active)

IOActive approached both cases in the same way, but with two completely different reactions and results.

 

We always try to contact the affected vendor through a variety of channels and offer our collaboration to ensure a fix is in place before we disclose our research to the public. We invest a lot of time and resources in helping vendors understand the vulnerabilities we find. We have calls with developers and managers, test their fixes, and so on, all for free and without expecting anything in return. We do not propose or discuss business opportunities; our only motive is to see that the vulnerabilities get fixed. We have a great track record: we've reported dozens of vulnerabilities and collaborated with many vendors and CERTs.

 

When a vendor is nonresponsive, we feel that the best solution is usually to disclose the vulnerability to the public. We do this as a last resort, as no vendor patch or solution will be available in such a case. We do not want to be complicit in hiding a flaw. Letting people know can force the vendor to address the vulnerability.

Dealing with vulnerability reports shouldn’t be this difficult. I’m going to give some advice, based on my experience, to help companies avoid vulnerability disclosure problems and improve the security of their products:

 

  • Clearly display a contact email for vulnerability reports on the company/product website
  • Continuously monitor that email address and instantly acknowledge when you get a vulnerability report
  • Agree on response procedures including regular timeframes for updating status information after receiving the report
  • Always be collaborative with the researcher/company, even if you don’t like the reporter
  • Always thank the researcher/company for the report, even if you don’t like the reporter
  • Ask the reporter for help if needed, and work together with the reporter to find better solutions
  • Agree on a time for releasing a fix
  • Agree on a time for publicly disclosing the vulnerability
  • Release the fix on time and alert customers

That’s it! Not so difficult. Any company that produces software should follow these simple guidelines at a minimum.

It comes down to this: If you produce software, consider the possibility that your product will have security vulnerabilities and plan accordingly. You will do yourself a big favor, save money, and possibly save your company’s reputation too.
RESEARCH | October 17, 2014

Vicious POODLE Finally Kills SSL

The poodle must be the most vicious dog, because it has killed SSL.

 

POODLE is the latest in a rather lengthy string of vulnerabilities in SSL (Secure Socket Layer) and its more recent successor, TLS (Transport Layer Security). Both protocols secure data that is being sent between applications to prevent eavesdropping, tampering, and message forgery.

POODLE (Padding Oracle On Downgraded Legacy Encryption) rings the death knell for our 18-year-old friend SSL version 3.0 (SSLv3), because at this point, there is no truly safe way to continue using it.

Google announced Tuesday that its researchers had discovered POODLE. The announcement came amid rumors about the researchers' security advisory white paper detailing the vulnerability, which had been circulating internally.

SSLv3 had survived numerous prior vulnerabilities, including SSL renegotiation, BEAST, CRIME, Lucky 13, and RC4 weakness. Finally, its time has come; SSLv3 is long overdue for deprecation.

The security industry’s initial view is that POODLE will not be as devastating as other recent vulnerabilities such as Heartbleed, a TLS bug. After all, POODLE is a client-side attack; the others were direct server-side attacks.

However, I believe POODLE will ultimately have a larger overall impact than Heartbleed. Even the hundreds of thousands of applications that use a more recent TLS protocol still use SSLv3 as part of backward compatibility. In addition, some applications that directly use SSLv3 may not support any version of TLS; for these, there might not be a quick fix, if there will be one at all.


POODLE attacks the SSLv3 block ciphers by abusing the non-deterministic nature of CBC cipher padding. The Message Authentication Code (MAC), which checks the integrity of every message after decryption, does not cover these padding bytes. What does this mean? The padding can't be fully verified. In other words, this attack is very capable of determining the value of HTTPS cookies. This is the heart of the problem. That might not seem like a huge issue until you consider that this may be a session cookie, and the user's session could potentially be compromised.
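To make the record layout concrete, here is a rough sketch of SSLv3's MAC-then-pad-then-encrypt construction (the primitives are stand-ins; SSLv3's real MAC is an older, pre-HMAC construction):

    import hashlib
    import hmac
    import os

    BLOCK = 16  # block size of the CBC cipher

    def ssl3_record_plaintext(data: bytes, mac_key: bytes) -> bytes:
        """Plaintext of an SSLv3 CBC record: data || MAC || padding."""
        mac = hmac.new(mac_key, data, hashlib.sha1).digest()  # covers data only
        body = data + mac
        pad_len = (BLOCK - (len(body) + 1)) % BLOCK
        # SSLv3 fills the padding with ARBITRARY bytes; only the final
        # pad-length byte is checked after decryption.
        return body + os.urandom(pad_len) + bytes([pad_len])

Because none of the padding bytes are authenticated, a man-in-the-middle can substitute a chosen ciphertext block for a record's final, all-padding block; decryption then succeeds once in 256 attempts, and each success reveals one plaintext byte, such as a byte of an HTTPS session cookie.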

TLS version 1.0 (TLSv1.0) and higher versions are not affected by POODLE because these protocols are strict about the contents of the padding bytes. Therefore, TLSv1.0 is still considered safe for CBC mode ciphers. However, we shouldn’t let that lull us into complacency. Keep in mind that even the clients and servers that support recent TLS versions can force the use of SSLv3 by downgrading the transmission channel, which is often still supported. This ‘downgrade dance’ can be triggered through a variety of methods. What’s important to know is that it can happen.

There are a few ways to prevent POODLE from affecting your communication:

Plan A: Disable SSLv3 for all applications. This is the most effective mitigation for both clients and servers.

Plan B: As an alternative, you could disable all CBC ciphers for SSLv3. This will protect you from POODLE but leaves RC4 as the only remaining "strong" cryptographic cipher, which, as mentioned above, has had weaknesses in the past.

Plan C: If an application must continue supporting SSLv3 in order to work correctly, implement the TLS_FALLBACK_SCSV mechanism. Some vendors are taking this approach for now, but it is a coping technique, not a solution. It addresses problems with retried connections and prevents reversion to earlier protocols, as described in the document TLS Fallback Signaling Cipher Suite Value for Preventing Protocol Downgrade Attacks (Draft Released for Comments).

How to Implement Plan A

With no solution that would allow truly safe continued use of SSLv3, you should implement Plan A: Disable SSLv3 for both server and client applications wherever possible, as described below.

Disable SSLv3 for Browsers

Chrome: Add the command-line flag --ssl-version-min=tls1 so the browser uses TLSv1.0 or higher.

Internet Explorer: Go to IE's Tools menu -> Internet Options -> Advanced tab. Near the bottom of the tab, clear the Use SSL 3.0 checkbox.

Firefox: Type about:config in the address bar and set security.tls.version.min to 1.

Adium/Pidgin: Clear the Force Old-Style SSL checkbox.

Note: Some browser vendors are already issuing patches and others are offering diagnostic tools to assess connectivity.

If your device has multiple users, secure the browsers for every user. Your mobile browsers are vulnerable as well.

Disable SSLv3 on Server Software

Apache: Add -SSLv3 to the SSLProtocol line.

IIS 7: Because this is an involved process that requires registry tweaking and a reboot, please refer to Microsoft's instructions: https://support.microsoft.com/kb/187498/en-us

Postfix: In main.cf, set smtpd_tls_mandatory_protocols=!SSLv3 and ensure that !SSLv2 is present too.

Stay alert for news from your application vendors about patches and recommendations.
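If you control the application code itself, most TLS stacks also let you refuse SSLv3 directly. For example, a minimal sketch for a Python client (ssl.OP_NO_SSLv3 requires Python 2.7.9+/3.4+; the host and port are illustrative):

    import socket
    import ssl

    # Negotiate the highest protocol both sides support, but refuse
    # SSLv2/v3 so a "downgrade dance" can no longer land on SSLv3.
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    context.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())  # TLSv1 or newer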

POODLE is a high risk for payment gateways and other applications that might expose credit card data and must be fixed in 30 days, according to Payment Card Industry standards. The clock is ticking.

 

 

Conclusion

Ideally, the security industry should move to recent versions of the TLS protocol. Each iteration has brought improvements, yet adoption has been slow. For instance, TLS version 1.2, introduced in mid-2008, was scarcely implemented until recently, and many services that support TLSv1.2 must still support the older TLSv1.0 because clients don't yet support the newer protocol version.

A draft of TLS version 1.3 released in July 2014 removes support for features that were discovered to make encrypted data vulnerable. Our world will be much safer if we quickly embrace it.

Robert Zigweid
INSIGHTS | November 15, 2007

The KEYLOK USB Dongle. Little. Green. And dead before it was born!

We decided to do a teardown on a Keylok USB-based dongle from Microcomputer Applications, Inc. (MAI).

Opening the dongle was no challenge at all. We used an x-acto knife to slit the sidewall of the rubber protective coating. This allowed us to remove the dongle’s circuit board from the surrounding protective coating.

The top side of the printed circuit board (PCB) is shown above. MAI did not try to conceal anything internally. We were a little surprised by this :(.

The backside consists of two tracks and a large ground plane. The circuit is very simple for an attacker to duplicate.

With the devices removed, a schematic can be created literally within minutes. The 20-pin version of CY7C63101A can even be used in place of the smaller SOIC 24-pin package (which is difficult for some to work with). The 20-pin is also available in a dual-inline-package (DIP) making it a great candidate for an attacker to use.

Red pin denotes pin 1 on the device.

We performed some magic, and once again we successfully unlocked the once-protected device. A quick look for ASCII text reveals a bunch of strings beginning around address $06CB: .B.P.T. .E.n.t.e.r.p.r.i.s.e.s…D.o.n.g.l.e. .D.o.n.g.l.e. .C.o.m.m.<
.E.n.d.P.o.i.n.t.1. .1.0.m.s. .I.n.t.e.r.r.u.p.t. .P.i.p.e.

Ironically, they say, "There are many advantages to using a hardware-based security solution, AKA a Dongle. There are even more advantages, however, to using KEYLOK Dongles over other competing solutions."

Statements such as the one above are the reason Flylogic Engineering started this blog. We have heard this one too many times from companies who are frankly pushing garbage. Garbage in, garbage out. Enough said on that.

This dongle is the weakest hardware-based security token we have ever seen!! The ease of entry through its outer physical protection layer places this dongle last on our list of who's hot and who's not!

INSIGHTS | November 3, 2007

In retrospect – A quick peek at the Intel 80286

We thought we would mix the blog up a little and take you back in time, to a time when the fastest PCs ran at a mere 12 MHz. The year was 1982. Some of us were busy trying to beat Zork or one of the Ultima series of role-playing games. You were lucky to have a color monitor on your PC back then.

We happen to have a 1982-era Siemens 80286.

If anyone is interested in donating any old devices such as an i4004 or i8008, please email us.
INSIGHTS | October 30, 2007

Safenet iKey 2032 In-depth Look Inside

Chances are you have probably seen one of these little USB-based tokens made by Safenet, Inc.

The one we opened was in a blue shell.

 

Safenet says, "iKey 2032 is a compact, two-factor authentication token that provides client security for network authentication, e-mail encryption, and digital signing applications."

As well, the brochure the link above takes you to states, "iKey 2032's small size and rugged, tamper-resistant construction make it easy to carry, so users can always have their unique digital entities with them."

Now, we're not really sure what tamper-resistant construction has to do with making things easy for a user to carry around, but let's get down to the good stuff.

 

We carefully decapsulated the epoxy covering the die buried inside the 24-pin SOIC part. What did we find? We found a Cypress CY7C63613! We suspected it might be this part because of the pinout. This is why scratching off the top of the part does not always help: even with the silkscreen scratched away, there are only a few possible candidates with this pinout. Additionally, this CPU is very commonly used in USB applications.

 

Once the CPU was decapsulated, we performed some tests on the device. After executing some tricks, the software contained inside was magically in our hands.

 

We looked for some type of copyright information in the software but all we found was the USB identifier string at address offset $3C0: i.K.e.y. .2.0.3.2

 

Now that we have successfully analyzed the CPU, the protocol for communicating with whatever is present under the epoxy is available to us. At this point, we believe it's more than a serial EEPROM, because this CPU is not strong enough to calculate asymmetric cryptographic algorithms in a timely manner.

 

Next we carefully removed the die-bonded substrate from the PCB:

 

With the die-bonded device removed and a little cleanup, we can clearly see the bondout pattern for a die-bonded smartcard IC. We can see VCC, RST, CLK, IO, and GND laid out according to the ISO-7816 standard, which Flylogic Engineering are experts on.

 

After completely decapsulating the smartcard processor, we found a quite common Philips smartcard IC. From now on, we will call this part the Crypto-Coprocessor (CCP).

 

The CCP fits into place on the PCB. It is glued down, and then five aluminum wires are wedge-bonded to the PCB. Aluminum wedge-bonding was used so the PCB would not need to be heated, which helps cut down the time required on the assembly line.

 

In preparation for analysis, we had to rebond the CCP into a 24-pin ceramic DIP (CDIP). Although we only needed five contacts rebonded, the die size was too large to fit into the cavity of an 8-pin CDIP.

 

The CCP is fabricated by Philips. It appears to be a ~250nm, five-metal-layer technology based on the Intel 8051 platform. It contains 32k of EEPROM, two static RAM areas, and a ROM nested underneath a mesh made up of someone's initials (probably the layout designer's).

 

This CPU (the CCP is also a CPU, but one acting as a slave to the Cypress CPU) is not secure. In fact, this same CPU is found all over the globe in GSM SIM cards. The only difference is the code contained inside the processor.

 

Some points of interest:

 

Point #1: The 'mesh' protecting the ROM's databus outputs from probing is NOT SECURE!
Point #2: A quick search on the internet turned up a public document from when Philips tried to get this part, or a part very close to this one, Common Criteria certified. The document labels this assumed-to-be part as a "Philips P8WE5033V0F Secure 8-bit Smart Card Controller."

 

Reading over this document, we find a block diagram on page 8.

 

The diagram shows "Security Sensors" as a block of logic. That's ironic, considering we opened a gaping hole in their "mesh" over the ROM and the processor still runs 100% functional.

 

Point #3: For such a "secure" device, Philips could have done a lot more. The designers were pretty careless in a lot of areas. Simply reconnecting the two tracks together will definitely be helpful to an attacker. A Focused Ion-Beam workstation can make bond pads for those two tracks, which we can then bond out to the CDIP. This way we can short or open this test circuit.

 

Now ask yourself: if you were a potential customer of Safenet, Inc., would you purchase this token?
INSIGHTS | October 26, 2007

Unmarked Die Revisions :: Part I

We have noticed a few different die revisions on various Microchip substrates that caught our attention. In most cases, when a company executes any type of change to the die, it changes the nomenclature slightly. An example is the elder PIC16C622: after some changes, the later part was named the PIC16C622A, and there were major silicon layout changes in the newer 'A' part.

The PIC16C54 has been through three known silicon revs (‘A’ – ‘C’) and has now been replaced by the PIC16F54.

However, we've noticed two different devices from them (the PIC12F683 and PIC18F1320) that caught our eye. The PIC12F683 changes seem to be purely security related: the code protection output track was rerouted.

Our guess: they were concerned the magic track was too easily accessible.

We will focus this article on the PIC18F1320 and consider it a follow-up to our friends at Bunnie Studios, LLC. A few years ago, Bunnie wrote an article about being able to reset the fuses of the PIC18F1320 to a '1', thus unlocking the once-protected PIC.

The newly improvised second-generation PIC18F1320, with fill patterns placed over open areas.

You might ask yourself: what are they hiding? The part contains three metal layers; however, the top layer has been partially removed by wet-etching techniques, which allows us to see below it in denser areas. The second-generation PIC18F1320 now has these cells covered, aka Bunnie attack prevention!

Conclusion: Did they achieve their goal? From an optical attack, sort of. They were not expecting the attacker to be able to selectively remove this covering metal. Stay tuned for part II of this report, where we show you this area with the covering metal removed and the fuses exposed once again!