INSIGHTS, RESEARCH | September 11, 2020

WSL 2.0 dxgkrnl Driver Memory Corruption

The year 2020 has been a disaster of biblical proportions. Old Testament, real wrath of God type stuff. Fire and brimstone coming down from the skies! Rivers and seas boiling! Forty years of darkness, earthquakes, volcanoes, the dead rising from the grave! Human sacrifices, dogs and cats living together…mass hysteria and reporting Linux kernel bugs to Microsoft!? I thought I would write up a quick blog post explaining the following tweet and walk through a memory corruption flaw reported to MSRC that was recently fixed.

Back in May, before Alex Ionescu briefly disappeared from the Twitter-verse causing a reactionary slew of conspiracy theories, he sent out this tweet calling out the dxgkrnl driver. That evening it was brought to my attention by my buddy Ilja van Sprundel. Ilja has done a lot of driver research over the years, some involving Windows kernel graphics drivers. The announcement of dxgkrnl was exciting and piqued our interest in the new attack surface it opens up, so we decided to quickly dive in and race to find bugs. When examining kernel drivers, the first place I head to is the IOCTL (Input/Output Control) handlers. IOCTL handlers allow users to communicate with the driver via the ioctl syscall. This is a prime attack surface because the driver handles userland-provided data within kernel space. Looking into drivers/gpu/dxgkrnl/ioctl.c, the following function sits at the bottom of the file, giving us a full list of the IOCTL handlers we want to analyze.

void ioctl_desc_init(void)
{
    memset(ioctls, 0, sizeof(ioctls));
    SET_IOCTL(/*0x1 */ dxgk_open_adapter_from_luid,
          LX_DXOPENADAPTERFROMLUID);
    SET_IOCTL(/*0x2 */ dxgk_create_device,
          LX_DXCREATEDEVICE);
    SET_IOCTL(/*0x3 */ dxgk_create_context,
          LX_DXCREATECONTEXT);
    SET_IOCTL(/*0x4 */ dxgk_create_context_virtual,
          LX_DXCREATECONTEXTVIRTUAL);
    SET_IOCTL(/*0x5 */ dxgk_destroy_context,
          LX_DXDESTROYCONTEXT);
    SET_IOCTL(/*0x6 */ dxgk_create_allocation,
          LX_DXCREATEALLOCATION);
    SET_IOCTL(/*0x7 */ dxgk_create_paging_queue,
          LX_DXCREATEPAGINGQUEUE);
    SET_IOCTL(/*0x8 */ dxgk_reserve_gpu_va,
          LX_DXRESERVEGPUVIRTUALADDRESS);
    SET_IOCTL(/*0x9 */ dxgk_query_adapter_info,
          LX_DXQUERYADAPTERINFO);
    SET_IOCTL(/*0xa */ dxgk_query_vidmem_info,
          LX_DXQUERYVIDEOMEMORYINFO);
    SET_IOCTL(/*0xb */ dxgk_make_resident,
          LX_DXMAKERESIDENT);
    SET_IOCTL(/*0xc */ dxgk_map_gpu_va,
          LX_DXMAPGPUVIRTUALADDRESS);
    SET_IOCTL(/*0xd */ dxgk_escape,
          LX_DXESCAPE);
    SET_IOCTL(/*0xe */ dxgk_get_device_state,
          LX_DXGETDEVICESTATE);
    SET_IOCTL(/*0xf */ dxgk_submit_command,
          LX_DXSUBMITCOMMAND);
    SET_IOCTL(/*0x10 */ dxgk_create_sync_object,
          LX_DXCREATESYNCHRONIZATIONOBJECT);
    SET_IOCTL(/*0x11 */ dxgk_signal_sync_object,
          LX_DXSIGNALSYNCHRONIZATIONOBJECT);
    SET_IOCTL(/*0x12 */ dxgk_wait_sync_object,
          LX_DXWAITFORSYNCHRONIZATIONOBJECT);
    SET_IOCTL(/*0x13 */ dxgk_destroy_allocation,
          LX_DXDESTROYALLOCATION2);
    SET_IOCTL(/*0x14 */ dxgk_enum_adapters,
          LX_DXENUMADAPTERS2);
    SET_IOCTL(/*0x15 */ dxgk_close_adapter,
          LX_DXCLOSEADAPTER);
    SET_IOCTL(/*0x16 */ dxgk_change_vidmem_reservation,
          LX_DXCHANGEVIDEOMEMORYRESERVATION);
    SET_IOCTL(/*0x17 */ dxgk_create_hwcontext,
          LX_DXCREATEHWCONTEXT);
    SET_IOCTL(/*0x18 */ dxgk_create_hwqueue,
          LX_DXCREATEHWQUEUE);
    SET_IOCTL(/*0x19 */ dxgk_destroy_device,
          LX_DXDESTROYDEVICE);
    SET_IOCTL(/*0x1a */ dxgk_destroy_hwcontext,
          LX_DXDESTROYHWCONTEXT);
    SET_IOCTL(/*0x1b */ dxgk_destroy_hwqueue,
          LX_DXDESTROYHWQUEUE);
    SET_IOCTL(/*0x1c */ dxgk_destroy_paging_queue,
          LX_DXDESTROYPAGINGQUEUE);
    SET_IOCTL(/*0x1d */ dxgk_destroy_sync_object,
          LX_DXDESTROYSYNCHRONIZATIONOBJECT);
    SET_IOCTL(/*0x1e */ dxgk_evict,
          LX_DXEVICT);
    SET_IOCTL(/*0x1f */ dxgk_flush_heap_transitions,
          LX_DXFLUSHHEAPTRANSITIONS);
    SET_IOCTL(/*0x20 */ dxgk_free_gpu_va,
          LX_DXFREEGPUVIRTUALADDRESS);
    SET_IOCTL(/*0x21 */ dxgk_get_context_process_scheduling_priority,
          LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY);
    SET_IOCTL(/*0x22 */ dxgk_get_context_scheduling_priority,
          LX_DXGETCONTEXTSCHEDULINGPRIORITY);
    SET_IOCTL(/*0x23 */ dxgk_get_shared_resource_adapter_luid,
          LX_DXGETSHAREDRESOURCEADAPTERLUID);
    SET_IOCTL(/*0x24 */ dxgk_invalidate_cache,
          LX_DXINVALIDATECACHE);
    SET_IOCTL(/*0x25 */ dxgk_lock2,
          LX_DXLOCK2);
    SET_IOCTL(/*0x26 */ dxgk_mark_device_as_error,
          LX_DXMARKDEVICEASERROR);
    SET_IOCTL(/*0x27 */ dxgk_offer_allocations,
          LX_DXOFFERALLOCATIONS);
    SET_IOCTL(/*0x28 */ dxgk_open_resource,
          LX_DXOPENRESOURCE);
    SET_IOCTL(/*0x29 */ dxgk_open_sync_object,
          LX_DXOPENSYNCHRONIZATIONOBJECT);
    SET_IOCTL(/*0x2a */ dxgk_query_alloc_residency,
          LX_DXQUERYALLOCATIONRESIDENCY);
    SET_IOCTL(/*0x2b */ dxgk_query_resource_info,
          LX_DXQUERYRESOURCEINFO);
    SET_IOCTL(/*0x2c */ dxgk_reclaim_allocations,
          LX_DXRECLAIMALLOCATIONS2);
    SET_IOCTL(/*0x2d */ dxgk_render,
          LX_DXRENDER);
    SET_IOCTL(/*0x2e */ dxgk_set_allocation_priority,
          LX_DXSETALLOCATIONPRIORITY);
    SET_IOCTL(/*0x2f */ dxgk_set_context_process_scheduling_priority,
          LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY);
    SET_IOCTL(/*0x30 */ dxgk_set_context_scheduling_priority,
          LX_DXSETCONTEXTSCHEDULINGPRIORITY);
    SET_IOCTL(/*0x31 */ dxgk_signal_sync_object_cpu,
          LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU);
    SET_IOCTL(/*0x32 */ dxgk_signal_sync_object_gpu,
          LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU);
    SET_IOCTL(/*0x33 */ dxgk_signal_sync_object_gpu2,
          LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2);
    SET_IOCTL(/*0x34 */ dxgk_submit_command_to_hwqueue,
          LX_DXSUBMITCOMMANDTOHWQUEUE);
    SET_IOCTL(/*0x35 */ dxgk_submit_wait_to_hwqueue,
          LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE);
    SET_IOCTL(/*0x36 */ dxgk_submit_signal_to_hwqueue,
          LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE);
    SET_IOCTL(/*0x37 */ dxgk_unlock2,
          LX_DXUNLOCK2);
    SET_IOCTL(/*0x38 */ dxgk_update_alloc_property,
          LX_DXUPDATEALLOCPROPERTY);
    SET_IOCTL(/*0x39 */ dxgk_update_gpu_va,
          LX_DXUPDATEGPUVIRTUALADDRESS);
    SET_IOCTL(/*0x3a */ dxgk_wait_sync_object_cpu,
          LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU);
    SET_IOCTL(/*0x3b */ dxgk_wait_sync_object_gpu,
          LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU);
    SET_IOCTL(/*0x3c */ dxgk_get_allocation_priority,
          LX_DXGETALLOCATIONPRIORITY);
    SET_IOCTL(/*0x3d */ dxgk_query_clock_calibration,
          LX_DXQUERYCLOCKCALIBRATION);
    SET_IOCTL(/*0x3e */ dxgk_enum_adapters3,
          LX_DXENUMADAPTERS3);
    SET_IOCTL(/*0x3f */ dxgk_share_objects,
          LX_DXSHAREOBJECTS);
    SET_IOCTL(/*0x40 */ dxgk_open_sync_object_nt,
          LX_DXOPENSYNCOBJECTFROMNTHANDLE2);
    SET_IOCTL(/*0x41 */ dxgk_query_resource_info_nt,
          LX_DXQUERYRESOURCEINFOFROMNTHANDLE);
    SET_IOCTL(/*0x42 */ dxgk_open_resource_nt,
          LX_DXOPENRESOURCEFROMNTHANDLE);
}

When working through this list of functions, I eventually stumbled onto dxgk_signal_sync_object_cpu, which raised immediate red flags. We can see that data is copied from userland into kernel space via dxg_copy_from_user() in the form of a d3dkmt_signalsynchronizationobjectfromcpu structure, and that data is then passed as various arguments to dxgvmb_send_signal_sync_object().

struct d3dkmt_signalsynchronizationobjectfromcpu {
    d3dkmt_handle device;
    uint object_count;
    d3dkmt_handle *objects;
    uint64_t  *fence_values;
    struct d3dddicb_signalflags flags;
};

static int dxgk_signal_sync_object_cpu(struct dxgprocess *process,
                       void *__user inargs)
{
    struct d3dkmt_signalsynchronizationobjectfromcpu args;
    struct dxgdevice *device = NULL;
    struct dxgadapter *adapter = NULL;
    int ret = 0;

    TRACE_FUNC_ENTER(__func__);

    ret = dxg_copy_from_user(&args, inargs, sizeof(args));  // User controlled data copied into args
    if (ret)
        goto cleanup;

    device = dxgprocess_device_by_handle(process, args.device);
    if (device == NULL) {
        ret = STATUS_INVALID_PARAMETER;
        goto cleanup;
    }

    adapter = device->adapter;
    ret = dxgadapter_acquire_lock_shared(adapter);
    if (ret) {
        adapter = NULL;
        goto cleanup;
    }

    ret = dxgvmb_send_signal_sync_object(process, &adapter->channel,        // User controlled data passed as arguments
                         args.flags, 0, 0,                              // specific interest args.object_count
                         args.object_count, args.objects, 0,
                         NULL, args.object_count,
                         args.fence_values, NULL,
                         args.device);

cleanup:

    if (adapter)
        dxgadapter_release_lock_shared(adapter);
    if (device)
        dxgdevice_release_reference(device);

    TRACE_FUNC_EXIT(__func__, ret);
    return ret;
}

The IOCTL handler dxgk_signal_sync_object_cpu lacked input validation of user-controlled data. The user passes a d3dkmt_signalsynchronizationobjectfromcpu structure, which contains a uint value for object_count. Moving deeper into dxgvmb_send_signal_sync_object (drivers/gpu/dxgkrnl/dxgvmbus.c), we control the following arguments at this point, and there has been zero validation:

  • args.flags (flags)
  • args.object_count (object_count, fence_count)
  • args.objects (objects)
  • args.fence_values (fences)
  • args.device (device)

An interesting note is that args.object_count is being used for both the object_count and fence_count. Generally a count is used to calculate length, so it’s important to keep an eye out for counts that you control. You’re about to witness some extremely trivial bugs. If you’re inexperienced at auditing C code for vulnerabilities, see how many issues you can spot before reading the explanations below.

int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
                   struct dxgvmbuschannel *channel,
                   struct d3dddicb_signalflags flags,
                   uint64_t legacy_fence_value,
                   d3dkmt_handle context,
                   uint object_count,
                   d3dkmt_handle __user *objects,
                   uint context_count,
                   d3dkmt_handle __user *contexts,
                   uint fence_count,
                   uint64_t __user *fences,
                   struct eventfd_ctx *cpu_event_handle,
                   d3dkmt_handle device)
{
    int ret = 0;
    struct dxgkvmb_command_signalsyncobject *command = NULL;
    uint object_size = object_count * sizeof(d3dkmt_handle);            
    uint context_size = context_count * sizeof(d3dkmt_handle);          
    uint fence_size = fences ? fence_count * sizeof(uint64_t) : 0;      
    uint8_t *current_pos;
    uint cmd_size = sizeof(struct dxgkvmb_command_signalsyncobject) +
        object_size + context_size + fence_size;

    if (context)
        cmd_size += sizeof(d3dkmt_handle);                              

    command = dxgmem_alloc(process, DXGMEM_VMBUS, cmd_size);
    if (command == NULL) {
        ret = STATUS_NO_MEMORY;
        goto cleanup;
    }

    command_vgpu_to_host_init2(&command->hdr,                     
                   DXGK_VMBCOMMAND_SIGNALSYNCOBJECT,
                   process->host_handle);

    if (flags.enqueue_cpu_event)
        command->cpu_event_handle = (winhandle) cpu_event_handle; 
    else
        command->device = device;                                     
    command->flags = flags;                                             
    command->fence_value = legacy_fence_value;                          
    command->object_count = object_count;                               
    command->context_count = context_count;                             
    current_pos = (uint8_t *) &command[1];
    ret = dxg_copy_from_user(current_pos, objects, object_size);        
    if (ret) {
        pr_err("Failed to read objects %p %d",
               objects, object_size);
        goto cleanup;
    }
    current_pos += object_size;
    if (context) {
        command->context_count++;
        *(d3dkmt_handle *) current_pos = context;
        current_pos += sizeof(d3dkmt_handle);
    }
    if (context_size) {
        ret = dxg_copy_from_user(current_pos, contexts, context_size);
        if (ret) {
            pr_err("Failed to read contexts %p %d",
                   contexts, context_size);
            goto cleanup;
        }
        current_pos += context_size;
    }
    if (fence_size) {
        ret = dxg_copy_from_user(current_pos, fences, fence_size);
        if (ret) {
            pr_err("Failed to read fences %p %d",
                   fences, fence_size);
            goto cleanup;
        }
    }

    ret = dxgvmb_send_sync_msg_ntstatus(channel, command, cmd_size);

cleanup:
    if (command)
        dxgmem_free(process, DXGMEM_VMBUS, command);
    TRACE_FUNC_EXIT_ERR(__func__, ret);
    return ret;
}

The count we control is used in multiple locations throughout this IOCTL for buffer length calculations without validation. This leads to multiple integer overflows, followed by an undersized allocation that causes memory corruption.

Integer overflows:

  • Our controlled value object_count is used to calculate object_size.
  • Our controlled value fence_count (the same object_count value) is used to calculate fence_size.
  • cmd_size is then calculated from the previous object_size and fence_size values.
  • cmd_size could also simply overflow from adding sizeof(d3dkmt_handle) if it were already large enough.

Memory corruption:

  • The resulting cmd_size is ultimately used as the length for dxgmem_alloc. As attackers, we can force this to be very small.

Since the newly allocated command buffer can be extremely small, the writes that follow can corrupt memory:

  • The command header and fields (hdr, cpu_event_handle/device, flags, fence_value, object_count, context_count) are all written into the buffer, and depending on the size we forced, there is no guarantee there is space for them.
  • Execution eventually reaches the two dxg_copy_from_user calls for the objects and fences. In both cases, user-controlled data is copied in using the original, extremely large size values (remember, our object_count was used to calculate both object_size and fence_size).

Hopefully this inspires you to take a peek at other open-source drivers and hunt down security bugs. This issue was reported to MSRC on May 20th, 2020 and resolved on August 26th, 2020, receiving a severity of Important with an impact of Elevation of Privilege.

You can view the patch commit here with the new added validation.

EDITORIAL | September 8, 2020

Reclaiming Hallway Con

We have several exciting things happening with our blog content. Like many, we’ve been working to recapture the value lost along with face-to-face gatherings at meetings, conventions, and informal get-togethers. Many veterans of the conference circuit will tell you that by far the most valuable part of a typical conference is the hallway con: the informal discussions, networking, and often serendipitous meetings that happen outside the formal conference agenda.

IOActive is helping reclaim hallway con by making some of that valuable content available in a pandemic-friendly format on our blogs and in webinars. We recently launched our Guest Blog series with a post focused on emerging threats in intermodal transportation from Urban Jonson, an accomplished contributor to hallway con and leader of the Heavy Vehicle Cyber Security (HVCS) working group at NMFTA.

Likewise, we are making some more informal technical content available to a larger audience at a higher frequency through our new IOActive Labs blog.

IOActive Labs Blog

The IOActive Labs blog is an organizational innovation proposed by our consultants to support a more agile process for developing, reviewing, and posting technical content. It facilitates lower-latency, higher-frequency posting of technical content, which was more challenging within our prior process.

This new process allows for some interesting new types of content, such as live blogging during a CTF, and more informal content, such as documenting techniques. Furthermore, organizing the technical content under the IOActive Labs blog will allow the part of our audience that is only interested in the (very interesting) bits and bytes to easily find those posts as we include more diverse, non-technical content and voices in our main blog.

We want to break in the new IOActive Labs blog with an appropriately original and interesting first post.

Breaking in the IOActive Labs Blog with a Look at Aviation Operational Technology

Ruben Santamarta, a Principal Consultant at IOActive, has amassed a considerable body of groundbreaking, original cybersecurity research. He continues his work on emerging threats with a look into airline and airport operational technology (OT) associated with Electronic Bag Tags (EBTs). This post builds on his work on warcodes (malicious bar codes), discussed in a recent blog post.

This research takes an empirical look at some of the implementation flaws in a couple of examples of devices and components that support the “tags everywhere” and “sensors everywhere” trends brought about by IoT and the thirst for more sources of data to feed the big data movement. It also illustrates some of the potential supply-chain risks associated with using insecure, but not intentionally malicious, products and components in systems that perform core business operations.

You may also follow the latest posts on the IOActive Labs Twitter account.

More to Come

We have more exciting innovations to come as we work to recapture more of the value lost without conferences.

INSIGHTS, RESEARCH | September 1, 2020

Breaking Electronic Baggage Tags – Lufthansa vs British Airways

If you are reading this article, I will venture to guess that you have already had the ‘pleasure’ of queuing to check a bag at an airport. To improve the bag-check procedure, Electronic Baggage Tag (EBT) solutions are being introduced to the market that leverage the new technologies most travellers have access to nowadays.

This time I will take a look at the security posture of two of the most prominent EBT solutions: British Airways’ TAG and Lufthansa’s BAGTAG.

First of all, IATA provides an implementation guide for these devices, which I recommend you read before continuing on with this blog post, as well as the IATA resolution 753 on baggage tracking.

Certain parts of the implementation guide are worth noting:

In general terms, an EBT solution comprises:

  • Airline mobile app
  • Airline backend
  • EBT device
    • NFC
    • BLE
    • Display (e-INK)

The communication channel between the airline mobile app and the EBT is established through a BLE interface.
Now let’s see how closely the actual functionality of these EBT solutions matches the IATA security requirements.

British Airways’ TAG

British Airways’ (BA’s) TAG is provided by ViewTag. As usual, by looking at the FCCID website, we can find a teardown of the device.

The differences between the teardown version and the one being commercialized are not significant.

It is as simple as it looks. The Nordic SoC is in charge of BLE communication with the mobile app. By reverse engineering the BA app, it was possible to easily figure out how the solution works: basically, the BA app directly transfers the data that will be used to update the EBT display via BLE, without any additional security.

A custom payload written to the ‘PASS_DATA’ characteristic is the mechanism used to transfer the boarding pass/bag tag data that eventually will be rendered on the EBT device’s e-INK display. The device does not validate either the authenticity or the integrity of the data being transferred. The following code receives the booking information collected from the BA backend and generates the payload:

The payload is a string where the following items are concatenated in a specific order.

  • passengerName
  • bookingReference
  • priority
  • sequenceNumber
  • destinationFlightNum
  • destinationFlightDate (ddHHmm)
  • destinationAirport
  • destinationName
  • firstTransferFlightNum
  • firstTransferFlightDate
  • firstTransferAirport
  • secondTransferFlightNum
  • secondTransferFlightDate
  • secondTransferAirport
  • departureAirport
  • euDeparture (enable green bars)
  • fullIdentifier

That’s it, not very interesting I know…

The above diagram sums up the overall implementation:

  1. The BA app downloads the passenger’s booking info and checks the ability to generate a digital bag tag (only available to Executive Club members).
  2. The BA app generates the bag tag payload and transfers it through the PASS_DATA characteristic
  3. The EBT processes this payload and updates the e-INK display with the received information.

As a result of this design, anyone, not just the airline, is able to forge arbitrary bag tags and update the EBT device without any further interaction with BA. Obviously, you still require physical access to the EBT device to press the button that enables the BLE functionality.

The following proof-of-concept can be used to forge arbitrary bag tags and render them on a BA TAG.

File: poc.py

The following is a custom bag tag generated using this PoC:

Lufthansa’s BAGTAG

Lufthansa decided to go with DSTAGS, a company founded by an NXP employee. This company created BAGTAG. I think it is worth mentioning this detail because my analysis revealed that the team behind the device seems to have significant experience with NXP components, although from a security perspective they missed some basic things.

As with the TAG solution, we can access a device teardown on the FCCID site, which is almost identical to the units used during the analysis.

The main components are:

  • Nordic NRF8001 – BLE SoC
  • NXP LPC115F – MCU
  • NXP 7001C – Secure Element

As the following image shows, the Serial Wire Debug (SWD) headers were easily accessible, so that was the first thing to check.

Fortunately, the BAGTAG production units are being commercialized without enforcing any type of Code Read Protection (CRP) scheme in their MCUs. As a result, it was possible to gain complete access to the firmware, as well as to flash a new one (I used a PEmicro Multilink working with the NXP’s MCUxpresso built-in gdb server).

After reverse engineering the firmware (bare metal, no symbols, no strings) and the app, it was clear that this solution was much more complex and solid than BA’s. Basically, the BAGTAG solution implements a chip-to-cloud scheme using the NDA-protected NXP 7001C Secure Element, which contains the cryptographic materials required both to uniquely identify the EBT and to decrypt the responses received from the BAGTAG backend. The MCU communicates with the Lufthansa app through the NRF8001 BLE transceiver.

I came up with the following diagram to help elaborate the main points of interest.

  1. The Lufthansa app downloads the passenger’s booking info and checks whether the user wants to generate an EBT.
  2. The BAGTAG’s BLE service exposes two characteristics (receiveGUID and transmitGUID) that are basically used to transfer data between the app and the device.

Actually the important data comes encrypted directly from the BAGTAG cloud. In addition to the passthrough channel, there are two publicly supported commands:

The startSessionRequest returns an internal 59-byte ‘ID’ that identifies that specific BAGTAG device. This ID is stored inside the NXP 7001 Secure Element, which communicates with the MCU via I2C using an ISO-7816-compliant protocol.

There are two commands invoked in the firmware for the selected applet:

  • 0x8036: Returns the Session ID and uCode (to generate the IATA’s GUID)
  • 0x8022: Decrypt data (received from the BAGTAG backend through the BLE passthrough)

  3. The user needs to register the BAGTAG device and create an account. Then the app serializes the booking info required to generate a valid BAGTAG (pretty much the same fields that were mentioned for BA’s solution) and submits it to the BAGTAG backend to be validated. If everything is correct, the BAGTAG backend returns an encrypted blob that goes through the BLE passthrough channel directly to the BAGTAG device.
  4. The MCU receives this encrypted blob and sends it to the 7001C Secure Element to be decrypted. The decrypted data received from the 7001C via I2C is then processed, eventually updating the e-INK display with this information.

It is worth mentioning that I didn’t perform any live tests on the Internet-facing servers to avoid any unexpected behaviors in the production environments.

At the intra-board communication level, the MCU does not implement a secure channel to talk to the NXP 7001C Secure Element. As a result, a malicious component on the I2C bus could provide arbitrary content that will eventually be rendered, as the MCU has no way of validating whether it came from the actual 7001C. Obviously, malicious firmware running in the BAGTAG device may perform the same actions without requiring an external component.

Intra-board attacks (SPI, I2C, etc.) are really interesting in scenarios where the MCU offloads specific functionality (network, crypto, radio, etc.) to a separate SoC. If you are into this kind of attack, stay tuned 😉

For instance, we can see how this lack of intra-board security can also lead to a memory corruption vulnerability in this firmware:

See the issue?

At line 58, ‘transfer_to_i2c’ expects to read v12+2 bytes from the I2C slave, storing them at 0x100011A4. Then at line 63, a byte that may be controlled by an attacker is used to calculate the number of bytes copied in the memcpy operation. If that byte is 0x00, we face a wild memcpy corruption scenario.

Conclusions

Lufthansa and British Airways were notified of these issues. Both consider them low-risk threats.

British Airways

“Through our own internal investigations we have validated that there are enough checks in the background that take place to ensure an unauthorised bag would not make it through.

The potential exploitation scenarios are overall really unlikely. The overall risk to the businesses is also considered to be low severity.”

Lufthansa

“From a security standpoint appropriate screening of hold baggage remains by far the most important pillar.

In addition the baggage handling process is robust against many forms of manipulation.

The manipulation by means of EBT is therefore only a supplement to already considered cases and does not mean a significant increase of the attack surface.

In this respect, there is only a small additional probability of occurrence.

A piece of luggage manipulated via Bluetooth would be identified and transferred to a verification process.

This ensures that no luggage manipulated in this way can get on board of one of our aircrafts.

Lufthansa thanks the researcher for his cooperation.

We will gladly follow up such indications, among other things by means of our BugBounty program.”

I agree, there are significant limitations to turning these issues into a serious attack scenario; however, time will tell how these technologies evolve. Hopefully this kind of research will help close any security gaps before a more significant attack vector is discovered.

References

  1. https://www.iata.org/en/publications/ebt-guide/
  2. https://www.iata.org/en/programs/ops-infra/baggage/baggage-tracking/
  3. https://viewtag.com/
  4. https://fccid.io/NOI-VGIDEBT/Internal-Photos/Internal-photos-4145161
  5. https://fccid.io/2AK3S-BAGTAG/Internal-Photos/Internal-photographs-3391010
  6. https://www.nxp.com/docs/en/data-sheet/LPC111X.pdf
  7. http://www.pemicro.com/products/product_viewDetails.cfm?product_id=15320168&productTab=1
  8. https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE

EDITORIAL | August 28, 2020

Principles of the IOActive Guest Blog Series

IOActive has recently begun to post a series of guest blogs. Our first post was an excellent contribution from Urban Jonson, who leads the Heavy Vehicle Cyber Security (HVCS) working group at NMFTA, focusing on emerging threats in intermodal transportation.

Our organization has embarked upon this series because we think it provides additional value to our readers. This is one more thing we’re doing to give back to the security community and help those starting out to gain a broader understanding of cybersecurity.

We have long prided ourselves on telling our clients what they need to know rather than what they want to hear. Likewise, our decades-long, original cybersecurity research program has delivered results that force the security community, industries, agencies, governments, and societies to re-evaluate their assumptions about cybersecurity risks and potential impacts.

Through this series of guest blog posts, we are going to expose readers to carefully curated concepts and perspectives presented by industry thought leaders who don’t happen to (currently) work for IOActive. We’re striving to collect a diverse, high-quality set of perspectives that help the security community think about what’s coming next and how to approach difficult problems in new ways.

These entries will be more focused on risks, emerging threats, potential business impacts, new approaches to solving problems, and other higher-level perspectives rather than our deeply technical blog entries from IOActive Labs.

Of course, in providing a platform for sharing diverse opinions, the opinions expressed in the guest blog posts are not necessarily those of IOActive.

Principles of Guest Blogging

We have lofty goals for our Guest Blog Series. To ensure we achieve the desired results, we need some guiding principles for the series. It’s unfortunate but necessary to list some of these.

  • No Paid Placement
    We will never post a blog in exchange for compensation. We’ll post things because they provide a valuable perspective and not because they are profitable.
  • No Advertising
    IOActive will not sell advertising on our blog. We don’t want our readers to be assaulted by the worst use of browser technologies as they ponder a challenging thought.
  • No Products; No Pitches
    No posts focusing on products or sales pitches will be allowed. Everyone gets enough of these already, and a subset of the community is all too eager to provide them.
  • Original Content
    Content on the blog will be unique and not previously posted anywhere else. The content may be on a topic about which the author(s) have spoken or written before, but it cannot materially be a repost of prior work.
  • Contributes to the Community
    The content of the post must positively contribute to the security community and hopefully the world, even if it challenges existing assumptions, approaches, or solutions. Ideally, the post will help readers contemplate what’s coming next and how to approach difficult problems in new ways through exposure to a diverse, high-quality perspective.

Hopefully, our readers will value these diverse, broad perspectives of our guest bloggers as much as we do.

GUEST BLOG | August 13, 2020

IOActive Guest Blog | Urban Jonson, Heavy Vehicle Cyber Security Program, NMFTA

Hello,

My name is Urban Jonson, and I’m the Chief Technology Officer and Program Manager, Heavy Vehicle Cyber Security Program, with the National Motor Freight Traffic Association, Inc. (NMFTA).

I’m honored that IOActive has afforded me this guest blogging opportunity to connect with you. The research at IOActive is always innovative and they have done some really good work in transportation, including aviation, truck electronic logging devices, and even satellites. Being among such technical experts really raises the stakes of the conversation. Luckily, I can lean on some of my awesome collaborators like Ben Gardiner at NMFTA, as well as countless industry experts who I’m privileged to call friends.

heavy trucking industry

I feel a profound sense of loss of technical progress in our field this year. All of my favorite technical events, where I can connect with people to discuss and share ideas, have been canceled (heavy sigh). The cancellation of the NMFTA HVCS meetings has been the hardest for me, as they pull together an incredible community in the motor freight industry. Many of the attendees are now my friends and I miss them.

The cancellation of my other favorite industry events, Black Hat/DEF CON, CyberTruck Challenge, and ESCAR, has been hard as well. While I do enjoy many of the presentations at these conferences, the biggest benefit for me is meeting one-on-one with some of the brightest minds in the industry. Where else do I get to sit down with Craig Smith and casually discuss the state of the automotive industry? I remember having great conversations with Colin O’Flynn about wily new ideas on power fault injection at many different events. These one-on-one opportunities for conversation, collaboration, and information sharing are invaluable to me.

This year I had wanted to talk to some of my friends about Triton malware and vehicle safety systems such as lane departure assist, crash avoidance, and adaptive cruise control. Alas, no such luck this year. So, I’m going to dump this discussion out in the open here.

The Triton Malware

First, for the uninitiated, a quick review of the Triton malware. The Triton malware intrusion was a sophisticated attack that targeted a petrochemical plant in the Middle East in 2017. How the attackers first got into the network is a bit of a mystery, but it was most likely the result of a misconfigured firewall or a spearphishing attack. The first piece was a Windows-based remote access tool that gave the attackers remote access to an engineering workstation. What came next was very interesting: according to reports1, a highly specific secondary attack was mounted from the compromised engineering workstation against a specific Schneider Electric Triconex2 safety controller and select firmware versions (10.0 – 10.4) using a zero-day vulnerability. The safety controllers in question are designed to take direct action to initiate shutdown operations for the plant, without user intervention, in the case of serious safety issues.

Stop and think about that for a second—someone had taken the time to figure out which specific controller and firmware versions were running at the plant, obtain similar hardware to research, find a zero-day vulnerability, then research and compromise the plant’s IT infrastructure, just to install this malware. That is not an insignificant effort, and not as easy as they make it out to be in the hacker movies.

An unplanned “accidental” shutdown of the plant revealed the presence of the malware and the intrusion. It was theorized that the attackers wanted to obtain the capability but not use it, and that the shutdown was an accidental reveal3. Cyber-physical attacks are usually broken into separate cyber and physics packages. Given the effort put into the attack, it is extremely unlikely the attacker would have intended such a crude physics package. If you want an example of a well-thought-out cyber-physical attack, read up on Operation Olympic Games, which targeted Iranian uranium centrifuges4. This goes to show that if you play around with bombs, physical or digital, they can go off unintentionally.

Another interesting tell occurred as the response team was trying to secure and clean up the intrusion, when the attackers fought back to try to maintain a foothold. Actively engaging blue-team efforts in real-time is risky, as it can quickly lead to full attribution and unwanted consequences. This tells us that the attackers considered this capability a high priority; they had made a large investment in resources to be able to compromise the safety controllers and they were determined to keep it. A great deal of information about this intrusion is still murky and closely guarded, but it is generally considered to have potentially been one of the deadliest malware attacks so far, had the capability been leveraged.

Safety Controllers

The safety controller concept of a contained device taking decisive action without user intervention sounds eerily familiar. The concept is virtually everywhere in the new safety technologies in modern cars and trucks in the form of crash avoidance, lane departure assist, and other features for which we have all seen the ads and literature. FMCSA is even studying how to retrofit existing trucks with some of these promising new safety technologies, which can help reduce accidents and save lives.

These automotive safety systems rely on sensors such as cameras and LIDAR to get the input they need to make decisions affecting steering, braking, and other actions. This brings up some interesting questions. How secure are these components? How diverse is the marketplace; that is, do we have risk aggregation through the deployment of just a few models/versions of sensors? Is there a specific sensor model that is ubiquitous in the industry? Do we have our own version of a Triconex safety controller that we need to worry about?

How secure are these components? How diverse is the marketplace; that is, do we have risk aggregation through the deployment of just a few models/versions of sensors? Is there a specific sensor model that is ubiquitous in the industry?

The short answer seems to be yes. I read an interesting paper on Fault Detection, Isolation, Identification and Recovery (FDIIR) for automotive perception sensors by Goelles, Schlager, and Muckenhuber from Virtual Vehicle Research5. (Note: the paper is worth reading in its own right, as it discusses a sensor fault classification system that can be applied to other domains, such as aviation and maritime.) The conclusion of the paper is that, for the most part, systems such as LIDAR are treated as black boxes with little or no knowledge of the internal firmware or interfaces. This is mostly due to a small number of companies in fierce competition working hard to protect their intellectual property. In my opinion, that is not a good sign. If we need multiple sensors to cooperatively decide on safety-critical actions, transparency is going to be crucial to designing a trusted system. The present lack of transparency in these systems almost certainly implies a lack of security assurance for their interfaces. This sort of inscrutable interface (aka attack surface) is a hacker’s delight.

All of this is not really new—our own Ben Gardiner discussed similar points in 20176. So, what other truck-specific safety system black boxes can we discuss through the filter of the Triton attack that might not be common knowledge to you? Enter RP 1218.

RP 1218 – Remote Disablement of Commercial Vehicles

First, a little background: the American Trucking Association’s (ATA) Technology Maintenance Council (TMC) develops recommended practices (RPs) for the trucking industry. The council is composed of representatives from motor carriers, OEMs, and Tier 1 suppliers for the truck industry. They generally do great work and mostly focus on physical truck maintenance-related issues, but they also work on other recommended practices, such as the Telematics-Tractor connector (RP 1226). These are, strictly speaking, only recommendations for the industry, but many of them end up being de facto standards, especially in vehicle electronics.

The TMC has recently decided to take up RP 1218 and develop an updated version, which is how it came to our attention. Now, why has this RP drawn our attention and ire at the NMFTA HVCS? The title for RP 1218 is “Guidelines for Remote Disablement of Commercial Vehicles.” It consists of a recommended practice on how to implement a remote shutdown and/or limp mode for a heavy truck. The current version is rather old, from 2005. The problem is that cybersecurity was not at the forefront of the trucking industry’s thinking at that time.

The core security premise of RP 1218 was based on “secret” CAN message instructions sent to the engine controller. Uh-oh. CAN doesn’t include encryption, so there’s no such thing as a secret CAN message. Well, not for very long anyway. Even the existence of the RP was enough to give us the jitters.

We immediately set out to determine if anyone had implemented RP 1218 and did a basic survey of remote disablement technology with the assistance of our friends at CanBusHack. The good news was that we could not find anyone who had implemented RP 1218 as specified. The bad news was that we found plenty of other ways to do it, including messing around with diesel exhaust fluid (DEF) messages and derate limits, among others. I’m not going to dig into those details here.

We also discovered a robust global market for both Remote Vehicle Shutdown (RVS) and Remote Vehicle Disablement (RVD). Luckily for me, most of that market is outside of North America, my primary area of concern. The methods by which the various vendors achieved RVS/RVD varied significantly, but were not as simple as sending a message to the engine using RP 1218. That’s good, but the problem is that companies are building in a full remote stop button on their entire fleet. It seems that the sensitivity of RVS/RVD is well understood, and due to this concern, there’s not a great deal of transparency into these systems; we found it difficult to get even basic information. Another inscrutable black box.

While you can certainly make the case that it’s necessary to be able to disable a vehicle from a national security perspective, to prevent truck hijackings and terrorists turning trucks into battering rams, such a system would need to be absolutely bulletproof. While there are some ideas on how to mitigate such risks using things like Consequence-driven, Cyber-informed Engineering (CCE)7, that’s a very hard thing to accomplish when it involves black-box technology with unknown interfaces. It’s worth repeating: black boxes with unknown interfaces are huge flashing targets for threat actors.

If we look at this through the lens of the Triton intrusion, how much effort do you think someone would go through to obtain the ability to affect motor transportation at scale? Do you think they would conduct the same level of research on infrastructure and components, and attempt to compromise these systems so that they can be hit at the most critical time? I certainly do. This whole set of problems is pressing, and I really needed to get some perspective and ideas.

How much effort do you think someone would go through to obtain the ability to affect motor transportation at scale?

This brings us back to meeting up in person with industry experts in industrial control systems (ICS), automotive, and many other areas. Looking at this massive problem, I don’t get to ask important questions of my friends, many of whom I only see once a year at these events. Are there any lessons from the Triton ICS attack that we can leverage in designing active safety systems for vehicles? Can we develop an attack tree for someone attempting a sophisticated nation-state attack against vehicle safety control systems or remote disablement vendors? How do we best defend against someone who would like to own our infrastructure and unleash disruption on our transportation sector? How do we improve our designs, resiliency, processes, and general security posture against this type of threat?

Multi-mode Transportation Sharing and Support

Unfortunately, in today’s world, I can’t ask my friends in a quiet corner over a drink and strategize on a way to mitigate this risk. I’m sure that I’m not the only one feeling bereft of such opportunities.

By the way, informal collaborations at security events are exactly how the NMFTA Heavy Vehicle Cyber Security program came to be in the first place. After a Miller and Valasek presentation at Black Hat 2014, I sat down with a bright guy from Cylance at the Minus5 Ice Bar at the Mandalay Bay and we “doodled” the attack vectors on a truck. After taking a look at the finished napkin, we were both horrified. When I returned to Alexandria, Virginia, I started doing the research that eventually became our first white paper on heavy vehicle cybersecurity, which we “published” in September 20158. Okay, honestly, we sat on it for a couple of years before we made it public.

So how do I move past this obstacle to fun, as well as to progress and work on keeping trucks secure and moving? In my case, I’ve been endeavoring to create a multi-mode transportation sharing and support group. This is an informal monthly gathering of a few select folks from various transportation modes, sharing resources and connections and generally supporting each other’s missions. Additionally, I’m trying (and mostly still failing) to reach out to those wonderful smart and talented friends to connect with them and see how they are doing personally, and to share whatever resources, references, articles, papers, connections, or technology I can provide to help them be successful in their missions. I ask whether they might be able to give me some advice or novel take on my problem and discuss possible solutions and mitigations. Like most tech geeks, I like technology because it’s easier to understand and deal with than most people. However, the lack of camaraderie this year is a little much, even for me.

So let’s help everyone connect. Think of those people you see so infrequently at these canceled conferences, and call them to check in. Don’t text or email; just pick up the phone and give them a call. I am sure they’d appreciate hearing from you, and you’ll probably find that you have some interesting technical topics to discuss. Maybe you could invite the “usual gang/CTF team” to a Zoom happy hour.

Don’t stop connecting just because you’re stuck in the basement like me. You never know, maybe you’ll solve an interesting problem, find a new, really evil way to hack something cool, help someone find a resource, or just maybe make the world a slightly safer place. Most importantly, if you discover the solution to the problems I’ve discussed here, please let me know.

Urban Jonson


1 Blake Sobczak, The Inside Story of the World’s Most Dangerous Malware. E&E News, March 2019.
2 NSA/CISA Joint Report Warns on Attacks on Critical Industrial Systems. July 27, 2020.
3 Tara Seals, Triton ICS Malware Hits a Second Victim. SAS 2019, published on Threatpost.com, April 2019.
4 Pierluigi Paganini, ‘Olympic Games’ and boomerang effect, it isn’t sport but cyber war. June 2012.
5 Goelles, T.; Schlager, B.; Muckenhuber, S. Fault Detection, Isolation, Identification and Recovery (FDIIR) Methods for Automotive Perception Sensors Including a Detailed Literature Survey for Lidar. Sensors, 2020, Volume 20, Issue 13.
6 Ben Gardiner, Automotive Sensors: Emerging Trends With Security Vulnerabilities And Solutions. MEMS Journal, February 2017.
7 For more information on CCE, please see the INL website.
8 National Motor Freight Traffic Association, Inc., A Survey of Heavy Vehicle Cyber Security. September 2015.

EDITORIAL | June 30, 2020

Warcodes: Attacking ICS through industrial barcode scanners

Several days ago I came across an interesting entry in the curious ‘ICS Future News’ blog run by Patrick Coyle. Before anyone becomes alarmed, the description of this blog is crystal clear about its contents:
“News about control system security incidents that you might see in the not too distant future. Any similarity to real people, places or things is purely imaginary.”
IOActive provides research-fueled security services, so when we analyze cutting-edge technologies, the goal is to stay one step ahead of malicious actors in order to reduce current and future risk. Although the casino barcode hack is made up, it turns out IOActive reported similar vulnerabilities to SICK months ago, which resulted in a security advisory. This blog post provides an introductory analysis of real attack vectors where custom barcodes could be leveraged to remotely attack ICS, including critical infrastructure.

Introduction

Barcode scanning is ubiquitous across sectors and markets. From retail companies to manufacturing, utilities to access control systems, barcodes are used every day to identify and track millions of assets at warehouses, offices, shops, supermarkets, industrial facilities, and airports. Depending on the purpose and the nature of the items being scanned, there are several types of barcode scanners available on the market: CCD, laser, and image-based, to name a few. Also, based on the characteristics of the environment, there are wireless, corded, handheld, or fixed-mount scanners. In general terms, people are used to seeing this technology as part of their daily routine.

The security community has also paid attention to these devices, mainly focusing on the fact that regular handheld barcode scanners are usually configured to act as HID keyboards. Under certain circumstances, it is possible to inject keystroke combinations that can compromise the host computer where the barcode scanner is connected. However, barcode scanners in the industrial world have their own characteristics.

From a security perspective, the cyber-physical systems at manufacturing plants, airports, and retail facilities may initially look as though they are physically isolated systems. We all know that this is not entirely true, but reaching ICS networks should ideally involve several hops from other systems. In some deployments, however, there is a direct path from the outside world to those systems: the asset being handled. Depending on the industrial process where barcode scanners are deployed, we can envision assets that carry untrusted inputs (barcodes) from malicious actors, such as luggage at airports or parcels in logistics facilities. To illustrate the issue, I will focus on the specific case I researched: smart airports.

Attack Surface of Baggage Handling Systems

Airports are really complex facilities where interoperability is a key factor. There is a plethora of systems and devices shared between different stakeholders. Passengers are also constrained in what they are allowed to carry.

In 2016, ENISA published a comprehensive paper on securing smart airports. Their analysis provided a wealth of interesting details, among which was a ranking of the most critical systems for airport resilience. Most modern baggage handling systems, including self-service bag drop, rely on fixed-mount laser scanners to identify and track luggage tickets. Devices such as the SICK CLV65X are typically deployed in the automatic baggage handling and baggage self-drop systems operating at multiple airports.

The Baggage Connection – SICK Whitepaper

More specifically, SICK CLV62x-65x barcode scanners support ‘profile programming’ barcodes. These are custom barcodes that, once scanned, directly modify settings in a device, without involving a host computer. This functionality relies on custom CODE128 barcodes that trigger certain actions in the device and can be leveraged to change configuration settings. A simple search on YouTube turns up detailed walkthroughs of some of the largest airports’ baggage handling systems, which are equipped with SICK devices. Custom CODE128 barcodes do not implement any kind of authentication, so once they are generated they will work on any SICK device that supports them. As a result, an attacker who is able to physically present a malicious profile programming barcode to a device can either render it inoperable or change its settings to facilitate further attacks. We successfully tested this attack against a SICK CLV650 device.

Technical Details

IOActive reverse engineered the logic used to generate profile programming barcodes (firmware versions 6.10 and 3.10) and verified that they are not bound to specific devices. The following method in ProfileCommandBuilder.class demonstrates the lack of authentication when building profile programming barcodes.

Analyzing CLV65x_V3_10_20100323.jar and CLV62x_65x_V6.10_STD


Conclusion

The attack vector described in this blog post can be exploited in various ways, across multiple industries, and will be analyzed in future blog posts. Also, in IOActive’s experience, it is likely that similar issues affect other barcode manufacturers. IOActive reported these potential issues to SICK via its PSIRT in late February 2020. SICK handled the disclosure process in a diligent manner, and I would like to publicly thank SICK for its coordination and prompt response in providing its clients with proper mitigations.
ADVISORIES | June 18, 2020

Moog EXO Series Multiple Vulnerabilities

Moog Inc. (Moog) offers a wide range of camera and video surveillance solutions. These can be network-based or part of more complex tracking systems. The products affected by the vulnerabilities in this security advisory are part of the EXO series, “built tough to withstand extreme temperature ranges, power surges, and heavy impacts.” These units are configurable from a web application. The operating systems running on these cameras are Unix-based.

  • ONVIF Web Service Authentication Bypass
  • Undocumented Hardcoded Credentials
  • Multiple Instances of Unauthenticated XML External Entity (XXE) Attacks
  • statusbroadcast Arbitrary Command Execution as root

Access the Advisory (PDF)

ADVISORIES |

Verint PTZ Cameras Multiple Vulnerabilities

Verint Systems Inc. (Verint) sells software and hardware solutions to help its clients perform data analysis. Verint also offers IP camera systems and video solutions. Most of these cameras are configurable from a web application. The operating systems running on these cameras are Unix-based.

  • DM Autodiscovery Service Stack Overflow
  • FTP root User Enabled
  • Undocumented Hardcoded Credentials

Access the Advisory (PDF)

EDITORIAL | May 27, 2020

File-Squatting Exploitation by Example

This will (hopefully) be a short story about a bug I found some time ago while auditing a .NET service from an OEM. It should be interesting as I have yet to find a description of how to exploit a similar condition.

Our service was running as SYSTEM and needed to periodically execute some other utilities as part of its workflow. Before running these auxiliary tools, it would check if the executable was properly signed by the vendor. Something like this:

public void CallAgent()
{
    string ExeFile = "C:\\Program Files\\Vendor\\Util.exe";
    if (!new Signtool().VerifyExe(ExeFile))
        return;

    // Execute Agent here
}

This is where it gets interesting. Of course we can’t control anything at that Program Files location, but what is that VerifyExe method doing?

internal class Signtool
    {
        private const string SignToolPath = "C:\\Windows\\Temp\\signtool.exe";
 
        private void ExtractSignTool()
        {
            byte[] signtool = QuatService.Resource1.signtool;
            using (FileStream fileStream = new FileStream("C:\\Windows\\Temp\\signtool.exe", FileMode.Create))
                fileStream.Write(signtool, 0, signtool.Length);
        }
 
        private void DeleteSignTool()
        {
            File.Delete("C:\\Windows\\Temp\\signtool.exe");
        }
 
        private bool Verify(string arg)
        {
            this.ExtractSignTool();
            Process process = new Process();
            process.StartInfo.UseShellExecute = false;
            process.StartInfo.RedirectStandardOutput = true;
            Path.GetDirectoryName(this.GetType().Assembly.Location);
            process.StartInfo.FileName = "C:\\Windows\\Temp\\signtool.exe";
            process.StartInfo.Arguments = arg;         
            process.Start();
            process.WaitForExit();
            this.DeleteSignTool();
            return process.ExitCode == 0 || process.ExitCode == 2;
        }
 
        public bool VerifyExe(string ExeFile)
        {
            return this.Verify("verify /pa \"" + ExeFile + "\"");
        }
 
    }

The code simply extracts a signature verification tool that it has embedded in C:\Windows\Temp as part of its resources, executes it to verify the target executable, and then deletes the tool as if nothing ever happened.

Did you catch the bug? The issue is in the FileMode.Create flag that gets passed as part of the FileStream call used to write the file.

What is object squatting?

I first read about squatting attacks in “The Art of Software Security Assessment” (which I highly recommend, by the way). Essentially, squatting is an attack where you create an object before the legitimate application does. You can then manipulate the object and affect the normal behavior of the application. If an application is not careful and attempts to create a named object (such as a Mutex, an Event, or a Semaphore) in the global namespace, it might open an existing object instead, because another application could have already created an object with the exact same name. In this case, the Create method will succeed (see https://docs.microsoft.com/en-us/windows/win32/sync/object-names).

This same method can be used for file squatting: the application acts as if it has created a file when it has actually opened an existing file.

There are two conditions necessary for this to be exploitable:

  1. The dwCreationDisposition parameter of the CreateFile function must be set incorrectly, leading the application to open an existing file instead of creating a new one. An incorrect setting is any setting except CREATE_NEW.
  2. The location where the file is being created must be writeable by potentially malicious users.

So in C/C++ code, if you see a call to CreateFile using CREATE_ALWAYS, it should raise a flag.

In .NET code, FileMode.Create maps to CREATE_ALWAYS.

Exploitation

At this point, we have a confirmed file squatting vulnerability:

  • The service is not using FileMode.CreateNew.
  • The location C:\Windows\Temp is writable by authenticated users.

We also have a race condition because there is a time window between when signtool.exe is extracted and when it is executed.

Therefore, we can exploit this vulnerability by leveraging Hardlinks and OpLocks.

The steps would be the following:

  1. Create a directory such as C:\Users\Public\Exploit.
  2. Create a file named dummy.txt inside the directory.
  3. Place payload.exe inside the directory.
  4. Create the Hardlink C:\Windows\Temp\Signtool.exe pointing to C:\Users\Public\Exploit\dummy.txt.
  5. Set an OpLock on dummy.txt.
  6. When the OpLock triggers, recreate the Hardlink to point to payload.exe (we can do this because the file is ours and the ACL hasn’t changed).

Not so fast! If we check the behavior of the vulnerable “QuatService” with ProcMon, we see there are actually five calls to CreateFile instead of just three:

The first CreateFile is used by FileStream to write the signtool to disk. The second, third, and fourth calls are all part of the inner workings of CreateProcess. The final CreateFile is called with delete access in order to erase the file.

At a practical level, because of the short time window, the two additional CreateFile calls from CreateProcess could interfere with our goal. I found that the best settings for reliable, reproducible results were:

  • Use a second OpLock on dummy.txt after the first one is hit.
  • Call Sleep(1) to skip the third CreateFile (second one from CreateProcess).
  • Attempt to create the Hardlink to payload.exe in a loop. This is necessary because the Hardlink creation could fail due to the fact that the service could still hold the handle from the third CreateFile.

Here is the code for a functional exploit for the vulnerability (tested on Windows 10 19041 and 18363):
https://github.com/IOActive/FileSquattingExample/blob/master/SquatExploit/SquatExploit/SquatExploit.c

Video demo of the exploit working:

The vulnerable service is included in the same GitHub project in case you want to play with it. If you come up with a better approach to increase the reliability of the exploit, please send me a message.

Mitigations

As already discussed, to avoid this situation there are two options:

  1. Use CREATE_NEW / FileMode.CreateNew and handle the potential error caused by an existing file.
  2. Write to a protected filesystem location.

What about the path redirection mitigations?

  • The Hardlink mitigation doesn’t apply here because we’re creating a link to our own file.
  • The SYSTEM %TEMP% change is not implemented yet. Even though this mitigation will definitely fix the vulnerability, it is worth noting that there will still be room for squatting in other directories:
    • C:\Windows\Tasks
    • C:\windows\tracing
    • C:\Windows\System32\Microsoft\Crypto\RSA\MachineKeys
    • C:\Windows\System32\spool\drivers\color
    • C:\Windows\SysWOW64\Tasks\Microsoft\Windows\PLA\System

References

More on Object Squatting mitigations:

ADVISORIES | May 14, 2020

GE Grid Solutions Reason RT430 GNSS Precision-Time Clock Multiple Vulnerabilities

GE Grid Solutions’ Reason RT430 GNSS Precision-Time Clock is referenced to GPS and GLONASS satellites. Offering a complete solution, these clocks are universal precision time synchronization units with an extensive number of outputs that support many timing protocols, including the DST rules frequently used in power system applications. In accordance with IEEE 1588 Precision Time Protocol (PTP), the RT430 is capable of synchronizing multiple IEDs with better than 100 ns time accuracy over Ethernet networks. Although it is unlikely ever to lose satellite time synchronization, the RT430 GNSS features a TCXO as its standard internal oscillator, ensuring free-running accuracy when the clock is not locked. IOActive found that the RT430’s web application exposed several shell scripts that allowed authentication to be bypassed, leading to a full compromise of the device.