Six months ago we published a blog post describing ‘Warcodes,’ a novel attack vector against industrial barcode readers used in the baggage handling systems deployed at a significant number of international airports. This time we targeted boarding gate readers used for passenger boarding and flow control.
Year: 2020
Hiding in the Noise | Corey Thuen
Greetings! I’m Corey Thuen. I spent a number of years at Idaho National Laboratory, Digital Bond, and IOActive (where we affectionately refer to ourselves as pirates, hence the sticker). At these places, my job was to find 0-day vulnerabilities on the offensive side of things.
Now, I am a founder of Gravwell, a data analytics platform for security logs, machine, and network data. It’s my background in offensive security that informs my new life on the defensive side of the house. I believe that defense involves more than looking for how threat actor XYZ operates, it requires an understanding of the environment and maximizing the primary advantage that defense has—this is your turf and no one knows it like you do. One of my favorite quotes about security comes from Dr. Eugene Spafford at Purdue (affectionately known in the cybersecurity community as “Spaf”) who said, “A system is good if it does what it’s supposed to do, and secure if it doesn’t do anything else.” We help our customers use data to make their systems good and secure, but for this post let’s talk bad guys, 0-days, and hiding in the noise.
A very important part of cybersecurity is threat actor analysis and IOC generation. Security practitioners benefit from knowledge about how adversary groups and their toolkits behave. Having an IOC for a given vulnerability is a strong signal, useful for threat detection and mitigation. But what about for one-off actors or 0-day vulnerabilities? How do we sort through the million trillion events coming out of our systems to find and mitigate attacks?
IOActive has a lot of great folks in the pirate crew, but at this point I want to highlight a pre-pandemic talk from Jason Larsen about actor inflation. His thesis and discussion are both interesting and hilarious; the talk is certainly worth a watch. But to tl;dr it for you, APT groups are not the only attackers interested in, or capable of, compromising your systems. Not by a long shot.
When I was at IOActive (also Digital Bond and Idaho National Laboratory), it was my job as a vulnerability researcher to find 0-days and provide detailed technical write-ups for clients so vulnerabilities could be discovered and remediated. It’s this work that gives me a little different perspective when it comes to event collection and correlation for security purposes. I have a great appreciation for weak signals. As an attacker, I want my signals to the defenders to be as weak as possible. Ideally, they blend into the noise of the target environment. As a defender, I want to filter out the noise and increase the fidelity of attacker signals.
Hiding in the Data: Weak Signals
What is a weak signal? Let’s talk about an example vulnerability in some ICS equipment. Exploiting this particular vulnerability required sending a series of payloads to the equipment until exploitation was possible. Actually exploiting the vulnerability did not cause the device to crash (which often happens with ICS gear), nor did it cause any other obvious functionality issues. However, the equipment would terminate the network communication with an RST packet after each exploit attempt. Thus, one might say an IOC would be an “unusual number of RST packets.”
Now, any of you readers who have actually been in a SOC are probably painfully aware of the problems that occur when you treat weak signals as strong signals. Computers do weird shit sometimes. Throw in users and weird shit happens a lot of the time. If you were to set up alerts on RST packet indicators, you would quickly be inundated; that alert is getting switched off immediately. This is one area where AI/ML can actually be pretty helpful, but that’s a topic for another post.
The fact that a given cyber-physical asset has an increased number of RST packets is a weak signal. Monitoring RST packet frequency itself is not that helpful and, if managed poorly, can actually cause decreased visibility. This brings us to the meat of this post: multiple disparate weak signals can be fused into a strong signal.
Let’s add in a fictitious attacker who has a Metasploit module to exploit the vulnerability I just described. She also has the desire to participate in some DDoS, because she doesn’t actually realize that this piece of equipment is managing really expensive and critical industrial processes (such a stretch, I know, but let’s exercise our imaginations). Once exploited, the device attempts to communicate with an IP address to which it has never communicated previously—another weak signal. Network whitelisting can be effective in certain environments, but an alert every time a whitelist is violated is going to be way, way too many alerts. You should still collect them for retrospective analysis (get yourself an analytics platform that doesn’t charge for every single event you put in), but every network whitelist change isn’t going to warrant action by a human.
As a final step in post-exploitation, the compromised device initiates multiple network sockets to a Slack server, above the “normal” threshold for connection counts coming out of this device on a given day. Another weak signal.
So what has the attacker given a defender? We have an increased RST packet count, a post-exploitation download from a new IP address, and then a large uptick in outgoing connections. Unless these IP addresses trigger on some threat list, they could easily slide by as normal network activity hidden in the noise.
An analyst who has visibility into NetFlow and packet data can piece these weak signals together into a strong signal that actually warrants getting a human involved; this is now a threat hunt. This type of detection can be automated directly in an analytics platform or conducted using an advanced IDS that doesn’t rely exclusively on IOCs. When it comes to cybersecurity, the onus is on the defender to fuse these weak signals in their environment. Out-of-the-box security solutions are going to fail in this department because, by nature, they are built for very specific situations. No two organizations have exactly the same network architecture or exactly the same vendor choices for VPNs, endpoints, collaboration software, etc. The ability to fuse disparate data together to create meaningful decisions for a given organization is crucial to successful defense and successful operations.
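The fusion step can be sketched in a few lines. This is a toy illustration, not Gravwell query syntax; the signal names, weights, and alert threshold are all invented for the example:

```python
from collections import defaultdict

# Hypothetical per-host weak-signal observations for one time window.
# Each signal alone is too noisy to alert on; fused, they cross a threshold.
WEIGHTS = {
    "rst_spike": 1,          # unusual count of TCP RST packets
    "new_peer_ip": 1,        # talked to a never-before-seen IP
    "conn_count_spike": 1,   # outbound connection count above baseline
}
ALERT_THRESHOLD = 3  # require several independent weak signals

def fuse(observations):
    """Sum weak-signal weights per host; return hosts worth a human look."""
    scores = defaultdict(int)
    for host, signal in observations:
        scores[host] += WEIGHTS.get(signal, 0)
    return {h: s for h, s in scores.items() if s >= ALERT_THRESHOLD}

obs = [
    ("10.0.0.7", "rst_spike"),
    ("10.0.0.7", "new_peer_ip"),
    ("10.0.0.7", "conn_count_spike"),
    ("10.0.0.9", "rst_spike"),  # one weak signal alone: no alert
]
print(fuse(obs))  # only 10.0.0.7 crosses the fusion threshold
```

In a real deployment the weights would differ per signal and per asset, but the shape is the same: individually ignorable events become actionable once correlated on the same host in the same window.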
Hiding Weak Signals in Time
One other very important aspect of weak signals is time. As humans, we are particularly susceptible to this problem, but the detection products on the market face challenges here too. A large amount of activity, or a large change in a short amount of time, becomes very apparent. The frame of reference is important for both human pattern recognition and anomaly detection algorithms. Just about any sensor will throw a “port scan happened” alert if you run `nmap -T5 -p- 10.0.0.1-255`. What about if you only send one packet per second? What about one per day? Detection sensors encounter significant technical challenges keeping context over long periods of time when you consider that some organizations are generating many terabytes of data every day.
An attacker willing to space activity out over time is much less likely to be detected unless defenders have the log retention and analytics capability to make time work for defense. Platforms like Gravwell and Splunk were built for huge amounts of time series data, and there are open-source time series databases, like InfluxDB, that can provide these kinds of time-aware analytics. Search stacks like ELK can also work, but they weren’t explicitly built for time series, and a time-series-first design is probably necessary at scale. It’s also possible to do these kinds of time-based data science activities using Python scripts, but I wouldn’t recommend it.
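A minimal sketch of why the window size matters, assuming simplified (timestamp, source, destination-port) flow records; the one-probe-per-day attacker below is invisible in per-minute windows but obvious over a month:

```python
from collections import defaultdict

def distinct_ports_per_source(events, window_seconds):
    """Group (timestamp, src, dst_port) events into time windows and
    count distinct ports probed per source. A slow scan only stands
    out when the window is long enough to hold the whole pattern."""
    buckets = defaultdict(set)
    for ts, src, port in events:
        buckets[(ts // window_seconds, src)].add(port)
    return {key: len(ports) for key, ports in buckets.items()}

DAY = 86400
# One probe per day from the same source, each to a different port.
events = [(day * DAY, "203.0.113.5", 1000 + day) for day in range(30)]

per_minute = distinct_ports_per_source(events, 60)
per_month = distinct_ports_per_source(events, 30 * DAY)
print(max(per_minute.values()))  # 1 port per window: looks benign
print(max(per_month.values()))   # 30 ports in one window: scan signature
```

This is exactly the retention problem: the per-month view only exists if you kept thirty days of flow data and can afford to query across all of it.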
Conclusion
Coming from a background in vulnerability research, I understand how little it can take to compromise a host and how easily that compromise can be lost in the noise when there are no existing, high-fidelity IOCs. This doesn’t just apply to exploits, but also lost credentials and user behavior.
Relying exclusively on pre-defined IOCs, APT detection mechanisms or other “out-of-the-box” solutions causes a major gap in visibility and gives up the primary advantage that you have as a defender: this is your turf. Over time, weak signals are refined and automated analysis can be put in place to make any attacker stepping foot into your domain stand out like a sore thumb.
Data collection is the first step to detecting and responding to weak signals. I encourage organizations to collect as much data as possible. Storage is cheap and you can’t know ahead of time which logfile or piece of data is going to end up being crucial to an investigation.
With the capability to fuse weak signals into a high-fidelity, strong signal on top of pure strong signal indicators and threat detection, an organization is poised to be successful and make sure their systems “do what they’re supposed to do, and don’t do anything else.”
-Corey Thuen
Low-hanging Secrets in Docker Hub and a Tool to Catch Them All | Matías Sequeira
TL;DR: I coded a tool that scans Docker Hub images and matches a given keyword in order to find secrets. Using the tool, I found numerous AWS credentials, SSH private keys, databases, API keys, etc. It’s an interesting tool to add to the bug hunter / pentester arsenal, not only for the possibility of finding secrets, but for fingerprinting an organization. On the other hand, if you are a DevOps or Security Engineer, you might want to integrate the scan engine to your CI/CD for your Docker images.
GET THE TOOL: https://github.com/matiassequeira/docker_explorer
The idea for this work came up while we were open-sourcing a project I was collaborating on. Apart from migrating the source code to a public GitHub repository, we prepared a ready-to-go VM that was uploaded to an S3 bucket and a few Docker images that we pushed to Docker Hub. A couple of days later, on a weekend to be more specific, we got a warning from AWS stating that our SNS resources were being (ab)used: more than 75k emails had been sent in a few hours. Clearly, our AWS credentials were exposed.
Once we deleted all the potentially exposed credentials and replaced them in our environments, we started to dig into the cause of the incident and realized that the credentials were pushed to GitHub along with the source code due to a miscommunication within the team. As expected, the credentials were also leaked to the VM, but the set of Docker images were fine. Anyhow, this got me thinking about the possibility of scanning an entire Docker images repository, the same way hackers do with source code repositories. Before starting to code something, I had to check whether it was possible.
Analyzing Feasibility
Getting a list of images
The first thing I had to check was if it was possible to retrieve a list of images that match a specific keyword. By taking a look at the API URLs using a proxy, I found:
https://hub.docker.com/v2/search/repositories?query={target}&page={page}&page_size={page_size}
The only limitation I found with API V2 was that it wouldn’t retrieve anything beyond page number 100. So, given the maximum page size of 100, I wouldn’t be able to scan more than 10k containers per keyword. This API also allows you to sort the list by pull count, so, if we retrieve the repositories with fewer pulls (in other words, the newest repositories), we have a greater chance of finding something fresh. Although there’s a V1 of the API that has many other interesting filters, it wouldn’t allow me to retrieve more than ~2.5k images.
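The pagination and least-pulled-first sorting can be sketched in standard-library Python. This is a sketch, not the tool’s actual code; the `results`, `next`, `repo_name`, and `pull_count` field names reflect what the v2 search endpoint returned at the time and may have changed since:

```python
import json
from urllib.request import urlopen

API = "https://hub.docker.com/v2/search/repositories"

def build_search_url(target, page, page_size=100):
    # v2 search is capped at page 100, i.e. at most 10k results per keyword
    return f"{API}?query={target}&page={page}&page_size={page_size}"

def search_images(target, pages=2):
    """Collect repository entries, least-pulled first: the newest repos
    tend to have the fewest pulls, so fresher secrets surface first."""
    results = []
    for page in range(1, pages + 1):
        with urlopen(build_search_url(target, page)) as resp:
            data = json.load(resp)
        results.extend(data.get("results", []))
        if not data.get("next"):  # no further pages
            break
    return sorted(results, key=lambda r: r.get("pull_count", 0))

# No network call at import time; just show the URL being built.
print(build_search_url("jenkins", 1))
```

Calling `search_images("jenkins")` would then yield candidate repositories ready for the tag lookup and pull steps described next.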
After getting the image name, and since not all the images had the `latest` version, I had to make a second request to get a list of versions for each image to the following endpoint:
https://hub.docker.com:443/v2/repositories/{image}/tags/?page_size=25&page=1
Once I had the `image:version`, the only thing left was to pull the image, create a temporary container, and dump its filesystem.
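Those three steps can be sketched as a small shell helper (a sketch, not the tool’s actual code; the filename mangling for the dump is my own convention):

```shell
#!/bin/sh
# Given an image:version, pull it, create (but never start) a throwaway
# container, and export its filesystem as a tar for offline scanning.
dump_image() {
    image="$1"
    tarfile="$2"
    docker pull "$image"
    cid="$(docker create "$image")"   # create only; no code runs
    docker export "$cid" -o "$tarfile"  # flat filesystem tar
    docker rm "$cid"                  # free disk space immediately
}

# Filesystem-safe dump name derived from image:version
image="library/alpine:latest"
dump_name="$(echo "$image" | tr '/:' '__').tar"
echo "$dump_name"
# dump_image "$image" "$dump_name"
```

Deleting the container and image right after export matters at scale: scanning thousands of images will otherwise fill the disk quickly, as noted below.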
Analyzing the image dump
This was one of the most important parts of my research, because it determined the engine I would use for scanning the images. Rather than build a scan engine from scratch (a big enough problem on its own), I started looking for the most suitable existing tool: one that could work on an entire filesystem and look for a wide variety of secrets. After some research, I found many options, but sadly most of them were oriented to GitHub secrets, had a high rate of false positives, or evaluated secrets as isolated strings (without considering the var=value format).
While discussing this with one of my colleagues, he mentioned a great tool by Artëm Tsvetkov and Christian Martorella called Whispers, which had been published less than a week earlier. After some testing, I found this tool very convenient for several reasons:
- Contains many search rules (AWS secrets, GitHub secrets, etc.) that assess potential secrets as var=value format
- Findings are classified by type, impact (minor, major, critical, blocker), file location, etc.
- Allows you to add more rules and plugins
- Written in Python3 and thus very easy to modify
Developing the tool
Once I had everything in place, I started to work on a script to automate the process in order to scan thousands of images. By using and testing the tool, I came up with additional requirements, such as letting the user choose the number of processor cores to use, limiting Whispers’ execution time, storing logs separately for each container, and deleting containers and images to avoid filling up the disk.
Also, in order to maximize the number of findings, minimize the number of false positives, and ease data triage, I made a couple of modifications to the standard Whispers:
- Added a rule for Azure stuff
- Excluded many directories and files
- Saved files with potential secrets into directories
Running the tool
With the tool pretty much ready to analyze bulk images, I signed up for two different DigitalOcean accounts and claimed $100 in credit for each. Later, I spun up two servers, set up the tool in each environment, and ran the tool using a large set of keywords/targets.
The keywords/images I aimed to scan were mainly related to technologies that handle or have a high probability of containing secrets, such as:
- DevOps software (e.g. Kubernetes / K8s / Compose / Swarm / Rancher)
- Cloud services (e.g. AWS / EC2 / CloudFront / SNS / AWS CLI / Lambda)
- CI/CD software (e.g. Jenkins / CircleCI / Shippable)
- Code repositories (e.g. GitLab / GitHub)
- Servers (e.g. NGINX / Apache / HAProxy)
After a month of running the tool, I found myself with a total of 20 GB of zipped data ready to triage, so I developed an extra set of tools to clean all of the data, applying the same criteria consistently. Among the rules and considerations, the most important were:
- Created a list of false-positive strings that were reported as AWS access keys
- Deleted AWS strings containing the string “EXAMPLE”
- Discarded all the potential passwords that were not alphanumeric and shorter than 10 chars
- Discarded passwords containing the string “password”
- Discarded test, dummy, or incorrect SSH private keys
- Deleted duplicate key/value pairs for each image, to lessen the amount of manual checking
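The triage rules above can be sketched as simple filters. This is an illustration, not the actual cleanup scripts; the thresholds are the ones described, and the sample false-positive key is AWS’s documented example key, used here as a stand-in:

```python
# Known doc/sample keys that always show up as false positives
FALSE_POSITIVE_AWS = {"AKIAIOSFODNN7EXAMPLE"}

def keep_aws_key(key):
    """Drop documented example keys and known false positives."""
    return "EXAMPLE" not in key and key not in FALSE_POSITIVE_AWS

def keep_password(pw):
    """Drop passwords containing 'password' (case-insensitive here,
    an assumption) and anything short or non-alphanumeric."""
    if "password" in pw.lower():
        return False
    return len(pw) >= 10 and pw.isalnum()

def dedupe(findings):
    """Drop duplicate (image, key, value) triples to cut manual review."""
    seen, out = set(), []
    for finding in findings:
        if finding not in seen:
            seen.add(finding)
            out.append(finding)
    return out

print(keep_aws_key("AKIAIOSFODNN7EXAMPLE"))  # False: doc example key
print(keep_password("Password123456"))       # False: contains "password"
print(keep_password("Xk29fjq8Lm3Z"))         # True: long and alphanumeric
```

Filters like these trade a few missed secrets for a triage workload small enough for one person, which was the point.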
Results
After many weeks of data triage, I found a wide variety of secrets, such as 200+ AWS accounts (of which 64 were still alive and 14 of these were root), 1,500+ valid SSH keys, Azure keys, several databases, .npmrc tokens, Docker Hub accounts, PyPI repository keys, many SMTP servers, reCAPTCHA secrets, Twitter API keys, Jira keys, Slack keys, and a few others.
Within the most notable findings was the whole infrastructure (SMTP servers, AWS keys, Twitter keys, Facebook keys, Twilio keys, etc.) of a US-based software company with approximately 300 employees. I reached out to the company, but, unfortunately, I did not hear back. Also, I identified an Argentinian software company focusing on healthcare that had a few proofs-of-concept with valid AWS credentials.
The most commonly overlooked files were ‘~/.aws/credentials’, Python scripts with hardcoded credentials, the Linux bash_history, and a variety of .yml files.
So, what can I use the tool for?
If you are a bug bounty hunter or a pentester, you can use the tool with different keywords, such as the organization name, platform name, or developers’ names (or nicknames) involved in the program you are targeting. The impact of finding secrets can range from a data breach to the unauthorized use of resources (sending spam campaigns, Bitcoin mining, DDoS attack orchestration, etc.).
Security recommendations for Docker images creation
If you work in DevOps, development, or system administration and currently use cloud infrastructure, these tips might come in handy for Docker image creation:
- Always try to use a fresh, clean Docker image.
- Delete your SSH private key and SSH authorized keys:

  sudo shred myPrivateSSHKey.pem authorized_keys
  rm myPrivateSSHKey.pem authorized_keys

- Delete ~/.aws/credentials using the above method.
- Clean bash history:

  history -c
  history -w

- Don’t hardcode secrets in the code. Instead, use environment variables and inject them at the moment of container creation. Also, when possible, use mechanisms provided by container orchestrators to store/use secrets and don’t hardcode them in config files.
- Perform a visual inspection of your home directory, and don’t forget hidden files and directories:

  ls -la

- If you are using a free version of Docker Hub, assume that everything within the image is public.
Update SEPTEMBER 2020: While writing this blog post, Docker Hub announced that by NOVEMBER 2020 it will start to rate-limit image pulls. But, there’s always a way ;).
GET THE TOOL: https://github.com/matiassequeira/docker_explorer
-Matías Sequeira
CVE-2020-16877: Exploiting Microsoft Store Games
TL;DR: This blog post describes a privilege escalation issue in Windows (CVE-2020-16877) that I reported to Microsoft back in June and that was patched in October. The issue allows an attacker to exploit Windows via videogames by directly targeting how Windows handles Microsoft Store games, and it could be exploited to elevate privileges from a standard user account to Local System on Windows 10.
A journey into defeating regulated electronic cigarette protections
TL;DR: This blog post does not encourage smoking or vaping. Its main focus is defeating the protections of a regulated electronic cigarette to assess whether it could be weaponized by a remote attacker, by modifying its firmware and delivering it through malware that waits for electronic cigarettes to be connected over USB or discovered over Bluetooth.
Password Cracking: Some Further Techniques
A password hash is a transformation of a password using what we call a “one-way” function. So, for example, ROT-13 (rotate by half the alphabet) would be a very, very bad password hash function and would give fairly recognizable results like “Cnffjbeq123!”. The one-way property means it must be essentially impossible to construct the inverse function and recover the original, and functions like MD5 or SHA1 certainly meet that particular criterion. Iterated encryption functions like DES have also been used (for example LAN Manager hashes), but seem to have fallen out of favor a while ago. There are a load of technical, cryptographic details which neither you nor I need to know, but essentially we’ll end up guessing at loads of passwords and then performing the same one-way function and checking to see if we’ve arrived at the correct answer.
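The guess-and-check loop is easy to sketch in Python. The `crack` helper and wordlist are hypothetical, and MD5 appears purely because the paragraph mentions it, not as a recommendation:

```python
import codecs
import hashlib

# ROT-13 as the deliberately terrible "hash": output stays recognizable.
print(codecs.encode("Password123!", "rot13"))  # letters rotate, digits don't

def crack(target_digest, wordlist):
    """Apply the same one-way function to each guess and compare
    digests -- the essence of every password-cracking tool."""
    for guess in wordlist:
        if hashlib.md5(guess.encode()).hexdigest() == target_digest:
            return guess
    return None  # wordlist exhausted

target = hashlib.md5(b"letmein").hexdigest()
print(crack(target, ["123456", "password", "letmein"]))  # letmein
```

Real crackers like hashcat do exactly this, just billions of times per second on GPUs and against properly salted, iterated hash functions.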
Uncovering Unencrypted Car Data in BMW Connected App
TL;DR: Modern mobile OSes encrypt data by default; nevertheless, the defense-in-depth paradigm dictates that developers must encrypt sensitive data regardless of the protections offered by the underlying OS. This is yet another case study of data stored unencrypted, and most importantly, a reminder to developers not to leave their apps’ data unencrypted. In this case study, physical access to an unlocked phone, trusted computer, or unencrypted backup of an iPhone is required to exfiltrate the data, which in turn does not include authentication data and cannot be used to control or track the vehicle in any way.
Introduction
“While modern mobile operating systems allow encrypting mobile devices, which users can use to protect themselves, it is ultimately the developer’s responsibility to make sure that their software is thoroughly safeguarded. To this end, developers should provide reliable mobile app data encryption that leaves no user data without protection.” — Dmitriy Smetana.
Physical theft is not the only attack vector that threatens the data stored on a mobile phone. Imagine, for instance, a shared computer at home or in the office where a phone has been authenticated and trusted. When the phone is connected and authenticated, a malicious actor with access to this computer would be able to extract its apps’ data. The likelihood is low in the real world, though.
One day during the pandemic I was wondering if my car’s mobile app was encrypting the data or not. So, I decided to analyze it:
Scope and Tools Used
The following navigation-equipped cars were used for this analysis:
- X5 xDrive40i (2020)
- 120i (2020)
- X1 sDrive20iA X Line (2018)
BMW Connected is a mobile app compatible with 2014 and newer navigation-equipped vehicles (BMW ConnectedDrive). It allows the user to monitor and remotely control some features such as:
- Lock/Unlock
- Location tracking
- Lights
- Horn
- Climate control
- Destinations (navigation system)
- Doors and windows status
- Fuel level
- Mileage
BMW Connected App Demonstration (YouTube)
The latest version of the app available on the App Store was:
- BMW Connected for iOS v10.6.2.1807
I installed the app on two iPhones, neither of which were jailbroken:
- iPhone XS Max (iOS 13.4.1)
- iPhone 8 Plus (iOS 13.3.1)
Then, I found unencrypted data using the following basic tools:
- Windows 10 Home
- Ubuntu Linux 19.10
- strings
- plistutil
- base64
- jq
Analysis and Results
You’ll see how easy it was to extract and decode the stored data.
Data Stored Unencrypted
The cars were added and authenticated within the app:
For both installations, the same behavior was observed: data was stored base64-encoded but unencrypted in .plist files. I used the plistutil command to decode these files, then piped the output through other command-line tools to strip empty lines and spaces.
Once I had the base64 strings, I decoded them with the base64 tool and finally, formatted and colorized the JSON output with the jq tool:
- Favorite locations (FavoritesCollection.plist)
- Directions sent to the vehicle (TripStorage.plist)
- Status of doors and windows (VehicleHub.Suite.plist)
- Mileage and remaining fuel (VehicleHub.Suite.plist)
- History of remote actions (VehicleHub.Suite.plist)
- Car color and picture (VehicleHub.Suite.plist)
- Next maintenance due dates (VehicleHub.Suite.plist)
- VIN and model
- Owner’s first and last name and last logged date (group.de.bmw.connected.plist)
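The decoding pipeline described above can be sketched as follows. This is a sketch under assumptions: `plistutil` here is the libplist version, the grep pattern for base64 payloads is a rough heuristic, and the sample payload is invented:

```shell
#!/bin/sh
# Decode one of the app's cached .plist files: convert the binary
# plist to XML, extract base64-looking payloads, decode them, and
# pretty-print the resulting JSON with jq.
decode_plist() {
    plistutil -i "$1" -o - \
      | grep -oE '[A-Za-z0-9+/=]{20,}' \
      | base64 -d \
      | jq .
}
# decode_plist VehicleHub.Suite.plist

# The base64 step in isolation (hypothetical sample payload):
printf 'eyJ2aW4iOiJYWFgifQ==' | base64 -d
```

No jailbreak, debugger, or reverse engineering was needed; this is the point of the finding — ordinary command-line tools on a trusted computer suffice.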
Weak Password and PIN Policies
On registration, I noticed the password policy only required eight characters from at least two of the following three charsets:
- Letters (abc = ABC)
- Numbers
- Special characters
Such a policy might seem good enough; however, making the password case-insensitive significantly decreases its complexity. During testing, it was possible to login with any of the following passwords:
- Qwerty12
- QWERTY12
- QwErTy12
Also, the app permits users to select an easy-to-guess PIN, which is used to unlock the car or access the app if the smartphone does not implement FaceID, TouchID, or a passcode. The existing PIN code policy allows users to choose weak combinations, such as consecutive numbers (e.g. “1234”) or the same number (e.g. “0000”).
However, the most commonly used feature for authentication is either FaceID or TouchID.
Recommendations
The takeaways are very simple:
- For end-users:
- Only authenticate your phone on trusted computers.
- Avoid connecting and trusting your phone to shared workstations.
- Use complex passwords and PIN codes.
- For developers:
- Do not put your complete trust in the operating system.
- Encrypt sensitive data on your own.
Responsible Disclosure
One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure.
The following is the communication timeline with BMW Group:
- May 2, 2020: IOActive’s assessment of the BMW Connected App started
- May 15, 2020: IOActive sent a vulnerabilities report to BMW Group following its guidelines
- May 20, 2020: BMW Group replied. They internally sent the report to the responsible departments
- May 26, 2020: IOActive asked BMW Group for any updates or questions and let them know about our intention to write a blog post
- May 28, 2020: BMW Group said to wait for a new app release prior to publishing a blog post and asked for details to include the researcher in BMW Group’s Hall of Fame site
- Aug 07, 2020: BMW Group and IOActive had a call to discuss the technical information that would be published in a blog post
- Aug 13, 2020: BMW Group sent an email explaining how they would fix the issues
- Aug 19, 2020: IOActive sent a draft of the blog post to be published to BMW Group for review
- Aug 24, 2020: BMW Group suggested some modifications
- Sep 08, 2020: IOActive sent the second version of the draft of the blog post to be published to BMW Group for review
- Sep 11, 2020: BMW Group agreed with the final content
- Sep 22, 2020: IOActive blog published.
The Fix
BMW Group’s security statement:
“Thanks to the notification of Alejandro Hernandez at IOActive via our responsible disclosure channel, we were able to change the way the app’s data cache is handled. Our app development team added an encryption step that makes use of the secure enclave of Apple devices, at which we generate a key that is used for storing the favorites and vehicle metadata that Alejandro was able to extract. We appreciate Alejandro for sharing his research with us and would like to thank him for reaching out to us.”
Acknowledgments
I would like to give special thanks to my friend Juan José Romo, who lent me two brand new cars for testing.
Also, I’d like to thank Richard Wimmer and Hendrik Schweppe of BMW Group for their quick response and cooperation in fixing the issues presented here.
Thanks for reading,
Alejandro @nitr0usmx
No buffers harmed: rooting Sierra Wireless Airlink devices through logic bugs
IOActive Labs – Ruben Santamarta, IOActive Principal Security Consultant, explores building a chain of exploits without harming a single buffer, rooting Sierra Wireless Airlink devices.
Cybersecurity Vigilance for a Historic Election
November 3rd is Election Day in the United States. Every election is important, but this election is particularly crucial. It is one of the most important elections in our lifetime—the 2020 election will determine the course of the United States for the next 10 years or more. With so much on the line, every vote counts—but the security and integrity of, and voter confidence in, the election itself are also at risk.
The Senate Intelligence Committee determined that Russia influenced and interfered with the 2016 election, and US intelligence agencies report that Russia and other nations are spreading misinformation and actively trying to hack the 2020 election as well. The COVID-19 pandemic combined with social and political unrest across the country provides cyber adversaries with a larger attack surface for manipulation and exploitation. Heightened cybersecurity awareness and effective cybersecurity are more crucial than ever to ensure our votes are counted.
Heightened cybersecurity awareness and effective cybersecurity are more crucial than ever to ensure our votes are counted.
As the clock winds down to Election Day — and early voting and mail-in voting begin across the country — we need to consider whether we can trust the technology and processes we rely on for voting. Is the supply chain intact? Can we ensure the integrity of voter registration databases? Are media outlets and social media having an adverse influence on the election? Most importantly, can voters trust the process and have confidence that their votes will be properly counted? Clearly, we need around-the-clock vigilance between now and November 3rd to secure our vote.
Covert and Overt Actions that can Influence the Vote
Political campaigns are all about influence and swaying opinion—but those activities should be limited to the candidates and American interests. Foreign nations should not manipulate or interfere with our democratic election process, yet they do. There are foreign influences that are plainly visible, and then there are clandestine methods that are not visible. Spies who operate without being detected may impact the election. They can also steal intellectual property, manipulate technology, and recruit potential agents and mules to deliver exploits and payloads.
Social media is a double-edged sword when it comes to information. It can be a great way to do research, engage in rational discussion, and learn about the candidates and current issues. The problem is that it is also an extremely efficient way to spread misinformation, and many people can’t tell the difference. Deepfake videos and fake propaganda released on social media platforms are part of disinformation campaigns and political agitation that drive a wedge between people and prevent productive dialogue.
The COVID-19 pandemic is driving a spike in demand for mail-in ballots so people can avoid gathering at polling sites and exposing themselves to potential risk from the virus. However, the United States Postal Service is struggling, and there have been a number of cuts and changes that seem specifically intended to make it more difficult to vote by mail. Once ballots get to a post office, both the mail-in ballots and post office sorting machines are ripe for insider threats, human manipulation, and fraud if not managed and monitored appropriately.
Protect the Vote by Managing Cybersecurity and Supply Chain Risk
What can we do to defend the election process and our votes against all of these threats? The challenges of election security span the breadth and depth of the attack surface. Every county in the United States is a potential target and the scope of attacks can range from cyber attacks against voter registration and voting systems to theft of information and everything in between.
Okay, but how difficult is the challenge of election security? Let’s consider it: there are thousands of networks and applications to protect. Every network has thousands of devices, including PCs, laptops, printers, servers, smartphones, tablets, IoT devices, etc. Each of these devices runs several layers of software, and each of these software applications has thousands to millions of lines of code. Software code is complex and, like any product made by humans, often contains errors, including security problems. In several million lines of code contained in thousands of layers of software, there are thousands of possible cybersecurity problems that need to be identified and fixed. Because of these cybersecurity problems, networks should be protected to prevent exploitation by bad actors.
Because we live in a global economy, technology is built with different small parts made in different parts of the world by people working at different companies. Securing the supply chain is also an important challenge, as backdoors and security problems can be planted in technology and exploited later by state actors.
On top of these cybersecurity problems, we have the human element. Individuals need to be properly trained in secure technology use and how not to be fooled by phishing or other targeted cyber attacks.
The best way to secure our votes and protect the integrity of the election is to engage the security community early and often to get a hacker’s point of view and the best skills working together.
We need to properly train all personnel in cybersecurity to make them resilient against cyber attacks. We should ensure technology comes from trusted parties that perform due diligence and security audits on their providers, properly securing the supply chain. We also need to audit hardware and software to identify potential cybersecurity problems so they can be fixed, or so preventive action can be taken to avoid their exploitation. Finally, we need to conduct continuous or frequent vulnerability scans and penetration tests to gain valuable insight into the overall security posture of the election process and to identify weaknesses so they can be addressed proactively.
As the attack surface constantly expands and the threat landscape continually shifts and evolves, ongoing testing and validation of applications and security controls should be a requirement.
The 2020 election is crucial for the future of the United States. It will require around-the-clock vigilance between now and November 3rd to guard against attacks on the election and secure our vote.
Matt Rahman is COO at IOActive, the world leader in research-fueled security services.
Security Makes Cents: Perspectives on Security from a Finance Leader
Recently, it feels like the Internet is filled with stories of cyber-breaches and security breakdowns. As the world becomes more interconnected than ever, these stories are becoming all too familiar. In fact, a malicious web-based attack occurs every 39 seconds on average, and 43% of cyber attacks target small businesses.
While a breach can occur in any area of a business, a corporate finance department is often uniquely positioned, with touch-points extending further outside the company than other groups. With touch-points up and down the supply chain, the number of potential attack vectors increases, and with cross-functional access, the impact of successful attacks grows exponentially.
Fortunately, there are several small steps any department can take to beef up its policies and awareness to help prevent it from becoming the subject of the next news article. Many organizations overlook the value of programmatic, policy, and procedural controls to manage cybersecurity risks as they purchase the latest, expensive cybersecurity widget. Forward-looking organizations make cybersecurity an integral part of their overall operational resiliency with CISA’s CRR or SEI’s CERT-RMM.
Here are some specific examples where small changes can improve a finance department’s security posture.
Create a Disbursement Process Policy – and Stick to It!
Most of us know that good internal controls are the backbone of preventing fraud within an organization. But what if those controls are circumvented by someone at a level with the relevant authority? As the pace of business increases, so do the urgency to transact that business and the need for off-cycle disbursements. Threat actors know this and take advantage of it. The most popular attack is spear-phishing, often seen in Business Email Compromise (BEC), where an attacker sends an email to a specific person, usually someone with enough authority to transfer money without additional oversight. In many cases, these emails appear to come from someone high up in the company: an owner, board member, C-suite executive, or VP.
It should be the policy of every finance department to individually verify all off-cycle disbursements with a separate email or message to ensure that the request is valid. Often, though, awareness of a few simple clues will tell you that a request isn't valid. For example:
- The sender’s email address doesn’t match the person’s actual email address.
- There are abnormal links within the email message.
- The language doesn’t match the person.
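The first two clues can be partially automated. Here is a minimal sketch in Python; the `KNOWN_SENDERS` directory, the `example.com` domain, and all addresses are purely illustrative assumptions, not real infrastructure:

```python
import re

# Hypothetical directory of known-good sender addresses; in practice this
# would come from your organization's directory service.
KNOWN_SENDERS = {
    "Jane Doe (CEO)": "jane.doe@example.com",
}

# Crude extraction of href targets from an HTML body.
LINK_RE = re.compile(r'href="(?P<url>[^"]+)"', re.IGNORECASE)

def phishing_clues(display_name: str, from_addr: str, body_html: str) -> list:
    """Return a list of simple red flags for an off-cycle payment request."""
    clues = []
    expected = KNOWN_SENDERS.get(display_name)
    # Clue 1: the sender's address doesn't match the directory entry.
    if expected and from_addr.lower() != expected:
        clues.append(f"sender {from_addr!r} does not match directory entry {expected!r}")
    # Clue 2: links that point outside the expected corporate domain.
    for match in LINK_RE.finditer(body_html):
        url = match.group("url")
        if not url.startswith("https://example.com"):
            clues.append(f"abnormal link: {url}")
    return clues
```

A check like this is no substitute for the human verification step above; it only surfaces the mechanical clues so a busy approver doesn't have to spot them by eye.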
Remember, human intelligence and critical thinking are the best defense against spear-phishing attacks. Maintaining a good relationship with those who can authorize payments will greatly reduce the likelihood of a successful attack.
Manage Your External Credentials
Depending on the size of your department, you may be more or less able to effectively segregate duties. In most small and medium-sized businesses, the finance department wears multiple hats: accounting, FP&A, tax, treasury, etc. In these cases, there exists an increased need for cross-training. With cross-training and role backups comes the need for passwords to be shared among multiple people.
That in itself brings inherent dangers. How do you securely share passwords? How do passwords get updated? Many default to keeping a list of websites and passwords in an Excel spreadsheet or Google Doc. While these may be convenient, they are not secure. So what should you do?
- Implement a password management service, such as SecretServer or LastPass. While there is an associated cost, these services allow groups to share passwords in an encrypted and secure environment often with an audit trail.
- Use a secure password generator. These tools take a website's password requirements as input and create the strongest password possible.
- Follow good password hygiene by updating passwords regularly, using random characters, and making them as long as possible. See NIST SP 800-63B Appendix A for additional details.
- Make use of Multi-Factor Authentication (MFA), when possible.
- Don’t reuse passwords. It’s just as convenient for an attacker as it is for your team.
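To illustrate the generator and hygiene points, here is a minimal sketch using Python's standard-library `secrets` module; the default length and character set are illustrative choices, not a policy recommendation:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a long, random password.

    Per NIST SP 800-63B, length and randomness matter more than
    forced composition rules, so we simply draw uniformly from a
    large character set using a cryptographically secure RNG.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Using `secrets` rather than `random` matters here: `random` is predictable by design, while `secrets` draws from the operating system's cryptographically secure source.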
Your passwords are not always an entry point for your systems, but weak passwords can jeopardize the information and accounts stored on third-party systems, like tax agencies or customer portals.
Social Engineering is Real
It is becoming more and more common for threat actors to gain access through means other than technical infiltration. A common approach is to get an employee to voluntarily give up information under a pretext. I have personally received phone calls purportedly from our bank asking me to verify my password. Remember, banks and other agencies will never ask for sensitive information over the phone. If you ever doubt the authenticity of a request, you can always hang up and call back using verified, published phone numbers. If the request is illegitimate, the caller will do all they can to keep you on the line.
Over 95% of attacks that succeed do so because of human error. It is human nature to want to satisfy the request on the other end of the line, but don’t be afraid to make sure you’re protected.
The Cloud is Safe, Right?
Anyone else remember the days of clunky, on-premises accounting software that had to be updated every year? Those days are long gone thanks to the proliferation of fully hosted, cloud-based ERP solutions. And it doesn't stop there: the industry leaders in financial analytics suites, CRMs, and document sharing are all cloud-only.
Have you asked yourself how safe that data is? Sure, you’ve got high-level password requirements in your environment, but what about your service provider? It’s safe, right?
Is it? Ask yourself how you know. What risks lurk undiscovered in your supply chain?
Technology companies are one of the top three industries to experience an information breach, mainly because they carry vast amounts of distinct, personally identifiable data. Client names, addresses, and emails are all stored in the cloud and are prime targets for cybercriminals. One need look no further than the Cloud Hopper campaign to see the risk of using Managed Service Providers (MSPs).
When you are assessing new software, ask for third-party security reports. Almost all storage-based companies can provide SOC 2 reports that describe their practices and policies for their IT and information security environments. Have someone who knows how to interpret these reports read them, along with any auditor comments, so you can make an informed risk assessment.
If you want to feel extra secure, consider having an assessment performed. At IOActive, we perform security assessments of the key products and providers we utilize in our operations as part of our internal Supply Chain Integrity program. Not every organization has the skills or resources to perform such assessments, but several great third-party assessor organizations exist to help. If specific vulnerabilities are identified, most providers are happy to know about them and, depending on the severity, will work to remediate those vulnerabilities quickly before you deploy the new service.
Protect What You’ve Built
One of the most popular new products in insurance is a cyber insurance policy. Once upon a time, these policies were designed to help the few companies operating within the cyber landscape. But now, everyone operates in that arena. The insurance industry has responded and offers tailor-made solutions to protect companies from multiple angles in case of a breach, including investigation, forensics, and damages. This is a must-have policy in the new connected world of the 21st century and a core part of firm-level risk management.
Nor is this a big-business-only policy. Remember that 43% of attacks target small businesses, and the legal damages resulting from a breach at a small business can pose an existential threat to that organization. Talk to your agent about adding a cyber incident policy to help mitigate some of the risks associated with a breach.
The world is changing rapidly and our workspaces are changing just as fast. As remote work becomes the new normal for many companies, our digital footprints are expanding and cybersecurity is the responsibility of everyone in the company, not just the IT or IS departments. Do your part and think about how you could be impacted or used to impact others.
Joshua Beauregard is a Certified Public Accountant and the Senior Director of Finance and Administration at IOActive, the world leader in research-fueled security services.