EDITORIAL | September 8, 2020

IOActive Labs Blog

Reclaiming Hallway Con

We have several exciting things happening with our blog content. Like many, we’ve been working to replace the value lost with the disappearance of face-to-face gatherings at meetings, conventions, and informal get-togethers. Many veterans of the conference circuit will tell you that by far the most valuable part of a typical conference is the hallway con: the informal discussions, networking, and often serendipitous meetings that happen outside the formal conference agenda.

IOActive is helping reclaim hallway con by making some of that valuable content available in a pandemic-friendly format on our blogs and in webinars. We recently launched our Guest Blog series with a post focused on emerging threats in intermodal transportation from Urban Jonson, an accomplished contributor to hallway con and leader of the Heavy Vehicle Cyber Security (HVCS) working group at NMFTA.

Likewise, we are making some more informal technical content available to a larger audience at a higher frequency through our new IOActive Labs blog.

IOActive Labs Blog

The IOActive Labs blog is an organizational innovation proposed by our consultants to support a more agile process for developing, reviewing, and posting technical content. It facilitates lower-latency, higher-frequency posting of technical content, which was more challenging within our prior process.

This new process allows for some interesting new types of content, such as live blogging during a CTF, and more informal content, such as documenting techniques. Furthermore, organizing the technical content under the IOActive Labs blog will allow the part of our audience that is only interested in the (very interesting) bits and bytes to easily find those posts as we include more diverse, non-technical content and voices in our main blog.

We want to break in the new IOActive Labs blog with an appropriately original and interesting first post.

Breaking in the IOActive Labs Blog with a Look at Aviation Operational Technology

Ruben Santamarta, a Principal Consultant at IOActive, has amassed a considerable body of groundbreaking, original cybersecurity research. He continues his work on emerging threats with a look into airline and airport operational technology (OT) associated with Electronic Bag Tags (EBTs). This post builds on his work on warcodes (malicious bar codes), discussed in a recent blog post.

This research takes an empirical look at some of the implementation flaws in a couple of examples of devices and components that support the “tags everywhere” and “sensors everywhere” trends brought about by IoT and the thirst for more sources of data to feed the big data movement. It also illustrates some of the potential supply-chain risks associated with using insecure, but not intentionally malicious, products and components in systems that perform core business operations.

You can also follow the latest posts on the IOActive Labs Twitter account.

More to Come

We have more exciting innovations to come as we work to recapture more of the value lost without conferences.

RESEARCH | March 9, 2018

Robots Want Bitcoins too!

Ransomware attacks have boomed during the last few years, becoming a preferred method for cybercriminals to profit by encrypting victim information and demanding a ransom for its return. The primary ransomware target has always been information. When victims have no backup of that information, they panic and are forced to pay for its return.

EDITORIAL | January 31, 2018

Security Theater and the Watch Effect in Third-party Assessments

Before the facts were in, nearly every journalist and salesperson in infosec was thinking about how to squeeze lemonade from the Equifax breach. Let’s be honest – it was and is a big breach. There are lessons to be learned, but people seemed to have the answers before the facts were available.

It takes time to dissect these situations and early speculation is often wrong. Efforts at attribution and methods take months to understand. So, it’s important to not buy into the hysteria and, instead, seek to gain a clear vision of the actual lessons to be learned. Time and again, these supposed “watershed moments” and “wake-up calls” generate a lot of buzz, but often little long-term effective action to improve operational resilience against cyber threats.


At IOActive we guard against making on-the-spot assumptions. We consider and analyze the actual threats, ever mindful of the “Watch Effect.” The Watch Effect is simply explained: wear a watch long enough, and you stop feeling it on your wrist.
I won’t go into what third-party assessments Equifax may or may not have had, because that’s largely speculation. The company has probably been assessed many times, by many groups with extensive experience in the prevention of cyber threats and the implementation of active defense. And it still experienced a deeply damaging cyber incursion.

The industry-wide point here is: Everyone is asking everyone else for proof that they’re secure.

The assumption and Watch Effect come in at the point where company executives think their responses to high-level security questions actually mean something.

Well, sure, they do mean something. A questionnaire asks a company to perform a massive amount of tedious work, and if the company returns it completed, without gross errors and without saying “no” where it should have said “yes,” that probably counts for something.

But the question is how much do we really know about a company’s security by looking at their responses to a security questionnaire?

The answer is, “not much.”

As a company that has been security testing for 20 years now, IOActive has successfully breached even the most advanced cyber defenses across countless companies during penetration tests that were certified backwards and forwards by every group you can imagine. So, the question to ask is, “Do questionnaires help at all? And if so, how much?”
 
Here’s a way to think about that.

At IOActive we conduct full, top-down security reviews of companies that include business risk, crown-jewel defense, and every layer that these pieces touch. Because we know how attackers get in, we measure and test how effective the company is at detecting and responding to cyber events – and use this comprehensive approach to help companies understand how to improve their ability to prevent, detect, and ever so critically, RESPOND to intrusions. Part of that approach includes a series of interviews with everyone from the C-suite to the people watching logs. What we find is frightening.

We are often days or weeks into an assessment before we discover a thread to pull that uncovers a major risk, whether that thread comes from a technical assessment or a person-to-person interview or both.

That’s days—or weeks—of being onsite with full access to the company as an insider.

Here’s where the Watch Effect comes in. Many of the companies have no idea what we’re uncovering or how bad it is because of the Watch Effect. They’re giving us mostly standard answers about their day-to-day, the controls they have in place, etc. It’s not until we pull the thread and start probing technically – as an attacker – that they realize they’re wearing a broken watch.

Then they look down at a set of catastrophic vulnerabilities on their wrist and say, “Oh. That’s a problem.”

So, back to the questionnaire…

If it takes days or weeks for an elite security firm to uncover these vulnerabilities onsite with full cooperation during an INTERNAL assessment, how do you expect to uncover those issues with a form?

You can’t. And you should stop pretending you can. Questionnaires depend far too much on the capability and knowledge of the person or team filling them out, and they are often completed with only partial knowledge. How would one know if a firewall rule had been improperly updated to “any/any” in the last week if it is not tested and verified?

To be clear, the problem isn’t that third-party assessments only give 2/10 in security assessment value. The problem is that executives THINK they’re getting 6/10, or 9/10.

It’s that disconnect that’s causing the harm.

Eventually, companies will figure this out. In the meantime, the breaches won’t stop.

Until then, we as technical practitioners can do our best to convince our clients and prospects to understand the value these types of cursory, external glances at a company provide. Very little. So, let’s prioritize appropriately.

EDITORIAL | January 24, 2018

Cryptocurrency and the Interconnected Home

There are many aspects of cryptocurrency that are not getting the attention they deserve. To start, the very thing that attracts people to cryptocurrency is also the thing most often overlooked as a challenge. Cryptocurrencies are not backed by governments or institutions, and transactions allow the trader or investor to operate with anonymity. In the last year, we have seen a massive increase in cyber bad guys hiding behind these inconspicuous transactions: ransomware demanding payment in bitcoin, and bitcoin ATMs being used by various dealers to effectively launder money.

Because there are few regulations governing crypto trading, we cannot see whether cryptocurrency is being used to fund criminal or terrorist activity. There is an ancient funds-transfer system called Hawala, designed to avoid banks and ledgers, which is believed to be the method by which terrorists move money anonymously across borders with no governmental controls. Sound like what’s happening with cryptocurrency? There’s an old saying in law enforcement: follow the money. Good luck with that one.

Many people don’t realize that cryptocurrencies depend on multiple miners. This spreads out and decentralizes the processing. Miners validate the integrity of transactions and, in return, receive a “block reward” for their efforts. These rewards are cut in half every 210,000 blocks: when bitcoin launched in 2009, the block reward was 50 BTC; today it is 12.5 BTC. There are about 1.5 million bitcoins left to mine before the reward halves again.
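The halving schedule is simple to compute: the reward starts at 50 BTC and is cut in half every 210,000 blocks. A quick sketch (illustrative only; `block_reward` is our own helper, not part of any bitcoin software):

```python
def block_reward(height, initial=50.0, interval=210000):
    """Block reward in BTC at a given block height.

    The reward halves every `interval` blocks, starting at
    50 BTC with the genesis block in 2009.
    """
    return initial / (2 ** (height // interval))

print(block_reward(0))        # 50.0 (2009 launch)
print(block_reward(210000))   # 25.0 (first halving)
print(block_reward(420000))   # 12.5 (current reward)
```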

This limit on total bitcoins leads to an interesting issue – as the reward decreases, miners will switch their attention from bitcoin to other cryptocurrencies. This will reduce the number of miners, therefore making the network more centralized. This centralization creates greater opportunity for cyber bad guys to “hack” the network and wreak havoc, or for the remaining miners to monopolize the mining.

At some point, and we are already seeing the early stages of this, governments and banks will demand to implement more control. They will start to produce their own cryptocurrency. Would you trust these cryptos? What if your bank offered loans in Bitcoin, Ripple or Monero? Would you accept and use this type of loan?

Because bitcoin is a limited resource, what happens when we reach the 21 million coin limit? Unless the protocol changes, this event is estimated to happen around 2140. My first response: I don’t think bitcoins will be at the top of my concerns list in 2140.

The Interconnected Home

So what does crypto-mining malware, or mineware, have to do with your home? It’s easy enough to notice if your laptop is being overused: the device slows down, the battery runs down quickly. How can you tell if your fridge or toaster is compromised? With your smart home now interconnected, what happens if the cyber bad guys operate there? All a cyber bad guy needs is electricity, internet, and CPU time. Soon your fridge will charge your toaster a bitcoin for bread and butter. How do we protect our unmonitored devices from this mineware? Who is responsible for ensuring the right level of security on your home devices to prevent this?

Smart home vulnerabilities present a clear and present danger. We have already seen baby monitors, robots, and home security products, to name a few, compromised, most by IOActive researchers. These compromises introduce many risks to the home, not just around cryptocurrency. Think about how the interconnected home operates. Any device that’s smart now has the three key ingredients a cyber bad guy needs: internet access, power, and processing.

Firstly, I can introduce my mineware via a compromised mobile phone and start to exploit the processing power of your home devices to mine bitcoin. How would you detect this? When could you detect this? At the end of the month, when you get an electricity bill: instead of 50 pounds a month, it’s now 150 pounds. But how do you diagnose the issue? You complain to the power company. They show you the usage. It’s correct. Your home IS consuming that power.

Crypto mining is now said to use as much power as a small country, which has a serious impact on the power infrastructure as well as the environment. Ahhhh, you say, I have a smart meter; it can give me a real-time readout of my usage. Yes, and it’s a computer. If I’m a cyber bad guy, I can make that computer tell me the latest football scores if I want. The key for a corporation under attack is to reduce dwell time: detect and stop the bad guy from playing in your network. There are enterprise tools that can perform these tasks, but do you have these same tools at home? How would you detect and react to a cyber bad guy attacking your smart home?

IOActive has proven these attack vectors over and over. We know this is possible, and we know it is almost impossible to detect. Remember, a cyber bad guy makes several assessments when deciding on an attack: the risk of detection, the reward for the effort, and the penalty for capture. The risk of detection is low, like very low. The reward? You could be mining blocks for months without stopping; that’s tens of thousands of dollars. And the penalty… what’s the penalty for someone hacking your toaster? The impact to the homeowner, though, is measurable. This is real, and who’s to say it isn’t happening already. Ask your fridge!!

What’s the Answer –  Avoid Using Smart Home Devices Altogether?

No, we don’t believe the best defense is to avoid adopting this new technology. The smart, interconnected home can offer its users fantastic opportunities. We believe that the responsibility rests with the manufacturer to ensure that devices are designed and built in a safe and secure way. And, yes, everything is designed; few things are designed well. IOActive researchers spend 99% of their time trying to identify vulnerabilities in these devices for the safety of everyone, not just corporations. The power is in the hands of the consumer. As soon as consumers start to purchase products based not only on their power efficiency but on their security rating as well, we will see a shift toward a more secure home.

In the meantime, consider the entry point for most cyber bad guys. Generally, this is your desktop, laptop or mobile device. Therefore, ensure you have suitable security products running on these devices, make sure they are patched to the correct levels, be conscious of the websites you are visiting. If you control the available entry points, you will go a long way to protecting your home.

RESEARCH | January 17, 2018

Easy SSL Certificate Testing

tl;dr: CertSlayer allows you to test how an application handles SSL certificates and whether it verifies the relevant details on them to prevent MiTM attacks: https://github.com/n3k/CertSlayer.

During application source code reviews, we often find that developers forget to enable all the security checks done over SSL certificates before going to production. Certificate-based authentication is one of the foundations of SSL/TLS, and its purpose is to ensure that a client is communicating with a legitimate server. Thus, if the application isn’t strictly verifying all the relevant details of the certificate presented by a server, it is susceptible to eavesdropping and tampering from any attacker having a suitable position in the network.
The following Java code block nullifies all the certificate verification checks:
 
X509TrustManager trustAll = new X509TrustManager()
{
       public void checkClientTrusted(X509Certificate[] chain, String authType)
       throws CertificateException { }

       public void checkServerTrusted(X509Certificate[] chain, String authType)
       throws CertificateException { }

       public X509Certificate[] getAcceptedIssuers()
       {
              return null;
       }
};
 
Similarly, the following python code using urllib2 disables SSL verification:
 
import urllib2
import ssl
 
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
 
urllib2.urlopen("https://www.ioactive.com", context=ctx)
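For contrast, the secure default leaves both checks enabled. In modern Python 3 (where `urllib.request` replaces `urllib2`), a context created with `ssl.create_default_context()` already requires a valid certificate with a matching hostname, so the safe version is simply not to weaken it; a minimal sketch:

```python
import ssl
import urllib.request

# The default context verifies the certificate chain AND the hostname.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Passing this context (or omitting it entirely) keeps verification on;
# connecting to a server with an invalid certificate then raises
# ssl.SSLCertVerificationError instead of silently succeeding.
# urllib.request.urlopen("https://www.ioactive.com", context=ctx)
```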
 
These issues are not hard to spot while reading code, but what about a complete black box approach? Here is where Certslayer comes to the rescue.
 
How does it work?
 
Proxy Mode
Certslayer starts a proxy server and monitors for specified domains. Whenever a client makes a request to a targeted domain, the proxy redirects the request to an internal web server, presenting a special certificate as a test-case. If the internal web server receives the request, it means the client accepted the test certificate, and the connection was successful at the TLS level. On the other hand, if the internal web server doesn’t receive a request, it means the client rejected the presented certificate and aborted the connection.
 
For testing mobile applications, proxy mode is very useful. All a tester needs to do is install the certificate authority (Certslayer CA) on the phone as trusted and configure the system proxy before running the tool. Simply using the application then generates requests to its backend servers, which are trapped by the proxy monitor and redirected to the internal web server. This approach also reveals whether any sort of certificate pinning is in place, as the request will not succeed even when a valid certificate signed by the CA is presented.
 
A simple way to test in this mode is to configure the browser proxy and navigate to a target domain. For example, the next command will start the proxy mode on port 9090 and will monitor requests to www.ioactive.com:
C:\CertSlayer\CertSlayer>python CertSlayer.py -d www.ioactive.com -m proxy -p 9090
 
 

Currently, the following certificate test-cases are available (and will run in order):

  • CertificateInvalidCASignature
  • CertificateUnknownCA
  • CertificateSignedWithCA
  • CertificateSelfSigned
  • CertificateWrongCN
  • CertificateSignWithMD5
  • CertificateSignWithMD4
  • CertificateExpired
  • CertificateNotYetValid
 
Navigating with Firefox to https://www.ioactive.com throws a “Secure Connection Failed” error:
 
The error code explains the problem: SEC_ERROR_BAD_SIGNATURE, which matches the first test-case in the list. At this point, Certslayer has already prepared the next test-case in the list. By reloading the page with F5, we get the next result.
 
 
At the end, a CSV file is generated containing the results of each test case. The following table summarizes them for this example:
 

Stand-alone Mode

Proxy mode is not useful when testing a web application or web service that fetches resources from a specified endpoint. In most instances, there won’t be a way to install a root CA on the application backend for these tests. However, some applications include this feature in their design, such as cloud applications that interact with third-party services.

 

In these scenarios, besides checking for SSRF vulnerabilities, we also need to check if the requester is actually verifying the presented certificate. We do this using Certslayer standalone mode. Standalone mode binds a web server configured with a test-certificate to all network interfaces and waits for connections.
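Conceptually, standalone mode does little more than bind a plain web server whose socket is wrapped with the current test certificate. A rough sketch of the idea in Python 3 (a hypothetical helper: the file names are placeholders, and CertSlayer’s actual implementation differs):

```python
import http.server
import ssl

def make_test_server(certfile, keyfile, port=4444):
    """Bind an HTTPS server on all interfaces that presents the given
    test certificate to any client that connects."""
    server = http.server.HTTPServer(("0.0.0.0", port),
                                    http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    return server

# A connection that completes the TLS handshake (i.e., the client accepted
# the test certificate) shows up as a normal GET in the server log:
# make_test_server("wrong_cn.pem", "wrong_cn.key").serve_forever()
```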

 

I recently tested a cloud application that allowed installing a root CA to enable interaction with custom SOAP services served over HTTPS. Using the standalone mode, I found the library in use wasn’t correctly checking the certificate common name (CN). To run the test-suite, I registered a temporary DNS name (http://ipq.co/) to my public IP address and ran Certslayer with the following command:
 
C:\CertSlayer\CertSlayer>python CertSlayer.py -m standalone -p 4444 -i j22d1i.ipq.co
 
+ Setting up WebServer with Test: Trusted CA Invalid Signature
>> Hit enter for setting the next TestCase
 
The command initialized standalone mode listening on port 4444. The test certificates then used j22d1i.ipq.co as the CN. 

After this, I instructed the application to perform the request to my server:
 
POST /tools/soapconnector HTTP/1.1
Host: foo.com
Content-Type: application/json; charset=utf-8
X-Requested-With: XMLHttpRequest
Content-Length: 55
Cookie: –
Connection: close
 
 
{"version":"1.0","wsdl":"https://j22d1i.ipq.co:4444/"}
 
 
Server response:
{"status":500,"title":"Problem accessing WSDL description","detail":"We couldn't open a connection to the service (Describe URL: https://j22d1i.ipq.co:4444/). Reason: Signature does not match. Check the availability of the service and try again"}
 
 
The connection was refused. The server error described the reason why: the CA signature didn’t match. Hitting enter in the python console made the tool prepare the next test case:
 
+ Killing previous server
j22d1i.ipq.co,Trusted CA Invalid Signature,Certificate Rejected,Certificate Rejected
+ Setting up WebServer with Test: Signed with CertSlayer CA
>> Hit enter for setting the next TestCase
 

Here, the connection succeeded because the tool presented a valid certificate signed with Certslayer CA:

 

Server Response:
 
{"status":500,"detail":"We couldn't process the WSDL https://j22d1i.ipq.co:4444/. Verify the validity of the WSDL file and that it's available in the specified location."}
 
Certslayer output:
j22d1i.ipq.co,Signed with CertSlayer CA,Certificate Accepted,Certificate Accepted
xxx.yyy.zzz.www - - [14/Dec/2017 18:35:04] "GET / HTTP/1.1" 200 -
 
When the web server is configured with a certificate with a wrong CN, the expected result is that the client will abort the connection. However, this particular application accepted the certificate anyway:
+ Setting up WebServer with Test: Wrong CNAME
>> Hit enter for setting the next TestCase
xxx.yyy.zzz.www - - [14/Dec/2017 18:35:54] "GET / HTTP/1.1" 200 -
j22d1i.ipq.co,Wrong CNAME,Certificate Rejected,Certificate Accepted
 
 
As before, a csv file was generated containing all the test cases with the actual and expected results. For this particular engagement, the result was:
 

A similar tool, tslpretense, exists; the main difference is that, instead of using a proxy to intercept requests to targeted domains, it requires configuring the test runner as a gateway so that all traffic the client generates goes through it. Configuring a gateway host this way is tedious, which is the primary reason Certslayer was created.
 
I hope you find this tool useful during penetration testing engagements 🙂

 

RESEARCH | January 11, 2018

SCADA and Mobile Security in the IoT Era

Two years ago, we assessed 20 mobile applications that worked with ICS software and hardware. At that time, mobile technologies were widespread, but Internet of Things (IoT) mania was only starting. Our research concluded the combination of SCADA systems and mobile applications had the potential to be a very dangerous and vulnerable cocktail. In the introduction of our paper, we stated “convenience often wins over security. Nowadays, you can monitor (or even control!) your ICS from a brand-new Android [device].”


Today, no one is surprised by the appearance of the IIoT. The idea of putting your logging, monitoring, and even supervisory/control functions in the cloud does not sound as crazy as it did several years ago. If you look at mobile application offerings today, many more ICS-related applications are available than two years ago. Previously, we predicted that the “rapidly growing mobile development environment” would redeem the past sins of SCADA systems.
The purpose of our research is to understand how the landscape has evolved and assess the security posture of SCADA systems and mobile applications in this new IIoT era.

SCADA and Mobile Applications
ICS infrastructures are heterogeneous by nature. They include several layers, each of which is dedicated to specific tasks. Figure 1 illustrates a typical ICS structure.

Figure 1: Modern ICS infrastructure including mobile apps

Mobile applications reside in several ICS segments and can be grouped into two general families: Local (control room) and Remote.


Local Applications

Local applications are installed on devices that connect directly to ICS devices in the field or process layers (over Wi-Fi, Bluetooth, or serial).

Remote Applications
Remote applications allow engineers to connect to ICS servers using remote channels, like the Internet, VPN-over-Internet, and private cell networks. Typically, they only allow monitoring of the industrial process; however, several applications allow the user to control/supervise the process. Applications of this type include remote SCADA clients, MES clients, and remote alert applications. 

In comparison to local applications belonging to the control room group, which usually operate in an isolated environment, remote applications are often installed on smartphones that use Internet connections or even on personal devices in organizations that have a BYOD policy. In other words, remote applications are more exposed and face different threats.

Typical Threats and Attacks

In this section, we discuss the typical threats to this heterogeneous landscape of applications and how attacks could be conducted. We also map the threats to the application types.
 
Threat Types
There are three main possible ICS threat types:
  • Unauthorized physical access to the device or “virtual” access to device data
  • Communication channel compromise (MiTM)
  • Application compromise

Table 1 summarizes the threat types.

Table 1: SCADA mobile client threat list
 
Attack Types
Based on the threats listed above, attacks targeting mobile SCADA applications can be sorted into two groups.
 
Directly/indirectly influencing an industrial process or industrial network infrastructure
This type of attack could be carried out by sending data that would be carried over to the field segment devices. Various methods could be used to achieve this, including bypassing ACL/permission checks, accessing credentials with the required privileges, or bypassing data validation.
 
Compromising a SCADA operator to unwillingly perform a harmful action on the system
The core idea is for the attacker to create environmental circumstances where a SCADA system operator could make incorrect decisions and trigger alarms or otherwise bring the system into a halt state.
 
Testing Approach
Similar to the research we conducted two years ago, our analysis and testing approach was based on the OWASP Mobile Top 10 2016. Each application was tested using the following steps:
  • Perform analysis and fill out the test checklist
  • Perform client and backend fuzzing
  • If needed, perform deep analysis with reverse engineering
We did not alter the fuzzing approach since the last iteration of this research. It was discussed in depth in our previous whitepaper, so its description is omitted for brevity.
We improved our test checklist for this assessment. It includes:
  • Application purpose, type, category, and basic information 
  • Permissions
  • Password protection
  • Application intents, exported providers, broadcast services, etc.
  • Native code
  • Code obfuscation
  • Presence of web-based components
  • Methods of authentication used to communicate with the backend
  • Correctness of operations with sessions, cookies, and tokens 
  • SSL/TLS connection configuration
  • XML parser configuration
  • Backend APIs
  • Sensitive data handling
  • HMI project data handling
  • Secure storage
  • Other issues
Reviewed Vendors
We analyzed 34 vendors in our research, randomly selecting SCADA application samples from the Google Play Store. We did, however, favor applications for which we were granted access to the backend hardware or software, so that a wider attack surface could be tested.
 
Additionally, we excluded applications whose most recent update was before June 2015, since they were likely the subject of our previous work. We only retested them if there had been an update during the subsequent two years.
 
Findings
We identified 147 security issues in the applications and their backends. We classified each issue according to the OWASP Top Ten Mobile risks and added one additional category for backend software bugs.
 
Table 4 presents the distribution of findings across categories. The “Number of Issues” column reports the number of issues belonging to each category, while the “% of Apps” column reports how many applications have at least one vulnerability belonging to each category.
Table 4. Vulnerabilities statistics

In our white paper, we provide an in-depth analysis of each category, along with examples of the most significant vulnerabilities we identified. Please download the white paper for a deeper analysis of each of the OWASP category findings.

Remediation and Best Practices
In addition to the well-known recommendations covering the OWASP Top 10 and OWASP Mobile Top 10 2016 risks, there are several actions that could be taken by developers of mobile SCADA clients to further protect their applications and systems.

In the following list, we gathered the most important items to consider when developing a mobile SCADA application:

  • Always keep in mind that your application is a gateway to your ICS systems. This should influence all of your design decisions, including how you handle the inputs you will accept from the application and, more generally, anything that you will accept and send to your ICS system.
  • Avoid all situations that could leave the SCADA operators in the dark or provide them with misleading information, from silent application crashes to full subverting of HMI projects.
  • Follow best practices. Consider covering the OWASP Top 10, OWASP Mobile Top 10 2016, and the 24 Deadly Sins of Software Security.
  • Do not forget to implement unit and functional tests for your application and the backend servers, to cover at a minimum the basic security features, such as authentication and authorization requirements.
  • Enforce password/PIN validation to protect against threats U1-3. In addition, avoid storing any credentials on the device using unsafe mechanisms (such as in cleartext) and leverage robust and safe storing mechanisms already provided by the Android platform.
  • Avoid storing any sensitive data on SD cards or similar partitions without ACLs at all costs. Such storage media cannot protect your sensitive data.
  • Provide secrecy and integrity for all HMI project data. This can be achieved by using authenticated encryption and storing the encryption credentials in the secure Android storage, or by deriving the key securely, via a key derivation function (KDF), from the application password.
  • Encrypt all communication using strong protocols, such as TLS 1.2 with elliptic curves key exchange and signatures and AEAD encryption schemes. Follow best practices, and keep updating your application as best practices evolve. Attacks always get better, and so should your application.
  • Catch and handle exceptions carefully. If an error cannot be recovered, ensure the application notifies the user and quits gracefully. When logging exceptions, ensure no sensitive information is leaked to log files.
  • If you are using Web Components in the application, think about preventing client-side injections (e.g., encrypt all communications, validate user input, etc.).
  • Limit the permissions your application requires to the strict minimum.
  • Implement obfuscation and anti-tampering protections in your application.
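The storage recommendations above (a key derived from the application password via a KDF, plus authenticated encryption for HMI project data) can be sketched with standard primitives. This is a minimal, platform-neutral illustration, not the Android keystore APIs the bullets recommend:

```ruby
require 'openssl'

# Derive a 256-bit key from the user's password with PBKDF2.
# The salt must be random per install and stored alongside the ciphertext.
def derive_key(password, salt)
  OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 100_000, 32,
                             OpenSSL::Digest.new('SHA256'))
end

# Authenticated encryption (AES-256-GCM) provides both secrecy and integrity,
# so tampered HMI project data fails to decrypt instead of silently loading.
def encrypt_project(plaintext, key)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
  cipher.key = key
  iv = cipher.random_iv
  ciphertext = cipher.update(plaintext) + cipher.final
  [iv, cipher.auth_tag, ciphertext]
end

def decrypt_project(iv, tag, ciphertext, key)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag
  cipher.update(ciphertext) + cipher.final  # raises if tag doesn't verify
end
```

On Android, the derived key (or a random key wrapped by it) would live in the platform's secure storage rather than in application memory longer than necessary.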

Conclusions
Two years have passed since our previous research, and things have continued to evolve. Unfortunately, they have not evolved with robust security in mind, and the landscape is less secure than ever before. In 2015, we found a total of 50 issues in the 20 applications we analyzed; in 2017, we found a staggering 147 issues in the 34 applications we selected. This represents an increase in the average from 2.5 to roughly 4.3 vulnerabilities per application.

We therefore conclude that the growth of IoT in the era of “everything is connected” has not led to improved security for mobile SCADA applications. According to our results, more than 20% of the discovered issues allow attackers to directly misinform operators and/or to influence the industrial process, directly or indirectly.

In 2015, we wrote:

SCADA and ICS come to the mobile world recently, but bring old approaches and weaknesses. Hopefully, due to the rapidly developing nature of mobile software, all these problems will soon be gone.

We now concede that we were too optimistic and acknowledge that our previous statement was wrong.

Over the past few years, the number of incidents affecting SCADA systems has increased, and these systems become more attractive to attackers every year. Furthermore, the widespread implementation of IoT/IIoT connects more and more mobile devices to ICS networks.

Thus, the industry should start to pay attention to the security posture of its SCADA mobile applications, before it is too late.

For the complete analysis, please download our white paper here.

Acknowledgments

Many thanks to Dmitriy Evdokimov, Gabriel Gonzalez, Pau Oliva, Alfredo Pironti, Ruben Santamarta, and Tao Sauvage for their help during our work on this research.
 
About Us
Alexander Bolshev
Alexander Bolshev is a Security Consultant for IOActive. He holds a Ph.D. in computer security and works as an assistant professor at Saint-Petersburg State Electrotechnical University. His research interests lie in distributed systems, as well as mobile, hardware, and industrial protocol security. He is the author of several white papers on topics of heuristic intrusion detection methods, Server Side Request Forgery attacks, OLAP systems, and ICS security. He is a frequent presenter at security conferences around the world, including Black Hat USA/EU/UK, ZeroNights, t2.fi, CONFIdence, and S4.
 
Ivan Yushkevich
Ivan is the information security auditor at Embedi (http://embedi.com). His main area of interest is source code analysis for applications ranging from simple websites to enterprise software. He has vast experience in banking systems and web application penetration testing.
 
IOActive
IOActive is a comprehensive, high-end information security services firm with a long and established pedigree in delivering elite security services to its customers. Our world-renowned consulting and research teams deliver a portfolio of specialist security services ranging from penetration testing and application code assessment through to semiconductor reverse engineering. Global 500 companies across every industry continue to trust IOActive with their most critical and sensitive security issues. Founded in 1998, IOActive is headquartered in Seattle, USA, with global operations through the Americas, EMEA and Asia Pac regions. Visit for more information. Read the IOActive Labs Research Blog. Follow IOActive on Twitter.
 
Embedi
Embedi’s expertise is backed by extensive experience in the security of embedded devices, with special emphasis on attack and exploit prevention. Years of research are the genesis of the software solutions created. Embedi has developed a wide range of security products for various types of embedded/smart devices used in different fields of life and industry, such as wearables, smart homes, retail environments, automotive, smart buildings, ICS, smart cities, and others. Embedi is headquartered in Berkeley, USA. Visit for more information and follow Embedi on Twitter.
RESEARCH | November 21, 2017

Hidden Exploitable Behaviors in Programming Languages

On February 28th, 2015, Egor Homakov wrote an article[1] exposing the dangers of Ruby’s open() function. The function is commonly used to request URLs programmatically with the open-uri library; however, instead of requesting a URL, you may end up executing operating system commands.


Consider the following Ruby script named open-uri.rb:

require 'open-uri'
print open(ARGV[0]).read

The following command requests a web page:
# ruby open-uri.rb "https://ioactive.com"

 

And the following output is shown:

<!DOCTYPE HTML>
<!--[if lt IE 9]><html class="ie"><![endif]-->
<!--[if !IE]><!--><html><!--<![endif]--><head>
                <meta charset="UTF-8">
                <title>IOActive is the global leader in cybersecurity, penetration testing, and computer services</title>
[…SNIP…]

 

Works as expected, right? Still, this command may be used to open a process, allowing any command to be executed. If a similar Ruby script is used in a web application with weak input validation, an attacker could simply send a pipe and their favorite OS command in a remote request:

|head /etc/passwd

 

And the following output could be shown:

root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
[…SNIP…]

 

The difference between the two executions is just the pipe character at the beginning. The pipe causes Ruby to stop using the open-uri function and instead use the native open() functionality, which allows command execution.[2]
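A defensive pattern here is to parse and validate the input as a URL before any open() call; with open-uri loaded, `URI#open` only ever performs an HTTP request and does not fall back to Kernel#open’s pipe behavior. A minimal sketch:

```ruby
require 'open-uri'
require 'uri'

# Parse and validate the input before opening it: strings such as
# "|head /etc/passwd" are rejected instead of spawning a process.
def safe_fetch(url)
  uri = URI.parse(url)  # raises URI::InvalidURIError on malformed input
  raise ArgumentError, 'only HTTP(S) URLs are allowed' unless uri.is_a?(URI::HTTP)
  uri.open(&:read)      # URI#open never falls back to Kernel#open
end
```

With this wrapper, `safe_fetch('|head /etc/passwd')` raises instead of executing the command.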

 
Applications may be susceptible to unpredictable security issues when using certain features of programming languages. A number of features in different implementations can be abused in ways that affect otherwise secure applications, and unexpected scenarios arise in the interpreters parsing code in JavaScript, Perl, PHP, Python, and Ruby.
 
Come and join me at Black Hat Europe[3] to learn more about these risky features and how to detect them. An open source extended differential fuzzing framework will be released to aid in the disclosure of these types of behaviors.
[1] Using open-uri? Check your code – you’re playing with fire! (https://sakurity.com/blog/2015/02/28/openuri.html)
 
[2] A dozen (or so) ways to start sub-processes in Ruby: Method #4: Opening a pipe (https://devver.wordpress.com/2009/07/13/a-dozen-or-so-ways-to-start-sub-processes-in-ruby-part-2/)
[3] Exposing Hidden Exploitable Behaviors in Programming Languages Using Differential Fuzzing (https://www.blackhat.com/eu-17/briefings.html#exposing-hidden-exploitable-behaviors-in-programming-languages-using-differential-fuzzing)
RESEARCH | October 26, 2017

AmosConnect: Maritime Communications Security Has Its Flaws

Satellite communications security has been a target of our research for some time: in 2014 IOActive released a document detailing many vulnerabilities in popular SATCOM systems. Since then we’ve had the opportunity to dive deeper in this area, and learned a lot more about some of the environments in which these systems are in place.

Recently, we saw that Shodan released a new tool that tracks the location of VSAT systems exposed to the Internet. These systems are typically installed in vessels to provide them with internet connectivity while at open sea.


The maritime sector makes use of some of these systems to track and monitor ships’ IT and navigation systems as well as to aid crew members in some of their daily duties, providing them with e-mail or the ability to browse the Internet. Modern vessels don’t differ that much from your typical office these days, aside from the fact that they might be floating in a remote location.

Satellite connectivity is an expensive resource. In order to minimize its cost, several products exist to perform optimizations around the compression of data while in transit. One of the products that caught our eye was AmosConnect.

AmosConnect 8 is a platform designed to work in a maritime environment in conjunction with satellite equipment, providing services such as: 

  • E-mail
  • Instant messaging
  • Position reporting
  • Crew Internet
  • Automatic file transfer
  • Application integration


We have identified two critical vulnerabilities in this software that allow pre-authenticated attackers to fully compromise an AmosConnect server. We have reported these vulnerabilities but there is no fix for them, as Inmarsat has discontinued AmosConnect 8, announcing its end-of-life in June 2017. The original advisory is available here, and this blog post will also discuss some of the technical details.

 

Blind SQL Injection in Login Form
A Blind SQL Injection vulnerability is present in the login form, allowing unauthenticated attackers to gain access to credentials stored in its internal database. The server stores usernames and passwords in plaintext, making this vulnerability trivial to exploit.

The following POST request is sent when a user tries to log into AmosConnect:


The parameter data[MailUser][emailAddress] is vulnerable to Blind SQL Injection, enabling data retrieval from the backend SQLite database using time-based attacks.

Attackers that successfully exploit this vulnerability can retrieve credentials to log into the service by executing the following queries:

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.password';

SELECT key, value FROM MOBILE_PROPS WHERE key LIKE 'USER.%.email_address';

The authentication method is implemented in mail_user.php:

The call to findByEmail() instantiates a COM object that is implemented in native C++ code.

 

The following C++ native methods are invoked upon execution of the call:
     Neptune::UserManager::User::findByEmail(...)
          Neptune::ConfigManager::Property::findBy(...)
               Neptune::ConfigManager::Property::findAllBy(...)
The vulnerable code is implemented in Neptune::ConfigManager::Property::findAllBy() as seen below:

 

 

Strings are concatenated in an insecure manner, building a SQL query that in this case would look like:
"[…] deleted = 0 AND key like 'USER.%.email_address' AND upper(value) like '{email}'"
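The fix for this class of bug is parameterized queries: the user-supplied value is bound as data and can never change the query structure. A minimal sketch of the vulnerable pattern versus the safe one (the commented `db.execute` call assumes the common SQLite binding API and is illustrative, not AmosConnect’s actual code):

```ruby
# Vulnerable: attacker input is concatenated into the SQL text,
# mirroring the string building shown above.
def find_user_vulnerable(email)
  "SELECT key, value FROM MOBILE_PROPS " \
  "WHERE deleted = 0 AND key LIKE 'USER.%.email_address' " \
  "AND upper(value) LIKE '#{email}'"
end

# A single quote in the input escapes the string literal:
payload = "x' OR 1=1 --"
find_user_vulnerable(payload)
# ...now matches every row, and time-based payloads enable blind extraction.

# Safe: the value travels as a bound parameter ('?'), so quotes in it
# remain data. (Illustrative sqlite3-style API.)
# db.execute("SELECT key, value FROM MOBILE_PROPS " \
#            "WHERE deleted = 0 AND key LIKE 'USER.%.email_address' " \
#            "AND upper(value) LIKE ?", [email])
```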

 

Privileged Backdoor Account

The AmosConnect server features a built-in backdoor account with full system privileges. Among other things, this vulnerability allows attackers to execute commands with SYSTEM privileges on the remote system by abusing AmosConnect Task Manager.
 
Users accessing the AmosConnect server see the following login screen:
 
The login website reveals the Post Office ID; this ID identifies the AmosConnect server and is tied to the software license.
 
The following code belongs to the authentication method implemented in mail_user.php. Note the call to authenticateBackdoorUser():
 
authenticateBackdoorUser() is implemented as follows:
 
 
The following code snippet shows how an attacker can obtain the SysAdmin password for a given Post Office ID:
Conclusions and thoughts
Vessel networks are typically segmented and isolated from each other, in part for security reasons. A typical vessel network configuration might feature some of the following subnets:
  • Navigation systems network. Some of the most recent vessels feature “sail-by-wire” technologies; the systems in charge of providing this technology are located in this network.
  • Industrial Control Systems (ICS) network. Vessels contain a lot of industrial machinery that can be remotely monitored and operated. Some vessels feature a dedicated network for these systems; in some configurations, the ICS and Navigation networks may actually be the same.
  • IT systems network. Vessels typically feature a separate network to support office applications. IT servers and crew members’ work computers are connected to this network; it’s within this network that AmosConnect is typically deployed.
  • Bring-Your-Own-Device networks. Vessels may feature a separate network to provide internet connectivity to guests or crew members’ personal devices.
  • SATCOM. While this may change from vessel to vessel, some use a separate subnet to host satellite communications equipment.
While the vulnerabilities discussed in this blog post may only be exploited by an attacker with access to the IT systems network, it’s important to note that within certain vessel configurations some networks might not be segmented, or AmosConnect might be exposed to one or more of these networks. A typical scenario would make AmosConnect available to both the BYOD (“guest”) and IT networks; one can easily see how these vulnerabilities could be exploited by a local attacker to pivot from the guest network to the IT network. Also, some of the vulnerabilities uncovered during our SATCOM research might enable attackers to access these systems via the satellite link.
 
All in all, these vulnerabilities pose a serious security risk. Attackers might be able to obtain corporate data, take over the server to mount further attacks or pivot within the vessel networks.
References:
https://shiptracker.shodan.io/
INSIGHTS | October 23, 2017

Embedding Defense in Server-side Applications

Applications always contain security flaws, which is why we rely on multiple layers of defense. Yet even with exhaustive testing and layered defenses, applications still struggle to repel attacks. Perhaps we should rethink our approach to application defense, with the goal of introducing defensive methods that cause attackers to give up, or induce them to take incorrect actions based on false premises.

 

There are a variety of products that provide valuable resources when basic, off-the-shelf protection is required or the application source code is not available. However, implementations in native code provide a greater degree of defense that cannot be achieved using third-party solutions. This document aims to:
  • Enumerate a comprehensive list of current and new defensive techniques.
    Multiple defensive techniques have been disclosed in books, papers and tools. This specification collects those techniques and presents new defensive options to fill the opportunity gap that remains open to attackers.

 

  • Enumerate methods to identify attack tools before they can access functionalities.
    The tools used by attackers identify themselves in several ways. Multiple features of a request can disclose that it is not from a normal user; a tool may abuse security flaws in ways that can help to detect the type of tool an attacker is using, and developers can prepare diversions and traps in advance that the specific tool would trigger automatically.

 

  • Disclose how to detect attacker techniques within code.
    Certain techniques can identify attacks within the code. Developers may know in advance that certain conditions will only be triggered by attackers, and they can also be prepared for certain unexpected scenarios.

 

  • Provide a new defensive approach.
    Server-side defense is normally about blocking malicious IP addresses associated with attackers; however, an attacker’s focus can be diverted or modified. Sometimes certain functionalities may be presented to attackers only to better prosecute them if those functionalities are triggered.

 

  • Provide these protections for multiple programming languages.
    This document will use pseudo code to explain functionalities that can reduce the effectiveness of attackers and expose their actions, and working proof-of-concept code will be released for four programming languages: Java, Python, .NET, and PHP.
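One simple instance of identifying attack tools before they reach real functionality is a honeypot form field: it is hidden from human users via CSS, but automated tools that fill every input will populate it. A hedged sketch of the server-side check (the field name is illustrative, not from the specification):

```ruby
# A form field that is rendered but hidden, so real browsers submit it empty.
HONEYPOT_FIELD = 'contact_me_by_fax'  # illustrative name

# Any non-empty value means the request came from a tool that auto-filled
# every input rather than from a human using the rendered page.
def automated_submission?(params)
  !params.fetch(HONEYPOT_FIELD, '').strip.empty?
end
```

A request flagged this way can be diverted to a trap or decoy response rather than simply blocked, in line with the approach described above.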
Fernando Arnaboldi appeared at Ruxcon to present defensive techniques that can be embedded in server-side applications. The techniques described are included in a specification, along with sample implementations for multiple programming languages.
RESEARCH | September 26, 2017

Are You Trading Securely? Insights into the (In)Security of Mobile Trading Apps

The days of open shouting on the trading floors of the NYSE, NASDAQ, and other stock exchanges around the globe are gone. With the advent of electronic trading platforms and networks, the exchange of financial securities now is easier and faster than ever; but this comes with inherent risks.

From the beginning, bad actors have also joined Wall Street’s party, developing clever models for fraudulent gains. Their efforts have included everything from fictitious brokerage firms that ended up being Ponzi schemes[1] to organized cells performing Pump-and-Dump scams.[2] (Pump: buy cheap shares and inflate the price through sketchy financials and misleading statements to the marketplace through spam, social media and other technological means; Dump: once the price is high, sell the shares and collect a profit).

When it comes to financial cybersecurity, it’s worth noting how banking systems are organized when compared to global exchange markets. In banking systems, the information is centralized into one single financial entity; there is one point of failure rather than many, which makes them more vulnerable to cyberattacks.[3] In contrast, global exchange markets are distributed; records of who owns what, who sold/bought what, and to whom, are not stored in a single place, but many. Like matter and energy, stocks and other securities cannot be created from the void (e.g. a modified database record within a financial entity). Once issued, they can only be exchanged from one entity to another. That said, the valuable information as well as the attack surface and vectors in trading environments are slightly different than those in banking systems.

 
Picture taken from http://business.nasdaq.com/list/
 

Over the years I’ve used the desktop and web platforms offered by banks in my country with limited visibility of available trade instruments. Today, accessing global capital markets is as easy as opening a Facebook account through online brokerage firms. This is how I gained access to a wider financial market, including US-listed companies. Anyone can buy and sell a wide range of financial instruments on the secondary market (e.g. stocks, ETFs, etc.), derivatives market (e.g. options, binary options, contracts for difference, etc.), forex markets, or the avant-garde cryptocurrency markets.

Most banks with investment solutions and the aforementioned brokerage houses offer mobile platforms to operate in the market. These apps allow you to do things including, but not limited to:

  • Fund your account via bank transfers or credit card
  • Keep track of your available equity and buying power (cash and margin balances)
  • Monitor your positions (securities you own) and their performance (profit)
  • Monitor instruments or indexes
  • Give buy/sell orders
  • Create alerts or triggers to be executed when certain thresholds are reached
  • Receive real-time news or video broadcasts
  • Stay in touch with the trading community through social media and chats

Needless to say, whether you’re a speculator, a very active intra-day trader, or simply someone who likes to follow long-term buy-and-hold strategies, every single item on the previous list must be kept secret and only known by and shown to its owner.


 

Four months ago, while using my trading app, I asked myself, “with the huge amount of money transacted in the money market, how secure are these mobile apps?” So, there I was, one minute later, starting this research to expose cybersecurity and privacy weaknesses in some of these apps.

Before I pass along my results, I’d like to share the interesting and controversial moral of the story: the app developed by a brokerage firm that suffered a data breach many years ago proved to be the most secure one.

Scope

My analysis encompassed the latest version of 21 of the most used and well-known mobile trading apps available on the Apple Store and Google Play. Testing focused only on the mobile apps; desktop and web platforms were not tested. While I discovered some security vulnerabilities in the backend servers, I did not include them in this article.

Devices:

  • iOS 10.3.3 (iPhone 6) [not jailbroken]
  • Android 7.1.1 (Emulator) [rooted]

I tested 14 security controls, which represent just the tip of the iceberg when compared to an exhaustive list of security checks for mobile apps. This may give you a better picture of the current state of these apps’ security. It’s worth noting that I could not test all of the controls in some of the apps, either because a feature was not implemented (e.g. social chats), or it was not technically feasible (e.g. SSL pinning that wouldn’t allow data manipulation), or simply because I could not open an account.

Results

Unfortunately, the results proved to be much worse than those for personal banking apps in 2013 and 2015.[4] [5] Cybersecurity has not been on the radar of the FinTech space in charge of developing trading apps. Security researchers have disregarded these apps as well, probably because of a lack of understanding of money markets.

The issues I found in the tested controls are grouped in the following sections. Logos and technical details that mention the name of brokerage institutions were removed from the screenshots, logs, and reverse engineered code to prevent any negative impacts to their customers or reputation.

Cleartext Passwords Exposed

In four apps (19%), the user’s password was sent in cleartext either to an unencrypted XML configuration file or to the logging console. Physical access to the device is required to extract them, though.

In a hypothetical attack scenario, a malicious user could extract a password from the file system or the logging functionality without any in-depth know-how (it’s relatively easy), log in through any other trading platform from the brokerage firm, and perform unauthorized actions. They could sell stocks, transfer the money to a newly added bank account, and delete this bank account after the transfer is complete. During testing, I noticed that most of the apps require only the current password to link banking accounts and do not have two-factor authentication (2FA) implemented; therefore, no authorization one-time password (OTP) is sent to the user’s phone or email.

In two apps, like the following one, in addition to logging the username and password, authentication takes place through an unencrypted HTTP channel:
In another app, the new password was exposed in the logging console when a user changes the password:

Trading and Account Information Exposed

In the trading context, operational or strategic data must not be sent unencrypted to the logging console nor any log file. This sensitive data encompasses values such as personal data, general balances, cash balance, margin balance, net worth, net liquidity, the number of positions, recently quoted symbols, watchlists, buy/sell orders, alerts, equity, buying power, and deposits. Additionally, sensitive technical values such as username, password, session ID, URLs, and cryptographic tokens should not be exposed either.
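A cheap mitigation is a redaction filter applied to structured data before anything reaches a log sink. A minimal sketch (the key list is illustrative, not exhaustive; a real filter would cover all of the fields named above):

```ruby
# Keys whose values must never appear in logs (illustrative subset).
SENSITIVE_KEYS = /\A(password|session_id|token|balance|net_worth)\z/i

# Replace sensitive values before the hash is serialized to any log line.
def redact(params)
  params.map { |k, v| [k, k.to_s.match?(SENSITIVE_KEYS) ? '[REDACTED]' : v] }.to_h
end

redact('user' => 'alice', 'password' => 'hunter2')
# => {"user"=>"alice", "password"=>"[REDACTED]"}
```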

62% of the apps sent sensitive data to log files, and 67% stored it unencrypted. Physical access to the device is required to extract this data.

If these values are somehow leaked, a malicious user could gain insight into users’ net worth and investing strategy by knowing which instruments users have been looking for recently, as well as their balances, positions, watchlists, buying power, etc.

Imagine a hypothetical scenario where a high-profile, sophisticated investor loses his phone and the trading app he has been using stores his “Potential Investments” watchlist in cleartext. If the extracted watchlist ends up in the hands of someone who wants to mimic this investor’s strategy, they could buy stocks prior to a price increase. In the worst case, imagine a “Net Worth” figure landing in the wrong hands, say kidnappers, who now know how generous ransom could be.

Balances and investment portfolio leaked in logs:

 
Buy and sell orders leaked in detail in the logging console:
Personal information stored in configuration files:
 
“Potential Investments” and “Will Buy Later” watchlists leaked in the logs console:
 
“Favorites” watchlists leaked in the logs too:
 
Quoted tickers leaked:
 
Symbol quoted dumped in detail in the console:
 
Quoted instruments saved in a local SQLite database:
Account number and balances leaked:

Insecure Communication

Two apps use unencrypted HTTP channels to transmit and receive all data, and 13 of the 19 apps that use HTTPS do not check the authenticity of the remote endpoint by verifying its SSL certificate (SSL pinning); therefore, it’s feasible to perform Man-in-the-Middle (MiTM) attacks to eavesdrop on and tamper with data. Some MiTM attacks require tricking the user into installing a malicious certificate on the mobile device.
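Pinning means comparing the server’s key against a value shipped with the app, in addition to normal chain validation, so a rogue CA-signed certificate is still rejected. A minimal, platform-neutral sketch of just the comparison step (the pin would be embedded at build time; the helper is illustrative):

```ruby
require 'openssl'
require 'digest'

# Returns true only if the presented certificate's public key matches
# the SHA-256 pin the application ships with.
def key_pinned?(cert, pinned_sha256)
  Digest::SHA256.hexdigest(cert.public_key.to_der) == pinned_sha256
end
```

In a real client this check runs inside the TLS verification callback, alongside, never instead of, standard certificate validation.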

Under certain circumstances, an attacker with access to some part of the network, such as the router in a public Wi-Fi, could see and modify information transmitted to and from the mobile app. In the trading context, a malicious actor could intercept and alter values, such as the bid or ask prices of an instrument, and cause a user to buy or sell securities based on misleading information.

For instance, the following app uses an insecure channel for communication by default; an inexperienced user who does not know the meaning of “SSL” (Secure Sockets Layer) won’t enable it on the login screen, and all sensitive data will be sent and received in cleartext, without encryption:

One single app was found to send a log file with sensitive trading data to the server on a scheduled basis over an unencrypted HTTP channel.

Some apps transmit non-sensitive data (e.g. public news or live financial TV broadcasts) through insecure channels, which does not seem to represent a risk to the user.

Authentication and Session Management

Nowadays, most modern smartphones support fingerprint-reading, and most trading apps use it to authenticate their customers. Only five apps (24%) do not implement this feature.

Unfortunately, using the fingerprint database in the phone has a downside:

 
Moreover, after clicking the logout button, sessions were still valid on the server side in two apps. Also, another couple of apps enforced lax password policies:

Privacy Mode

One single trading app (look for “the moral of the story” earlier in this article) supports “Privacy Mode,” which protects the customers’ private information displayed on the screen in public areas where shoulder-surfing[6] attacks are feasible. The rest of the apps do not implement this useful and important feature.

However, there’s a small bug in this unique implementation: every sensitive figure is masked except in the “Positions” tab where the “Net Liquidity” column and the “Overall Totals” are fully visible:

It’s worth noting that not only balances, positions, and other sensitive values in the trading context should be masked, but also credit card information when entered to fund the account:

Client-side Data Validation

In most, but not all, of the apps that don’t check SSL certificates, it’s possible to perform MiTM attacks and inject malicious JavaScript or HTML code in the server responses. Since the Web Views in ten apps are configured to execute JavaScript code, it’s possible to trigger common Cross-site Scripting (XSS) attacks.
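Independently of transport security, Web Views should treat server responses as untrusted and encode them before rendering. Ruby’s standard CGI.escapeHTML illustrates the output-encoding idea (the mobile apps themselves would use their platform’s equivalent):

```ruby
require 'cgi'

# HTML-encoding the payload neutralizes it: the browser renders text
# instead of executing a script tag.
payload = '<script>alert(document.cookie);</script>'
CGI.escapeHTML(payload)
# => "&lt;script&gt;alert(document.cookie);&lt;/script&gt;"
```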

XSS triggered in two different apps (<script>alert(document.cookie);</script>):

Fake HTML forms injected to deceive the user and steal credentials:

Root Detection

Many Android apps do not run on rooted devices for security reasons. On a rooted phone or emulator, the user has full control of the system, thus, access to files, databases, and logs is complete. Once a user has full access to these elements, it’s easy to extract valuable information.
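Root checks on Android typically combine several signals; the most common is probing for a reachable `su` binary. A hedged, language-neutral sketch of that one signal (paths are the commonly checked locations, and real apps add further heuristics):

```ruby
# Common locations of the `su` binary on rooted Android devices.
SU_PATHS = %w[/system/bin/su /system/xbin/su /sbin/su /su/bin/su]

# True if any known su location exists; one signal among several
# a production root-detection routine would combine.
def likely_rooted?
  SU_PATHS.any? { |path| File.exist?(path) }
end
```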

20 of the apps (95%) do not detect rooted environments. The single app (look for “the moral of the story” earlier in this article) that does detect it simply shows a warning message; it allows the user to keep using the platform normally:

Hardcoded Secrets in Code and App Obfuscation

Six Android app installers (.apk files) were easily reverse engineered into human-readable code. The rest had medium to high levels of obfuscation, like the one shown below. The goal of obfuscation is to conceal the application’s purpose and logic (security through obscurity) in order to deter and complicate reverse engineering.

 
In the non-obfuscated apps, I found secrets such as cryptographic keys and third-party service partner passwords. This could allow unauthorized access to other systems that are not under the control of the brokerage houses. For example, a Morningstar.com account (investment research) hardcoded in a Java class:
Interestingly, ten of the apps (47%) have traces (internal hostnames and IPs) about the internal development and testing environments where they were made or tested:

Other Weaknesses

The following trust issue grabbed my attention: a URL with my username (email) and first name passed as parameters was leaked to the logging console. This URL is opened to talk with, apparently, a chatbot inside the mobile app, but if you grab this URL and open it in a common web browser, the chatbot takes your identity from the supplied parameters and trusts you as a logged-in user. From there, you can ask for details about your account. As you can see, all you need to retrieve someone else’s private data is their email address and name:

 

I haven’t had enough time to fully test this feature, but so far, I was able to extract balances and personal information.

Statistics

Since a picture is worth a thousand words, consider the following graphs:

Responsible Disclosure

One of IOActive’s missions is to act responsibly when it comes to vulnerability disclosure; thus, between September 6th and 8th, we sent a detailed report to 13 of the brokerage firms whose trading apps presented some of the higher-risk vulnerabilities discussed in this article.

To date, only two brokerage firms have replied to our email.

Regulators and Rating Organizations

Digging in some US regulators’ websites,[7] [8] [9] I noticed that they are already aware of the cybersecurity threats that might negatively impact financial markets and stakeholders. Most of the published content focuses on general threats that could impact end-users or institutions such as phishing, identity theft, antivirus software, social media risks, privacy, and procedures to follow in case of cybersecurity incidents, such as data breaches or disruptive attacks.

Nevertheless, I did not find any documentation related to the security risks of electronic trading nor any recommended guidance for secure software development to educate brokers and FinTech companies on how to create quality products.

Picture taken from http://www.reuters.com/article/net-us-internet-lending/for-online-lenders-wall-street-cash-brings-growth-and-risk-idUSBRE96204I20130703
 

In addition, there are rating organizations that score online brokers on a scale of 1 to 5 stars. I glanced at two recent reports [10] [11] and didn’t find anything related to security or privacy in their reviews. Nowadays, with the frequent cyberattacks in the financial industry, I think these organizations should give accolades to, or at least mention, the security mechanisms the evaluated trading platforms implement. Security controls should equal a competitive advantage.

Conclusions and Recommendations

  • There’s still a long way to go to improve the maturity level of security in mobile trading apps.
  • Desktop and web platforms should also be tested and improved.
  • Regulators should encourage brokers to implement safeguards for a better trading environment.
  • In addition to the generic IT best practices for secure software development, regulators should develop trading-specific guidelines to be followed by the brokerage firms and FinTech companies in charge of creating trading software.
  • Brokerage firms should perform regular internal audits to continuously improve the security posture of their trading platforms.
  • Developers should analyze their apps to determine if they suffer from the vulnerabilities I have described in this post, and if so, fix them.
  • Developers should design new, more secure financial software following secure coding practices.
  • End users should enable all of the security mechanisms their apps offer.

Side Thought

Remember: the stock market is not a casino where you magically get rich overnight. If you lack an understanding of how stocks or other financial instruments work, there is a high risk of losing money quickly. You must understand the market and its purpose before investing.

With nothing left to say, I wish you happy and secure trading!

 
Thanks for reading,
Alejandro
References
[1] Ponzi scheme
[2] “Pump-and-Dumps” and Market Manipulations
[3] Practical Examples Of How Blockchains Are Used In Banking And The Financial Services Sector
[4] Personal banking apps leak info through phone
[5] (In)secure iOS Mobile Banking Apps – 2015 Edition
[6] Shoulder surfing (computer security)
[7] Financial Industry Regulatory Authority: Cybersecurity
[8] Securities Industry and Financial Markets Association: Cybersecurity
[9] U.S. Securities and Exchange Commission: Cybersecurity, the SEC and You
[10] NerdWallet: Best Online Brokers for Stock Trading 2017
[11] StockBrokers: 2017 Online Broker Rankings