Common Ground Part 1: Red Team History & Overview

Over the past ten years, red teaming has grown in popularity and has been adopted across different industries as a mature method of assessing an organization’s ability to handle challenges. With its widespread adoption, the term “red team” has come to mean different things to different people depending on their professional background. This is part one of a three-part blog series in which I will break down and inspect red teaming. In this post, I will address what I believe red teaming is, how it applies to the infosec industry, how it differs from other technical assessments, and the realistic limitations of these types of engagements. In part two, I will discuss topics important to planning a red team engagement, including organizational fit, threat models, training objectives, and assessment “events.” Finally, in part three, I will discuss red team execution, focusing on the human and strategic factors rather than the technical aspects. That post will cover how network red teams supplement their technical testing by identifying human and procedural weaknesses, such as bias and process deficiencies inside the target’s incident response cycle, ranging from the technical responder up through the CIO.

Many thanks to my current team in ATD and those in the industry who continue to share in this discussion with me. I learn new stuff from all of you every day and I love that as a red team community, we want to continue honing our tradecraft.

Disclaimer: Before going too much further, it is obvious that this is a contentious topic. My viewpoints are derived from experience in the military planning and executing operations, followed by several years in the commercial industry helping to build and train industry-leading red teams; however, I am constantly learning and by no means think I have all of the answers. If you disagree with any points I discuss in these posts, I respect your viewpoint and the experience behind it, but we might have to agree to disagree. In the end, the importance of red teaming isn’t about a single person’s or even a group’s philosophy; rather, it’s about best preparing organizations to handle challenges as they arise. If your methodology suits your organization’s needs, I applaud it!

The Roots

History Lessons

It was also around this time that wargames began to use previously fought battles to re-enact combat and examine various outcomes, studying what could have happened differently. With the redesign, Kriegspiel emphasized the themes that are drawn out and applied in modern exercises: realism, multiple levels of simulation, study of adversary techniques during debriefs, and improvisation in decision making.

Skipping forward to the early 1920s, Germany continued to lead the strategic planning and wargaming initiative. A German scholar, Joachim von Stulpnagel, released a study titled “Thoughts on Future War” (translated), which was delivered to a number of leading officers in early Weimar Germany. This study attempted to analyze the possible outcomes of an upcoming war and theorized about the successes of the different parties. Many other examples of analysis like this were conducted around the same time, and they serve as some of the earliest examples of alternative analysis being used to plan for future conflict. The study followed constructs and analytical phases similar to those of modern planning, and it serves as a great example of the origin of debating different outcomes to proposed scenarios.

During the World Wars, the successes and failures of German wargaming were observed worldwide, and most world powers incorporated the practice into their planning efforts. This continued throughout the Second World War and into the modern era. Wargames allowed commanders to make rapid decisions in simulated combat and measure the outcomes. Those outcomes were used in debriefings to inspect what could have gone better or worse, encouraging reflection and mature decision making.


Evolution and Formality

Davis’ article was one of the first public accounts of an “official” red team exercise used to test decision making. Presumably unknown to the majority of the US public due to their sensitivity and continued secrecy (I am guessing here), many simulations were carried out by the DoD, the intelligence community (IC), and other government agencies in preparation for crisis situations that could have developed during the Cold War. The outcomes of these simulations were likely provided to commanders, up to and including the President, in order to refine potential plans. While I have not read evidence of it, I am confident that the same types of simulations were being used across the globe to prepare decision makers.

Entering the 21st century, red teaming became more mainstream and grew into a formal, structured method of analysis. It was formally adopted by the US military to plan for future events, by commercial industry to debate courses of action, and widely by the intelligence community to test intelligence estimates. The expansive reach of red team testing in the modern era is far outside the scope of this blog.

For more information about red teaming and its boom across industries, I would highly recommend the book Red Team by Micah Zenko.

So What Is It?

US joint military doctrine (JP1–16) defines a red team as:

“… a decision support element that provides independent capability to fully explore alternatives in plans, operations, and intelligence analysis”.

Further, JP1–16 specifically calls out adversary emulation as an element, stating:

“The primary red team role is to reduce risk by helping the staff understand how the other actor might perceive and respond to friendly actions”.

A shorter definition often cited across the industry is:

“The practice of viewing a problem from the adversary or competitor’s perspective”

Although short, this definition implies many of the same factors as the joint publication: alternative analysis, adversary emulation, independent review, etc.

How do I define red teaming? Well, first and foremost, I’d like to acknowledge that red teaming is a subset of a broader discipline known as “alternative analysis”. It is differentiated from other analytical techniques by its use of an adversarial perspective. For more information on different analytical techniques, the US Government released a paper titled A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis.

If I had to formally sum up my thoughts, I would define red teaming as:

The emulation of adversarial tactics, techniques, and procedures (TTPs) to test plans and systems the way they may actually be defeated by aggressors, in order to challenge those plans and improve decision-making processes.

Industry-Specific Definitions

Technical Assessment Differentiators

  • Vulnerability Assessment — the intentional discovery and validation of vulnerabilities inside an environment. Most organizations now have an in-house function for this, and if they don’t, they definitely should.
  • Penetration Testing — the process of testing a system (network, application, host) for the ability to gain access and demonstrate impact. It usually includes some component of vulnerability discovery but goes on to measure the impact of successful exploitation. I see these assessments as focused on technical subject matter: vulnerabilities, exploits, and data.
  • Social Engineering Assessment — a targeted assessment built to measure the susceptibility of individuals within an organization to coercion from outside entities through social manipulation. This might include cold calling (pretext calling) or spear phishing. It may also be a subset of a larger assessment (penetration test, physical assessment, red team, etc.).
  • Physical Penetration Testing or Physical Security Assessment — a targeted assessment measuring an organization’s susceptibility to a physical penetration or breach. Physical testing objectives might include gaining access to a certain area, bypassing specific controls, or stealing a certain asset. It may also be a subset of a larger assessment.

Network Red Teaming Overview

Similar to a generic red team exercise, a network red team focuses on testing a hypothesis during the assessment. While not always explicitly stated, the hypothesis most commonly tested on behalf of the blue team is, “My network is secure and I can detect malicious activity.” The job of the red team is to take an adversarial approach to attacking and disproving that hypothesis by identifying the flaws that would allow an actual adversary to do the same. In network red teaming, the attack should address the different facets of information security and might span both the physical and electronic spectrums, depending on the specifics of the engagement.

Success in a network red team assessment can be quantified in a number of different ways, but it is usually measured by the red team’s ability to defeat the hypothesis while inflicting simulated “harm” on the target’s stakeholders. I often stress the importance of target analysis and center of gravity analysis when defining success, but ultimately, by exposing an organization’s “crown jewels,” the red team can adequately demonstrate risk to decision makers.

“Playing Adversary”

Going off on a slight rant, I would like to acknowledge that being an adversary does not necessarily mean that you need to be “advanced”. Adversaries are human: they make mistakes, leave their usernames in malware, fat-finger commands, and are subject to deadlines like everyone else. Remember this when developing a threat model and executing the assessment.

Not Just About 1s and 0s

Reality of Red Teams

Problem

  • Limited Scope — A successful engagement relies on a wide, all-encompassing scope that allows the red team to maneuver tactically in an environment just as an adversary would. With that said, certain assets will always be too risky to permit offensive operations against, and they will often be removed from scope. Most organizations are simply not willing to put their existence at risk to measure security.
  • Limited Time — I hear it all the time: “The bad guy has forever to achieve his objective.” While that is true, most organizations will not sponsor their internal team or a consulting team to work an engagement forever; it is simply too costly and limits the training received. The length of an assessment should be enough to allow a tradecraft-conscious operation but constrained enough to allow for a definitive end state where the teams can be debriefed. Additionally, not all adversaries have an infinite timeline; they have people breathing down their necks for results too.
  • Limited Audience — A red team engagement should include as many stakeholders and decision makers as possible. While I wish that every person in a company would stop and play in the engagements I run, this would simply not make business sense. That is, in part, realistic: in many breaches, C-level executives aren’t even paying attention. In a red team, they will often not want to be involved in the test because they are “too busy.” I encourage as much participation as possible throughout leadership but recognize that full participation will not happen.

Solution — White Card

Another possibility is to perform a tabletop analysis of all white-carded scenarios. This allows something taken “off the table” during the technical engagement to still be analyzed for its effect on the overall outcome. Phishing is one specific area I sometimes see white carded to save time and resources, allowing a red team to focus on the detection of post-exploitation maneuvering in the environment. Remember that engagement specifics should always trace back to the training objectives (more in Part Two).

This is not to say that detection of initial access isn’t important or shouldn’t be tested and tabletopped at some point, but in mature organizations with advanced threats targeting them, initial access is inevitable. Microsoft has coined the term “Assume Breach,” and the methodology is practiced by many organizations today. There have been several write-ups and follow-on news articles regarding red team operations within Facebook. Through these descriptions, it is clear that Facebook’s team recognizes the limitations above and white cards certain scenarios. A clever and entertaining system of exercise injects was also used, including a simulated breach notification from the FBI. The post is definitely worth the read.

By white carding certain phases, the red team can expand the time available to test other facets of security and exercise the blue team’s response, allowing for a deeper and more thorough test in the same timeframe.
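To make the white card idea concrete, here is a minimal sketch in Python of how a planner might record which phases of an engagement run live, which are white carded, and which are handled as tabletop analysis, with each phase tied back to a training objective. The phase names, objectives, and structure are entirely hypothetical and not taken from any real engagement or tool:

```python
from dataclasses import dataclass
from enum import Enum

class Execution(Enum):
    LIVE = "live"              # red team executes the phase for real
    WHITE_CARD = "white card"  # outcome is granted up front to save time
    TABLETOP = "tabletop"      # phase is walked through analytically

@dataclass
class Phase:
    name: str
    execution: Execution
    objective: str  # every phase should trace back to a training objective

# Hypothetical plan: initial access is white carded so assessment time
# is spent exercising detection of post-exploitation activity.
plan = [
    Phase("Spear phishing / initial access", Execution.WHITE_CARD,
          "Assume breach; grant the red team a workstation foothold"),
    Phase("Lateral movement", Execution.LIVE,
          "Exercise blue team detection of internal maneuver"),
    Phase("Actions on objectives", Execution.LIVE,
          "Demonstrate risk by reaching the 'crown jewels'"),
    Phase("Destructive actions against production", Execution.TABLETOP,
          "Analyze impact without putting the business at risk"),
]

for p in plan:
    print(f"{p.name} [{p.execution.value}]: {p.objective}")
```

The point is not the tooling but the traceability: each white card should be a deliberate trade-off, recorded against the training objective it serves.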

Wrap Up

Originally published at http://www.sixdub.net on June 24, 2016.

