Common Ground Part 1: Red Team History & Overview

Justin Warner
12 min read · Jun 24, 2016


Over the past ten years, red teaming has grown in popularity and has been adopted across different industries as a mature method of assessing an organization’s ability to handle challenges. With its widespread adoption, the term “red team” has come to mean different things to different people depending on their professional background. This is part one of a three-part blog series in which I will break down and inspect red teaming. In this post, I will address what I believe red teaming is, how it applies to the infosec industry, how it differs from other technical assessments, and the realistic limitations on these types of engagements. In part two, I will discuss topics important to planning a red team engagement, including organizational fit, threat models, training objectives, and assessment “events.” Finally, in part three, I will discuss red team execution, focusing on the human and strategic factors rather than the technical aspects. That post will cover how network red teams supplement their technical testing by identifying human and procedural weaknesses, such as bias and process deficiencies inside the target’s incident response cycle, ranging from the technical responder up through the CIO.

Many thanks to my current team in ATD and those in the industry who continue to share in this discussion with me. I learn new stuff from all of you every day, and I love that, as a red team community, we want to continue honing our tradecraft.

Disclaimer: Before going too much further, it is obvious that this is a contentious topic. My viewpoints are derived from experience in the military planning and executing operations, followed by several years in the commercial industry helping to build and train industry-leading red teams; however, I am constantly learning and by no means think I have all of the answers. If you disagree with any points I discuss in these posts, I respect your viewpoint and the experience behind it, but we might have to agree to disagree. In the end, the importance of red teaming isn’t about a single person’s or even a group’s philosophy; rather, it’s about best preparing organizations to handle challenges as they arise. If your methodology suits your organization’s needs, I applaud it!

The Roots

History Lessons

Since the dawn of military conflict, battle simulations and wargames have been conducted in one form or another. While not directly related, red teaming can trace its roots back to the adversary simulations performed by ancient militaries. Beginning in India around 320 AD, different forms of board gaming and role-playing were used to simulate military conflict and allow officers to prepare for upcoming battles. While rudimentary at the time, the game evolved to mirror the intentions of modern table-top exercises. In the early 1800s, Prussian Lieutenant von Reisswitz redesigned the system of wargaming, now known as Kriegspiel, into a more elaborate simulation for the training of general officers. The redesign included highly realistic terrain on real maps; testing multiple levels of forces (strategic, operational, tactical); simulated fog and friction of war (see von Clausewitz, On War); and umpires who were tasked with game control. These umpires were typically battle-hardened commanders who could use their experience in battle to present realistic situations that would require commanders to practice making improvised decisions.

It was also around this time that the wargames began to use previously fought battles to re-enact combat and look at various outcomes to study what could have happened differently. With the redesign, Kriegspiel emphasized the themes that are drawn out and applied to modern exercises: realism, multiple levels of simulation, study of adversary techniques during debriefs, and improvisation in decision making.

Skipping forward to the early 1920s, Germany continued to lead the strategic planning and wargaming initiative. A German scholar, Joachim von Stulpnagel, released a study titled “Thoughts on Future War” (translated), which was delivered to a number of leading officers in early Weimar Germany. The study attempted to analyze the possible outcomes of an upcoming war and theorized about the successes of the different parties. Many other examples of analysis like this were conducted around the same time, and they serve as some of the earliest examples of alternative analysis being used to plan for future conflict. The study followed constructs and analytical phases similar to those of modern planning, and it serves as a great example of the origin of debating different outcomes to proposed scenarios.

During the World Wars, the successes and failures of wargaming by the Germans were observed worldwide, and most world powers incorporated it into their planning efforts. This practice continued throughout the Second World War and into the modern era. Wargames allowed commanders to make rapid decisions in simulated combat and measure the outcomes. The outcomes were used in debriefings to inspect what could have gone better or worse, encouraging reflection and mature decision making.


Evolution and Formality

In the post-World War II days, the US military continued to advance and stand guard against a distant foe. In the early 1960s, the terms “red team” and “blue team” first appeared; Department of Defense (DoD) decision makers used the terms when referring to the structured simulations they used to test high-level strategies. In 1963, an article in the Journal of Conflict Resolution written by Robert Davis described a simulation in which the blue team (US) worked through an arms control treaty with the red team (Soviet Union). The article is available to those who have JSTOR or other similar accounts and is also summarized in the book Red Team, by Micah Zenko.

Davis’ article was one of the first public accounts of an “official” red team exercise used to test decision-making. Presumably unknown to the majority of the US public due to their sensitivity and continued secrecy (I am guessing here), many simulations were carried out by the DoD, the intelligence community (IC), and other government agencies in preparation for different crisis situations that could develop during the Cold War. The outcomes of these simulations were likely provided to commanders, up to and including the President, in order to refine potential plans. While I have not read evidence of it, I am confident that the same types of simulations were being used across the globe to prepare decision makers.

Entering the 21st century, red teaming became more mainstream and developed into a formal, structured method of analysis. It was formally adopted by the US military to plan for future events, by commercial industry to debate courses of action, and widely by the intelligence community to test intelligence estimates. The full breadth of red team testing in the modern era is far outside the scope of this blog.

For more information about red teaming and its boom across industries, I would highly recommend the book Red Team by Micah Zenko.

So What Is It?

The US military’s Joint Publication (JP) 1-16 defines a red team as:

“… a decision support element that provides independent capability to fully explore alternatives in plans, operations, and intelligence analysis”.

Further, JP 1-16 specifically calls out adversary emulation as an element, stating:

“The primary red team role is to reduce risk by helping the staff understand how the other actor might perceive and respond to friendly actions”.

A second, more concise definition reads:

“The practice of viewing a problem from the adversary or competitor’s perspective”

Although short, this definition implies many of the same factors as the joint publication: alternative analysis, adversary emulation, independent review, etc.

How do I define red teaming? Well, first and foremost, I’d like to acknowledge that red teaming is a subset of a broader discipline known as “alternative analysis”. It can be differentiated from other analytical techniques by its use of an adversarial perspective. For more information on different analytical techniques, the US government released a paper titled A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis.

If I had to formally sum up my thoughts, I would define red teaming as:

The emulation of adversarial tactics, techniques, and procedures (TTPs) to test plans and systems the way they may actually be defeated by aggressors, in order to challenge those plans and improve decision-making processes.

Industry Specific Definitions

It must be recognized that generic planning exercises and tabletop gaming cannot suffice for every industry. By using industry-specific red team analysis and incorporating technical experts to participate as the red and blue forces, the assessment can move to the tactical level. This was done in the days of Kriegspiel by involving junior officers and allowing the umpire to use a starting scenario that was extremely tactical in nature, specific to the training of those young officers.

Technical Assessment Differentiators

When diving into industry-specific definitions, the most common problem I see is the collision of terms: penetration testing, social engineering, red teaming, physical penetration testing, etc. I have come to form my own definitions of these assessment types, but again, these ideas are fluid, and I’m not claiming to have all the answers. Assessments will shift and mold to meet one-off demands from various organizations. Below are my viewpoints:

  • Vulnerability Assessment — the intentional discovery and validation of vulnerabilities inside of an environment. Most organizations now have an in-house function for this, and if they don’t, they definitely should.
  • Penetration Testing — the process of testing a system (network, application, host) for the ability to gain access and demonstrate impact. It usually includes some component of a vulnerability discovery phase but continues on to measure the impact of successful exploitation. I see these assessments as focused on technical subject matter: vulnerabilities, exploits, and data.
  • Social Engineering Assessment — a targeted assessment built to measure the susceptibility of individuals within an organization to coercion from outside entities through social manipulation. This might include cold calling (pretext calling) or spear phishing. It might also be a subset of a larger assessment (penetration test, physical assessment, red team, etc.).
  • Physical Penetration Testing or Physical Security Assessment — a targeted assessment measuring the susceptibility of an organization to a physical penetration or breach. Physical testing objectives might include gaining access to a certain area, bypassing specific controls, or stealing a certain asset. This might also be a subset of a larger assessment.

Network Red Teaming Overview

When applying the generalized definition of red teaming to the information security industry, the focus shifts deep into the tactical level, and different parties must be involved. Rather than circling a bunch of professional planners or decision-makers around a table, the engagement uses network operators conducting operations in a real network to emulate an adversary.

Similar to a generic red team exercise, a network red team is focused on testing a hypothesis during the assessment. While not always explicitly stated, the most commonly tested hypothesis on behalf of the blue team is, “My network is secure and I can detect malicious activity.” The job of the red team is to take an adversarial approach to attacking and disproving that hypothesis by identifying flaws that can allow an actual adversary to do the same. In network red teaming, the attack should focus on the different facets of information security and might range across spectrums (physical or electronic) depending on the specifics of the engagement.

Success in a network red team assessment can be quantified in a number of different ways, but it is usually measured by the red team’s ability to defeat the hypothesis while inflicting simulated “harm” on the target’s stakeholders. I often stress the importance of target analysis and center of gravity analysis when defining success, but ultimately, by exposing an organization’s “crown jewels,” the red team can adequately demonstrate risk to decision makers.

“Playing Adversary”

The word “adversary” is used a lot in red teaming, and I tend not to agree with a one-size-fits-all answer for how it should be used in a structured sense. I believe that the type of adversarial approach used throughout the assessment should be based on the threat model of the target (the exact type of assessment methodology to use will be discussed in depth in Part Two). Simply put, different organizations have to worry about different threats and are at different levels of maturity for handling testing. Realistic actions must be carried out for an effective simulation, and an organization may perform different types of engagements depending on its goals and capabilities. For example, some assessment methodologies favor hands-on teaching while others focus on a full-on wargame.

Going off on a slight rant, I would like to acknowledge that being an adversary does not necessarily mean that you need to be “advanced”. Adversaries are humans: they make mistakes, leave their usernames in malware, fat-finger commands, and are subject to deadlines like everyone else. Remember this when developing a threat model and executing the assessment.

Not Just About 1s and 0s

Rather than focusing solely on technical execution, a large chunk of a network red team assessment is about people, processes, technology, and methods. Network red team assessments allow for advanced training of blue teams in their own environment (with their own tools) while also allowing them to practice C3 (command, control, and communications). This is the biggest and most impactful outcome in my opinion. Practicing response will shorten the gap between an organization’s mean time to detection and its mean time to response, increasing how quickly it can contain a compromise. Part Three of this series will dive deeper into the red team approach and how red teams can effectively execute it. Another benefit of red teaming is that key stakeholders get to witness adversarial attack paths in their environment and identify choke points, which can be used to create plans of action and mitigation strategies. These choke points might also feed hunt teams in their pursuit to identify compromise.
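To make that detection-versus-response gap concrete, here is a minimal sketch of how the two metrics could be computed. Everything in it is hypothetical: the Incident fields, timestamps, and numbers are illustrative only and not drawn from any specific toolset.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical incident records; the field names are illustrative only.
@dataclass
class Incident:
    compromised_at: datetime  # when the (simulated) adversary gained access
    detected_at: datetime     # when the blue team detected the activity
    contained_at: datetime    # when the blue team contained the incident

def mttd_hours(incidents):
    """Mean time to detection: average gap from compromise to detection."""
    return mean((i.detected_at - i.compromised_at).total_seconds() / 3600
                for i in incidents)

def mttr_hours(incidents):
    """Mean time to response: average gap from detection to containment."""
    return mean((i.contained_at - i.detected_at).total_seconds() / 3600
                for i in incidents)

# Two made-up incidents from a hypothetical engagement.
incidents = [
    Incident(datetime(2016, 5, 1, 9), datetime(2016, 5, 3, 9), datetime(2016, 5, 4, 9)),
    Incident(datetime(2016, 5, 10, 9), datetime(2016, 5, 10, 21), datetime(2016, 5, 11, 9)),
]

print(f"MTTD: {mttd_hours(incidents):.1f}h, MTTR: {mttr_hours(incidents):.1f}h")
# MTTD: 30.0h, MTTR: 18.0h -- repeated exercises should drive both down.
```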

Reality of Red Teams

Problem

Some organizations scope red team engagements to develop elaborate threat models and execute attack paths spanning multiple security domains over a nearly unlimited time frame; however, such engagements require considerable time, cost, and resources. While this idealistic structure is respectable and desirable, it is unrealistic for adoption by most organizations. Here are some limitations I commonly see put in place, and the reasons why:

  • Limited Scope — A successful engagement relies on a wide, all-encompassing scope that allows the red team to tactically maneuver in an environment just as the adversary would. With that said, certain assets will always be too risky to permit offensive operations against, and these will often be removed from scope. Most organizations are simply not willing to put their existence at risk to measure security.
  • Limited Time — I hear it all the time: “The bad guy has forever to achieve his objective.” While that is true, most organizations will not sponsor their internal team or a consulting team to work an engagement forever; it is simply too costly and limits the training received. The duration of an assessment should be long enough to allow a tradecraft-conscious operation but constrained enough to allow for a definitive end state where the teams can be debriefed. Additionally, not all adversaries have an infinite timeline; they have people breathing down their necks for results too.
  • Limited Audience — A red team exercise should include as many stakeholders and decision-makers as possible. While I wish that every person in a company would stop and play in the engagements I run, this would simply not make business sense. That is part of a realistic exercise: in many breaches, C-level executives aren’t even paying attention. In a red team engagement, they will often not want to be involved because they are “too busy.” I encourage as much participation as possible throughout leadership but recognize it will not always be possible.

Solution — White Card

To help deal with the challenges of scoping and executing an effective assessment, I often recommend that any limitations imposed on the engagement be formulated as a “white card.” A white card is essentially a simulated portion of the test used to overcome limitations and allow continued testing. A white card could be as simple as a simulated helpdesk report of malicious activity, or it could be larger, eliminating entire phases of the attacker methodology. A white card will lessen the team’s ability to identify flaws, but it will allow the red team to focus on the specific aspects of the attack methodology in which the stakeholders are most interested.

Another possibility is to perform a tabletop analysis of all white-carded scenarios. This allows something taken “off the table” during the technical engagement to still be analyzed for its effect on the overall outcome. Phishing is one specific area I sometimes see white carded to save time and resources, allowing a red team to focus on the detection of post-exploitation maneuvering in the environment. Remember that engagement specifics should always go back to the training objectives (more in Part Two).
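As an illustration (there is no standard white card format; every phase name and field below is hypothetical), a simple engagement plan that white cards the phishing phase might be captured like this:

```python
# Hypothetical engagement plan; the structure and names are illustrative only.
engagement_plan = {
    "objective": "Reach the simulated 'crown jewel' data without detection",
    "phases": [
        {"name": "reconnaissance", "white_card": None},
        # Phishing is white carded: rather than running a live campaign, the
        # engagement starts from an assumed foothold on one workstation.
        {"name": "initial access (phishing)",
         "white_card": "Assume one user workstation is compromised; "
                       "grant the red team operator access to it."},
        {"name": "post-exploitation / lateral movement", "white_card": None},
        {"name": "actions on objectives", "white_card": None},
    ],
}

for phase in engagement_plan["phases"]:
    if phase["white_card"]:
        print(f"{phase['name']}: SIMULATED -> {phase['white_card']}")
    else:
        print(f"{phase['name']}: live")
```

The phases carrying a white card are exactly the ones a tabletop analysis, as described above, would revisit.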

This is not to say that detection of initial access isn’t important or shouldn’t be tested and tabletopped at some point, but in mature organizations with advanced threats targeting them, initial access is inevitable. Microsoft has coined the term “Assume Breach,” and the methodology is practiced by many organizations today. There have been several write-ups and follow-on news articles regarding red team operations within Facebook. Through these descriptions, it is clear that Facebook’s team recognizes the limitations above and white cards certain scenarios. A clever and entertaining system of exercise injects was also used, including a simulated breach notification from the FBI. The write-up is definitely worth the read.

By white carding certain phases, the red team can expand the time they have to test other facets of security and exercise the response of the blue team, allowing for a deeper and more thorough test in the same timeframe.

Wrap Up

Red teams are an absolutely critical component of strategic planning and tactical information protection. The art of red teaming has developed over thousands of years and is here to stay. While it is easy to have lofty expectations of a red team, remember that its goal is simple: challenge a proposed plan or system by playing “devil’s advocate.” If the red team accomplishes this goal, regardless of its implementation or structure, it is helping the organization improve, and that’s good for everyone.

Originally published at http://www.sixdub.net on June 24, 2016.
