This is part two of a blog series titled Common Ground. In Part One, I discussed the background and evolution of red teaming, diving deep into how it applies to the information security industry and some limitations engagements face. In this part, I will discuss common components of red team planning and how they play into execution. There are many publications, documents, articles, and books focused on the structure of red teams, but I’m going to cover facets integral to engagement planning that I don’t see discussed enough.
Planning can be completed formally or informally. Organizations often benefit from being heavily involved in the planning process; however, sometimes the task is delegated to the red team with the organization giving final approvals. Finally, while not every component here may be thoroughly planned in every engagement, I do not believe that lessens the validity of the engagement as long as execution ties back to the central theme or motivation for testing in the first place. I encourage you to read the disclaimer in my previous piece before continuing.
There are many references useful for the generic planning process of network red team engagements, and I encourage you to seek them out.
Red teams generally do not fit well inside an organization, as their job is to find flaws in hypotheses and poke holes in organizational plans and intentions. The military’s doctrine for “Command Red Teams,” JP 1-16, states:
Cohesive teams may unconsciously stifle dissent and subtly discourage alternative thinking. Helping counter the effects of these institutional influences, while simultaneously presenting alternative perspectives, requires a cautious, balanced approach by the red team. Common forms of institutional bias and influence include group think, tribal think, no think, boss think, silent think, false consensus, death by coordination, institutional inertia, hubris, and conflict of interest.
This quote highlights and reiterates the importance of positioning the red team so it cannot be handcuffed, and of authorizing it to carry out its mission.
Throughout the past couple of years working with internal red teams, I have seen different structuring approaches used to fit an organization’s requirements. The most common setup is the red team reporting to the Director of Information Security or a similar position. This can be risky because the red team will inherently highlight weaknesses in the network, possibly attributable to faults of that very director. If egos get in the way, and they often do, the red team might be scoped in such a way that prevents success.
In a slightly different setup, the red team will report directly to Inspector General organizations or C-Levels. This approach generally removes the limitations the team faces and increases their authority, but also prevents them from working directly with the network defense organizations without traversing multiple layers of bureaucracy. Since a key component of network red teaming is training the blue team, these hurdles might prevent successful outcomes.
I tend to stay neutral on this topic, as each approach has its own benefits. That said, I believe the red team must be independent in nature to provide valid results and must derive its authority from the highest levels of the company. Further, the red team must have open communication lines and direct access to the blue team for deconfliction and training. In short, the red team should be positioned within the hierarchy to maximize the possibility of reaching the organization’s objectives. I’ll conclude the topic with a quote from the book Red Team by Micah Zenko, which sums it up nicely:
“[Red Teams] have to balance several competing principles: being semi-independent and objective while remaining sensitive to the organization’s operating environment and its available resources.”
Over the past several years, I have seen various implementations of red team engagements and I believe there are considerable strengths in the different forms of engagements depending on the goals of the organization. The maturity of the organization influences the type of engagement they will benefit from most. It does not make much sense to carry out a full-scope, no-knowledge network red team engagement against a small office supplies distributor with no SOC or security personnel other than to show they will fail miserably at responding. This organization could be better served by a hands-on assessment with an abundance of information sharing between red and blue teams. Conversely, a large multinational company with a fleet of SOC personnel and established incident response and hunt teams could benefit greatly by a more independent red team with less blue team contact.
These implementation subsets, while different in their execution and intentions, are variations that draw similar themes from my generic take on red teaming:
The emulation of adversarial tactics, techniques, and procedures (TTPs) to test plans and systems the way they may actually be defeated by aggressors, in order to challenge assumptions and improve decision-making processes.
There are likely many more categories and spinoffs beyond this small list. If you have a different category or methodology, I would be interested in hearing about its success for your team. For more info on the various engagement models, Raphael Mudge (@armitagehacker) has a great blog post and several presentations on the topic. He has really led the industry in shaping and highlighting the different categories of assessments and inspired me to explore the possibilities!
A full-scope assessment is what most people imagine when they think of a network red team. This is an assessment from an adversarial perspective with the purpose of engaging incident response to measure their ability to successfully respond. For this type of engagement, the threat the red team represents is generic rather than specific. As a generic threat, the red team has the freedom to utilize their own TTPs and tools as long as they align with the level of sophistication in the threat model (more on that below). This freedom truly exercises the IR process by not drawing a box around the activities of the red team.
In this model, the red team is fairly independent from the blue team other than the process of activity deconfliction. Training will be conducted through the hands-on response rather than in a cooperative sense.
Benefits to this approach include:
- It measures the ability to recognize and respond to unknown threats. The majority of realistic threats an organization faces will be unknown.
- Allows for the comparison of previously unknown tactics with known signatures and intel to practice categorizing activity.
- Training of personnel with their real tools in the real environment and a debrief on the results from the attacker perspective.
- Exercise of the command, control, and communications (C3) plan and possibly contingency plans, depending on the levels of compromise.
Although the benefits are plentiful, there are downsides as well:
- Training is self-guided by the blue team and their management. Such training requires buy-in and the willingness to participate to the fullest capacity.
- Requires careful coordination with a trusted agent inside of the blue team. This relationship will ensure proper deconfliction and prevent the engagement from being “gamed”
- Sometimes requires trial and error. The red team has to be threat-representative and will sometimes underestimate or overestimate the abilities or maturity of the targeted organization.
A key component to this form of engagement is a thorough and complete debrief to both the decision makers and technical staff. I have seen debriefs range from short technical walkthroughs to multi-day hands-on demonstrations of TTPs. The better you educate the blue team about the threats, the better threat you will become.
Red Cell Engagement (Threat Emulation)
In a red cell or threat emulation engagement, the level of sharing and training is similar to a full-scope assessment, but instead of being generic, the red team will emulate a specific adversary. In this engagement, the team will need to study in depth the available intelligence on the known adversary and prepare to operate as such. They will mirror the TTPs and tools (if safe to do so) of that adversary. This is a cutting-edge area of red teaming and has numerous benefits:
- It measures the ability to respond to known threats that are targeting or have previously targeted the environment. This is the equivalent of doing a retest or reenactment of previous activity (if it had occurred).
- It tests the adequacy of threat intelligence teams, tools, managed defense teams or signatures in the environment.
This form of engagement has a lot of allure but also presents numerous challenges:
- In the private sector, very few threats are properly known or understood. Battling a known threat limits the training opportunity.
- In the private sector, the detailed coverage of tradecraft of known adversaries is generally lacking outside of popular threat reporting. This means that the red team will be restricted to a small set of available data and might not adequately test the organization.
- While sexy to the decision makers, this form of engagement provides less training to the defensive teams because it will restrict the red team from adapting rapidly like a real adversary.
The 57th Adversary Tactics Group of the USAF does this type of assessment and does it very well. Its pilots have studied in depth the intelligence about the TTPs adversaries use while operating. They use this intel to act as aggressors, or the opposing force (OPFOR), in military exercises to measure the ability of friendly forces to combat a specific adversary. The 57th also has an Information Aggressor Squadron that applies this methodology to the information security domain.
For another great presentation on this, check out Adversary Simulation — “Red Cell” Approaches to Improving Security by Chris Hernandez (@piffd0s).
An adversary simulation is a subset of a threat emulation exercise. In an adversary simulation, a very specific tactical scenario is devised for the red and blue teams to work through. The red team is expected to act exactly as a specific threat while the blue team goes through the motions to train on that threat. These engagements are typically heavily scenario-based (rather than one large assessment) and time-constrained. The focus of this assessment is honing specific defensive TTPs or tools and educating the blue team about threats. This shares many of the benefits and downsides of a red cell.
A cooperative engagement is similar to a full-scope engagement but with a heavy focus on information sharing and hands-on training; other teams frequently refer to these as purple teams. Throughout this engagement, the red team will provide debriefings to the blue team and possibly even task a defensive-minded team member to sit with the blue team. The purpose of this interaction is to help the blue team home in on the malicious activities of the threat in the environment using the tools at their disposal.
This type of engagement is almost exclusively focused on training rather than measuring the ability to respond but is extremely useful for organizations early in their adoption of information security practices. It is also highly informative for network defenders who have never taken a “peek behind the offensive curtain.”
Microsoft defines threat modeling well:
“Threat modeling allows you to apply a structured approach to address the top threats that have the greatest potential impact”
The act of threat modeling is crucial in properly preparing an organization. During the planning phase, the red team must gain preliminary knowledge of the target in order to properly identify representative threats to the organization. A small office supplies company has different threats to prepare for than a defense contractor, and it is the responsibility of the red team to represent the realistic threat vectors.
Usually this analysis is conducted with the stakeholders present, or at least with their input, to better meet the goals of the assessment. A simple way of looking at this is defining a threat spectrum ranging from script kiddie to nation state, with many levels in between. Also, do not forget that the malicious insider is a threat in almost every organization and is worth modeling at some point. The insider does not sit at a single point on that spectrum but spans the levels of sophistication depending on motivation and intent.
The threat model will dictate the sophistication of the TTPs the red team uses. It is during this process that the red team must decide whether cross-spectrum operations (physical, social engineering, technical) should be used to target the organization. Large organizations that are targets of legitimate nation state actors might benefit from training against the threat of physical or industrial espionage. With that said, the threat model need not encapsulate every TTP of the threat category in a single engagement. I would argue that the time and resources required for country X to send spies to steal information are significant compared to the effort required to gain network access, and the risk is greater as well. Therefore, scenarios revolving around network access should be executed first to tackle the more likely vectors within a specific model. Later engagements can evolve to sophisticated physical tactics.
Threat models will also face the limitations outlined in Part One. While the assessment team can make it a goal to imitate organized crime or espionage, things like kidnapping, blackmail, and bribery are probably off the table. Imposing these limitations is not a negative but a positive, because it forces the team to be selective about which components of the threat will best enhance the training.
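To make the idea concrete, here is a minimal sketch of encoding an agreed threat model as data so planned actions can be checked against it. All names, categories, and fields are my own illustrative assumptions, not a standard or anything from a specific engagement.

```python
from dataclasses import dataclass, field

# Illustrative threat spectrum, ordered by sophistication (an assumption,
# not a formal taxonomy).
SPECTRUM = ["script kiddie", "hacktivist", "organized crime", "nation state"]

@dataclass
class ThreatModel:
    actor: str                                      # position on the spectrum
    vectors: list = field(default_factory=list)     # e.g. network, social engineering, physical
    excluded_ttps: list = field(default_factory=list)  # limitations agreed with stakeholders

    def permits(self, vector: str) -> bool:
        """Check whether a planned vector falls inside the agreed model."""
        return vector in self.vectors

# Example: an organized-crime model that defers physical tactics to a
# later engagement and rules out off-limits TTPs up front.
model = ThreatModel(
    actor="organized crime",
    vectors=["network", "social engineering"],
    excluded_ttps=["kidnapping", "blackmail", "bribery"],
)

assert model.permits("network")
assert not model.permits("physical")
```

The point of writing this down, even informally, is that scoping decisions like "physical espionage comes later" become explicit and checkable rather than tribal knowledge.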
After deciding on the engagement model and identifying realistic threats to the organization, the red team must work with the stakeholders to identify the training objectives of the assessment. Due to the time-constrained nature of an assessment, specific goals for training are useful to maximize the value of the exercise. I define training objectives as:
“ predefined knowledge, skills, and activities that the blue team will gain throughout the course of the assessment.”
Training objectives should be as specific as the stakeholders want the training to be. An example objective might be: “Detect malicious HTTP command and control traffic originating from user workstations.”
The training objectives will be the guiding light throughout the operation and serve as the go-to document for the red team when carrying out certain actions. A well-planned red team will map specific actions to specific training objectives while still remaining adversarial and impartial. Some people fear that defined training objectives limit red team actions, but they are not meant to be exclusive in nature. In the example above, the red team is not forced to use HTTP C2 throughout the entire engagement, but they should use it at least once to hit the blue team’s goal. By focusing on training objectives, the red team can utilize time-limited engagements to satisfy the stakeholders while still providing value.
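The mapping of actions to objectives can be tracked with something as simple as the sketch below, which flags objectives no planned action covers. The objective IDs and action names are hypothetical examples of my own, not from any real engagement plan.

```python
# Hypothetical training objectives agreed with stakeholders.
objectives = {
    "TO-1": "Detect malicious HTTP command and control traffic from user workstations",
    "TO-2": "Identify lateral movement via remote service creation",
}

# Each planned red team action lists the objectives it exercises.
planned_actions = [
    {"action": "Stand up HTTP C2 channel from a user workstation", "covers": ["TO-1"]},
    {"action": "Move laterally using a remote service", "covers": ["TO-2"]},
]

# Which objectives are exercised by at least one action?
covered = {obj for act in planned_actions for obj in act["covers"]}
uncovered = set(objectives) - covered

print("Uncovered objectives:", sorted(uncovered))
```

In a time-limited engagement, a quick coverage check like this before execution helps ensure no stakeholder goal is silently dropped from the plan.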
Master Scenario Event List (MSEL)
In formal exercise planning, the teams typically form a Master Scenario Event List which is defined as:
“A collection of pre-scripted events intended to guide the exercise toward a specific outcome.”
This list prescribes actions and direction for the red team to meet the intent of the exercise. A MSEL from a large state disaster exercise gives a good sense of what these generically look like. In a large-scale network exercise, the list might be divided among the different red teams involved to ensure they are individually tasked and contributing to the central theme. These event lists can be useful but also too restrictive, as they tend to force micro-objectives down to the red team rather than allow it to decide its course of action within the context of the larger objective. They are also overkill for a small, scoped assessment. The MSEL works best in very large-scale exercises, or in events where separate teams are working as a disjointed force. If a MSEL does not fit your exercise, there are variations that are similar in intent but more focused.
I believe there is value in having defined specific “events” that are escalatory in nature. These are similar to the events defined in a MSEL but with a focused intent. In a number of major exercises I have participated in, the red team reaches the objective with zero detection. While this could be seen as a success of the red team because they have identified flaws, it also limits the exposure and training the blue team receives.
Escalatory events are actions taken during a red team engagement to provoke response and simulate the mistakes an adversary makes. It is these very mistakes that often trigger responses in real-world breach scenarios, and these events serve to demonstrate the fog and friction of an offensive campaign. As part of the planning process, I recommend that the red team document these pre-planned events.
Events should have a definitive start and end date and should note the offensive procedure used as well as the expected blue team detection points. An example event might be:
- Action: Lock out the Workstation Admin account
- Time/Date: 1930 on June 23rd, 2016
- Red Team Procedure:
  - Utilize an agent to attempt RDP access to a terminal server with a known bad password
  - Repeat the previous step until lockout occurs
- Indicators and Detection Methods:
  - User ticket creation
  - Event ID 4740 (a user account was locked out)
  - Netflow traffic to/from the terminal server after hours
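An event like the one above can be captured in a simple record during planning, so the team can track when it ran and which detection points it should trip. This is a sketch under my own assumptions; the field names and the `EscalatoryEvent` structure are hypothetical, not a standard format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EscalatoryEvent:
    """A pre-planned, deliberately noisy red team action."""
    action: str
    start: datetime
    end: datetime
    procedure: list    # offensive steps, in order
    detections: list   # expected blue team detection points

event = EscalatoryEvent(
    action="Lock out Workstation Admin account",
    start=datetime(2016, 6, 23, 19, 30),
    end=datetime(2016, 6, 23, 20, 0),
    procedure=[
        "Utilize agent to attempt RDP access to a terminal server with a known bad password",
        "Repeat previous step until lockout occurs",
    ],
    detections=[
        "User ticket creation",
        "Event ID 4740 (account locked out)",
        "Netflow traffic to/from terminal server after hours",
    ],
)

# Sanity checks a planner might run over the event list before execution.
assert event.end > event.start
assert event.detections, "every escalatory event should name expected detections"
```

Keeping events in a structured form also makes the post-engagement debrief easier: each expected detection point can be marked as observed or missed by the blue team.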
This post did not detail every facet of successful network red team planning; rather, it attempts to highlight some planning components that heavily shape a successful engagement, and to bring light to areas I have not seen discussed in depth related to network red teaming. In the end, the goal should be to execute a well-thought-out exercise with clear objectives and intentions for training all parties involved. This level of coordination and planning is useful for the blue team and allows for greater reception of the results at multiple levels of the organization.
Originally published at http://www.sixdub.net on June 28, 2016.