Real Red Team Engagements

Most organizations think a clean pentest report means they’re safe. They’re wrong.
Companies ran red team exercises all through 2025. Many got passing grades. And then ransomware still encrypted their servers, supply chain attacks still exposed customer data, and hackers still walked through front doors using default passwords. Now we’re in 2026 and the same mistakes keep repeating.
So yeah, that expensive security assessment? It tested the wrong things.
I’ve spent years watching security teams pour money into red team engagements and walk away with a false sense of security. The reports look great. The executive summaries have nice charts. And six months later, they’re calling incident response because a threat actor found the exact path nobody thought to test.
Nobody’s doing the homework. So I did.
I dug into real red team assessments, including published CISA findings, industry reports from Ponemon Institute and Forrester, and actual engagement data from the past couple of years. What I found isn’t pretty, but it’s exactly what security teams need to hear.
Here are 5 lessons from real red team engagements that most teams learn the hard way.
Lesson 1: Your Red Team Is Testing the Wrong Attack Paths
Here’s the thing. Traditional red team engagements focus on finding technical vulnerabilities. Can we exploit this CVE? Can we crack these hashes? Can we get domain admin?
But the biggest breaches of last year didn’t happen that way.
The Marks & Spencer ransomware attack in 2025 that caused over £300 million in damage? It started with social engineering against a third party helpdesk. The Salesloft Drift breach that hit Palo Alto Networks, Zscaler, Cloudflare, and Elastic? Attackers compromised a marketing platform’s GitHub account and stole OAuth tokens that gave them access to hundreds of organizations’ Salesforce environments.
These weren’t sophisticated zero day exploits. They were trust relationship abuses that standard red team exercises never test.
The fix: Stop asking your red team “can you get in?” Start asking them “can you get in the way Scattered Spider got into M&S?” or “what happens if one of our SaaS vendors gets compromised?”
Design your engagements around real threat intelligence, not OWASP checklists.
| Traditional Approach | What Actually Works |
|---|---|
| Test for known CVEs | Simulate real adversary TTPs from recent breaches |
| Focus on your own infrastructure | Test third party trust relationships and supply chain risks |
| OWASP Top 10 scanning | Scenario based testing (“what if our helpdesk vendor gets compromised?”) |
| Annual checkbox exercise | Threat informed engagements updated with current intelligence |
Lesson 2: A “Mature” Security Posture Doesn’t Mean You’ll Catch the Red Team
This one stings.
CISA published findings from a red team assessment of a large critical infrastructure organization. This wasn’t some small company with five employees and a prayer. This was a mature organization that conducted regular penetration tests, invested heavily in network hardening, and had proactive security practices.
The red team scanned over three million external IPs and couldn’t find a single easily exploitable service. Service account passwords were so strong the team couldn’t crack any of the 610 hashes they pulled. No useful credentials on file shares. MFA blocked access to sensitive systems.
Sounds solid, right?
The organization never detected the red team throughout the entire assessment. Not during initial access. Not during lateral movement across multiple geographic sites. Not even when the red team deliberately tried to trigger a security response.
Let’s be real. If a mature organization with millions invested in security can’t detect a red team that’s actively trying to get caught, what chance does your team have against a threat actor that’s trying to stay hidden?
The fix: Detection and response capabilities matter more than prevention. You can have the best firewalls and the strongest passwords in the world, but if nobody’s watching the logs or knows what to look for, it doesn’t matter. Invest in monitoring, log analysis, and regular detection testing. The industry benchmark for Mean Time to Detect is 197 hours according to SANS. Organizations that regularly conduct red team exercises typically get that under 72 hours within two years.
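Those detection benchmarks are easy to compute from your own engagement logs. A minimal sketch, assuming a hypothetical log format (the field names and timestamps here are invented for illustration):

```python
from datetime import datetime

# Hypothetical engagement log: when each red team action started and
# when (if ever) the blue team detected it.
events = [
    {"action": "initial_access",   "started": "2025-03-01 09:00", "detected": "2025-03-04 15:00"},
    {"action": "lateral_movement", "started": "2025-03-02 10:00", "detected": "2025-03-09 08:00"},
]

FMT = "%Y-%m-%d %H:%M"

def mean_time_to_detect(events):
    """Average hours between a red team action and its detection."""
    hours = [
        (datetime.strptime(e["detected"], FMT)
         - datetime.strptime(e["started"], FMT)).total_seconds() / 3600
        for e in events
        if e["detected"] is not None  # undetected actions need separate handling
    ]
    return sum(hours) / len(hours)

print(f"MTTD: {mean_time_to_detect(events):.1f} hours (benchmark: 197, target: under 72)")
```

Run this after every engagement and plot the trend; the number matters less than whether it drops over successive exercises.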
Lesson 3: People Are Still the Biggest Vulnerability (And Your Training Isn’t Working)
47% of mature security organizations use red team social engineering to test their workforce. And they keep doing it because it keeps working.
An Exabeam survey found that 35% of companies said their blue team never or rarely catches the red team. That means over a third of organizations are essentially flying blind when an attacker decides to go after their people instead of their technology.
In one real engagement, a red team conducted a physical penetration test against a company with multiple locations. They impersonated ISP technicians to gain building access. At one location, a guard got suspicious because the tester didn’t fit the expected profile for a service technician. That’s good situational awareness. But at other locations? They walked right in.
The kicker? When the team got burned at one site, the target’s IT department called ahead to the next location. The red team was five minutes away and had to come up with a completely new pretext on the fly. They found another ISP to impersonate through a quick OSINT search, despite wearing badges and polo shirts with the wrong company name on them.
The fix: Stop treating security awareness as a quarterly slideshow. Run social engineering simulations that test real scenarios: phone pretexting, physical access attempts, targeted phishing with context that matters. And when someone falls for it, don’t shame them. The best organizations use red team findings to build awareness without pointing fingers.
As one engagement leader put it, the best client debriefs have zero finger pointing. Just thoughtful questions about how to prevent it next time. That mindset is rare. And it’s exactly where real learning happens.
Lesson 4: Your Expensive Security Tools Are Probably Misconfigured
45% of organizations discovered that at least one major security tool was misconfigured or ineffective during their first red team engagement.
Read that again. Nearly half of all organizations are paying for security tools that aren’t working properly. And they don’t know it until someone actually tests them.
A Forrester analysis found that organizations conducting red team assessments extracted 23% more value from their existing security tool stack compared to those that didn’t. Not because they bought new tools. Because red teaming revealed that the tools they already had were misconfigured, had coverage gaps, or weren’t being monitored properly.
In the CISA critical infrastructure assessment, defenders had EDR solutions deployed. The red team bypassed them. The organization had monitoring in place. The red team moved laterally without triggering alerts. The tools existed. The configurations didn’t hold up under real adversarial pressure.
The fix: Stop buying new tools and start validating the ones you have. Every red team engagement should include a specific focus on whether your existing security stack actually detects and responds to adversary techniques. When a red team finds that your SIEM missed lateral movement, fixing that detection gap increases the ROI of your entire SIEM investment.
| Metric | What It Tells You |
|---|---|
| Mean Time to Detect (MTTD) | How fast your team spots red team activity (benchmark: 197 hours, target: under 72 hours) |
| Attack Path Depth | How far the red team gets before detection (should decrease over successive engagements) |
| Repeat Finding Rate | Percentage of findings that show up again next time (should trend toward zero) |
| Tool Detection Rate | Which security tools actually fired alerts vs. which stayed silent |
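Two of those metrics, repeat finding rate and tool detection rate, fall out of simple bookkeeping. A hedged sketch, assuming you assign stable IDs to findings and count alerts per tool (the data below is invented):

```python
# Hypothetical findings from two successive engagements, identified by
# stable IDs so repeats can be matched across reports.
engagement_1 = {"F-001", "F-002", "F-003", "F-004"}
engagement_2 = {"F-002", "F-004", "F-101"}  # F-002 and F-004 are repeats

def repeat_finding_rate(previous, current):
    """Share of current findings that already appeared last time."""
    return len(previous & current) / len(current)

# Alert counts per deployed tool during the exercise.
alerts = {"EDR": 7, "SIEM": 0, "NDR": 2, "DLP": 0}

def tool_detection_rate(alerts):
    """Fraction of deployed tools that fired at least one alert."""
    fired = sum(1 for count in alerts.values() if count > 0)
    return fired / len(alerts)

print(f"Repeat finding rate: {repeat_finding_rate(engagement_1, engagement_2):.0%}")
print(f"Tool detection rate: {tool_detection_rate(alerts):.0%}")
```

A silent SIEM in that output is exactly the kind of misconfiguration the Forrester data describes: the tool exists, the detection doesn’t.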
Lesson 5: Red Teaming Is Worthless If You Don’t Fix What It Finds
Let’s be real. The most expensive mistake in red teaming isn’t the engagement itself. It’s paying for one, getting a detailed report, and then not doing anything about it.
CISA specifically called this out in their findings. One organization’s leadership “deprioritized the treatment of a vulnerability their own cybersecurity team identified, and in their risk-based decision-making, miscalculated the potential impact and likelihood of its exploitation.”
Translation: the security team found the problem, reported it, and leadership decided it wasn’t important enough to fix. The red team then exploited that exact vulnerability to compromise the entire domain.
This isn’t a technology problem. This is a leadership problem.
Organizations that treat red team findings as a checklist instead of a strategy waste their money. For every dollar invested in red teaming, organizations save an average of $6.40 in prevented breach costs according to Ponemon Institute analysis. But only if they actually act on the findings.
The fix: Treat red team findings like the business risk they represent. Track remediation with 90 day goals for critical and high severity findings. Brief leadership on findings in business impact terms, not technical jargon. And conduct follow up engagements to verify that fixes actually work.
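One way to operationalize that 90 day goal is a tracker that flags critical and high severity findings past their deadline. A minimal sketch with hypothetical finding data:

```python
from datetime import date, timedelta

REMEDIATION_SLA_DAYS = 90  # 90 day goal for critical/high findings

findings = [
    {"id": "F-001", "severity": "critical", "reported": date(2025, 10, 1), "fixed": date(2025, 11, 20)},
    {"id": "F-002", "severity": "high",     "reported": date(2025, 10, 1), "fixed": None},
    {"id": "F-003", "severity": "medium",   "reported": date(2025, 10, 1), "fixed": None},
]

def overdue(findings, today):
    """IDs of critical/high findings still open past the SLA."""
    deadline = timedelta(days=REMEDIATION_SLA_DAYS)
    return [
        f["id"] for f in findings
        if f["severity"] in ("critical", "high")
        and f["fixed"] is None
        and today - f["reported"] > deadline
    ]

print(overdue(findings, today=date(2026, 2, 1)))  # F-002 blew the SLA
```

The overdue list is what goes in the leadership briefing, translated into business impact, not the raw technical report.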
Organizations with documented red team programs received average insurance premium reductions of 12 to 18% according to a Marsh McLennan survey. That alone can offset a significant portion of the engagement cost. But insurers aren’t giving discounts for reports that collect dust.
The Bottom Line
Red teaming delivers the highest ROI of any offensive security investment. Over 56% of organizations have increased or plan to increase their red teaming investments heading into 2026. But the value isn’t in the exercise itself. It’s in what you do with the results.
Here’s what actually matters:
Test real attack paths based on current threat intelligence, not checkbox compliance. Measure detection capabilities, not just prevention. Run social engineering tests that reflect how attackers actually operate. Validate that your existing tools work under adversarial pressure. And for the love of all things secure, fix what the engagement finds before paying for another one.
If your red team report from 2025 is sitting in a drawer, you don’t need another engagement. You need to go read that report and start implementing the recommendations.
And if you’ve never run a red team exercise? Make sure you’ve got the security fundamentals in place first. Red teaming is most valuable when it sits on top of a mature security program, validating that your other investments actually work as intended.
The threats aren’t getting simpler. Attackers moved from disclosure to exploitation within 24 hours for 28% of observed vulnerabilities last year. Ransomware payments jumped to an average of $2.5 million. And AI is making attack automation faster and cheaper every quarter.
So yeah. Get tested. Fix what breaks. Test again. That’s the whole playbook.
FAQ
How often should we run red team engagements?
At minimum, twice a year. The threat landscape evolves constantly, and you need to verify that previous findings have been fixed while testing against new attack techniques. Monthly or quarterly exercises are ideal if your budget allows it. 23% of organizations already run them monthly.
Red team vs penetration testing: what’s the difference?
Pen tests are scoped, time boxed, and focused on finding technical vulnerabilities in specific systems. Red teaming simulates real adversary behavior across the full kill chain, including social engineering, physical access, and multi stage attacks over weeks or months. Pen testing tells you what’s broken. Red teaming tells you if your team can detect and stop a real attack.
How much does a red team engagement cost, and is it worth it?
Full scope engagements typically run 3 to 8 weeks with costs varying based on scope and complexity. But consider this: for every $1 invested in red teaming, organizations save an average of $6.40 in prevented breach costs. Plus potential insurance premium reductions of 12 to 18%. The question isn’t whether you can afford to do it. It’s whether you can afford not to.
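The back-of-envelope math from those figures, with an assumed engagement cost and premium (both hypothetical; only the ratios come from the cited surveys):

```python
engagement_cost = 120_000   # hypothetical full scope engagement price
savings_ratio = 6.40        # Ponemon: prevented breach costs per $1 invested
annual_premium = 250_000    # hypothetical cyber insurance premium
premium_reduction = (0.12, 0.18)  # Marsh McLennan: 12 to 18% reduction

prevented_costs = engagement_cost * savings_ratio
premium_savings = tuple(annual_premium * r for r in premium_reduction)

print(f"Expected prevented breach costs: ${prevented_costs:,.0f}")
print(f"Annual premium savings: ${premium_savings[0]:,.0f} to ${premium_savings[1]:,.0f}")
```

Even with conservative inputs, the premium reduction alone recovers a meaningful slice of the engagement cost, before counting any prevented breach.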