

Mythos and the cost of attacking

Apr 24, 2026

For twenty years, cybersecurity defense rested on a simple idea: make attacking so expensive that adversaries give up and move on. Cheap, capable AI breaks those economics. Recon, exploit development, phishing, and command-and-control infrastructure now run at model speed and cent-per-million-tokens cost. The detect-and-respond doctrine struggles when an attacker’s OODA loop compresses from weeks to seconds. The prevention bar has to rise from blocking known-bad to predicting intent from behavior.

Why cost still matters in the age of cheap intelligence

Two weeks ago, Anthropic announced Project Glasswing and gave a small group of launch partners, including Microsoft, Google, and CrowdStrike, along with about forty other critical-infrastructure organizations, early access to a model called Claude Mythos Preview. Anthropic said Mythos was too dangerous to release publicly. In the weeks leading up to the announcement, it had already surfaced thousands of zero-day vulnerabilities across every major operating system and browser, including a 27-year-old bug in OpenBSD [1]. The market reacted the way markets react to existential news: shares in major cybersecurity vendors fell as much as ten percent on fears that AI-scale vulnerability discovery commoditizes the work of traditional security tooling [2]. On the same day, Treasury Secretary Bessent and Fed Chair Powell pulled the CEOs of the largest U.S. banks into a room to talk about it [3].

So: is Mythos the end of the cybersecurity industry, the beginning of a new one, or a well-executed piece of Anthropic PR? I think it's none of those. To see why, it helps to go back twenty years.

The oldest idea in defense

In the mid-2000s I was on the security team at General Electric, defending what was then one of the largest defense industrial base suppliers in the country. We talked constantly about one idea, and it shaped everything we did: raise the cost of attacking us.

That idea belonged to a generation of defenders, people like my colleagues Richard Bejtlich and David Bianco, and the Mandiant team I'd later join, who argued that perfect prevention was impossible, but that defense was still a game you could win if you made attacking expensive enough that your adversary gave up and moved on [4]. Bianco's Pyramid of Pain [5] formalized the intuition. Some indicators of compromise cost attackers almost nothing to work around; a file hash, for example, changes with a single byte. Others cost them dearly, forcing them to retool, retrain, or rebuild infrastructure. The higher you pushed them up the pyramid, the more painful, and therefore rarer, the attack became.
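The bottom of the pyramid is easy to see for yourself. A minimal Python sketch (the "malware" bytes are obviously invented for illustration) shows why a hash-based indicator is the cheapest one for an attacker to defeat: flip a single bit anywhere in the file and the SHA-256 hash bears no resemblance to the original.

```python
import hashlib

# A stand-in for a malware sample; any byte string behaves the same way.
payload = bytearray(b"MZ\x90\x00 fake malware sample for illustration")

original = hashlib.sha256(payload).hexdigest()

payload[10] ^= 0x01  # flip one bit in one byte of the "binary"
mutated = hashlib.sha256(payload).hexdigest()

print(original)
print(mutated)
print(original != mutated)  # the hash IOC no longer matches
```

That single-bit edit costs the attacker nothing, which is exactly why hashes sit at the base of the pyramid while behavior and tradecraft sit at the top.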

Raising costs took many forms. Publishing high-fidelity indicators that forced attackers to swap toolkits. Exposing malware families so adversaries had to write new ones. Burning infrastructure so command-and-control had to be rebuilt. The best example was the 2013 APT1 report at Mandiant, which named and exposed a Chinese state-sponsored group so publicly that they had to essentially retool from scratch. For years afterward, APT1 was a textbook case of defense as economics.

The whole model, detect fast, respond well, make the attacker pay to come back, rested on one unspoken assumption: that the attacker's work was slow and human. Recon took weeks. Exploit development took months. Infrastructure took money and tradecraft. If you could make any of that more expensive, you won.

What Mythos actually changes

Mythos (and frankly every capable model of the last eighteen months) does the opposite of raising costs. It drops them through the floor. Analyzing a target's software for vulnerabilities, probing a web app for unsanitized input, standing up command-and-control infrastructure, writing the phishing lure: every stage of the attacker lifecycle that used to cost time and specialized talent now runs at model speed.

Though Mythos is getting the press, the security firm Aisle tested Mythos's showcase vulnerabilities against small, cheap, open-weights models and found that most of the underlying analysis was repeatable. All six models they tested detected the flagship FreeBSD exploit in at least one of three tests, including one with 3.6 billion parameters at roughly five cents per million tokens. Their argument is that AI cybersecurity capability is jagged; it doesn't scale smoothly with model size, and the real moat is the system and expertise wrapped around the model, not the model itself [6]. Mythos isn't an inflection point so much as confirmation that the inflection already happened, quietly, across a generation of cheaper models.

I think this is starting to show up in the data as well. The 2026 Mandiant M-Trends report, drawn from over 500,000 hours of incident response work in 2025, documents global median dwell time rising from 11 days to 14, the first meaningful uptick after a decade of steady compression from 146 days in 2015 [7]. More striking, the median time between an initial access broker compromising an environment and handing it off to a secondary threat actor (typically a ransomware operator) collapsed from over eight hours in 2022 to 22 seconds in 2025. Twenty-two seconds is not enough time for a SOC analyst to read the alert, let alone act on it. Exploits remain the leading initial infection vector for the sixth year running, accounting for 32% of intrusions, and the report also documents malware families like PROMPTFLUX and PROMPTSTEAL that query large language models mid-execution to evade detection. The QUIETVAULT credential stealer even scans compromised machines for local AI tools to use against their owners. To Mandiant's credit, they don't blame AI for all of this. They explicitly say most breaches still come from human and systemic failures, not AI capability. But the trend line is starting to move, as the cost of compromise continues to drop.

These models have been broadly available for eighteen months, but attackers are human organizations; they adapt on human timelines. Playbooks have to be rewritten, tradecraft has to be tested, operators have to be trained. The next few years, not the last eighteen months, are when we'll see what happens when a mature adversary has fully internalized these tools.

Defenders can use these tools too, and they do. The optimistic read is that defenders actually have the advantage: they own their environment, their telemetry, their baselines, and can run models continuously against a known-good state. That's true, and it's why I don't think Mythos ends our industry. But it understates the structural asymmetry that has always defined cybersecurity: the attacker only needs one path to succeed; the defender must close all of them. Cheap intelligence makes searching for paths dramatically cheaper, while closing them still requires coordinated change across people, process, and technology. The attacker's cost curve falls faster than the defender's.

But the deeper asymmetry isn’t just about who has the faster model. It’s about who gets to practice, iterate, and learn, the stuff LLMs are really good at. An attacker with cheap, fast intelligence can iterate a thousand times a day. They can try a phishing variant, tweak it, try again, probe a different identity path, retry from a different angle, and keep going at machine cost with no human bottleneck. They get virtually unlimited at-bats. The defender, by contrast, mostly learns from real incidents and limited simulation. They don’t get to run thousands of realistic attacks against their own environment at the same fidelity, and they still often find the cracks the way they always have, after the attacker did. That gap, between the attacker’s iteration loop and the defender’s learning loop, is what AI widens most.

The inflection point

For most of my career, the consensus in our industry has been some version of: compromise is inevitable, so invest in detection and rapid response. It's a good doctrine. It built companies like CrowdStrike and Mandiant, and it saved organizations a lot of money and reputation.

But I think that doctrine is reaching its limit. When the attacker's OODA loop compresses from weeks to seconds, "detect and respond" becomes a race the defender cannot reliably win. Not because the tools are bad, but because the clock runs out before a human can make a decision. To be clear, prevention has always been a large category: EDR, email security, WAF, MFA, patch management. The claim isn't that the industry ignored it. The claim is that the prevention bar has to move, from blocking known-bad signatures to predicting attacker intent from behavior, and that's a different engineering problem than anyone in our category has fully solved.

How this impacts what we’re building

At Netwrix, we've focused on prevention with the best suite of data and identity solutions in the industry: identity governance, privileged access, directory security, and data security posture management. What's changing now is how we use the telemetry those products collect. We're pushing hard on prediction.

Here's what that looks like in practice. Our DSPM platform classifies the sensitive data across a customer's environment: what exists, what's exposed, who has access, and which identities have actually touched it. In a traditional model, we surface that inventory and flag policy violations after the fact.

What we're building toward is different. When a file containing regulated PII or credentials is accessed by an account that has never touched that data class before, that access stands out. Add a recently-escalated permission, or an identity whose role has no reason to reach that data, and the combination is worth interrupting before it becomes exfiltration. Each signal in isolation might be permitted, but together they describe intent.
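To make the "signals compose into intent" idea concrete, here is a minimal sketch of that kind of scoring logic. The field names, weights, and threshold are all invented for illustration; real values would be learned from telemetry, and nothing here describes Netwrix's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessEvent:
    identity: str
    data_class: str          # e.g. "regulated_pii" or "credentials"
    first_touch: bool        # identity has never accessed this data class before
    recent_escalation: bool  # permissions were escalated recently
    role_mismatch: bool      # role has no business reason to reach this data

# Illustrative weights and cutoff; real systems would learn these.
WEIGHTS = {"first_touch": 0.40, "recent_escalation": 0.35, "role_mismatch": 0.25}
THRESHOLD = 0.6  # interrupt the session at or above this score

def intent_score(e: AccessEvent) -> float:
    """Each signal alone may be permitted; the score measures their combination."""
    return (WEIGHTS["first_touch"] * e.first_touch
            + WEIGHTS["recent_escalation"] * e.recent_escalation
            + WEIGHTS["role_mismatch"] * e.role_mismatch)

def should_interrupt(e: AccessEvent) -> bool:
    return intent_score(e) >= THRESHOLD

# One anomalous signal stays under the bar; two together cross it.
solo = AccessEvent("svc-backup", "regulated_pii", True, False, False)
combo = AccessEvent("svc-backup", "regulated_pii", True, True, False)
```

The design point is the threshold: `solo` scores 0.40 and is merely logged, while `combo` scores 0.75 and is worth interrupting, which is exactly the "permitted in isolation, intent in combination" pattern described above.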

The actor type is almost secondary. A privileged admin session, a compromised service account, and an AI agent that has been granted more reach than its owners intended can all produce the same behavioral pattern against a sensitive dataset. Our job is to recognize that pattern and act before anything leaves.

That's what we think prediction must mean in this era. Delivering it well means using the right model for the right job. General-purpose LLMs are extraordinary at reasoning over messy context, and we lean on them heavily where that matters. Alongside them, our R&D team is creating purpose-built models shaped around the specific use cases we do best. Classifying whether a sensitive data access fits an identity's established pattern, scoring anomalous behavior against a learned baseline, deciding in milliseconds whether to revoke a token. These are tasks where latency, determinism, and the right training data matter more than broad world knowledge. Both workstreams are underway.
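The latency argument is easy to illustrate. A deterministic check against a learned baseline is a microsecond-scale arithmetic operation, with no network round-trip to a large model. The sketch below uses a simple standard-score test; the numbers, function names, and cutoff are invented for illustration and do not describe any shipping product.

```python
import statistics

# Hypothetical learned baseline: e.g. megabytes of sensitive data an
# identity reads per hour over a trailing window of normal activity.
baseline = [120, 95, 143, 110, 130, 101, 118, 125, 99, 137]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def anomaly_z(observed: float) -> float:
    """Standard score of the observed volume against the learned baseline."""
    return (observed - mean) / stdev

def revoke_token(observed: float, z_cutoff: float = 4.0) -> bool:
    # A fixed-threshold check is deterministic and fast; no LLM round-trip
    # is needed on the enforcement path.
    return anomaly_z(observed) > z_cutoff

print(revoke_token(118))    # volume inside the baseline
print(revoke_token(25000))  # bulk read far outside the baseline
```

In practice the baseline model would be far richer than a single mean and standard deviation, but the shape is the point: the slow, context-heavy reasoning happens offline, and the in-line decision is a cheap comparison.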

Twenty years on

Mythos didn't change what defense is, but it changed the price tags. The things that used to be expensive for an attacker, finding a vulnerability, writing the exploit, crafting the lure, are cheap now. The things that used to be prohibitively hard for a defender, reading intent from behavior before damage is done, are finally becoming possible. Our job, the whole industry's job, is to make sure the second curve keeps pace with the first.

References

1. Anthropic, “Project Glasswing: Securing critical software for the AI era,” April 2026.

2. Fortune, “Anthropic is giving some firms early access to Claude Mythos to bolster cybersecurity defenses,” April 7, 2026.

3. The Hill, “Anthropic’s Mythos puts DC, Wall Street on high alert,” April 2026.

4. Richard Bejtlich, “Full Disclosure for Attacker Tools,” TaoSecurity, June 2010. https://taosecurity.blogspot.com/2010/06/full-disclosure-for-attacker-tools.html

5. David Bianco, “The Pyramid of Pain,” Enterprise Detection & Response, March 2013. https://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html

6. Stanislav Fort, “AI Cybersecurity After Mythos: The Jagged Frontier,” AISLE, April 7, 2026. https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier

7. Mandiant, “M-Trends 2026,” April 2026.


About the author


Grady Summers

Chief Executive Officer

Grady Summers brings 20+ years of cybersecurity expertise and a proven track record leading product innovation and transformational growth. He's held leadership roles at pioneering companies like SailPoint, FireEye, GE, and Mandiant, where he drove SaaS transformation and portfolio expansion. With hands-on experience across global markets and customer-facing roles, Grady pairs boardroom strategy with boots-on-the-ground insight. While he is a recognized industry leader in cybersecurity, Grady maintains his connection to nature by spending his spare time planting trees on his Pennsylvania farm. He holds an MBA from Columbia University and a bachelor's degree in computer systems management from Grove City College.