The security industry likes to tell itself a comforting story: as attackers adopt artificial intelligence, defenders will respond in kind, and the balance of power will remain roughly equal. AI on both sides, the thinking goes, should cancel out.
This assumption is wrong — and potentially dangerous.
As with most other areas of security, the use of AI in practice is deeply asymmetrical. Attackers benefit disproportionately from automation, while defenders struggle to translate AI adoption into meaningful risk reduction. The result is not an arms race between equals, but a widening gap between the speed at which attacks evolve and the pace at which enterprises can govern, understand, and respond to them.
Automation Favors the Offense
Attackers have always benefited from scale, and unfortunately AI amplifies that advantage.
Modern attack campaigns use automation not to invent entirely new exploits, but to industrialize existing ones. Known vulnerabilities, misconfigurations, and weak identity controls are now stitched together by AI-assisted tooling that adapts quickly, probes relentlessly, and exploits opportunity at machine speed. These campaigns are quieter, more persistent, and harder to distinguish from background noise.
Critically, attackers do not need perfect precision. They benefit from volume, iteration, and probabilistic success. A small improvement in targeting or evasion, multiplied across thousands of attempts, yields meaningful results, and AI excels at exactly this kind of iterative optimization.
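To make the arithmetic concrete, here is a minimal sketch using purely illustrative hit rates and attempt counts; none of these numbers come from real campaign data:

```python
# Minimal sketch: why small per-attempt gains matter at machine scale.
# All probabilities and attempt counts below are illustrative assumptions.

def p_at_least_one(p: float, n: int) -> float:
    """Probability of at least one success in n independent attempts."""
    return 1 - (1 - p) ** n

attempts = 1_000
baseline = p_at_least_one(0.001, attempts)  # assumed manual-era hit rate: 0.1%
improved = p_at_least_one(0.003, attempts)  # assumed AI-tuned hit rate: 0.3%

print(f"baseline: {baseline:.1%} chance of at least one success")  # ~63.2%
print(f"improved: {improved:.1%} chance of at least one success")  # ~95.0%
```

A threefold improvement in per-attempt success, negligible in any single attack, moves the attacker from "probably" to "almost certainly" getting in, while the defender must stop all one thousand attempts.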
Why Defensive AI Struggles to Keep Up
On the defensive side, AI is often deployed as an enhancement to existing tools, offering faster detection, better prioritization, and smarter correlation within an enterprise SIEM, for example. These are valuable improvements, but they do not change the fundamental constraints under which defenders must operate.
Security teams are accountable for outcomes: they must explain decisions, justify actions, and demonstrate control. False positives disrupt the business. False negatives create risk. Every automated response must be defensible after the fact, whether to management, auditors, regulators, or customers.
This asymmetry matters. An attacker can afford to be wrong repeatedly, trusting that eventually a shot will be on target. Defenders, like the poor old goalkeeper in soccer, cannot afford to be wrong even once.
AI-powered detection may surface more signals, but without clear governance, visibility, and control, those signals quickly become noise. Automation without accountability simply accelerates confusion.
The Persistence of “Old” Attack Surfaces
One of the more uncomfortable realities emerging from recent incidents is that many AI-enabled attacks still rely on very traditional weaknesses: exposed services, misconfigured cloud environments, weak access controls, and unpatched software, paired with well-worn evasion techniques. AI does not replace these attack vectors; it makes them easier to discover and exploit at scale.
To any seasoned security professional, this sounds very familiar. As the old saying goes, “there is nothing new under the sun.” The danger is not that AI introduces entirely new classes of risk overnight, although it inevitably will (see our white papers “AI Security Beyond the Model: What Enterprises Need to Care About — and Why” and “Evaluating Enterprise AI Security: Questions Every Buyer Should Be Able to Answer”). The real danger in the short term is that it quietly magnifies existing weaknesses until they become systemic failures.
Why Symmetry Is the Wrong Mental Model
The idea of a balanced AI arms race assumes that both sides gain comparable benefits from automation. In reality, the incentives and constraints are fundamentally different.
Attackers optimize for opportunity and speed; defenders optimize for stability, correctness, and trust. AI aligns naturally with the former. It aligns with the latter only when paired with strong governance, observability, and control.
This is why simply “adding AI” to security tools does not meaningfully close the arms race gap. Without clear policies, auditability, and predictable behavior under stress, automation can undermine confidence rather than strengthen it.
Reframing the Defensive Objective
The goal of AI security should not be to match threat actors algorithm for algorithm, since that race is unwinnable and misses the point.
AI-enabled systems must be designed and evaluated not just for detection capability, but for how they behave when assumptions break: when inputs are ambiguous, when dependencies fail, or when automation makes the wrong decision.
This requires shifting emphasis away from novelty and toward discipline: visibility into how decisions are made, evidence that controls behave predictably under stress, and governance that aligns automation with enterprise risk tolerance.
What CISOs Should Do Now
- Treat AI-enabled security controls as governed systems, not intelligent features. Demand clarity on how automation is authorized, constrained, and audited.
- Insist on observability and accountability. If your team cannot reconstruct why an automated decision was made, you cannot defend it to regulators, boards, or customers (see the sketch after this list).
- Pressure-test failure modes. Ask how controls behave when dependencies degrade, data is ambiguous, or models behave unexpectedly.
- Resist the temptation to equate speed with strength. Automation that outpaces governance increases risk rather than reducing it.
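As an illustration of the observability point above, here is a hypothetical sketch of what an audit record for an automated security decision might capture. The field names, values, and structure are assumptions for illustration, not a standard schema or any particular product’s format:

```python
# Hypothetical sketch of an audit record for an automated security decision.
# Field names and values are illustrative assumptions, not a standard schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    decision_id: str              # stable identifier for cross-referencing
    timestamp: datetime           # when the action was taken
    model_version: str            # exact model or ruleset that decided
    policy_id: str                # governance policy authorizing the action
    inputs_digest: str            # hash of the evidence the decision used
    action_taken: str             # e.g. "isolate-host", "block-ip", "escalate"
    confidence: float             # model confidence at decision time
    human_override: bool = False  # set if a person later reversed the action

# Example record for a hypothetical automated containment action.
record = AutomatedDecisionRecord(
    decision_id="d-000123",
    timestamp=datetime.now(timezone.utc),
    model_version="detector-v3.2.1",
    policy_id="auto-containment-tier2",
    inputs_digest="sha256:9f2c...",
    action_taken="isolate-host",
    confidence=0.91,
)
```

If every automated action leaves a record like this, the question “why did the system do that?” has an answer that survives scrutiny from auditors, regulators, and boards.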
Enterprises that succeed in this environment will not be those that deploy the most AI, but those that govern it best.
The uncomfortable truth is that AI makes security failures more likely when organizations do not understand their own systems. The answer is not less automation, but better control over how automation is introduced, evaluated, and held accountable.
The arms race metaphor obscures this reality. Defense is not about symmetry, but responsibility and accountability. And in an AI-enabled world, responsibility is the hardest capability to automate.