I’ve spent the better part of three decades watching this industry chase the next silver bullet. Firewalls were going to solve everything. Then it was SIEM. Then zero trust. Now it’s AI. And while I’m not here to tell you AI has no place in cyber defence (it clearly does), I am here to ask a question that not enough people in boardrooms seem willing to raise: are we actually making ourselves more secure, or are we just making ourselves feel more secure?
The numbers sound impressive and that’s part of the problem
The stats coming out of the industry in early 2026 paint a picture of near-universal AI adoption in cybersecurity. According to recent reports, 89% of organisations are now using agentic AI in some capacity, with the most common applications being automated incident response and threat hunting. Managing AI cyber risk has entered the top five concerns for CNI organisations for the first time, ranked second overall.
On the surface, this looks like progress. Dig a little deeper and it starts to look like a land grab.
I’ve sat in enough vendor presentations to know the pattern. A new technology arrives, the marketing machine spins up, and suddenly every product in the catalogue has “AI-powered” stamped on the tin. The uncomfortable truth is that most organisations adopting AI for cyber defence couldn’t tell you, with any precision, what their AI tooling is actually doing differently from the rule-based automation they were running two years ago. The label has changed. The PowerPoint decks have changed. Whether the security posture has materially improved is another question entirely.
The attack surface you bought and paid for
Here’s what really concerns me: every AI agent you deploy is an identity. It needs credentials. It needs access to databases, cloud services, code repositories, APIs. The more tasks it is given, the more permissions it is granted and the more entitlements it accumulates. Unlike a human, who goes home at the end of the day, these agents run around the clock with elevated privileges that most organisations haven’t even begun to govern properly.
Traditional identity management systems were built to authenticate people, not machines. As organisations scale up their AI adoption, the number of non-human identities can rapidly overtake the number of human ones, creating precisely the kind of sprawling, poorly secured access landscape that we’ve spent the last twenty years telling CISOs to get under control.
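To make that concrete, here’s a minimal sketch of the kind of check I’d expect any organisation deploying agents to be able to run: enumerate the non-human identities alongside the human ones and flag any agent whose entitlements have sprawled or whose access hasn’t been reviewed recently. It’s written in Python purely for illustration; the data model, field names and thresholds are my assumptions, not a reference to any particular IAM product.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical identity record -- the fields are illustrative, not tied to any IAM product.
@dataclass
class Identity:
    name: str
    is_human: bool
    entitlements: list[str] = field(default_factory=list)
    last_review: date | None = None  # date of last entitlement review, if any

def flag_risky_agents(inventory: list[Identity],
                      max_entitlements: int = 10,
                      review_window_days: int = 90) -> list[Identity]:
    """Return non-human identities with sprawling entitlements or stale reviews."""
    cutoff = date.today() - timedelta(days=review_window_days)
    return [
        i for i in inventory
        if not i.is_human
        and (len(i.entitlements) > max_entitlements
             or i.last_review is None
             or i.last_review < cutoff)
    ]

if __name__ == "__main__":
    inventory = [
        Identity("j.smith", is_human=True,
                 entitlements=["jira", "github:read"], last_review=date(2026, 1, 5)),
        Identity("ir-triage-agent", is_human=False,
                 entitlements=["siem:admin", "edr:isolate", "github:write", "aws:iam",
                               "slack:post", "crm:read", "ticketing:write", "dns:admin",
                               "vault:read", "ci:deploy", "s3:*"],
                 last_review=None),
    ]
    humans = sum(i.is_human for i in inventory)
    print(f"human identities: {humans}, non-human identities: {len(inventory) - humans}")
    for agent in flag_risky_agents(inventory):
        print(f"review needed: {agent.name} ({len(agent.entitlements)} entitlements, "
              f"last review: {agent.last_review})")
```

The point isn’t the code. The point is that if nothing in your estate can produce that list on demand, the non-human half of your identity landscape is already out of sight.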
The NCSC has been reasonably clear about this. Their recent assessments warn that incorporating AI models across the UK’s technology base, particularly within critical national infrastructure, almost certainly presents an increased attack surface for adversaries. That’s not a vendor white paper; that’s the national technical authority telling you to think carefully before you wire autonomous agents into your operational technology.
According to a recent poll of cybersecurity professionals, 48% now identify agentic AI and autonomous systems as the top attack vector heading into 2026. Not ransomware. Not phishing. The tools we’re deploying to defend ourselves.
The skills gap hasn’t gone away; we’ve just hidden it
One of the more alluring promises of AI in cyber defence is that it neatly solves the current skills shortage. Automate the basics, free up the analysts for more complex work. In theory, this makes sense. In practice, it’s something else entirely.
What we’re seeing is organisations using AI to gloss over the fact that they still haven’t really invested in their people. Junior analysts who should be learning the trade of incident response by getting their hands dirty are instead watching dashboards and rubber-stamping automated decisions they don’t fully understand. And when the AI gets it wrong, and it will, because the models are only as good as the data they’re trained on, who in the organisation will still have the depth of experience to spot the error?
Over half of organisations (54%, to be precise) identify a lack of knowledge and skills to deploy AI for cybersecurity as their main obstacle. That’s not a “technology problem”; it’s an investment shortfall dressed up as one.
What CNI organisations should actually be doing
I’m not anti-AI. I’ve been in this industry long enough to welcome anything that genuinely helps defenders keep pace with the threat. But I’ve also been in it long enough to know that technology deployed without proper governance, adequate human oversight, and honest assessment of its limitations tends to create more problems than it solves over the long run.
If you’re running a CNI organisation and you’re deploying or evaluating AI for cyber defence, here’s what I’d want to see:
- You need to treat every AI agent as a privileged user and govern it accordingly. That means identity lifecycle management, least-privilege access, regular entitlement reviews, and proper logging of what your agents are actually doing (there’s a rough sketch of what that could look like after this list). If you can’t explain what your AI agent did at 3am on Tuesday, you haven’t improved your security; you’ve just created an insider threat that runs on electricity, not pizza.
- You need to maintain genuine human oversight. Not the “we have to do it for compliance” tick-box kind where someone glances at a dashboard once a shift, but the kind where experienced analysts are actively reviewing automated decisions, challenging the AI’s decision-making, and maintaining the skills to operate without it when it fails.
- You need to be honest about what the AI is actually doing for you. If your “AI-powered” threat detection is fundamentally the same correlation engine you had before, with a language model bolted on for the summary, don’t pretend it’s somehow “next generation” and transformational. Judge it on outcomes (mean time to detect, mean time to respond, false positive rates), not on how sophisticated the technology sounded in the sales blurb.
- You need to be prepared for the AI to fail, because eventually it will. Models degrade and training data goes stale. Your adversaries will learn to game automated systems and will almost certainly be using AI to do so. Your incident response plans have to account for scenarios where the AI is wrong, compromised, or unavailable, and your team should rehearse for exactly that.
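For the first two points, here’s the sketch promised above: the shape of a wrapper that checks every agent action against an explicit allow-list and writes a structured audit record before (and after) anything executes, so the “3am on Tuesday” question has an answer and your analysts have something concrete to review. It assumes a hypothetical in-house agent framework; the function names, the allow-list format and the log layout are illustrative, not any vendor’s API.

```python
import json
import logging
from datetime import datetime, timezone

# Structured, append-only audit trail so "what did the agent do at 3am?" has an answer.
audit = logging.getLogger("agent-audit")
logging.basicConfig(filename="agent_audit.jsonl", level=logging.INFO, format="%(message)s")

# Hypothetical allow-list: each agent gets the narrowest set of actions it needs.
ALLOWED_ACTIONS = {
    "ir-triage-agent": {"isolate_host", "fetch_alert", "open_ticket"},
}

class ActionNotPermitted(Exception):
    pass

def run_agent_action(agent_id: str, action: str, params: dict, execute) -> object:
    """Check the action against the allow-list, log it, then run it.

    `execute` is whatever callable actually performs the action; anything
    outside the allow-list is refused, logged, and escalated to a human.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "params": params,
    }
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        record["outcome"] = "refused"
        audit.info(json.dumps(record))
        raise ActionNotPermitted(f"{agent_id} is not entitled to {action}; escalate to an analyst")
    result = execute(**params)
    record["outcome"] = "executed"
    audit.info(json.dumps(record))
    return result

if __name__ == "__main__":
    # An allowed action is logged and runs; a disallowed one is refused and logged.
    run_agent_action("ir-triage-agent", "open_ticket", {"summary": "suspicious login"},
                     execute=lambda summary: f"ticket created: {summary}")
    try:
        run_agent_action("ir-triage-agent", "delete_mailbox", {"user": "j.smith"},
                         execute=lambda user: None)
    except ActionNotPermitted as err:
        print(err)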
The real question
The Cyber Security and Resilience Bill is making its way through Parliament, and with it will come new obligations around incident reporting and resilience standards. That’s welcome, but regulation alone won’t fix the cultural problem we have right now: an industry that’s more interested in automating its way out of hard questions than in putting in the effort to build genuinely resilient organisations.
I’m certain that AI will be part of the future of cyber defence, but what I’m far less certain of is whether we’re deploying it wisely, governing it properly, or being honest with ourselves about its limitations. And in critical national infrastructure, the systems that keep civilisation working, it feels, to me at least, that “less certain” isn’t good enough.
We need to be honest about why we’re doing this. As a cybersecurity community, we owe it to the people who depend on these systems every day.
Sources
- Bridewell, Cyber Security in Critical National Infrastructure Organisations: 2026 — survey of 600 security leaders across 13 UK CNI sectors (89% agentic AI adoption, AI cyber risk ranked second concern). bridewell.com
- Dark Reading readership poll, February 2026 — 48% of respondents identified agentic AI as the top attack vector. darkreading.com
- World Economic Forum, Global Cybersecurity Outlook 2026 — 54% of organisations cite insufficient skills to deploy AI for cybersecurity. weforum.org
- NCSC, Impact of AI on cyber threat from now to 2027 — AI models in CNI “almost certainly” present an increased attack surface. ncsc.gov.uk