Six days ago I published a piece asking whether we were automating ourselves into a corner in CNI cyber defence. The short version, for those who missed it: yes, we probably are, for reasons ranging from non-human identity sprawl to skills erosion to an industry-wide reluctance to measure what our AI tooling is actually doing differently from the rule-based automation it replaced.

This morning, the NCSC published version 1.0 of new Cross Domain guidance: a unified piece pulling together the concepts, architecture, and design principles for managing data flows between zones of different trust. It is, on first read, the most substantive piece of UK policy writing on this topic in several years, and I want to start by saying something the cyber community doesn’t say often enough: the NCSC has done something quietly important here. They’ve stopped talking about cross domain as a product category and started talking about it as an architectural discipline. That is a hard shift to make in print, and it was overdue. Credit where credit is due.

Now for the awkward part. The new guidance lands directly on a number of the same pressure points I wrote about last week, and in several cases it goes further than I did. So rather than pretend I was prescient, let me walk through where the NCSC reinforces the argument, where it sharpens it, and what CNI leaders should actually do differently as a result.

Where the guidance reinforces the argument

Start with Design Principle 3, “Only export understood and expected data”. Within its worked examples — specifically Example 2: Security control points for export flows — sits this paragraph, verbatim:

“Deciding whether freeform, human or AI-generated content is safe cannot be achieved with absolute assurance. Such checks often rely on heuristic measures, such as keyword scanning, data loss prevention tooling, or manual second-person review. These checks are individually low-assurance, subjective and therefore imperfect.”

I spent nearly 1,500 words last week trying to make a version of that point. The NCSC has just made it in three sentences with the weight of the national technical authority behind them. If your current cyber defence strategy treats a large language model as the arbiter of what is safe to release, route, or act on, that paragraph is the one your board should be holding in its hand the next time the topic comes up.
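To make the low-assurance point concrete, here is a minimal sketch of the kind of heuristic keyword check the guidance describes. Everything in it is illustrative: the keyword list, the function name, and the evasion example are mine, not the NCSC's or any real product's.

```python
# Hypothetical sketch: a naive keyword-based export filter of the kind the
# NCSC calls "heuristic" and "individually low-assurance".

BLOCKED_KEYWORDS = {"secret", "credential", "api_key"}

def naive_export_check(text: str) -> bool:
    """Return True if the text is allowed to leave the high side."""
    lowered = text.lower()
    return not any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

# The check catches the obvious case...
assert naive_export_check("quarterly report, nothing sensitive") is True
assert naive_export_check("the API_KEY is attached") is False

# ...but trivial obfuscation walks straight past it, which is exactly why a
# single heuristic check cannot give absolute assurance on its own.
assert naive_export_check("the a p i k e y is attached") is True
```

The failure mode isn't that the filter is badly written; it is that the problem of deciding whether freeform content is safe cannot be reduced to pattern matching, which is the NCSC's point.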

Design Principle 5 is equally useful: “Key security components should only perform a single function to reduce the risk of compromise.” That is the clean, plain-text version of my argument that every AI agent needs to be treated as a privileged user with narrowly scoped entitlements. A general-purpose agent that writes code, interprets alerts, makes routing decisions and signs release packets is not a single-function component. It is a large, poorly bounded identity holding privileges we would never grant a human, in a job of a ‘shape’ we would never resource.
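One way to read Design Principle 5 in code is as a hard entitlement binding: each component identity gets exactly one function, and anything else is refused. This is a hypothetical sketch under my own naming, not anything the guidance specifies.

```python
# Hypothetical sketch of Design Principle 5 as an entitlement check: each
# component identity is bound to exactly one function. Names are illustrative.

SINGLE_FUNCTION_ENTITLEMENTS = {
    "alert-triage-agent": "read_alerts",
    "release-signer": "sign_release",
}

def authorise(identity: str, action: str) -> bool:
    """Permit an action only if it is the component's single entitled function."""
    return SINGLE_FUNCTION_ENTITLEMENTS.get(identity) == action

assert authorise("alert-triage-agent", "read_alerts") is True
# A general-purpose agent asking to sign releases is refused outright.
assert authorise("alert-triage-agent", "sign_release") is False
```

The design choice worth noting: the mapping is one-to-one by construction, so "agent accumulates privileges over time" is not a state the model can represent.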

The Architecture page gives us the third reinforcement. “Assurance is not gained at a single point, but through the pipeline of layered controls distributed between the source and destination of the data flow,” it says. That is a crisp policy statement of why “plan for the AI to fail” isn’t pessimism. It is load-bearing architectural hygiene, and the NCSC has just made it explicit.
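The pipeline idea is simple enough to sketch. In the toy version below, assurance is the conjunction of several independent checks, so no single control (and no single AI component) is a point of trust. The individual checks are my own illustrative inventions, not controls named by the guidance.

```python
# Hypothetical sketch of the "pipeline of layered controls" idea: assurance
# comes from every independent check passing, never from a single point.

def size_check(payload: bytes) -> bool:
    """Reject anything larger than the expected flow allows."""
    return len(payload) <= 4096

def format_check(payload: bytes) -> bool:
    """Crude structural check: the flow only carries brace-delimited records."""
    return payload.startswith(b"{") and payload.endswith(b"}")

def allowlist_check(payload: bytes) -> bool:
    """Reject payloads containing content outside the expected vocabulary."""
    return b"exec" not in payload

PIPELINE = [size_check, format_check, allowlist_check]

def release(payload: bytes) -> bool:
    """Every layered control must pass before the data flow is released."""
    return all(check(payload) for check in PIPELINE)

assert release(b'{"report": "ok"}') is True
assert release(b'{"cmd": "exec rm"}') is False  # one failed layer blocks the flow
```

If one layer is an AI-based check that fails or is subverted, the remaining layers still stand, which is what "plan for the AI to fail" looks like as architecture rather than as attitude.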

Where the NCSC goes further than I did

This is the part I want to highlight, because for CNI operators it matters most.

My last piece made a general governance argument. The new guidance does something more specific. The Concepts page names “syntactic validation” as a logical control point in a cross domain pipeline, and lists “FPGA verifiers” and “hardened TLS termination components” as concrete implementation examples. Said plainly: for the security-critical boundary decisions in your most sensitive data flows, the NCSC is pointing you toward hardware-enforced controls, and doing so in print. That is a more opinionated architectural steer than CNI has had from the national authority in a long time, and it is, in my view, the right one.
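For readers who want the distinction pinned down: syntactic validation accepts only data matching a rigid, fully specified structure and rejects everything else, which is why it can be pushed into hardware at all. Below is a software sketch of the same idea, with a message format I have invented purely for illustration.

```python
# Hypothetical sketch of syntactic validation: accept only data matching a
# rigid, fully specified structure, reject everything else. In hardware this
# is the job the guidance assigns to FPGA verifiers; this is the same idea in
# software, with an illustrative message format.

def is_valid_message(msg: object) -> bool:
    """A message is exactly {'seq': int, 'body': str} with no extra fields."""
    return (
        isinstance(msg, dict)
        and set(msg) == {"seq", "body"}
        and isinstance(msg["seq"], int)
        and isinstance(msg["body"], str)
    )

assert is_valid_message({"seq": 1, "body": "hello"}) is True
# Extra fields, wrong types, or freeform blobs fail the syntactic check.
assert is_valid_message({"seq": 1, "body": "hi", "attachment": b"\x00"}) is False
```

Note what the check does not attempt: it says nothing about whether the body's content is safe. That is the semantic problem the earlier quote warns cannot be solved with absolute assurance, and keeping the two apart is precisely the discipline the guidance is pointing at.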

It is also a position that some of us in this industry have been advocating for years, quietly and without much ceremony, while the fashion went the other way. I take no pleasure in being proved slowly right. I take a lot of pleasure in watching the national authority make the call in print, so that the rest of the sector can get on with it.

What the guidance doesn’t do

A couple of honest caveats.

The Cross Domain guidance is narrower in scope than my last piece. It does not speak to non-human identity proliferation, to the skills-gap problem, or to the governance of autonomous agents inside a SOC. Those remain open questions, and the pressure on CISOs to answer them has not eased. If anything, it has increased because the NCSC has just raised the bar on what “proper” looks like at the cross-domain layer, and the rest of the defensive AI stack now has to live up to that standard.

And the guidance is v1.0. There will be revisions. The architectural opinions will sharpen. The AI-specific text will, I suspect, become more explicit in future versions as the real-world deployment experience comes back. If you work in CNI, this is the moment to read it properly, map it against what you actually deploy today, and identify where the gaps are, before v1.1 arrives with sharper teeth.

Where this leaves us

Last week I wrote that “less certain isn’t good enough” for the systems that keep civilisation working. The NCSC has just handed CNI a measure of that certainty, and in doing so has made my position considerably easier to hold.

The question now is whether the industry uses it, or carries on stencilling “AI-powered” onto the tin and hoping nobody asks what’s inside. My money, cautiously, is on the former. The guidance is too good to ignore, and the alternative is too expensive to sustain.
