On the 21st of April, the NCSC published new guidance on cross domain approaches and architecture. If you work anywhere near defence, intelligence or critical infrastructure, it's worth reading in full.
For years, "cross domain solution" has been shorthand in procurement circles for a single appliance from a single vendor sitting between two networks, performing a bundle of security-enforcing functions inside one monolithic stack. The new publication correctly describes cross domain as an end-to-end architecture of layered, sequential controls distributed with purpose between the source and destination of a data flow. Controls are applied where they are meaningful, before business logic touches anything, so that data is validated for ingest or release before any consuming system sees it. The following points stand out for anyone building or buying in this space…
Key takeaways of the updated NCSC Cross Domain guidance
Single-purpose components beat monolithic platforms
The guidance is explicit: a component performing a security-critical function should be single-purpose, so its security properties can be reasoned about independently. That's a direct challenge to the "one app does it all" mindset and an endorsement of 4Secure's long-standing approach of composable architectures, where a hardware diode, a verification engine, a third-party format parser and a protocol handler are independently configurable and deployable tools that can be reasoned about or replaced.
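As a toy illustration of that composability (the stage names and checks below are invented for the example, not 4Secure components), each security-enforcing function can be a single-purpose unit with one narrow contract, so any stage can be tested, reasoned about or swapped without touching the rest:

```python
from typing import Callable

# Each stage has one narrow contract: bytes in, bytes out, or an exception
# when the data fails that stage's single check.
Stage = Callable[[bytes], bytes]

def verify_magic(doc: bytes) -> bytes:
    """Illustrative format-verification stage: accept only ZIP-based documents."""
    if not doc.startswith(b"PK\x03\x04"):
        raise ValueError("unexpected file format")
    return doc

def strip_macros(doc: bytes) -> bytes:
    """Illustrative content check: reject documents containing VBA macros."""
    if b"vbaProject" in doc:
        raise ValueError("embedded macro rejected")
    return doc

def run_pipeline(stages: list[Stage], doc: bytes) -> bytes:
    # Stages run in sequence; each can be evaluated, and replaced,
    # independently of the others.
    for stage in stages:
        doc = stage(doc)
    return doc
```

Because every stage shares the same minimal contract, replacing the macro check with a different engine is a one-line change to the stage list, not a re-evaluation of the whole stack.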
Flexibility is a requirement
Cross domain started in defence and government for document ingest, but the use cases have multiplied and the established way of doing things can't necessarily address them all. The NCSC is effectively saying: stop forcing every problem into the same pattern, and reason about the security implications of the components in your pipeline.
Architectural choices and trade-offs are valid
The guidance gives architectural decisions a framework: weigh security, cost and usability within the cross domain architecture, reducing threats identified in the threat model while still enabling the capability the system requires. The layered security approach of Cross Domain is about deliberate decisions at every stage to reduce the threat in the context of your system.
How we got to the monolithic appliance
Ten years ago the cross domain problem was well bounded by national classification levels: the data to move was mostly documents and the flows were lower volume. The operators were a small set of defence and intelligence organisations with deep assurance capability. A single hardened appliance built by a specialist vendor and evaluated as a whole was, and remains, a reasonable answer in those circumstances. The attack surface is manageable and the use cases are predictable, which means the assurance model of "evaluate the box, approve the box, deploy the box" works in that context.
What's changed isn't the core technical principles underpinning cross domain. The data types have sprawled: it's no longer just documents traversing low to high or high to low. Real-time telemetry from sensors, full-motion video, voice, industrial protocols, geospatial streams, database replication, AI model artefacts and weird binary formats that no single vendor's parser library will ever fully cover all need an answer. Cross domain is now a live requirement for CNI operators and cloud, not just the traditional point-to-point consumers. The threat model has sharpened, with supply chain attacks and zero-days affecting components now part of the baseline assumption. NCSC's latest guidance highlights the power of architectures whose properties can be reasoned about component by component, rather than bundles you must reason about as a whole because they cannot be decoupled. The monolithic appliance makes sense when there is one problem to solve. There are now many problems, each with its own threat model, and the honest answer is that no single product or pattern can optimally solve all of them. The new guidance is the formal acknowledgement of that reality.
What this means in practice
At 4Secure we've built toward this architectural model for a long time, which is why the guidance reads less as a disruption and more as an articulation, and vindication, of how we already think the problem should be solved. Our approach is built on hardware diodes and flow control components from multiple vendors, selected per use case rather than inherently tied to an inflexible software stack. One-way flow is enforced at the physical layer and, where the threat model demands it, syntax is validated in hardware. Sovereignty and supply chain resilience are at the heart of our design decisions.
TrustedFilter® is our own purpose-built filtering engine for content validation, redaction and interfacing with flow control components. Within our Cross Domain Solutions it is a security control component with a defined scope and defined interface contracts: one governing how data is presented to it for validation, another governing how validated data is handed to the flow control component. Its properties can be reasoned about in isolation, independently of the flow control components and other applications in the pipeline.
Third-party software and open-source tooling, integrated into an open, extensible pipeline, have been at the heart of the solutions 4Secure has developed for over a decade. Our approach has long been to build glue code and extensibility outside the filtering engine, decoupled from the flow control component, to prepare data for presentation to TrustedFilter®. 4Secure's library of open-source tooling and integration patterns has been built over several years of challenging integration work where complex data types need to be parsed, simplified and prepared for validation. Likewise, partnerships with industry-renowned vendors such as Glasswall, Oakdoor and Owl have been key to our development roadmap and solutions design.
Architecture is assembled to the threat model. Sometimes that's a tightly integrated set of components on shared hardware; sometimes it's distributed, with components in different trust zones under different administrative control. Sometimes it includes hardware flow control components; sometimes hardware flow control components are all the use case requires. The common mistake we've seen for several years is treating cross domain as a magic-bullet product decision, where users either boil the ocean with security-enforcing functions or assume a single control is enough to be 'secure'. Cross Domain is an architectural approach driven by the threats your organisation faces and the risk you're willing to accept; products and components are how you implement a Cross Domain architecture to manage that risk.
The takeaway for risk owners and integrators
If you're procuring, designing or considering Cross Domain capability, the new guidance is an opportunity to push back on assumptions that have hardened over the last decade. The four questions below aren't checklist items to tick off, but a vendor's answers will tell you whether they're selling you an architecture or a box.
Can your vendor articulate each security-critical component’s purpose, independently?
A component-level conversation covering what each one does, what it doesn't do and what its interface contract is should be the minimum. If a vendor can't separate the diode from the filter from the parser from the policy engine when they explain the product, they almost certainly can't separate them when something goes wrong either.
Where do the controls sit in the pipeline, and why there rather than somewhere else?
Control placement is not arbitrary: protocol validation belongs where networks physically connect, for example, and format verification belongs before the data reaches anything that will interpret it semantically. A good vendor can walk you through the placement decisions and the reasoning.
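A minimal sketch of that ordering, assuming a JSON ingest flow with an invented size cap: structural checks are a hard precondition, so the parser (the first thing that interprets the data semantically) never sees unvetted bytes.

```python
import json

MAX_BYTES = 64 * 1024  # illustrative bound; real limits come from the threat model

def verify_format(raw: bytes) -> bytes:
    """Structural checks applied *before* any semantic interpretation."""
    if len(raw) > MAX_BYTES:
        raise ValueError("payload exceeds size bound")
    if not raw.strip().startswith((b"{", b"[")):
        raise ValueError("not a JSON object or array")
    return raw

def ingest(raw: bytes):
    # The semantic step only ever receives pre-verified bytes.
    return json.loads(verify_format(raw))
```

The design point is the call order, not the specific checks: the cheap, structural control sits upstream of the component with the large, complex attack surface.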
Is each control required for your use case, and why?
Controls cost latency, complexity, maintenance and assurance effort. An architecture that includes every possible control "just in case" is not a good architecture; it's an expensive one with a larger attack surface. The correct answer to "why is this control in the pipeline?" is a sentence about a specific threat from the threat model, not a sentence about vendor scale.
Can your vendor identify a replacement for any given component?
This is the supply chain question, and it matters more every year as components get deprecated and projects and vendors go dormant or get compromised. A vendor who treats their pipeline as a set of replaceable components with defined interfaces can answer this question concretely; a vendor whose filtering engine, policy logic and flow control components are entangled inside a single, inflexible binary cannot.
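A defined interface contract is what makes a concrete answer possible. The `DocumentParser` protocol and toy parsers below are hypothetical, not any vendor's API, but they show the shape of that answer: the pipeline depends only on the contract, so any conforming implementation is a drop-in replacement.

```python
from typing import Protocol

class DocumentParser(Protocol):
    """Hypothetical contract: any conforming parser is a drop-in replacement."""
    def parse(self, raw: bytes) -> str: ...

class StrictUtf8Parser:
    def parse(self, raw: bytes) -> str:
        return raw.decode("utf-8", errors="strict")

class Latin1Parser:
    # A replacement honouring the same contract; the pipeline needs no changes.
    def parse(self, raw: bytes) -> str:
        return raw.decode("latin-1")

def extract_text(parser: DocumentParser, raw: bytes) -> str:
    # The caller depends only on the contract, never on a concrete library.
    return parser.parse(raw)
```

If a parser is deprecated or compromised, swapping in another implementation of the same contract leaves the rest of the pipeline, and its assurance argument, untouched.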
Closing remarks
Cross Domain is a risk-reduction approach. Products like those from 4Secure, our partners and other vendors in our space are designed to help you implement a Cross Domain architecture. The NCSC guidance is the clearest articulation yet of what that architecture should look like and how users of cross domain should approach architectural decisions.