Design or default? That is the defining choice for artificial intelligence in 2026, says Jonny Williams, Chief Digital Adviser for the UK Public Sector at Red Hat
In 2026, the UK faces a choice. We can deliberately shape how AI is adopted across public services, or we can drift into dependency on whatever tools we implement first.
The UK Government’s AI Action Plan and AI Playbook gave us a positive strategic starting point. Their emphasis on responsible AI, cross-government reuse and skills development is vital to delivering value. But these principles alone will not determine success.
Our biggest risks relate to implementation, particularly around skills, infrastructure and government’s growing reliance on a small number of proprietary technology providers. The gap between intent and execution will tip the balance between lasting value and a new generation of expensive legacy.
We can already see early warning signs. In a session I recently attended, one UK government department summarised its AI strategy in a single word: “Copilot”. No product can replace a plan, unless you are willing to quietly accept someone else’s roadmap for public services.
The UK Government’s ambition for AI in 2026
The government’s ambition for AI extends far beyond productivity gains. For the first time, we are embedding technology into core public services that shape decisions about citizens, services and rights, at scale, in ways we cannot fully inspect or explain.
Parliamentary scrutiny has already raised questions about transparency in public-facing AI tools, such as GOV.UK Chat, including concerns about explainability and the training data used. Increasingly, citizens are asking whether opaque decision-support systems will weaken democratic accountability.
These challenges will be compounded by the way we are building services. Departments are responding to uncertainty by assembling their own individual AI stacks, often guided by incumbent suppliers and outdated procurement processes. Each decision may be rational in isolation, but collectively, they leave us on the brink of AI sprawl as a nation. In the absence of shared foundations, opacity is not accidental. It is an emergent property of duplication at scale.
We have been here before. The last decade of cloud adoption left the government with a fragmented, high-cost estate of overlapping platforms and long-term dependencies.
The UK is now at risk of repeating the same pattern with AI. This time, the systems involved won’t just host services. They will inform decisions.
Duplication absorbs the time of talented teams and the public funding that should be directed towards differentiated challenges. Our best people should not be rebuilding near-identical platforms when countless public services could be so drastically improved.
With shadow AI on the rise (recent research shows it is present in 85% of UK organisations) and interoperability an afterthought, we need a significant shift in strategic thinking if we want to move away from a landscape where pilots proliferate but rarely scale.
AI: The choice between design and default
Design versus default does not mean every decision is binary: buy or build; makers or takers. We aren’t trapped at either extreme. There is a third way: adopting mature, globally utilised capabilities built on open standards, supported commercially where needed, and governed collaboratively rather than owned exclusively by a single vendor. This middle path is enterprise open source.
We accept this logic elsewhere in society. We don’t insist on bespoke measurements or unique energy standards: AC/DC, kilograms, centimetres (even if the world hasn’t accepted our indisputably better plugs). Agreement around open standards lets us focus our efforts on innovation, rather than reinventing the wheel a million times in a million different ways.
This is how digital sovereignty is built in practice. Open standards give us the power to choose what we depend on, under which conditions, and for how long.
This commoditisation of capability would enable the delivery of a public sector landing zone for AI. Vendor neutral. Department agnostic. Designed to scale. With almost immediate benefits.
Interoperability would improve with systems built to work together from the outset. Auditability becomes achievable rather than aspirational. Resilience increases as dependency on singular suppliers is reduced. Procurement becomes more accessible, enabling start-ups and scale-ups to sell to the government, per recent Budget announcements, without navigating thousands of incompatible contexts.
The value of these decisions compounds. When our technologists stop rebuilding commodity tooling and start delivering differentiated services, AI can genuinely become a pillar of economic growth and better public sector outcomes, while defining beneficial patterns for maturing technologies such as quantum.
By setting expectations around open standards, we create space for innovation within guardrails that protect citizens. Wide roads, high kerbs. We enable public services to evolve and shape the tools we depend on rather than simply consume them. In doing so, we can influence global practice instead of passively importing it.
AI policy priorities
Policy must focus on outcomes and strategy, not outputs and tools. If we embrace truly commoditised capabilities, we retain the ability to switch or augment providers, avoiding lock-in to black box proprietary tech that lacks openness, auditability, and explainability.
The UK has an incredible history of digital government. But on AI policy, many would say that we are currently adrift. And, in the absence of clarity, departments are filling the vacuum with local decisions that will be hard to roll back.
So, we face a simple question with complex repercussions. Do we design now, agreeing on common foundations and preventing AI sprawl, or do we accept the default of fragmentation, duplication, and long-term dependency?
Design or default. The choice is still ours.