Alf Franklin, Area Vice President Public Sector International at Elastic, discusses the pressing need for European governments to enhance their technological sovereignty amidst a rapidly changing geopolitical landscape
There’s no denying that the world is navigating its most volatile geopolitical landscape in decades.
The reality is that, as technological dependencies deepen, governments without domestic IT capabilities are the ones most vulnerable in these challenging times.
Today, many critical infrastructure systems use IT components from international suppliers. Maintaining robust security and protecting these systems, which power much of modern society, has become a key priority. Even when working with reliable international partners, shifts in the political landscape can change the operating environment faster than expected.
As evidenced at the recent European Sovereignty Summit in Berlin, European governments are beginning to recognise the importance of strategic autonomy in their technological capabilities. We saw political leaders commit to fostering open architectures and strengthening domestic capability pipelines.
Of course, pledges mean nothing if they aren’t supported by skills and capability in Europe. That presents the next real challenge: governments, defence organisations and national security partners must translate their bold visions into operational capability. How, exactly, can they do this?
Advanced AI becomes mission-critical
As the world navigates a complex and unpredictable geopolitical landscape, the role of our intelligence and security services has never been more important. However, let’s not forget that this isn’t an easy task. Many of these crucial agencies are battling both homegrown and international threats, while budgets are dwindling and governmental oversight is growing.
With the evolution of AI-driven systems built with auditability and accountability in mind – principles increasingly embedded in government and defence AI governance frameworks – many of the traditional concerns around trust and oversight are beginning to be addressed. Responsible AI design emphasises traceability of data and decisions, documentation throughout system lifecycles, and clear human accountability for outcomes, helping to build confidence in AI-enabled capabilities. When applied to intelligence workflows, such technologies can streamline the integration of large volumes of diverse data, from satellite imagery to signals intelligence, enabling teams to focus on strategic analysis rather than on data wrangling.
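To make the idea of traceability concrete, here is a minimal, hypothetical sketch in Python of how an AI-assisted decision might be recorded alongside its inputs, model version and the human accountable for the outcome. The AuditRecord structure and its field names are illustrative assumptions, not a reference to any specific framework or product.

```python
# Minimal, illustrative sketch of an audit record for an AI-assisted decision.
# The structure and field names are assumptions made for this example only.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    model_version: str   # which model produced the recommendation
    input_digest: str    # hash of the input data, so it can be traced later
    recommendation: str  # what the system suggested
    confidence: float    # model confidence score
    reviewed_by: str     # the human accountable for the final decision
    decision: str        # what the human actually decided
    timestamp: str       # when the decision was recorded

def record_decision(model_version, input_payload, recommendation,
                    confidence, reviewed_by, decision):
    """Build a serialisable trace of one AI-assisted decision."""
    digest = hashlib.sha256(
        json.dumps(input_payload, sort_keys=True).encode()
    ).hexdigest()
    record = AuditRecord(
        model_version=model_version,
        input_digest=digest,
        recommendation=recommendation,
        confidence=confidence,
        reviewed_by=reviewed_by,
        decision=decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # ready to append to a tamper-evident log

# Example: an analyst overrides a low-confidence recommendation.
print(record_decision("threat-model-v2.1", {"source": "signals", "id": 4711},
                      recommendation="escalate", confidence=0.62,
                      reviewed_by="analyst_07", decision="monitor"))
```

Keeping a record like this for every machine recommendation and human sign-off is one simple way the accountability principles described above can be made auditable in practice.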
Equipped with advanced AI, intelligence professionals no longer need to sift through vast datasets manually, risking missed details or time wasted on minor or false alerts. Instead, AI-powered operational intelligence surfaces the most relevant information, allowing humans to focus on analysing and addressing real threats. For data too complex and time-consuming for people to process and interpret, techniques such as cryptanalysis help ensure that critical intelligence is identified and acted upon, so nothing important goes unchecked.
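As a rough illustration of that kind of triage (and not a description of any specific product), the sketch below uses an off-the-shelf anomaly detection model to rank synthetic alert records so the most unusual ones are surfaced for human review first. The feature choices and thresholds are assumptions invented for the example.

```python
# Illustrative sketch: ranking alerts so analysts see the most anomalous first.
# Synthetic data and feature choices are assumptions for this example only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row is one alert: [bytes transferred, failed logins, off-hours flag]
routine_alerts = rng.normal(loc=[500, 1, 0], scale=[100, 1, 0.1], size=(1000, 3))
suspicious = np.array([[9000, 14, 1], [7500, 9, 1]])  # injected outliers
alerts = np.vstack([routine_alerts, suspicious])

# Fit an unsupervised anomaly detector; lower scores mean "more anomalous".
model = IsolationForest(contamination=0.01, random_state=0).fit(alerts)
scores = model.decision_function(alerts)

# Surface the ten most anomalous alerts for human review.
top_indices = np.argsort(scores)[:10]
for rank, idx in enumerate(top_indices, start=1):
    print(f"{rank:2d}. alert #{idx} score={scores[idx]:.3f} features={alerts[idx]}")
```

The point is not the particular model but the workflow: the machine does the exhaustive ranking, and the analyst spends their time on the handful of items most likely to matter.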
Beyond data analysis, AI systems can also draw on the diverse range of information they’re fed – both current and historical – to predict potential threats to national security by identifying key patterns. Through predictive analytics, logistics teams within the intelligence and security services can detect and respond to threats quickly and proportionately, while ensuring that already-strained resources and budgets aren’t wasted. Defence and intelligence logistics are further strengthened by innovations such as route optimisation, meaning key resources are sent quickly to where they’re urgently needed.
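To show what route optimisation means in practice, here is a small hypothetical sketch that models a supply network as a weighted graph and finds the quickest route with a standard shortest-path algorithm. The locations and travel times are invented for illustration.

```python
# Illustrative route optimisation: find the quickest supply route over a
# weighted graph. Locations and travel times are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Depot", "Hub A", 4.0),        # edge weights = travel time in hours
    ("Depot", "Hub B", 2.5),
    ("Hub A", "Forward base", 3.0),
    ("Hub B", "Hub A", 1.0),
    ("Hub B", "Forward base", 6.0),
])

# Dijkstra's algorithm picks the lowest-total-weight route.
route = nx.shortest_path(G, "Depot", "Forward base", weight="weight")
hours = nx.shortest_path_length(G, "Depot", "Forward base", weight="weight")
print(" -> ".join(route), f"({hours} hours)")
```

Real defence logistics problems layer on constraints such as capacity, risk and timing, but the underlying idea is the same: let an algorithm search the options so scarce resources arrive where they are needed fastest.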
Equipped with AI-fuelled dashboards, intelligence professionals are far less likely to miss key information. And by automating routine tasks, such as writing budget reports and other bureaucratic documents, AI can free up crucial time for them to focus on the most important task: protecting the public.
Interoperable, open systems are essential
Clearly, AI has the potential to strengthen the security and defence of the UK and Europe in the face of all manner of threats. However, for this to happen, interoperable AI technologies that run on open systems are paramount.
That’s why governments must focus on fostering homegrown AI innovation and building a resilient, sovereign digital infrastructure that can sustain it. The British government is already making strong progress here through initiatives such as the AI Opportunities Action Plan, and a £1 billion investment package aimed at improving the UK’s cyber defence capabilities. Similarly, the European Union has a Defence Industry Transformation Roadmap that aims to foster ‘disruptive innovation for defence readiness’.
Of course, developing and implementing advanced AI defence technologies requires significant effort and investment, particularly in challenging economic conditions. That said, legacy systems often remain in use for decades, as replacing them outright is costly and disruptive. Adopting open and interoperable digital solutions can help protect existing investments, improve interoperability across platforms, and enable adoption of emerging technologies. Over time, this approach can provide better value for money by allowing defence organisations to integrate cost-effective alternatives and reduce vendor lock-in, without needing to overhaul entire systems at once.
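One way to read ‘open and interoperable’ in engineering terms is to keep a thin abstraction between mission applications and any particular vendor’s component, so a legacy or proprietary part can be swapped out without rewriting everything above it. The sketch below is a generic, hypothetical illustration of that idea in Python, not a description of any specific defence system; the interface and class names are invented for the example.

```python
# Hypothetical sketch of a thin abstraction layer: applications depend on a
# small interface, so the backing component can be replaced without changing
# mission code.
from typing import Protocol

class SearchBackend(Protocol):
    def index(self, doc_id: str, document: dict) -> None: ...
    def search(self, query: str) -> list[dict]: ...

class LegacyBackend:
    """Adapter around an existing (hypothetical) legacy component."""
    def __init__(self):
        self._docs: dict[str, dict] = {}
    def index(self, doc_id: str, document: dict) -> None:
        self._docs[doc_id] = document
    def search(self, query: str) -> list[dict]:
        return [d for d in self._docs.values() if query.lower() in str(d).lower()]

def mission_report(backend: SearchBackend, query: str) -> int:
    """Mission code only knows the interface, not the vendor behind it."""
    return len(backend.search(query))

backend = LegacyBackend()               # swap in a different adapter here later
backend.index("r1", {"summary": "Convoy delayed at checkpoint"})
print(mission_report(backend, "convoy"))
```

Designing around interfaces like this is what allows a cost-effective alternative to replace a legacy component incrementally, rather than forcing a disruptive rip-and-replace of the whole stack.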
Being able to make changes to an IT stack isn’t just good for managing costs; it also means legacy components can be replaced quickly as they approach the end of their lifecycle, no longer meet requirements, or simply stop working. It’s this agility that will keep the security, intelligence and defence services fight-ready as AI innovations rapidly advance.
A preference for open source technology does more than provide agility and future-proofing. By enabling shared development, community-driven improvements, and easier integration with existing systems, it serves as a fundamental enabler for innovation and skills development.
The importance of skills sovereignty
Although a sovereign AI defence infrastructure is crucial to upholding national security today, it will be hard to create, implement, and maintain without highly skilled IT professionals.
In the face of a well-reported global IT skills shortage and competition from other industries that need their own share of technology professionals, it’s understandably challenging for the defence sector to achieve skills sovereignty. Nonetheless, skills sovereignty is essential to long-term technological independence.
So, what should the defence sector be doing differently? For starters, defence organisations need to look at the talent they already have. Creating a supportive, low-stress environment for IT professionals and providing ample opportunities for career growth will help retain staff and prevent skill shortages from worsening. Looking beyond the IT department is also worthwhile: offering staff in other areas opportunities to upskill in technology can help close internal gaps and build a loyal workforce. Additionally, tools such as AI assistants can support teams by accelerating learning, aiding decision-making, and enabling the rapid adoption of new technologies – all while helping to address critical skill shortages across the organisation.
The other way to close internal skill gaps is to support the IT professionals of tomorrow by investing heavily in domestic talent pipelines, which I believe will be a major trend in 2026. That means collaborating with regional educational institutions on training programmes that equip candidates with the skills they truly need to succeed in the modern defence industry, and giving them opportunities to apply these skills in real-world settings through work experience placements and internships.
As we fast approach 2026, it’s time for governments not just to realise that strategic autonomy is vital in shoring up their defences, but to actively prepare for it.
But technology alone isn’t enough. To avoid wasted investment, these AI-driven technologies need to be supported by interoperable, open systems that are trustworthy and transparent. And their successful creation, implementation, and ongoing upkeep depend on a strong IT workforce.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.