UK artificial intelligence policy analysis


In this interview, Camden Woollven, Group Head of AI Product Marketing at GRC International Group, shares her insights on UK artificial intelligence policy.

Tell us about the first grant fund established by the AI Security Institute, which will focus on research related to AI security and the protection of critical systems.

This is the AISI Challenge Fund. It’s their first official grant scheme focused on artificial intelligence (AI) security and protecting critical systems. They’re looking at areas such as preventing misuse in sectors like healthcare, finance and energy, keeping humans in control of autonomous systems, building resilience against new threats, and working on alignment so models don’t go off course.

Depending on the project, grant sizes range from £50,000 to £200,000. The fund is open to researchers at universities and non-profits anywhere in the world. The application process has two stages: first, you send in an eligibility statement and an expression of interest; if you make it through, you’re invited to submit a complete application. The fund opened in March, and successful bids will be announced around 12 weeks after submission.

It’s all part of the UK government’s push to build trust in AI and support research that makes systems safer. The focus is not just theory but practical work that can be translated into policy or industry practice. It’s also a way to get academic research feeding into the real-world problems that AI is starting to create.

How does the AI Opportunities Action Plan highlight the importance of shaping AI applications within a modern social market economy?

The Plan basically outlines that if we want AI to work in a modern social market economy, we need to build it on the right foundations and steer it in the right direction. It’s not just about throwing money at innovation. It’s about ensuring the tech fits into the kind of society and economy we want.

There is a strong focus on building the basics first, such as proper infrastructure, access to talent, and smart regulation. This allows AI to be used safely and at scale across different sectors. They also want AI to improve things that matter to people daily, such as public services, healthcare, and how businesses operate.

The other significant point concerns ownership. The UK doesn’t want to just consume AI tech that has been built elsewhere. The Plan pushes for the UK to lead in developing core AI technologies so it can shape the values, safety and standards that come with them.

It also talks about getting the balance right: encouraging innovation but ensuring proper oversight so things don’t spiral. And much of it comes back to people: training, jobs, and new opportunities. It aims to ensure that AI creates value across society, not just for a few large players.

Does this reflect the Department for Science, Innovation and Technology’s commitment to guiding the AI revolution based on principles of shared economic prosperity, improved public services, and enhanced personal opportunities?

Yes, it does – that’s pretty much the core of what DSIT’s aiming for. Through the AI Opportunities Action Plan, they’ve set out a strategy that’s not just about pushing innovation but ensuring AI’s benefits are spread fairly.

On the economic side, they’re backing AI as a growth driver. The plan discusses adding £400 billion to the UK economy by 2030. They’re investing in infrastructure like AI Growth Zones to get the private sector more involved and help smaller businesses adopt AI without being left behind.

For public services, they’re trying to modernise the way government works. A new digital centre is being set up inside DSIT that’s meant to pilot and scale AI tools. It’s focused on automating routine stuff so services run faster and with less friction. They’re using a “scan, pilot, scale” approach in areas such as health and education, where AI can make a tangible difference for people.

Then there’s the personal angle. They’re investing in skills through Skills England and scholarship schemes to ensure people can move into AI-related jobs and aren’t shut out of the shift. They’re also working to attract and keep top AI talent so the UK doesn’t fall behind.

To what extent does the UK set global standards for secure innovation, ensuring that AI is developed and deployed in an environment that safeguards critical systems and data?

The UK is actually doing quite a lot here. It’s not just talk. There’s a proper focus on making sure AI gets built and deployed in a way that protects critical systems and data, and that’s showing up in the standards they’re pushing out.

One of the big ones is the AI cybersecurity standard. It’s the first of its kind and is already being used as the starting point for a global standard through ETSI. That alone puts the UK in a leading position. Then there is the Code of Practice for AI Security. It’s got 13 principles covering everything from how you design and deploy systems to how you shut them down safely. It’s not legally binding, but it’s already starting to influence how the industry thinks about risk.

The UK is also working internationally. There’s a new coalition focused on cybersecurity skills, working with countries like Japan, Singapore and Canada. That’s about building the expertise needed to back up all these standards and ensure they stick globally.

So in terms of shaping the rules of the game, the UK’s definitely out in front on AI security. It’s not just about regulation, but also about providing other countries and industries with a working model they can adopt.

Is this vital to implementing the Plan for Change?

Yes, it is. The whole point of the Plan for Change is to drive economic growth, improve public services and boost national security. None of that really works unless people trust the AI that’s being used to do it.

The AI Security Institute’s work is central to that. It’s not just about pushing innovation – it’s about ensuring it is safe. If AI systems are secure and people know they’re being used responsibly, it’s much easier to roll them out across things like healthcare, finance and infrastructure without pushback.

It also feeds directly into public confidence. If you don’t tackle the security risks properly, you lose trust, which slows everything down. By getting ahead of threats like fraud and cyber attacks, the UK is creating a safer space for AI to grow, which is precisely what the Plan is trying to achieve.


