Artificial intelligence revolution

Civica recently sat down with central government leaders to discuss whether the public sector is prepared for the artificial intelligence revolution and the ethics behind the technology. Steve Thorn, Executive Director at Civica, shares his views from the event.

By 2035, AI is estimated to add £630 billion to the UK economy. In many ways, AI is already a key feature of our everyday lives, and its capabilities are expanding faster than ever, from spotting lung cancer before a doctor can identify it to better predicting traffic routes, as demonstrated by Highways England. AI is undoubtedly improving UK citizens’ lives already, but its adoption doesn’t come without challenges across all sectors. The UK government is no exception.

We’re already looking ahead: our partnership with the University of Bath is training the next generation of AI leaders, and Professor Eamonn O’Neill from the University was in attendance at the event.

How is AI changing the government landscape?

One of the key AI challenges identified at the event was data and the risks it poses if not handled responsibly. Our own research has shown that 43% of citizens don’t trust the government to handle their data. Organisations looking ahead to explore how AI can enhance citizen-facing services will have to tread carefully around their use of citizen data.

Early adopters are now deploying ‘assistive AI’ in back-office systems, helping to improve productivity whilst containing AI to smaller, low-risk deployments. As we’ve seen in the private sector, AI represents a huge opportunity to drive cost savings and efficiencies and to improve services – but we must acknowledge that it comes with considerable responsibilities.

Perhaps this is why there is some reticence within central government about implementing AI, given the largely unknown potential of the technology and the risk of job losses. Confined assistive AI deployments are easing those fears and helping employees feel more optimistic about their jobs. Looking ahead, measures are already in place, such as the National Retraining Scheme, to help reskill employees whose roles will inevitably change.

Can we trust machines to make decisions?

Another growing area of concern for central government is AI ethics. What do we trust these ‘robots’ to do? How much freedom can we give them to gather and analyse sensitive data? How do we prevent bias creeping into automated systems?

For most government organisations the 2017 UK Digital Strategy is still a major digital guide; however, it only states that “we must ensure citizens and businesses can trust the outcomes of processes that use AI technology.” Less well known is the recent guide to using artificial intelligence in the public sector published by the GDS, covering how to assess AI, how to use it to meet user needs and how to implement it ethically, fairly and safely.

As we saw last year with Amazon’s sexist AI recruitment tool, bias can creep into any stage of the deep learning process, from data collection and preparation through training, modelling and analysis, where it can discriminate against people and affect their lives. Because such bias is especially difficult to detect with existing computer science techniques, the responsibility must lie with the users. To tackle this, governments must create environments for unbiased decision making and set an example of how to tackle bias in algorithms, be it through more diverse recruitment practices or by developing an ethical framework for AI.

Pushing ahead with an AI strategy

The AI adoption challenges facing the public sector are well documented. Organisations shouldn’t let this stop them from pressing ahead with technology innovations. Professor Eamonn O’Neill and I have outlined three AI strategies to help government departments better prepare for AI in the next five years:

  1. Appoint a data steward within a data governance framework. Building access models within and across departments can take time, especially given that most departments hold vast amounts of data. Without a strong data environment, organisations will not be able to drive meaningful insights or value from their AI implementation. A data framework, embedded into an organisation through a steward, will put the necessary practices in place and ensure that data is robust and that responsibilities and risks are well defined.
  2. Trial AI as a decision-support technology. AI is already stripping out repetitive tasks and automating processes, helping to improve capacity and supporting better human decision making. Find use cases where systems can support decision making at work, enabling low-risk deployments without taking away employees’ decision-making responsibilities.
  3. Gain citizen trust early. Trust in government is difficult to win and easy to lose, so departments must ensure they clearly communicate how and why they are using citizen data. Getting citizen buy-in during the early stages of implementation will help to maintain trust in public-facing services. Outline the benefits clearly and early on, making sure there are no major surprises.

Supporting organisations on their AI journey

Following the roundtable, there was a consensus that central government departments need to invest more time in understanding their data maturity. Ensuring that data is accurate, secure and gathered with consent remains an enormous challenge for central government departments. Departments can start preparing for more advanced AI deployments today by focusing on people, skills and technology, with a robust data foundation underneath it all.

The roundtable reinforced the fact that there is no silver bullet with AI. The technology opens up incredible opportunities to enhance our day-to-day lives, and our clients are already taking advantage of low-risk assistive AI deployments to support capacity and decision making. Before there is widespread adoption in the public sector, however, we should all better understand the potential, the risks and the ethics. As an innovation-first nation, we undoubtedly have a duty to evolve our services alongside new technologies and to continue discussing how we design and build an ethical AI-led future.

If we don’t, someone else will.
