Government must make AI systems transparent to build trust

Nisha Deo, Policy Lead at Rainbird, discusses why Government must turn artificial intelligence (AI) from a ‘black box’ into a ‘glass house’ in order to build trust

AI systems are now being widely used across the public sector to inform everything from immigration to parole decisions. Trust in public institutions depends not just on their effectiveness but on their transparency, yet the state is beginning to adopt forms of AI whose inner workings are not widely understood and cannot be easily vetted. For example, revelations that the Home Office has used an ‘algorithm’ to stream visa applicants have raised concerns over accountability and fairness in public services. The Home Office refused to publicly disclose the workings behind the system, but we do know AI systems can be vulnerable to bias. Unconscious discrimination can be difficult to surface even in humans, and it is much harder to surface in machines when the mathematical wizardry behind their decisions is inexplicable to most laymen.

The key challenge

The issue is that, as a society, we may have inadvertently shunned so-called ‘symbolic’ AI, which mirrors human thought patterns, in favour of complex neural networks that operate on the basis of obscure calculations understood only by data scientists. As AI increasingly assists with tasks across the public sector, machines need to be publicly accountable for their decisions in the same way as other public servants. This means they need to be designed to be modifiable and auditable by the public service professionals they will work with. AI technologies can only be audited by ordinary human professionals if they emulate the thought processes of the humans who typically take those decisions; if an AI platform replicates the thought process of a Home Office worker, then it is easier for the Home Office to audit and justify its decisions.

Who audits the machines?

The neural networks currently being used do not think or explain their decisions in human terms. They operate on the basis of complex correlations and probabilities designed by data scientists, which are difficult for anyone outside the field to understand. This means it can be difficult to establish which variables the software has based its decisions on, or how and why each of those variables is weighted. For example, convolutional neural networks, the networks of weighted nodes behind many image-recognition cameras, can sort images into categories, yet it is very difficult to know which features of an image they extract in order to do so.
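To illustrate why such systems are hard to audit, here is a minimal sketch; the toy network, weights and inputs below are invented for illustration, not taken from any real Government system. The point is that the only ‘explanation’ such a model can offer for its decision is arrays of learned numbers.

```python
import numpy as np

# A toy two-layer network deciding 'approve'/'reject' from four input
# features. The weights stand in for values learned from data; everything
# here is illustrative.
rng = np.random.default_rng(seed=0)
W1 = rng.normal(size=(4, 8))   # input -> hidden layer weights
W2 = rng.normal(size=(8, 2))   # hidden -> output layer weights

def classify(x):
    hidden = np.maximum(0, x @ W1)   # ReLU activation
    scores = hidden @ W2             # one score per outcome
    return "approve" if scores[0] > scores[1] else "reject"

application = np.array([0.3, 1.2, -0.5, 0.8])
print(classify(application))
# The only 'explanation' the model offers is W1 and W2: 48 floating-point
# numbers with no human-readable meaning, unlike a rule a caseworker
# could read and challenge.
print(W1.round(2))
```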

Because neural networks are trained to look for patterns in data but do not follow human rules of logical inference, they can be prone to spurious correlations. For example, a visa-application AI trained largely on data about EU migrants could learn to rate people with EU qualifications higher than better-qualified people from the Commonwealth, simply because EU qualifications happened to coincide with approvals in its training data.
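A minimal sketch of how this happens, using entirely synthetic, hypothetical data and invented field names, might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, synthetic training data; each row is a past application:
# [qualification_score, has_eu_qualification]. In this skewed sample,
# approval happens to coincide with holding an EU qualification rather
# than with a higher qualification score.
X_train = np.array([
    [5, 1], [6, 1], [4, 1], [5, 1],   # EU-qualified applicants, approved
    [8, 0], [9, 0], [7, 0], [8, 0],   # better-qualified non-EU, rejected
])
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = approved

model = LogisticRegression().fit(X_train, y_train)

# A highly qualified non-EU applicant vs. a less qualified EU applicant:
print(model.predict([[9, 0], [4, 1]]))
# Likely output: [0 1] -- the model has latched onto the EU flag
# (a spurious proxy) rather than the qualification score itself.
```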

AI could produce unequal access to services

Building public services around algorithms understood only by data scientists is like basing trading decisions on complex financial instruments that nobody but a genius can understand. The 2008 financial crash was partly caused by the fact that very few people understood the financial instruments banks were buying and selling; when trading was no longer based on differences in attitudes and preferences but on differences in understanding, risk was concentrated not on those most able to afford it but on those least able to understand it. Similarly, if Government algorithms are understood only by a privileged few, then access to vital services will be decided not by differences in need but by differences in public understanding.

Those able to understand AIs might be able to ‘game’ the system, writing applications for everything from Government loans to visas that get approved ahead of more worthy recipients. Those least able to understand the workings of AI, including vulnerable groups such as the disabled, may lose out by inadvertently including or omitting details in their applications which the machines identify as ‘red flags’.

Putting the human back into AI

The Science and Technology Committee recently urged the Government to be transparent with the public about where algorithms are being used within central government. The only solution is to transform the algorithms behind our public services from a black box into a glass house. We need AI technologies to become more human-centric, so that they both make and justify their decisions in human terms. This not only means they can be audited and improved by ordinary professionals, but that their decisions can also be explained to the general public, restoring trust in public services.

Leading organisations across the private sector, from credit card companies to law firms, are now working with their best experts in key departments and mind-mapping their thought processes so that they can be reproduced by machines. Crucially, these rules-based AI systems produce a ‘human-readable’ audit trail showing how their decision-making criteria are weighted, so any prejudice can be exposed and cleansed from the system.
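As a rough illustration of the idea, and not a depiction of Rainbird’s platform or any real Government system, the sketch below shows how a rules-based decision can carry its own human-readable audit trail; every rule, weight and field name here is an invented assumption.

```python
# A minimal sketch of a rules-based decision with a human-readable audit
# trail. The rules, weights and thresholds are illustrative only.

RULES = [
    # (description, weight, test)
    ("Applicant meets the income threshold", 0.40,
     lambda a: a["income"] >= 25_000),
    ("Applicant has a valid sponsor", 0.35,
     lambda a: a["has_sponsor"]),
    ("No previous visa refusals", 0.25,
     lambda a: a["previous_refusals"] == 0),
]
APPROVAL_THRESHOLD = 0.60

def decide(application):
    score, trail = 0.0, []
    for description, weight, test in RULES:
        passed = test(application)
        if passed:
            score += weight
        trail.append(f"{'PASS' if passed else 'FAIL'} (weight {weight}): {description}")
    decision = "approve" if score >= APPROVAL_THRESHOLD else "refuse"
    return decision, score, trail

decision, score, trail = decide(
    {"income": 30_000, "has_sponsor": True, "previous_refusals": 1})
print(f"{decision} (score {score})")  # approve (score 0.75)
print("\n".join(trail))               # every criterion, its weight, and
                                      # whether the applicant met it
```

Because each criterion and its weight is written out in plain language, an ordinary caseworker, not just a data scientist, can inspect, challenge and correct the decision logic.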

This process of mind-mapping human decisions for machines also enables ethical and compliant decision-making processes to be visualised and taught across the public sector. In this way, increasing algorithmic accountability helps ensure fairer and more ethical decisions among all public sector employees. A civil service AI trained with a ‘mind map’ to reproduce typical civil service recruitment processes might also expose unconscious biases; for example, it might reveal that an office policy of ‘hot-desking’ unwittingly discriminates against people with autism, who often prefer set routines.

Encoding human expertise for machines also empowers exemplary public service workers to turn their ethical and transparent decision-making into a ‘blueprint’ for best practice across their organisation. This can improve machines and humans alike, bringing fairness, consistency and accountability to everything from immigration to parole decisions.
