Artificial intelligence technologies

Dr Emma Carmel, Associate Professor, University of Bath, offers her expert insight into the use of artificial intelligence technologies (AITs) within government to transform public policy

Automation of human decision-making through the use of artificial intelligence technologies (AITs) in government has the potential to transform the nature of public policy, politics and statehood in a democratic society. AITs may fundamentally alter where decision-making takes place, as well as how, by whom and when. They open a radically new field of political and institutional relationships in public policy and services. These developments significantly challenge our systems of political responsibility and accountability, and create potentially new roles for state, political and private sector actors. Nor is this a distant future: automation and the use of machine-learning in governmental decision-making are growing rapidly.

AITs, automation and public policy

Automation has, in fact, been a commonplace feature of government decision-making for more than a century. It is expressed in forms, checklists, conditionality criteria, decision rules and regulatory guidelines that keep government and bureaucratic machines running smoothly. So far, so normal.

What is different now is that the diverse new forms of computerised automation – from algorithmically automated decision-trees, to probabilistic risk evaluations, to unsupervised machine-learning – signal a qualitative shift in the meaning, significance and practice of automation. AITs promise efficacy, accuracy and reliability in public policy. Yet in practice, they risk rigidity, de-contextualisation and opacity in decision-making.

Indeed, AITs may not always be appropriate, effective or available for use in government decision-making. The current wave of AIT adoption raises problems of: explainability and accountability in automated decision-making; how uncertainty and context are accommodated when decision recommendations are generated automatically through risk-based calculations; bias; and the capacity of policymakers and front-line staff to integrate AIT outputs into their professional judgement.

When considering the social, political and ethical implications of introducing and managing the use of AITs in public policy, it is essential to take account of two things. First, the diversity and limits of the technologies themselves, and especially the variety of automation that is now available. Second, the need to explicitly manage the complexity of decision-making, which increases, rather than decreases, when AITs are adopted in practice.

Types of automation using AITs

  1. Single-stage algorithmic automation can be based on pre-defined decision-models, using regulatory criteria for example. This is often based on ‘imperative programming’ of the classic step-by-step, ‘if…then’ type. By itself, this is not AIT-based automation. However, where an algorithm is programmed to produce a particular outcome (decision) without being instructed how to reach that outcome, this might use classic AI-based ‘declarative programming’;
  2. Two-stage algorithmic automation uses machine-learning on data from previous decisions to generate a model of those decisions. This model is then used to design an algorithm that is applied as in type (1) automation. Like all machine-learning, this type of automation is in practice heavily dependent on the quality, detail, appropriateness and format of the data used for training the algorithm. Most technological developments are in this area (a minimal sketch contrasting types (1) and (2) follows this list);
  3. Concurrent automation uses machine-learning and/or neural networks in ‘real-time’ to make decision-recommendations. As such, the decision-recommendations use past and current data to inform the risk-based recommendation;
  4. Autonomisation is where machine-learning and neural-network systems make decisions based on patterns in data, and these decisions have policy and legal effect.
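
To make the distinction between types (1) and (2) more concrete, the following is a minimal, illustrative sketch in Python. It is not drawn from any real government system: the eligibility rule, the past-decision records and the use of a scikit-learn decision tree are all hypothetical assumptions, chosen only to show how a hand-written decision rule (type 1) differs from a rule learned from records of previous decisions (type 2).

```python
# Illustrative sketch only: hypothetical eligibility criteria and data,
# not taken from any real government decision-making system.
from sklearn.tree import DecisionTreeClassifier

# --- Type (1): single-stage algorithmic automation --------------------
# A pre-defined decision model written as classic imperative
# 'if ... then' rules derived from (hypothetical) regulatory criteria.
def eligible_rule_based(income: float, dependants: int) -> bool:
    if income < 20_000 and dependants >= 1:
        return True
    return False

# --- Type (2): two-stage algorithmic automation ------------------------
# Stage one: machine-learning on records of previous human decisions
# produces a model of those decisions.
past_decisions = [
    # (income, dependants) -> past human decision (1 = granted, 0 = refused)
    ((15_000, 2), 1),
    ((18_000, 0), 0),
    ((22_000, 3), 1),
    ((35_000, 1), 0),
]
X = [list(features) for features, _ in past_decisions]
y = [decision for _, decision in past_decisions]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Stage two: the learned model is applied to new cases in place of a
# hand-written rule. Its behaviour depends entirely on the quality,
# detail, appropriateness and format of the training data.
def eligible_learned(income: float, dependants: int) -> bool:
    return bool(model.predict([[income, dependants]])[0])

if __name__ == "__main__":
    print(eligible_rule_based(19_000, 1))  # decided by the explicit rule
    print(eligible_learned(19_000, 1))     # decided by the learned model
```

Even in this toy form, the type (2) path shows why data quality, bias and explainability matter: the decision boundary is inferred from past decisions rather than written down, so it can reproduce past patterns, including biased ones, and is harder to account for.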

AITs and existing decision-making processes

AITs increase the complexity of decision-making processes and transform the relative importance of their different parts, even though the decision to act on recommendations in most cases remains in the hands of humans. Key elements of AI-based policy decision-making include:

  • Decision to adopt AITs, including impact assessment and ethical review;
  • Procedures for, and terms and conditions of, procurement;
  • Model, algorithm, system design and iteration;
  • Data selection, cleaning, harmonisation, storage and formatting, requiring improved data collection for all government systems;
  • Learning-system functioning in practice, its application and outcomes in specific service contexts;
  • Evaluation and audit of system adoption, relevance, and use over time, in specific services and wider institutional and policy settings;
  • System revision and termination.

As such, AITs are being introduced into bureaucratic processes and institutional contexts that are shaped by human, organisational, digital, temporal and financial resources and capacities. These can generate conflicts between:

  • Technological demands for speed, data, and sustainable computing capacity;
  • Policy and legal requirements for fair, rigorous and legible decision-making;
  • Human, technological and financial resource capacities that promote sustainable and accessible decision-making over the long-term.

These conflicts involve a wide range of political actors: senior public servants who approve, procure and regulate AIT adoption and use; public technology and data specialists who oversee the design, use and revision of the technologies; corporate actors who develop, sell and provide AITs to governments; as well as wider policy stakeholders and citizens who encounter AITs, whether knowingly or unknowingly, in their day-to-day dealings with the state.

So what to do?

Policy leaders must urgently address the challenge of increased automation and particularly the use of AITs for government decision-making. There are five inter-connected priorities:

  1. Draw on, develop, and make widespread use of the ethical frameworks around AIT adoption and use in government that have been emerging internationally in the last 12-18 months;
  2. Design new processes to manage accountability between data specialists, technology designers and professional/‘front-line’ teams;
  3. Revise and clarify procurement processes specifically for AITs, particularly around contractual obligations for openness, rigour and accessibility;
  4. Train non-specialist staff in interpreting the risk-based calculations of automated systems, and the limits of AIT-based decision-recommendations;
  5. Acknowledge their own knowledge gaps and reflexively engage inter-disciplinary teams to oversee AIT design, adoption and use, both generally, and in specific policy sectors.

 

Please note: This is a commercial profile

Contributor Profile

Dr Emma Carmel
Associate Professor & Director, MSc in Public Policy
Faculty of Humanities and Social Sciences, University of Bath
Phone: +44 (0)122 538 4685
