Cambridge Consultants today unveiled the five missing ingredients of responsible governmental artificial intelligence (AI).
A new report, to be published at Mobile World Congress, states that while the research and application of AI techniques are quickly coming to the attention of governments across the globe, their adoption often lacks a holistic governance framework.
It confirms that the key to successful collaboration, and to responsible AI deployment in government, lies in the following five factors:
- Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
- Explainability: It needs to be possible to explain to the people affected why an autonomous system behaves the way it does.
- Accuracy: Sources of error need to be identified, monitored, evaluated and, where appropriate, mitigated or removed.
- Transparency: It needs to be possible to test, review, criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be made publicly available and explained.
- Fairness: The way in which data is used should be reasonable and should respect privacy. This will help remove biases and prevent other problematic behaviour from becoming embedded.
Commenting on the report, Michal Gabrielczyk, Senior Technology Strategy Consultant, said: “These principles, however they might be enshrined in standards, rules and regulations, give a framework for the field of AI to flourish within government whilst minimising risks to society and industry from unintended consequences.
“Only by laying the groundwork and guidelines for effective, reliable AI today can we build consumer faith and enable an exciting future, while maintaining a firm control of costs as AI-based outputs evolve.”