How the public sector can use AI ethically


Asheesh Mehra, CEO and Co-Founder, AntWorks, advises how governments can adopt artificial intelligence (AI) in an ethical and measured way

Humane, responsible AI is the future, and its applications for solving some of the greatest challenges facing the world, societies and enterprises today are limitless. The private sector has benefited greatly from this powerful technology: it has helped businesses achieve scale, increased labour productivity, driven the bottom line and, in the midst of the COVID-19 pandemic, allowed many companies to maintain operations. While the private sector has made significant inroads with advanced technologies, a great deal can also be gained from their ethical use in the public sector.

Examples of AI adoption in the public sector can already be found in governments across the world. An Accenture survey of public sector leaders found that 83% of senior public sector leadership teams are both able and willing to adopt intelligent technologies. In Singapore, where I live and where my company is headquartered, the government uses chatbots to respond to queries from the public, removing the need for citizens to scroll through page after page of government sites to find answers to their most frequently asked questions. In the UK, which ranked second globally in AI readiness, healthcare systems are beginning to implement a remote monitoring system that harnesses AI to detect whether symptoms have worsened.

Yet alongside AI’s massive potential, there are also clear challenges to avoid, both at the public and private sector level. First and foremost, AI must be developed and implemented with care and consideration, particularly to avoid misuse and unintended consequences.

The public sector needs to focus on ethical principles and practices when considering implementing AI into their systems. Here’s how focusing on ethical AI can transform the public sector, and why the government needs to be smart in regulating AI applications.

Smart regulation

The UK's AI industry is the third-largest in the world by investment, behind only the US and China. This gives the UK government a significant responsibility to take the lead in creating and implementing rules on AI use. Some people, myself included, call that responsibility ethical AI, and it means that companies and governments must be accountable. Legislators and regulators have a key role to play in AI accountability. That involves specifying the applications for which AI can and cannot be used; AI as a technology should not itself be regulated. Instead, governmental regulations should standardise how the technology can be used and in which contexts. For example, regulations should indicate that applying AI is appropriate for particular purposes in specific industries, while other laws or rules should make clear which applications of AI are not allowed.

There are certain regulations and codes of conduct that need to be revised for the AI era, however. Take the Nolan Principles, for example, which set the ethical standards upon which public service should be conducted: rules that must be upheld by those working in public service to instil confidence in the public. The Nolan Principles remain a good framework for public service, but they need to be put into context as AI goes mainstream. For instance, honesty, integrity and openness, key Nolan principles, must be applied in the context of AI, which means every public body that uses AI must use it responsibly.

How the public sector can deploy ethical (and effective) AI

How good an AI application is depends on the data it uses, and how effectively it uses that data. This data includes structured data, such as names, dates, stock information and geolocation, and unstructured data, such as email files, images, video and social media posts.

Using a cognitive data-digitisation platform that can process both structured and unstructured data, and that makes decisions based on a holistic view and broad insights, gives an organisation a much bigger picture of what is going on, vastly reducing the risk of bias in its datasets. An AI system that functions like the human brain can do this, looking analytically at unstructured data in order to create a balanced and ethical interpretation of that data.
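As an illustrative sketch only (not any specific vendor's platform), combining structured records with fields extracted from unstructured text might look like the following; the field names and the extraction rule are hypothetical:

```python
import re

# Structured record: well-defined fields, ready for analysis.
structured = {"citizen_id": "C-1042", "service": "housing-benefit"}

# Unstructured input: free text, e.g. from an email or web form.
unstructured = "I applied on 12/03/2024 but have heard nothing back."

def extract_fields(text):
    """Pull structured fields out of free text (hypothetical rule)."""
    fields = {}
    date = re.search(r"\b(\d{2}/\d{2}/\d{4})\b", text)
    if date:
        fields["application_date"] = date.group(1)
    return fields

# Merge both views into one holistic record for downstream decisions.
record = {**structured, **extract_fields(unstructured)}
```

Real cognitive platforms use far richer techniques than a regular expression, but the principle is the same: unstructured sources are converted into structured fields so that decisions rest on the fullest available picture.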

For ethical AI to work, the government must work with the private sector to ensure that AI solutions allow for auditability and traceability. These traceability capabilities exist on your laptop today. If you gave your computer to forensic investigators, they could dig deep and see every website you have ever visited. They could identify what information you downloaded, what data you copied and everything else you did on the machine. However, AI algorithms and engines don't always work that way: such technologies can immediately sweep up these identifying footprints, erasing the record of what occurred and the ability to assess what happened and when.

That makes it difficult to learn from mistakes, police for possible infractions and identify those who don’t follow organisational or regulatory rules. Governments using AI should consider the importance of auditability and traceability to their enforcement and compliance efforts, and only then will AI be able to deliver transparent services to the public.
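One common way to provide the auditability described above is an append-only decision log in which each entry carries a hash of the previous one, so any later tampering with the record is detectable. A minimal sketch, with hypothetical entry fields:

```python
import hashlib
import json
import datetime

def append_entry(log, decision, inputs):
    """Append a decision record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "inputs": inputs,
        "prev_hash": prev_hash,
    }
    # The entry's hash covers its own contents plus the previous hash.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "approve", {"applicant": "C-1042"})
append_entry(log, "refer-to-human", {"applicant": "C-1077"})
assert verify(log)            # the intact chain validates
log[0]["decision"] = "deny"   # tampering with history...
assert not verify(log)        # ...breaks the chain and is detected
```

A production system would also record model versions and store the log outside the reach of the system being audited, but even this simple chaining shows how a regulator could check what an automated decision looked at, and when.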

The future of AI in government

With individuals like Dominic Cummings striving to transform and modernise the public sector, AI could be UK public servants' greatest ally as the country moves beyond the COVID-19 era. First, however, ministers need to address how to ensure that AI is used ethically in public services, whilst reassuring civil servants that their jobs are secure.

This ties into a wider discussion about giving civil servants more responsibility in the workplace, and using technology as a way to become more productive rather than letting it take over the job itself. Developing an algorithm, or any AI system, that is ethical first requires ethical hiring practices, which means diversity in recruitment so that biases can be combated more naturally. The future productivity of the government relies on its readiness to adopt ethical AI, and time will tell whether the UK government chooses to implement next-gen tech, which will be all the more important for improving citizens' lives, particularly given issues like Brexit and managing recovery in a post-COVID economy.
