Governing artificial intelligence in Europe

Gaurav Kapoor, Chief Operating Officer at MetricStream, discusses how to govern artificial intelligence in the contemporary age

The world has evolved rapidly in the last few years, and artificial intelligence (AI) has often led the change. The technology has been adopted by almost every industry, with companies exploring how AI can automate processes, increase efficiency, and improve business operations.

AI has certainly proved how it can benefit us all, but a common misconception is that it is always objective and free of bias, opinion, and ideology. Based on this understanding, recent years have seen a rise in companies using AI-based recruiting platforms in a bid to make the hiring process more efficient and devoid of human bias.

Yet a Financial Times article quoted an employment barrister who doubted the progressive nature of AI tools, saying there is “overwhelming evidence available that the machines are very often getting it wrong”. A high-profile example came in 2018, when Amazon abandoned its AI recruiting tool after realising it was favouring men for technical jobs.

However, AI has continued to advance at a rapid pace, and its adoption by businesses has accelerated further following the arrival of COVID-19. With debates over whether AI can be relied upon to behave impartially still ongoing, how can the technology be governed so that organisations continue to act ethically?

Grappling with a complex beast

During a press conference in Brussels earlier this year, the European Commission said it was preparing to draft regulation for AI that will help prevent its misuse, but the governing body has set itself quite the challenge. The technology is developing so quickly that, after only a few weeks, any regulation that is introduced may not go far enough; after a few months, it could be completely irrelevant.

Within the risk community, however, there is no doubt that policies are needed: a study found that 80% of risk professionals are not confident in the AI governance currently in place. At the same time, there are concerns from technology leaders who believe tighter regulation will stifle AI innovation and obstruct the potentially enormous advantages it can bring to the world.

A certain level of creative and scientific freedom is required for companies to create innovative new technologies. Although AI can be used for good, the increasing speed with which it is being developed and adopted across industries is a major consideration for governance, and the ethical concerns need to be addressed.

How forward-looking risk management can help businesses

Given the current and ongoing complexities that the global pandemic brings, as well as the looming Brexit deadline, we will likely have to wait for the EU’s regulation to be finalised and put in place. In the meantime, businesses that haven’t already done so should get their houses in order with their use of AI and their governance, risk and compliance (GRC) processes, to ensure they are not caught out when legislation does arrive.

By setting up a forward-looking risk management program around implementing and managing the use of AI, organisations can improve their ability to handle both existing and emerging risks by analysing past trends, predicting future scenarios, and proactively preparing for further risk. A governance framework should also be implemented around AI, both within and outside the organisation, to better overcome any unforeseen exposure to risk from evolving AI technologies and an ever-changing business landscape.

Much as internal controls and regulators in the financial services sector require businesses to regularly validate and ‘manage’ their own models, AI model controls are already being put in place, reflecting the widespread use of AI within enterprises. It won’t be long before regulators begin to demand proof that the right controls exist, so organisations need to monitor where AI is being used for business decisions and ensure the technology operates accurately, free of inherent biases and incomplete underlying datasets.
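As an illustration of what such a control might look like in practice, the sketch below computes a simple disparate impact ratio, comparing a model’s selection rates across groups; the 0.8 threshold follows the common ‘four-fifths’ rule of thumb. The data, group labels and function name are hypothetical, not taken from any particular GRC product.

```python
# A minimal sketch of a bias check an organisation might run over the
# decisions of an AI system. All data and names here are illustrative.

def disparate_impact_ratio(decisions, groups, positive="hired"):
    """Return the lowest group selection rate divided by the highest."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = outcomes.count(positive) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: outcomes of an AI recruiting tool.
decisions = ["hired", "rejected", "hired", "hired", "rejected", "rejected"]
groups    = ["A",     "A",        "A",     "B",     "B",        "B"]

ratio, rates = disparate_impact_ratio(decisions, groups)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' rule of thumb
    print("Warning: potential adverse impact -- review the model.")
```

A check like this is only one proof point, but running it routinely over live decisions is exactly the kind of evidence regulators are likely to ask for.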

When an organisation applies such governance and a forward-looking risk management program to its use of AI, it will certainly be better positioned once new regulation is eventually enforced.

Importance of centralising information

Too often, businesses operate with multiple information silos created by different business units and teams in various geographic locations. This can lead to information blind spots, and a recent Gartner study found that poor data quality is responsible for an average loss of $15 million per year.

Now more than ever, businesses need to be conscious of avoiding unnecessary fines, as the figures can be crippling. Hence, it is important that these restrictive silos are removed in favour of a centralised information hub that everyone across the business can access. This way, senior management and risk professionals are always aware of their risks, including any introduced by AI, and can be confident they have a clear view of the bigger picture, allowing them to respond efficiently to threats.

Another reason for moving towards centralisation and complete visibility throughout the business is the commonly cited explanation for why AI fails to act impartially: AI systems learn to make decisions from the training data that humans provide. If this data is incomplete, contains conscious or unconscious bias, or reflects historical and social inequalities, so will the AI technology.
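To make that mechanism concrete, here is a deliberately simple sketch in which a trivial model is ‘trained’ on hypothetical historical hiring records that favoured one group; the model then reproduces exactly that inequality. The records and the model are illustrative only, not a real recruiting system.

```python
# A minimal sketch of biased training data propagating into a model.
# Hypothetical history: group "A" was systematically favoured over "B".
from collections import Counter

historical_hires = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "hired"),
]

# "Training": record the most common historical outcome for each group.
outcomes_by_group = {}
for group, outcome in historical_hires:
    outcomes_by_group.setdefault(group, Counter())[outcome] += 1

model = {g: c.most_common(1)[0][0] for g, c in outcomes_by_group.items()}

# "Prediction": two otherwise identical candidates get different outcomes,
# because the model has simply memorised the historical inequality.
print(model["A"])  # -> hired
print(model["B"])  # -> rejected
```

Real AI systems are far more sophisticated, but the underlying failure mode is the same: the model can only reflect the data it was given.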

While an organisation may not always be responsible for creating AI bias in the first place, having good oversight and complete, centralised information to hand at any time makes it much easier to see where there are blind spots that could damage a company’s reputation.

Ultimately, it is down to organisations themselves to manage their GRC processes, maintain clear oversight of the entire risk landscape, and strongly protect their reputation. One outcome of the pandemic is an increased focus on ethics and integrity, so it is critical that organisations hold these values at the core of their business model to withstand scrutiny from regulators, stakeholders and consumers. Until adequate regulation is introduced by the EU, companies essentially need to take AI governance into their own hands to mitigate risk and to act with integrity at all times.
