Is insurance the answer to Europe’s AI worries?


Saar Yoskovitz, Co-Founder & CEO at Augury, argues that insurance, not regulation, is the key to monitoring artificial intelligence (AI)

Businesses are entering a new chapter in the adoption of AI technologies as the capabilities of the solutions improve, and development and implementation become easier.

The pandemic has undoubtedly been an accelerant here, with business leaders in three key industries – financial services, technology and retail – reporting significant increases in the use of the technology so far this year compared with 2020.

Although AI is clearly an exciting development, business leaders have urged caution over the speed at which the new technology is being deployed across industries. To build confidence, drive adoption and ultimately improve efficiency with AI, businesses need reassurance that the associated risks can be mitigated.

AI guardrails

Governments around the world have started to develop strategies for AI, focussing on ways to better govern the use of the technology to protect and benefit society.

The latest is from the European Commission, which outlined new proposals that state “unacceptable” uses of AI will be banned, and companies in breach will be hit with large fines. AI that is deemed to impact the safety, livelihood or rights of people will be assigned the highest risk category, with a particular focus on systems that mimic human behaviour or use any kind of state-controlled social scoring.

These moves are to be welcomed, and regulation like this is vital for putting in place tighter controls on the technology’s use in society. However, when it comes to the commercial use of AI, businesses can’t rely on government regulation to protect them against potential losses in the event the technology fails to live up to its promise.

A complex business

As AI penetration and sophistication mature across industries, the technology will increasingly make high-risk decisions. But AI models are often brittle, handle edge cases poorly and may have been trained on datasets with inherent biases.

This is especially true of AI systems that predict human behaviour or use it as an input. The margin of error on these decisions can be very slim, and errors can have serious consequences.

But it’s not always clear who is responsible when AI systems fail. I can sue my doctor for malpractice, but who do I sue if an algorithm makes an inaccurate clinical recommendation to that doctor? Do I sue the doctor, the hospital that purchased the software, the software vendor, or perhaps the provider of the training dataset?

For enterprises where the stakes are lower (where AI does not directly involve human behaviour, for example), the pursuit of efficiency gains can be balanced against the need to mitigate risk. But how can this be achieved? Do AI vendors guarantee the accuracy of their algorithms? Do insurance companies cover the risks associated with AI products?

The need for an insurance policy

The use of any high-investment, high-risk product needs the reassurance of a safety net should the worst happen. This is important to spur adoption, particularly in industries such as manufacturing. Analysis of global manufacturing companies suggests 74% are stuck in ‘pilot purgatory’, having not yet successfully scaled digital transformations as part of the wider Industry 4.0 movement. AI plays an important role in digital transformation use cases here, and greater support for businesses looking to implement new solutions could help to improve the adoption rate.

Insurance companies have quantified and mitigated new types of risk for centuries, and there’s no reason this trend shouldn’t continue with AI. More broadly, insurers can help enterprises at three stages of AI adoption by:

  • Choosing an AI solution: Many vendors offer AI solutions. Which ones can enterprises trust to work for a particular use case? For example, the industrial sector has been slow to adopt AI technology at scale due to inexperience and perceived AI risks. Insurers who perform due diligence on specific AI solutions, and then back validated solutions with insurance products, can become trusted advisers.
  • Deploying an AI solution: Once an AI solution is deployed, what if it doesn’t deliver the promised outcomes? What if it makes mistakes that result in losses? Again, insurers have the skills to validate that a particular solution delivers results in production and can reimburse customers if they suffer losses when a validated solution does not perform as expected. In this scenario, the interests of the insurer, AI vendor and enterprise customer all align.
  • Scaling AI solutions: As AI becomes more widely used within enterprises, AI solutions will make more high-risk decisions, potentially with less human oversight. This increases the risks and makes insurance for AI even more crucial to supporting adoption at scale.

Alongside regulation, insurance can help to mitigate the risks enterprises face when deploying AI at scale, freeing businesses to concentrate on maximising the benefits the technology provides.
