EU’s AI legislation will help businesses realise the full potential of AI


Rachel Roumeliotis, Vice President of Data and AI at O’Reilly, explores how the EU’s recent AI legislation will help businesses realise the full potential of artificial intelligence (AI)

For years there has been speculation about how AI will be regulated as more organisations integrate it into their systems and processes. In April this year, the EU proposed AI legislation, and since then many have questioned how it will impact businesses. At the moment, the overarching feeling is that the legislation will benefit society as a whole, but it might change how companies adopt and develop AI.

Despite a degree of uncertainty, recent ‘AI in the Enterprise’ research from O’Reilly shows more businesses than ever are using or considering AI as part of their long-term strategy. However, just over half (52%) are checking for bias or issues of fairness in their AI systems.

While so much focus has been on how to design and implement AI technology, trust has become a key inhibitor to progress. This is even more of an issue in the public sector, where AI-assisted decisions can have a huge impact on people’s day-to-day lives. The new AI legislation from the EU hopes to remedy some of these issues and provide the public with more reassurance about AI. Putting ethics front and centre will, in time, build trust and allow organisations to expand where and how they implement AI.

Getting the AI ball rolling

The AI train has been rapidly gaining momentum in recent years, both in terms of business usage and results. We’re now seeing the technology used for cancer detection, climate change analysis, traffic control and business marketing. Globally, a quarter (26%) of businesses have reached the ‘mature’ stage of AI usage, meaning they have AI products in production. In the UK, this figure is even higher, with 36% classifying their AI usage as ‘mature’.

Looking at the industry breakdown, retail came out on top, with 40% claiming that their usage of AI was mature. This was closely followed by financial services (38%) and telecommunications (37%). Comparatively, education (10%) and government (16%) were the least mature in their usage of AI.

The stats suggest that, while AI adoption in the private sector is snowballing, the public sector is struggling to keep up. The question is: why?

Pulling it all together

There is likely more than one factor behind the public sector’s slower uptake of AI. Budgetary concerns could certainly play a part, but perhaps not enough to account for such a large difference between the public and private sectors. The other glaring issue is public trust.

The general public already had their guard up against the use of AI in the public sector. Their worst fears were then proven correct in 2020, when A-Level and GCSE grades were predicted using an AI algorithm that faced accusations of bias. This led to the results being scrapped and replaced by teachers’ predicted grades. It’s examples like these that damage public trust in AI.

In terms of checking AI models for bias, the UK is ahead of the global standard. Across the globe, just 52% of companies are checking their algorithms for bias; in the UK, this figure rises to 56%. However, when it comes to decisions that impact people’s lives and their futures, a little better than half isn’t enough. This applies to both the public and the private sector: private companies, such as banks, also have the power to make decisions that can change people’s lives.

The EU’s AI legislation, which focuses heavily on AI ethics, should force companies to confront these shortcomings. It should also be the starting point for organisations to build public trust and, in time, release the handbrake that is holding AI back. A more educated approach to AI will be key to achieving this.

Letting it loose

It’s clear that not enough businesses are checking for bias in their AI models. However, research suggests that this isn’t necessarily negligence but, instead, a lack of training and skills. Globally, the biggest bottlenecks to AI adoption are a lack of skilled people (19%) and data quality (18%). In the UK, a quarter (25%) cited a lack of data or poor data quality as a major hindrance, and 14% said the same about skills within the organisation.

This skills gap is already having a huge impact on the adoption of AI and, with the introduction of the EU’s AI legislation, will have an even greater one if businesses do not act soon. Half of UK businesses admitted that only about 50% of their AI projects are actually completed. Meanwhile, as we’ve seen, those that are completed run the risk of being biased. Moving forward, neither of these outcomes will be profitable for companies.

To close this skills gap, businesses must ensure that they are providing adequate training for the employees who work with AI. This means equipping them with the knowledge needed to develop and train algorithms that are both highly functional and ethical. Feeding an algorithm high-quality, unbiased data is the first step, but employees must also be trained to routinely check its outputs for bias or inconsistencies and make the necessary changes.
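To make that routine check concrete, here is a minimal sketch in Python of one common fairness test, a demographic parity check, which compares a model’s positive-decision rates across groups. The column names, data and warning threshold here are all hypothetical and would need adapting to a real system.

```python
# A minimal sketch of a routine bias check, assuming a binary classifier
# whose decisions and a sensitive attribute (e.g. a demographic group)
# are available. Column names ("prediction", "group") are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           prediction_col: str = "prediction",
                           group_col: str = "group") -> float:
    """Difference between the highest and lowest positive-outcome rates
    across groups; 0.0 means equal rates, larger values suggest the
    model favours some groups over others."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Example: loan approvals (1 = approved) for two hypothetical groups.
decisions = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 for this toy data
if gap > 0.1:  # the threshold is an assumption, set per use case
    print("Warning: approval rates differ notably across groups")
```

A check like this is deliberately simple; in practice, teams would run several fairness metrics on a schedule and investigate any that drift, which is exactly the kind of habit training should instil.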

With the introduction of the new AI laws, some employees may be nervous about making a mistake. Businesses can take this fear away by empowering their employees to learn in the flow of work. This means allowing them to ask questions and receive quick answers, based on the most up-to-date guidance, which they can apply to their work. The learning platforms to enable this already exist, and it’s now time for employers to start leaning on them, or risk being among the first organisations to feel the sting of the new AI legislation.

It would be easy to see these AI regulations as curtailing technological advancement, but they should really be seen as an opportunity to get the most out of AI and to educate people on its potential. By working within the guardrails set out by the new laws, organisations will be able to keep pushing the boundaries of AI while also instilling public trust in the process, allowing them to test with more confidence and less bias. AI is undoubtedly a technology that can make a big impact, and deploying it responsibly will ultimately increase that impact.
