Building trustworthy AI: Governance, legal transformation and the ethics of emerging tech


Kay Firth-Butterfield, former inaugural Head of AI and Machine Learning at the World Economic Forum, explores how to build trustworthy AI and weighs the evolving benefits and risks of generative AI

Ranked among the world’s foremost artificial intelligence speakers, Kay Firth-Butterfield is a trailblazer in AI ethics and governance. She was the world’s first Chief AI Ethics Officer and served as the inaugural Head of AI and Machine Learning at the World Economic Forum.

Named to the TIME100 Impact List and Forbes 50 Over 50: Innovation, her influence spans policy, law, business, and education. In this exclusive interview with Champions Speakers Agency, she shares her insights on responsible AI adoption, the promise and pitfalls of generative AI, and how technology is set to transform the legal profession and beyond.

Q: As AI capabilities rapidly expand, what key principles should guide businesses in deploying artificial intelligence responsibly and effectively?

That’s a huge question, because there are so many ways AI can be used in a business. But I’d say to use it successfully, you need to be very aware of the responsible—or trustworthy—aspects of artificial intelligence.

You shouldn’t deploy AI—or, if you’re creating it yourself, design and develop it—without keeping ethics in mind. We call it ‘responsible’ or ‘trustworthy AI’ now, because those factors might affect successful deployment if you don’t get it right.

There’s the possibility of serious damage to a company, not just brand or customer loss, but financial loss too. More and more regulators are starting to sue those using AI irresponsibly, or without built-in trust. No one wants to be seen as untrustworthy with AI—it’s not a good look.

Where to deploy AI? Some common uses are in human resources, such as helping with talent spotting. However, there are big problems with using AI in HR because it can bring in human biases. We’re seeing some lawsuits in the US where companies unwisely bought AI for HR and are being sued for using discriminatory tech. You have to be really careful—it’s a balance between AI’s benefits and thoroughly thinking through buying and deploying these systems.

Other AI business uses? Manufacturing companies across factory floors, and drug companies to help design pharmaceuticals. For example, DeepMind’s AlphaFold enabled big advances in using AI for biological work.

There’s generative AI, which everyone’s talking about. You could use it in business, but be really aware that if you use models like ChatGPT, the data you feed it goes in and could come out anywhere. Don’t give it trade secrets. We saw a confidential Samsung memo get leaked globally when an employee had ChatGPT transcribe it.

So, if you’re using generative AI in business, understand what AI is. It just predicts the next word—it’s not actually intelligent. Let teams play with it after your legal department greenlights it, and your C-suite understands AI and how you use it, with guidelines from your CTO or CIO.
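Her point that a large language model "just predicts the next word" can be illustrated with a toy sketch. The bigram table and probabilities below are invented for illustration; real models learn token distributions from vast corpora, but the core loop is the same: predict a distribution over the next token, sample from it, and repeat.

```python
import random

# Toy "language model": each word maps to candidate next words with
# probabilities. This stands in for the learned statistics of a real LLM.
BIGRAMS = {
    "the": [("model", 0.5), ("data", 0.5)],
    "model": [("predicts", 1.0)],
    "predicts": [("the", 1.0)],
    "data": [("flows", 1.0)],
}

def next_word(word):
    """Sample the next word from the toy distribution, or None at a dead end."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs, k=1)[0]

def generate(start, max_len=6):
    """Repeatedly predict-and-append: the basic loop behind text generation."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Nothing in this loop understands the text; it only continues statistically plausible sequences, which is why outputs can sound fluent while being wrong.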

Q: In the context of accelerating digital transformation, why is it essential for organisations to engage with emerging technologies, and how can they do so without falling into common implementation pitfalls?

I think it’s important to make use of the latest technologies in the same way as it would have been important for Kodak to notice there was a change coming in the photography industry. Businesses that don’t at least look at digital transformation are going to find themselves on the back foot.

But a word of caution here: you can also go hell for leather and then find that you have the wrong AI or the wrong systems for your business. I would say that it’s really important to practise caution, keep your eyes open, and think about this as a business decision every step of the way.

It’s particularly important when you decide that yes, you’re ready to use artificial intelligence to, I would say, hold your suppliers’ feet to the fire—ask the right questions, ask detailed questions. Make sure you have somebody in-house or a consultant who will enable you to ask the right questions. Because, as we all know, one of the greatest wastes of money in digital transformation is if you don’t ask the right questions and do it correctly.

Q: Generative AI is transforming how we interact with content and data. In your view, what are the most significant benefits it offers, and what risks should we be paying attention to?

All of us can use AI now—it’s a hugely democratising tool. It means small and medium-sized enterprises that could not have used AI in the past can now do so.

When we talk about generative AI, we also need to recognise that the bulk of the world's data is created first in America, then in Europe and China.

There are several data challenges in terms of the data that these large language models are using. They’re not actually using the world’s data—they’re using a small subset—and we are beginning to talk about digital colonisation. We’re projecting content derived from data from America and Europe to the rest of the world, and also expecting them to use that content.

Obviously, different cultures need different answers. So, there are lots and lots of really beneficial parts of generative AI, but also some really big challenges ahead of us.

Q: From your perspective, how will technologies like generative AI reshape the legal profession—and what implications could this have for training, ethics, and access to justice?

When we talk about the legal system, I think we’re not only talking about the legal system, but other professions as well—like medicine, accountancy, insurance, early and middle management in business.

Generative AI can do tasks we currently give to young lawyers, like taking notes and transcribing recordings, so you won't need them for that anymore.

It can research, if it has access to the right databases—but there’s a lot of discussion about it potentially breaching copyrights and IP. If it only has open internet data, it won’t access proprietary sources behind paywalls. You have to be careful when deploying it that it has the data you actually need. But if so, it can research quicker than humans.

It can write emails. Across professions and businesses, we may see roles like paralegals and junior lawyers disappear.

Is that a good thing? It cuts legal costs, helping access, but challenges training and experience building. How do you become a senior partner without that junior role?

Clients will come in citing machine advice and questioning their human lawyers. We have to interact with these tools carefully, the way pilots use autopilot. Blind faith in AI causes problems, as we have seen with sentencing tools: judges simply following the recommendation, rather than combining human judgement with the machine's strengths.

There are major changes coming—for law and beyond.

This exclusive interview with Kay Firth-Butterfield was conducted by Mark Matthews.
