The dangers of AI are becoming increasingly apparent to governments and institutions globally. Experts at Oxford demonstrate the ‘profound risk’ AI poses to humanity

What are the dangers of AI? While artificial intelligence has proven beneficial to key sectors like health, innovation and government, experts are becoming increasingly worried about its negative consequences.

Writing about this are Professor Brent Mittelstadt and Professor Sandra Wachter from the Oxford Internet Institute, who warn of the dangers of AI and argue that it needs to be regarded as a present issue, not a future one.

So far in 2023, an open letter signed by Elon Musk and others has circulated, warning of the ‘profound risk’ AI poses to humanity and calling for a temporary halt to development. Other large bodies, such as the EU, have also expressed deep concern about AI models and their lack of regulation.

2023 is not the first time concerns have been raised about AI use

Experts in AI and technology have been warning of the dangers of AI for many years, including those who engineered these models themselves.

They note the opinions of key creators like Geoffrey Hinton, the Google engineer and ‘godfather of AI’, who has himself described his concerns about new large language models and artificial general intelligence.

Now, it has become commonplace for algorithms to mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment.


AI is already exacerbating many challenges society faces, such as bias, discrimination and misinformation, while further dangers include debates over consumer privacy, biased programming, and unclear legal regulation.

The authors from the Oxford Internet Institute state: “We are not currently on a path towards ‘intelligent’ machines that can surpass and supplant human intelligence, assuming such a thing is even possible.

“What is worrying is that dwelling on imagined future catastrophes diverts attention from real ethical dangers posed now by AI.”


Bias and discrimination

There is no such thing as neutral data, the experts write. Machine learning systems can learn our biases, such as in healthcare, and can also reinforce them or introduce new ones through their code.

For instance, an AI recruitment programme might favour men for a job in computing because it has learned that most computer experts in its training data are men. Another example is in hospitals, where ethnic minority patients receive a lesser standard of care due to bias.
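The recruitment example can be made concrete with a minimal sketch. Using entirely hypothetical data (the numbers and the naive frequency-based "model" below are illustrative assumptions, not any real recruitment system), a model that learns hiring rates from skewed historical records simply reproduces the imbalance it was shown:

```python
# Minimal sketch with hypothetical data: a naive model trained on skewed
# historical hiring records reproduces the gender imbalance it observed.
from collections import Counter

# Invented history: 90 of 100 past hires were men, an artifact of past
# hiring practice rather than candidate ability.
history = (
    [("male", True)] * 90 + [("female", True)] * 10
    + [("male", False)] * 50 + [("female", False)] * 50
)

def hire_rate(gender):
    """Estimate P(hired | gender) from the biased history."""
    counts = Counter((g, h) for g, h in history if g == gender)
    total = counts[(gender, True)] + counts[(gender, False)]
    return counts[(gender, True)] / total

def score(candidate_gender):
    # Ranking candidates by their group's historical hire rate encodes
    # the past bias directly into the model's predictions.
    return hire_rate(candidate_gender)

print(round(score("male"), 2))    # 90/140 -> 0.64
print(round(score("female"), 2))  # 10/60  -> 0.17
```

The point is that no malicious rule was written anywhere: the bias emerges solely from the distribution of the training data, which is why "neutral data" is so elusive in practice.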


Misinformation and moral responsibility

AI, and especially large language models, lower the costs and effort needed to create and spread misinformation.

An example of this can be seen with Stack Overflow, a website used by developers, which had to ban answers and code generated by ChatGPT because its output looks like a human answer but is often incorrect.

‘Traceability leading to moral responsibility’ is discussed in the researchers’ paper, which addresses the potential lack of moral agency of algorithms, meaning AI allows misinformation to spread without responsibility or consequence.

Exploring the environmental impact of AI

Other dangers of AI include its heavy environmental toll from the natural resources and hardware it consumes.

Large language models produce more emissions than the aviation industry

Increasingly large language models produce more emissions than the aviation industry, and their datasets and models are extremely resource-intensive.

The experts state that a medium-sized data centre is estimated to use 360,000 gallons of water a day for cooling.

Of all the dangers of AI, we cannot be sure of the true size of its environmental impact, because the necessary data is not public and the figures therefore cannot be fully assessed.

