
New research suggests that the use of Artificial Intelligence could complicate sustainable development when left unregulated

Everyone has been thinking about Artificial Intelligence (AI) since the A-levels fiasco in the UK earlier in August. Serious questions were raised about how algorithms were asked to make nuanced decisions about academic achievement, based on factors that would never be considered in an exam. That was a localised and highly specific scenario. So how does an intricate technology like AI interact with sustainable development?

AI has also been generating intense headlines as it becomes more deeply incorporated into policing tactics. Its use in identifying suspects has prompted interesting and ongoing debates about racial bias. One fear, however, unites the political spectrum: the use of private data without appropriate regulation.

Multiple recent scandals have confirmed these threats – most infamously the Cambridge Analytica scandal of 2018, in which the personal data of millions of Facebook users was harvested by a data-mining firm without their consent. Moreover, although AI should be developed in a socially responsible way, governments often do not impose strict laws on its development, which may be detrimental to society.

Unregulated AI versus the SDGs

In a new study published in Sustainable Development, Dr Jon Truby of Qatar University discusses how unregulated AI threatens the Sustainable Development Goals (SDGs) – a set of goals adopted by the United Nations (UN) to guide the sustainable development of all countries.

Dr Truby points out that this threat is especially prevalent in developing nations, which often relax AI regulations to attract investment from Big Tech. As he explains, “In this study, I propose the need for proactive regulatory measures in AI development, which would help to ensure that AI operates to benefit sustainable development.”

In his study, Dr Truby discusses three examples of how unregulated AI can undermine specific SDGs.

SDG 16: Tackling corruption, organised crime and terrorism

To begin with, he focuses on SDG 16, a goal developed to tackle corruption, organised crime, and terrorism. Because AI is commonly used in national security databases, he explains, it can be misused by criminals to launder money or coordinate organised crime. This is especially relevant in developing countries, where input data may be easily accessible because of poor protective measures. To prevent this, Dr Truby suggests a risk assessment at each stage of AI development, and that AI software be designed so that it becomes inaccessible whenever there is a threat of it being hacked. Such restrictions can minimise the risk of hackers gaining access to the software.
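To make the idea of locking a system down under threat more concrete, here is a minimal Python sketch of that kind of control. The GuardedModel wrapper, its risk threshold and the threat signal are hypothetical illustrations of the principle described above, not code from the study or from any particular security product.

```python
# Hypothetical sketch: refuse queries to an AI system while a threat flag is set.
# The risk score, threshold and model call are illustrative assumptions only.
import logging


class GuardedModel:
    """Wraps a model so that queries are refused during a security lockdown."""

    def __init__(self, model):
        self.model = model
        self.locked = False

    def report_threat(self, risk_score, threshold=0.7):
        """Lock the model when an external risk assessment crosses the threshold."""
        if risk_score >= threshold:
            self.locked = True
            logging.warning("Model locked: risk score %.2f", risk_score)

    def query(self, request):
        if self.locked:
            raise PermissionError("Model temporarily unavailable: security lockdown")
        return self.model(request)


if __name__ == "__main__":
    guarded = GuardedModel(lambda r: f"answer to {r!r}")
    print(guarded.query("balance check"))   # served normally
    guarded.report_threat(0.9)              # e.g. an intrusion detected upstream
    try:
        guarded.query("balance check")      # now refused until the lock is cleared
    except PermissionError as err:
        print(err)
```

The point of the sketch is simply that a lockdown decision can be built into the software itself, rather than left to operators reacting after a breach.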

SDG 8: Public access to financial services

Then, Dr Truby takes the example of SDG 8, a goal that seeks to increase public access to financial services. AI is regularly used in financial institutions to make banking simpler and more efficient. But, as it learns, AI might inadvertently develop biases, such as reducing financial opportunities for certain minorities. To avoid such biases, Dr Truby explains, we need transparency in AI-driven processes. Human review and intervention at each step can ensure that such discrimination does not go unnoticed. Moreover, software developers need to be trained to recognise the harmful implications of biases, so that these systems can be regulated more effectively.
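As a rough illustration of the kind of review such transparency makes possible, the Python sketch below compares approval rates across demographic groups and flags any group falling below the common "four-fifths rule" heuristic for disparate impact. The data, group labels and threshold are illustrative assumptions; the study itself does not prescribe any particular test.

```python
# Hypothetical sketch: audit approval rates per group and flag disparate impact.
# The sample data and the 0.8 ("four-fifths rule") threshold are assumptions.
from collections import defaultdict


def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}


def flag_disparate_impact(rates, threshold=0.8):
    """Return groups whose approval rate is below `threshold` times the best rate."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]


if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = approval_rates(sample)
    print(rates)                          # per-group approval rates
    print(flag_disparate_impact(rates))   # groups a human reviewer should examine
```

A flagged group does not prove discrimination on its own; it is a prompt for the human review and intervention the study says should accompany automated decisions.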

SDG 10: Equal opportunity

Finally, Dr Truby turns to SDG 10, a goal that focuses on equal opportunity. He explains that big firms can use AI to generate employment opportunities in developing countries, but that this might threaten smaller businesses and local companies. If designed with sustainable development in mind, however, AI can create better job opportunities and increase productivity by automating labour-intensive tasks.

AI is a powerful technology that needs to be used carefully and efficiently.

Although Dr Truby is optimistic about the future implications of AI, he believes that developers and legislators should exercise caution through effective governance. He commented:

“The risks of AI to society and the possible detriments to sustainable development can be severe if not managed correctly. On the flip side, regulating AI can be immensely beneficial to development, leading to people being more productive and more satisfied with their employment and opportunities.”

Read the study here.
