Georgia Shriane, Senior Associate at Boyes Turner, illuminates the realities of the fourth industrial revolution: Where is AI due to take us?
AI has been heralded as the fourth industrial revolution, and it undoubtedly has a huge number of applications with the potential to affect every aspect of daily life. It can help remove drudgery, but it will also create challenges through the unethical use of data and the loss of jobs. The official US government strategy is to fire up the private sector to lead research and development and compete to develop AI, subject to "self-regulation". This development has the government's full backing and a sizeable R&D budget underpinning it.
The power of AI in America
In February 2019, President Trump issued an Executive Order launching the American AI Initiative, stressing that the government would play an important role in facilitating research and development in this field, as well as in training people for the changing workforce and improving outcomes for workers (acknowledging that many menial jobs may fall to AI "robots"). His emphasis is on the "paramount importance to maintaining the economic and national security" of ensuring America remains the leader in the development of AI. This view must be seen in the global context of Putin's famous remark that "whoever becomes the leader in this sphere [of AI] will become the ruler of the world", and the alleged interference by Russia to influence the outcome of the US elections (and, some say, Brexit). As a result, AI is the second highest priority for research and development funds for 2020, after national security, backed by a pledge of over $2 billion over the next five years.
In 2018, President Trump said "to the greatest degree possible, we will allow scientists and technologists to freely develop their next great inventions right here in the United States," with the removal of regulatory barriers to the deployment of AI technologies. US policy focuses on the radical improvements in society, quality of life and security that AI could effect, and the President's vision includes the use of AI in medicine, national security, industrial robotics and the autonomous car industry.
Tempering this development, the US signed up to the OECD Principles on AI in May 2019, which identify five non-legally-binding general principles for the trustworthy development of AI. There have also been congressional hearings, including one in June 2019 considering national security, manipulated media and "deep fakes".
The cautious use of AI in the EU
In contrast to all this, the EU is taking a very cautious approach to the development and application of so-called artificial intelligence (AI) software, which is, of course, just sophisticated software programmed to draw conclusions rapidly from the data it collects. Computers can no more think now than they could before; they simply compare new data against the comparison data they have been supplied with, and make predictions or produce results using pre-programmed processes.
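That comparison-driven process can be sketched with a toy example: a one-nearest-neighbour classifier that labels new data by finding the most similar stored example. (The function, data and labels here are invented purely for illustration and do not represent any real product.)

```python
# Toy sketch of "AI" as comparison against stored data: a 1-nearest-neighbour
# classifier labels a new input by finding the most similar past example.
# No thinking involved -- just a pre-programmed comparison process.

def nearest_neighbour(comparison_data, new_point):
    """Return the label of the stored example closest to new_point."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(
        comparison_data, key=lambda item: squared_distance(item[0], new_point)
    )
    return label

# Invented comparison data: (features, label) pairs, e.g. sensor readings.
examples = [((0.0, 0.0), "low traffic"), ((1.0, 1.0), "high traffic")]
print(nearest_neighbour(examples, (0.9, 0.8)))  # prints "high traffic"
```

The output is only as good as the stored comparison data: the system has no understanding of traffic, only of which stored example a new reading most resembles.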
Some of the more useful and less controversial applications of AI already deployed in European smart cities are traffic-control software (improving traffic flows and reducing pollution) and energy-saving software (controlling the use of light and heat in public spaces). Here, automated decision-making by AI is likely to be seen as overwhelmingly beneficial for people and the environment through the efficient interpretation of large volumes of data.
The negatives of AI implementation
There are other examples, though, of AI being used in less socially beneficial ways, notably the Cambridge Analytica Facebook scandal, which erupted in March 2018 and did much to dent public trust in the use of AI. Here, software algorithms were used to harvest personal information from people's Facebook profiles without their knowledge or consent, and this data was then used for political purposes (targeting political messages to sway voters).
Since then there have been other, equally unpleasant allegations concerning the use of AI, such as race bias in live facial recognition (LFR) software.

Because unreliable and biased data has been used as the comparison data in LFR, it has been found to work more accurately on white men than on any other social or ethnic group, leading to the misidentification (and in some cases, repeated arrest) of people from non-white ethnic backgrounds.
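A toy sketch shows how skewed comparison data can produce uneven accuracy (all numbers are invented for illustration; this is not how any real LFR system works): a matching threshold tuned on data dominated by one group accepts genuine matches from that group far more reliably than matches from an under-represented group.

```python
# Toy illustration (hypothetical numbers): a matching threshold tuned on
# comparison data dominated by one group is far less reliable for an
# under-represented group.

# Invented similarity scores that a matcher might produce for genuine matches.
scores_group_a = [0.92, 0.88, 0.90, 0.91, 0.89, 0.93, 0.87, 0.90]  # well represented
scores_group_b = [0.81, 0.72, 0.85, 0.78]                          # under-represented

# The acceptance threshold is tuned on the pooled, group-A-dominated data.
pooled = scores_group_a + scores_group_b
threshold = sum(pooled) / len(pooled) - 0.02

def match_rate(scores):
    """Fraction of genuine matches the system correctly accepts."""
    return sum(s >= threshold for s in scores) / len(scores)

print(round(match_rate(scores_group_a), 2))  # high acceptance for group A
print(round(match_rate(scores_group_b), 2))  # much lower for group B
```

The disparity arises purely from the imbalance in the data the system was tuned on, which is the essence of the bias problem described above.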
Such serious consequences arising from unregulated – or inadequately regulated and tested – AI have prompted a number of EU governments (including the UK) to establish committees to research the security of data as well as the uses (or misuses) of AI; until very recently, the EU was even considering a five-year moratorium on LFR.
The EU, applying its core principles of human-centric policy-making and the protection of individual citizens' rights, has grave concerns over how to balance human rights and personal freedoms (such as privacy) against the perceived benefits of widespread AI use. As a result, the EU became the global frontrunner when, in 2019, it published a framework of ethical rules for trustworthy AI. However, the framework is non-binding, so its effect is limited: there is no enforcement mechanism.
The EU set out seven ethical rules, one of which requires an impact assessment for each proposed use of AI, with the evaluation focusing on its effect on the individual's fundamental rights (such as privacy and freedom). To a degree, some uses of AI are likely to be regulated by the existing GDPR (although the suitability of a regulation formulated with more traditional data uses in mind is questionable as AI develops and is refined). The EU ethical rules also require that any use of AI must be transparent and subject to human agency (not reliant on automated decision-making), and that the focus of AI use should be societal and environmental wellbeing.
The UK Office for Artificial Intelligence has published a UK strategy in line with the EU's, and it will be interesting to see how the UK's approach develops as it leaves the EU, and whether it becomes more closely aligned with the US approach. What is not in doubt is that AI is a technology to watch closely, as it has the power to benefit society immeasurably or to do great harm.
By Georgia Shriane, Senior Associate, Boyes Turner