
Peter van der Putten, assistant professor in AI at Leiden University and Director of Decisioning at Pegasystems, explores the ‘AI winter’ and our relationship with AI

The last ten years have been buzzing with stories about how AI will transform our world.

However, as we begin the 2020s, some commentators are predicting the onset of an ‘AI winter,’ when advances slow down or even stall. For example, talking to the BBC in January, Katja Hofmann, a principal researcher at Microsoft Research, said she felt AI was transitioning to a new phase. She was joined by one of the biggest proponents of AI, Yoshua Bengio, who recently admitted deep learning still has some serious shortcomings.

And the critique has not just been about limitations in the algorithms themselves, but also about the dangers of overly naive applications of artificial intelligence. Books such as Artificial Unintelligence by Meredith Broussard and Weapons of Math Destruction by Cathy O’Neil show how, if sufficient care is not taken, these systems can pick up, learn and perpetuate the biases that exist in the real world.

That said, as a tool for automating intelligence, AI can also quickly come to the rescue in uncertain times.


On March 16th, several organisations published the COVID-19 Open Research Dataset, a set of over 24,000 research papers on coronaviruses and related diseases, and thousands of professional and amateur data scientists immediately went to work mining this data for answers to key questions. Kaggle, an online community of data scientists and machine learning practitioners, quickly followed with a range of related tasks.

AI is being used to predict and understand the structure of the virus in order to develop tests and vaccines, to identify candidate medicines that could be repurposed, and to understand the factors and policies that influence the spread of the disease.

In financial services, banks immediately launched programmes to help customers in difficulty, using AI in their customer engagement engines to decide and prioritise which message matters most to a particular client. They were able to serve these messages through mobile apps as assisted channels became swamped.
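As an illustration only (the article does not describe the underlying mechanics), an engagement engine of this kind typically scores each candidate message for a given customer and serves the highest-priority one. The sketch below is a minimal, hypothetical version of that idea; the messages, propensities and relevance weights are all invented for the example:

```python
# Minimal sketch of "prioritise which message matters most":
# each candidate message gets a priority score, here the model's
# estimated propensity (how likely the customer is to act on it)
# weighted by how relevant the message is to their current situation.
# All messages and numbers below are hypothetical.

def next_best_action(candidates):
    """Return the candidate message with the highest priority score."""
    return max(candidates, key=lambda c: c["propensity"] * c["relevance"])

candidates = [
    {"message": "Offer a three-month payment holiday", "propensity": 0.40, "relevance": 0.9},
    {"message": "Promote a new credit card",           "propensity": 0.20, "relevance": 0.2},
    {"message": "Share branch reopening hours",        "propensity": 0.60, "relevance": 0.5},
]

best = next_best_action(candidates)
print(best["message"])  # the payment-holiday message wins with a score of 0.36
```

In practice the propensities would come from machine learning models and the weighting from business rules, but the core arbitration step, picking one message from many candidates per customer, looks much like this.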

So, should the expectations about how AI will change business and society really be scaled back or not?

Let’s go back to definitions first.

What is an AI winter?

An AI winter refers to the fact that advances in AI are cyclical, often cruelly so. The term was coined in 1984 by two AI researchers who had suffered from the collapse of US government funding for AI in the 1970s. They argued that AI researchers should be wary of how private or public sector enthusiasm for AI could suddenly plummet without warning. They proved to be right when the excitement around expert systems in the 1980s peaked and collapsed within a few years.

2010 is regarded as the beginning of what has been a decade-long AI summer, characterised by excitement around deep learning and natural language processing. Smart chatbots have become more commonplace, and there have been some big leaps in AI, from AI that beat humans at Go to AI that can spot cancer tumours faster and more effectively than human clinicians.

By contrast, an AI winter arrives when the hype begins to stall and break down. Advances seem less frequent or significant, and attitudes toward AI become more critical, with more focus, for example, on AI that takes jobs from humans or makes discriminatory decisions that affect people’s welfare.

To be fair, some of these signs of an AI winter can be found today, and the intense hype of the last ten years is cooling off. Signs of subsiding hype aren’t necessarily a bad thing, of course.

We do need to separate the excitement, perception and expectations from the underlying developments and the scale of AI’s application, which is growing at a steady and continuous pace. As technologies are applied more widely, they become normalised and some of the hype disappears. The reduced visibility of the positive AI that is quietly enabling our society and economy to function smoothly is, in fact, a sign of more widespread, settled application.

What about the human factor?

But what is more fundamentally missing from any current talk of an AI winter is the human factor. In 1967, John M. Culkin stated that ‘we shape our tools and thereafter the tools shape us.’ We see that happening now with AI, in the various outcries over ethical bias and over applications of AI perceived as not always being in the best interest of the customer or citizen.

We need to find new ways to relate to this technology, especially because it is so ‘personal’: AI is a natural extension of humans, so before we know it we will have become accustomed to using it for a variety of everyday tasks without even realising. This process may be painful at times, but it is part of the maturing of the technology. Our relationship with AI will follow a natural course, from the sound and fury of techno-utopianism and dystopian outcries to a settling down into a much quieter, more widespread application on our own terms.

Now, this may sound as if we have completely cracked AI, but that is not the case. It is right to point out that we do not need artificial general intelligence (the hypothetical intelligence of a machine that can understand or learn any intellectual task that a human being can) to produce useful AI applications for good. In fact, many of the mixed feelings about AI can be traced back to how overhyped the quest for artificial general intelligence was.

When I reflect on the most valuable progress of the last decade, much of it related to advancing machine learning. In the 2020s, there is still work to be done to combine machine learning with the more traditional tasks of good old-fashioned AI, such as logic and reasoning. This will be key to keeping machine learning under control, setting ethical boundaries, implementing business strategies and expressing human values.

