Here, James A. Ingram dives into an ongoing topic of intrigue in modern society: The rise of deepfake technology
The manipulation of moving images of people for malign intent is nothing new. It goes back to at least 1940. After France signed the instrument of surrender to Germany in the infamous railway carriage at Compiègne, Adolf Hitler, ecstatic at the completion of his stunning military success, stamped his foot in joy for the ever-rolling news cameras. The Führer was in high spirits.
In London, however, two British filmmakers realised that the snippet of footage could be repeated and looped to make it seem as if Hitler was executing a rather ridiculous little dance, and the “Hitler jig” was born.
Making the German leader dance was an inspired and innovative piece of propaganda, but it was relatively crude, using the basic technology available to mid-20th century cinematography: scissors, glue and a stopwatch. Deepfakes would not really come to the forefront of the computer-driven world until the late 1990s, when an early project called Video Rewrite altered existing video footage of people to make them seem as if they were mouthing the words which appeared on a new audio track: essentially, putting words in their mouths. Although the full breadth of the applications was not immediately obvious, the direction of travel was.
In effect, with hard work and good luck, it was possible to create apparently real footage of someone saying anything you liked, however implausible, outlandish, outrageous or incriminating. Once the idea was born, it was simply a matter of driving improvements in the technology to make the fake footage more and more realistic and life-like. It became a matter of degree, not kind.
The critical breakthrough which programmes like Video Rewrite had made was to create software which allowed machines to learn the relationship between the sounds produced during speech and the shape of the speaker's face, and to correlate the two. The floodgates were now open, and the inundation began. By the 2010s, the technology was advancing rapidly. In 2017, the "Synthesizing Obama" project made the former president appear to speak words from an alternative soundtrack, and the correlation was achieved with photorealistic effect.
At the end of that year, a Reddit user posting under the name "Deepfakes" began to share doctored footage he or she had created, and encouraged others to do the same. Another boundary had been breached: amateurs were now in on the act.
Deepfakes are everywhere now. They are widely encountered (so I’m told!) in fake celebrity pornography, which has raised serious ethical questions for actors and actresses who are “faked” in this way. What rights do they have over their own image? What if it is, in fact, not their own image, but a digitally manipulated version of that image, doing things to which they would not themselves give consent, nor with which they would wish to be associated? Does a celebrity’s former behaviour have any bearing? That is, if a celebrity has in the past appeared naked or in a sexual situation, does he or she become “fair game” for digital hoaxers? Is there reputational damage, can this be prevented, and, if it is not, can it be quantified, monetised and then recouped in some way?
You can see how many moral questions could stem from the apparently simple act of making an actress or singer appear to perform in a pornographic setting. And where there are celebrities, there is money. The potential costs of the development of this technology are near-unimaginable.
Having your image manipulated must be upsetting, especially in a context of which you don't approve. It is a form of violation, unquestionably, and if it flies in the face of deeply held principles, it is more upsetting still. In the end, while those who experience this humiliation have my sincere sympathy, it is a matter of personal image and feelings. It is not, dare I say it, the end of the world.
For me, far more worrying is the way in which deepfakes have blossomed like some poisonous flower in the world of politics. It is no coincidence that, once deepfakes crossed the Rubicon into the world of the amateur programmer and manipulator of images, politicians were among the first victims. At the head of the queue was that bête noire of the hard right, former president Barack Obama. The opportunities for mischievous conservatives! To make your hate-figure speak, and speak convincingly, the very words of your choice! Rep. Nancy Pelosi was another early victim of this process, and the attention she has attracted since resuming the position of Speaker of the House of Representatives in January 2019 has only intensified.
Within months of her election, deepfake footage appeared making it seem as if the 79-year-old Speaker was slurring her words; one such video was shared on Twitter by none other than President Donald J. Trump.
This is important. It matters.
It’s not simply a matter of mocking or ridiculing political figures, making them seem awkward or clumsy or incapable (though that would be dangerous enough).
Deepfakes represent something much more corrosive and much more threatening to the fundamental pillars of our democracy. As the quality of the technology improves, deepfakes are becoming indistinguishable from genuine footage. That spells danger. If you see footage of a politician saying something which damages his or her reputation, fundamentally misleading the audience as to his or her beliefs or intentions and turning that politician's moral and ethical Weltanschauung on its head, then a very serious situation has been reached.
The moving image is all-powerful in 21st-century politics.
If an audience with a sharp appetite for controversy and a short attention span sees a juicy, counter-intuitive clip which kindles a political wildfire, it can have desperately serious consequences. Not least, it chips away at the credibility of politicians as a class. The belief that they will “say anything” is already widespread; think how grave a matter it is if you combine that toxic belief with an ingrained suspicion of the authenticity of the footage of any politician. What message does that send to the electorate? What lesson does it teach us about what our representatives say, and what they mean?
What role can the law play in countering the baleful influence of deepfakes? As is almost always the case with advanced technology, the legislative framework which surrounds it is struggling to keep up.
In the United States, the creation and propagation of deepfakes have been prosecuted as identity theft, cyberstalking and revenge porn, but discussions are going on about creating a bespoke statute to deal with deepfakes. In the United Kingdom, the production of deepfake material can be challenged under harassment legislation, but there is also a conversation about a new, specific law. The Voyeurism (Offences) Act 2019 outlawed "upskirting", and consideration was given to bringing the production and exploitation of deepfake material within its scope.
I’m ambivalent. I think the current legislative situation is unsatisfactory, and the law, as is so often the way, is fighting yesterday’s battles.
It would be fantastic to think that new legislation could be drafted which encapsulated the malign uses of this technology and rendered them open to prosecution, and that this legislation would be sophisticated, comprehensive and proportionate. But that is rarely the case. The old saying goes that there are two things you should never see being made: sausages and laws. I certainly agree on the latter. No doubt criminal codes will be updated to capture some of the most egregious offences involving deepfakes, but I do not expect a comprehensive solution in the near future.
There are half a dozen other issues on which I could dwell. I could talk about the lack of digital and computer expertise in governments and legislatures which makes them unable to react adequately to advances in technology. I could consider the ethical issues of identity ownership, identity theft and the value of a reputation, the extent to which individuals, even rich and famous ones, can ever fully own their images and likenesses. I could discuss the morality of image manipulation and propaganda, the value and role of truth in society and what commitments we, as a people, give to adherence to what is factual and true. But this would become an article ten times the length.
Misleading messages can be concocted without advanced technology. The footage of a slurring Nancy Pelosi, for example, involved only slowing down the existing tape to create a wholly inaccurate impression – it required nothing so sophisticated as AI or machine learning. Rep. Adam Schiff, Chairman of the House Intelligence Committee, dismissed the video on CNN as “very easy to make, very simple to make, real content just doctored”, declaring it a “cheap fake.”
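The mechanics of such a "cheap fake" really are trivial. As a minimal sketch of the idea, using only Python's standard `wave` module (the file names here are purely illustrative, not taken from the actual video), slowing recorded speech can be as simple as keeping the audio samples unchanged while declaring a lower playback rate, so the same material plays back slower and slightly lower-pitched:

```python
import math
import struct
import wave

# Write a short synthetic WAV (one second of a 440 Hz tone) to stand in
# for real recorded speech. "original.wav" is an illustrative file name.
rate = 16000
with wave.open("original.wav", "wb") as w:
    w.setnchannels(1)          # mono
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(rate)
    frames = b"".join(
        struct.pack("<h", int(8000 * math.sin(2 * math.pi * 440 * t / rate)))
        for t in range(rate)   # one second of audio
    )
    w.writeframes(frames)

# "Slowing the tape": copy the samples verbatim, but declare a playback
# rate 25% lower. The result is the same recording, stretched in time.
with wave.open("original.wav", "rb") as src, wave.open("slowed.wav", "wb") as dst:
    dst.setnchannels(src.getnchannels())
    dst.setsampwidth(src.getsampwidth())
    dst.setframerate(int(src.getframerate() * 0.75))
    dst.writeframes(src.readframes(src.getnframes()))
```

The point of the sketch is Schiff's: no neural network, no machine learning, no "deep" anything is involved, which is precisely why such fakes are so cheap to produce.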
But I want to consider the role and reputation of artificial intelligence for a moment. This more advanced technology was required to alter footage of President Trump in the Oval Office, mocking his skin colour and general appearance. So what we are seeing is a malign application of AI, a wholly mischievous and damaging use of its extraordinary power.
Part of me finds that really sad. I’m fascinated by AI and what it can do for us, for the human condition, for the way we interact with each other and the rest of the world. It can achieve remarkable and outstanding results when it is applied properly, results which will, in the fullness of time, change our lives.
But, of course, like any technology, it is inherently neutral. It has no moral bias. It is at the mercy of those who understand and use it, and the ends which they wish to achieve. The danger, and this concerns me greatly, is that the increasing proliferation of deepfakes will tar AI with the same malign brush, taint it with public mistrust, and that would be a catastrophe.
We cannot regard with suspicion, let alone ignore, this incredible technology. AI is a great story – and we need to tell it.
So what does the future hold?
I think the legal systems of liberal, democratic countries will continue to grapple awkwardly with the ethical and technological issues presented by deepfakes. Some countries will get it right and some will get it wrong. Campaigners, thought leaders and advocates in the tech field like me will have to bang the drum for AI, tell this great story, and relentlessly promote its benefits and advantages.
Malign actors in the political space and elsewhere will continue to exploit the possibilities of AI and deepfakes, and their use will become more widespread and much, much more sophisticated. It is possible that this will entrench people’s instinctive distrust of politicians and others in positions of authority, but I am by nature an optimist, and I prefer to believe it will instead spur people on to be more observant, more analytical, more critical and more thoughtful in their observations.
I hope they will take the time to learn how to assess information presented to them, to weigh it up carefully, to consider its context and intention and likely effect. It may be that, by necessity, we will raise the best-informed and most critical generation ever, engaged with politics and public life in an active and sophisticated and considered way, really thinking about their opinions, what they’re based on, how reliable they are and what their idols and inspirations really think and believe.
All of that would be a huge win. I can certainly tell you what won’t happen. Deepfakes won’t go away. The malign manipulation of personal image won’t lose its power or go out of fashion, it won’t become passé, at least until it’s superseded by a more sophisticated technology. This strange, sinister blurring and obscuring of the truth, the creation of new, manipulable ‘truth’, is here to stay, because it is too potent a weapon to be forsworn.
So we must learn to live with it, to control it, to understand it, and to work with it. Paradoxically, fake is the new reality, and that’s the world we have to negotiate now.
James A. Ingram