Deepfakes and the law


Introduction

In March this year, a video circulated on the internet appearing to show President Zelenskyy announcing Ukraine’s surrender to Russia. No sooner had the video found its way into social media feeds and news bulletins than it was debunked as fake. A so-called “deepfake”.

To some, this episode barely registered, or was dismissed as a failed Kremlin gimmick. Besides, as deepfakes go, it was a particularly unsophisticated one: few would have been convinced of the video’s authenticity.

To others, the video (and its dissemination) signalled something much more dangerous. The latest shot across the bow of democracy.

To them, it was confirmation that deepfakes were no longer the little-known phenomenon confined to the digital backwaters of internet message boards and pornography websites, left to fester by a political class that was either unaware of their existence or unable to comprehend their potency. Now they had emerged into the mainstream and threatened to become one of the most effective and dangerous weapons of informational warfare.

Unsurprisingly, there have been the calls for action that inevitably come when we are confronted with a technological innovation that can subvert our institutions but which we don’t yet fully understand: legislate or regulate.

Often such clamour is premature: we’ve got plenty of laws as things stand. But in the case of deepfakes, the law may well need an update.

What are they?

At their most basic, deepfakes are artificially produced audio or video clips in which an individual’s (usually a celebrity’s) image or voice is “cloned” such that it can be manipulated to say or do whatever the deepfake creator wants.

The creators of deepfakes employ artificial intelligence, feeding a deep-learning system a dataset of the subject’s images or voice recordings in order to produce a digital clone. Simultaneously, a second system tests this clone against the genuine material to detect any flaws and further hone its likeness. This pairing of rival systems is known as a generative adversarial network, or GAN.

The result is an uncanny reproduction of the original subject which is not a copy of any one image, but instead an amalgamation of them all. And it can be manipulated at the creator’s whim.
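For the technically curious, the adversarial training loop described above can be sketched in a few lines of Python. What follows is a minimal illustration using the PyTorch library; the network sizes, hyperparameters, and the random tensors standing in for real images of a subject are all illustrative assumptions rather than details of any actual deepfake tool.

import torch
import torch.nn as nn

IMG_DIM = 64 * 64   # a flattened 64x64 greyscale image (illustrative)
LATENT_DIM = 100    # random noise from which the generator starts

# Generator: turns random noise into a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of genuine images of the subject.
    real = torch.rand(32, IMG_DIM) * 2 - 1

    # 1. Train the discriminator to tell real images from fakes.
    fake = generator(torch.randn(32, LATENT_DIM)).detach()
    real_loss = loss_fn(discriminator(real), torch.ones(32, 1))
    fake_loss = loss_fn(discriminator(fake), torch.zeros(32, 1))
    d_loss = real_loss + fake_loss
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator.
    fake = generator(torch.randn(32, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

Each pass makes the discriminator a slightly better forgery detector and the generator a slightly better forger, which is why the finished clone is an amalgamation of the training images rather than a copy of any one of them.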

The danger of deepfakes

The attraction of deepfakes is simple: you can make the subject do or say whatever you want them to. You are literally putting words into the mouth of another person.

What you want them to say or do is another matter. It can range from the trivial (superimposing someone’s face onto movie clips, as offered by popular apps such as Zao) to the malign: blackmailing women by uploading deepfakes of them to pornography websites, or tricking the public into investing in a product by creating a deepfake of a celebrity appearing to endorse it. Then there’s the political arena where, as we’ve seen, deepfakes can be used as a weapon of war.

Deepfakes are particularly dangerous in the world of politics given the nature of information wars. Whereas a scammer needs to produce a highly sophisticated deepfake to convince you that the cloned celebrity actually said particular words, the propagandist needn’t aim so high. A rudimentary deepfake, such as the one featuring President Zelenskyy, can do just as much damage.

That is because, for those engaged in informational wars, it is not critical to deceive the public into believing the authenticity of the deepfake. Just as important is simply sowing a seed of doubt: the creator hopes that the video goes viral, perhaps gets picked up by news bulletins, and is even quickly debunked. In this way, the public is made aware that doctored videos are being disseminated across the internet and is primed to treat any video it watches in future with scepticism.

This phenomenon has been referred to by academics as the “liar’s dividend”: saturate the internet with enough misinformation and disinformation that nothing will be believed and everything can be questioned. It is in this context that the Russian ambassador to the UK could tell the BBC that the independently verified CCTV images from the massacre at Bucha were computer generated as part of a video game.

Before we conclude that this is exclusively a tool of the Kremlin and fellow autocratic regimes, it should be noted that in the last major elections in both the UK and the USA, politicians were prepared to share altered and distorted videos of their opponents. In the UK, a video of the shadow Brexit Secretary and Labour leadership hopeful, Sir Keir Starmer, was crudely manipulated to make it appear that he stumbled when asked a question on daytime television; it was later posted by the official Twitter account of the Conservative Party Press HQ. In the USA, a video of the Speaker of the House, Nancy Pelosi, was slowed down to make it appear that her speech was slurred, and then shared by her political opponents.

What can be done?

Those calls have not gone unheeded. In the run-up to the 2020 US Presidential Election, Facebook, Google, and Twitter all announced steps to remove or label potentially harmful or misleading deepfakes.

At the same time, engineers are turning the very technology that deepfakes rely on against them, training systems to detect and remove fakes. Recently, Google prevented the use of one of its Google Research products, Colab, for the purpose of creating deepfakes.
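To give a flavour of the detection side, here is a similarly minimal sketch, again in Python with PyTorch, of a classifier trained to label video frames as genuine or fake. The architecture, the input size, and the random tensors standing in for a labelled dataset are all illustrative assumptions, not a description of any production detection system.

import torch
import torch.nn as nn

# A small convolutional classifier: 1 = genuine footage, 0 = deepfake.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 RGB input frames
)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

# Stand-in batch: random tensors in place of labelled real/fake frames.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

# One training step; in practice this runs over a large labelled dataset.
logits = detector(frames)
loss = loss_fn(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()

Detectors of this kind must be continually retrained, since each advance in generation erodes the telltale artefacts that earlier detectors learned to spot.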

This game of digital whack-a-mole will go some way towards stopping the proliferation of dangerous deepfakes. But what about the law? What can governments do beyond merely leaning on the tech companies to do more? And what about the victim of a deepfake – what recourse is open to the person who never said those words or did those things?

The law

It seems natural to suppose that there must be laws that can protect against malicious deepfakes. Analysed one way, they feel akin to a form of identity theft. However, just as it is not the stealing of an identity per se that brings liability but the fraudulent actions that follow, so, in the case of deepfakes, a claimant must identify an appropriate cause of action arising from their production or proliferation. As we shall see, that’s not always an easy task, particularly in the law of England and Wales.

Intellectual Property

The first step might be to analyse deepfakes from an IP perspective. At first blush, given that we are in the realm of audio and video works, one might think that the law of copyright can come to the aid of the subject of a deepfake. On closer inspection, however, problems quickly emerge. For example, the subject of the deepfake will in many cases own the copyright in neither the base footage onto which their face is placed nor the original images of their face that contributed to the production of the digital clone.

Furthermore, even if the subject did have a claim in copyright in any of the images contained within the deepfake video, the creator may be able to rely on a defence of fair dealing in England and Wales, for example on the grounds of parody or caricature. In the USA, the defence of fair use is arguably even more protective of the deepfake creator since, unlike its English counterpart, it asks whether the work was “transformative”, a question to which, in the context of deepfakes, the answer will almost certainly be yes.

What about passing off, another member of the IP family tree? Rihanna famously succeeded in a passing off claim against Topshop when it used her image in one of its advertising campaigns without her permission.

Would a celebrity suing a deepfake creator for using their image without permission be any different? Perhaps not, but as a matter of law the deepfake would have to be sufficiently convincing that it would reasonably lead a consumer to think that the relevant celebrity had somehow endorsed the product in question. And even then, an action in passing off/false endorsement would be limited to those with the most famous of faces, since a critical element of a successful claim is that the claimant’s image is known to be used to endorse or sponsor products.

Privacy/Defamation

If intellectual property law is not the most obvious fit, what about laws relating to defamation and the privacy of the individual?

The courts of England and Wales have traditionally been very reluctant to recognise any general freestanding right to privacy, preferring instead to rely on a patchwork of statutes and judgments that has developed particularly since the passing of the Human Rights Act 1998.

In the United States, however, there has been less reluctance, and the tort of False Light may serve as the obvious candidate to address deepfakes. After all, it specifically addresses circumstances in which the defendant (1) gives publicity to (2) a matter concerning the plaintiff (3) that places the plaintiff before the public in a false light (4) that would be highly offensive to a reasonable person, and does so (5) with knowledge or reckless disregard of the falsity of the matter and the false light[1]. That is the very essence of the majority of deepfakes.

Beyond False Light, the majority of US states also recognise a “right of publicity”, under which an individual has the right to control the commercial use of their name, image, likeness, or some other identifying aspect of their identity.

Neither the right of publicity nor the tort of False Light has any equivalent in English law. Instead, the best alternative would likely be for the subject of a deepfake to bring a claim in defamation. But that brings the difficulties inherent in all defamation cases, namely having to demonstrate (1) that the deepfake is, in fact, defamatory, (2) that it has caused the subject serious harm, and (3) that no defences are available to its creator. This is quite apart from the practical difficulty of identifying the creator, and the questions of jurisdiction that might well arise.

New Law

English law is not, therefore, well equipped to deal with deepfakes. The patchwork of laws that exists probably doesn’t quite extend far enough to capture the peculiarities of this new technology.

One answer may be simply to require tech companies to do more to stop deepfakes’ proliferation. Indeed, on 16 June 2022 the European Commission published its strengthened Code of Practice on Disinformation, which has precisely that aim, with the threat of financial penalties on tech companies that fail to take sufficient action.

This model of oversight is also reflected in the proposed Online Safety Bill, which puts the onus on tech companies, on pain of considerable fines, to police the darkest corners of their platforms. It is notable, however, that a recent report by the Digital, Culture, Media and Sport Committee criticised the draft Bill for failing adequately to address the “insidious” problem of deepfakes.

The other answer, hinted at by the Committee, is to introduce new primary legislation: an anti-deepfake law to address the problem before it gets out of control.

Such new laws are starting to emerge. And, ironically, they can be found in various states in the United States, where, as we’ve seen, adequate legal remedies are probably already available. In California, for example, a state that already protects the image rights not only of the living but, in the case of celebrities, even the dead[2], Governor Newsom signed AB 730, a law prohibiting the distribution, “with actual malice”, of “materially deceptive audio or visual media” of a political candidate within 60 days of an election with the intent “to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

AB 730 has come under attack from those who say it is too limited in scope, unrealistically expects creators to add disclaimers to their deepfakes, and is impossible to enforce. Beyond these concerns about its workability and efficacy, there has also been criticism from groups such as the Electronic Frontier Foundation, who fear that such laws will have a chilling effect on freedom of expression.

It remains to be seen whether these concerns are warranted. Other countries, including the UK, will be watching keenly to see whether to adopt a legislative clone of such laws for themselves. But they’ll be watching just as keenly to see whether, in fact, these laws reflect the nature of deepfakes themselves: seemingly innocent and well-meaning at the beginning, but ultimately replete with unintended, harmful consequences.

[1] Restatement (Second) of Torts § 652E

[2] The Celebrity Rights Act 1985