ITV’s new show ‘Deep Fake Neighbour Wars’ has recently received a lot of media attention due to its use of deepfake technology.
The show uses images of celebrities and superimposes those images onto the faces of the UK's best impressionists. The impressionists then act out the supposed everyday lives of the celebrities they impersonate.
English law is currently playing catch-up in protecting people from the misuse of deepfake technology. In this article, the writers consider whether a claim can be brought under English intellectual property law or the law of defamation, and what protection existing legislation may offer.
Deepfake technology – what is it?
Deepfake technology is a type of synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. The same can be done to replicate a person’s voice. AI technology is used to manipulate photos, videos and/or audio to reproduce this likeness.
The technology has been around for a number of years but has recently garnered a lot of negative media attention due to its potential for malicious use, for example in revenge pornography or to spread fake news.
What can be done to stop deepfakes?
Intellectual property law
There is no direct method to bring a claim under English intellectual property law for misuse of deepfake technology.
Deepfakes copy the image or voice of a person. However, a person's image or voice is not, in itself, intellectual property. A person could have a valid claim for copyright infringement if, for example, they took a photo or video of themselves and the AI technology copied that specific photo or video without their permission to create the deepfake. Of course, if the victim of the deepfake does not own the copyright in the photo or video which has been copied, they will be unable to bring a copyright claim.
Celebrities may be able to argue that the misuse of their image in deepfakes is actionable under the law of passing off. To do so, they would have to prove that they own goodwill in their image/likeness amongst the relevant members of the public; that the deepfake implies that the celebrity has licensed their image/likeness to the creator; and that, as a consequence, they have suffered damage. However, such claims are far from straightforward, and it is unlikely that a person without a media profile could bring such a claim, as they would likely not own any goodwill in their image/likeness.
Defamation

The law of defamation does not provide a natural fit for the misuse of deepfake technology – ordinarily a claim requires a statement about the claimant. However, there is significant scope for reputational damage through the misuse of deepfakes.
Imagery can, of course, be defamatory – for example, a cartoon depicting something about an individual. However, such imagery is akin to a statement about someone, whereas a deepfake is intended to convince the viewer that what is being portrayed is the individual's own actions.
The Courts will likely have to grapple with whether this amounts to a defamatory publication. It would be an unattractive academic distinction to hold that imagery purporting to depict an individual cannot amount to a statement about that individual, and so falls outside the ambit of defamation.
The Courts would also need to consider what the deepfake imagery means in order to determine whether that meaning is defamatory. The Courts have always taken a pragmatic approach to meaning. The most straightforward approach would seem to be that the meaning is that the individual has done what the deepfake portrays them doing. If engaging in such an activity would tend to lower the individual in the estimation of right-thinking members of society, the deepfake will be defamatory.
Legislation

In recent years, the Government has attempted to tackle this gap in the protection of a person's image and reputation.
The Criminal Justice and Courts Act 2015 created the offence of sharing private sexual images without consent (so-called 'revenge porn'), but it contains no criminal offences relating to deepfakes.
The Online Safety Bill, which has yet to receive Royal Assent, imposes new legal requirements on providers of internet services and search engines. It also establishes Ofcom as the new online safety regulator, with powers to fine companies and impose criminal sanctions on company directors.
However, there is still no clear civil remedy for a person who is deepfaked.
Written by Liam Tolen, Senior Associate, and Chris Fotheringham, Solicitor, at Ashfords LLP