
How can the law deal with Deepfake?

Dec 13, 2019

Written By Elizabeth Hurst

The 21st century is awash with innovative technologies that evolve faster than anyone could have anticipated, and they aren’t always developed for the right reasons. Elizabeth Hurst looks into Deepfake and its potential repercussions in terms of misinformation, misinterpretation and more.

What is Deepfake?

The year was 2017. The location (sort of) was the popular discussion website Reddit. An anonymous user was superimposing celebrity faces onto pornographic videos. His username? Deepfakes. And thus the term came into existence.

A Deepfake is a piece of media such as an image, video or audio clip that has been doctored using deep learning technology to produce something fake. These two crucial elements (deep learning and fakeness) combine to produce the term Deepfake. 

The funny thing is that the technology behind Deepfake systems is not new in any way. Millions of people use similar tech every day—such as every time a teenager applies a filter of puppy-dog ears, nose and tongue as they pose on Snapchat. Anyone can buy and download Adobe Photoshop then follow helpful online tutorials on how to doctor images. In cinema, audiences are used to watching actors completely transform with the help of CGI to become fantastical creatures that look so real in 3D you can almost reach out to touch them. 60% of James Cameron’s 2009 film Avatar is completely computer-generated. 

What makes Deepfake different?

Deepfakes are far from harmless fun. While some are made for entertainment, there is usually something more nefarious at work. At their very heart, they are forgeries, usually in the form of video content, that can make anyone’s likeness do anything the Deepfake maker decides. Deepfake has the power to put words into people’s mouths.

Nowadays, people are perhaps more likely to be wary of images online. Everyone knows it isn’t hard to use Photoshop to produce a fake tweet screenshot, or to superimpose a person onto a completely different background. But Deepfake videos are so believable that it’s hard to accept that what you’re seeing never happened at all. This was dramatised in the recent BBC drama The Capture, in which the protagonist Shaun Emery comes to believe he committed actions he didn’t after watching a highly convincing doctored video that appears to show him committing a crime.

The three most terrifying things about Deepfakes are perhaps their scope, scale and sophistication. The technology to produce these videos has arguably been around for a long time. But there were barriers to access: it demanded extremely advanced skills and specialised equipment and programmes, and each video took many hours to produce at significant cost. John Fletcher, professor of theatre at Louisiana State University, puts forward that “although audio/visual fakes online are nothing new, recent technological and software advances have enabled cheap, fast generation of practically undetectable video fakery by consumer-level users.”

Now, it is entirely possible for anyone to create a Deepfake. There’s no need for expensive high-powered software or training: all you need is a computer. There are even apps that do all of the hard work for you. One, called “DeepNude”, offered both a free and a paid premium version that allowed users to alter ordinary images of any woman to make her appear completely naked. Another, “FakeApp”, allowed users to swap faces on videos at the click of a button, making it far too easy to create content that could be used for bullying, blackmail, fake news or political sabotage.

Deepfakes may not seem like one of the most pressing issues of current times. However, the potential for damage is great. Not only that, the number of Deepfake videos circulating online is growing at an exponential rate. Cyber-security company Deeptrace published research showing that the number of videos online produced using Deepfake technology had almost doubled in nine months: 14,698 videos compared to 7,964 online in December 2018.

As Deepfake software improves, the ability to manipulate videos has transitioned from a niche skill to something anyone can do, anonymously, for any reason—and they’re not always good ones. The worst bit? They’re convincing. As technology becomes more sophisticated, it’s getting harder and harder to tell they’re even fake.

How do Deepfakes work?

The technology behind Deepfakes is advanced. It relies on a branch of artificial intelligence (AI) and machine learning known as deep learning, which is built on artificial neural networks (ANNs). Artificial neural networks are inspired by biological neural networks, i.e. the things that constitute animal brains. These networks allow systems to “learn” to perform tasks by considering examples, without being explicitly programmed with specific rules. This makes the technology agile: it detects patterns in data, adapts very quickly, and steadily improves the videos produced with it.
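
To make “learning from examples” concrete, here is a minimal sketch in Python (using NumPy, and entirely illustrative rather than drawn from any real Deepfake tool) of a single artificial neuron learning the logical OR function purely from labelled examples, with no rule ever programmed in:

import numpy as np

# Toy training examples: four inputs and the outputs we want learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)  # logical OR

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # connection weights, randomly initialised
b = 0.0                 # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    pred = sigmoid(X @ w + b)              # forward pass: current guesses
    grad = (pred - y) * pred * (1 - pred)  # how far off each guess is
    w -= 0.5 * (X.T @ grad) / len(X)       # adjust weights from examples
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 1, 1]

Deep learning stacks many layers of such neurons, but the principle is the same: the network is never told the rule, only shown examples.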

Deepfakes use a form of deep learning called generative adversarial networks, also known as GANs, introduced in 2014 by researchers at the University of Montreal. A GAN pits two competing neural networks against each other: one algorithm generates data while the other discriminates, judging whether each sample is real or fake. As Karen Hao put it in the MIT Technology Review, the GAN process “mimics the back-and-forth between a picture forger and an art detective who repeatedly try to outwit one another.” Each network sharpens the other: as the detective gets better at spotting forgeries, the forger is forced to produce ever more convincing fakes.
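
As a rough illustration only (assuming PyTorch, and substituting simple one-dimensional numbers for face images), the sketch below shows the forger-versus-detective training loop at the heart of a GAN:

import torch
import torch.nn as nn

# "Real" data here is just samples from a normal distribution around 2.0;
# a Deepfake GAN would use face images instead.
def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # forger
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # detective

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the detective: label real samples 1 and forgeries 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train the forger: try to make the detective call its fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The forger's outputs should now cluster near the real data (around 2.0).
print(G(torch.randn(5, 8)).detach().squeeze())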

The dataset in question? Photographs and videos. Thanks to social media, our identities are plastered all over the internet. At the click of a button, anyone can download enough images of your face from Instagram, Facebook, Twitter and the like to make a Deepfake. With celebrities, such material is even easier to find. Then, all that is required is to feed this wealth of data into an algorithm that maps the face of one person onto another.
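
To give a sense of the data-preparation step (in illustrative form only: the folder name below is hypothetical, and real Deepfake tools use far more careful face alignment), a few lines of Python with OpenCV are enough to turn a folder of downloaded photos into a stack of uniform face crops ready for training:

import pathlib
import cv2

# Hypothetical folder of photos gathered from social media.
SOURCE_DIR = pathlib.Path("downloaded_photos")

# OpenCV ships with a pre-trained frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

crops = []
for path in SOURCE_DIR.glob("*.jpg"):
    img = cv2.imread(str(path))
    if img is None:
        continue  # skip unreadable files
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Crop each detected face and resize to a fixed shape for the model.
        crops.append(cv2.resize(img[y:y + h, x:x + w], (128, 128)))

print(f"{len(crops)} face crops ready for training")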

These complicated algorithms developed by experts are available to all, usually for free through open-source frameworks like TensorFlow and code-sharing platforms like GitHub. Downloading the tech is so user-friendly it’s downright dangerous, and many people are taking advantage of this to make apps and programmes for others. One commonly-used programme has been downloaded more than 100,000 times, according to its designer. What’s more, the tech is learning and improving at an alarming rate. To demonstrate the pace at which this technology is developing: in 2018, Motherboard, Vice’s tech channel, predicted that it would take another year to automate Deepfake software. The reality? It took a month.

How can the law tackle Deepfakes? 

There are currently no laws in the UK that reference Deepfakes explicitly, but there are existing laws that can be applied. One potential route for legal action is copyright infringement. Or, if it can be shown that the Deepfake material has caused or is likely to cause serious reputational harm, defamation legislation could be called upon. The remedies for this could include injunctions, damages or court orders to have the material destroyed.

Where individuals are targeted, privacy and anti-harassment laws may be one way to convict Deepfakers. There is already precedent: in a trial in May 2018, Davide Buccheri was sentenced to 16 weeks in jail and ordered to pay his victim £5,000 in compensation. The Italian-born city worker had created a gallery of Deepfake pornographic images of a colleague in an attempt to discredit her to the bosses of the investment management firm where they both worked. Buccheri was also dismissed from the firm for gross misconduct. Simon Miles, of intellectual property specialists Edwin Coe, told The Sun that fake sex tapes could be considered an “unlawful intrusion” into the privacy of a celebrity.

Deepfakes are a problem for the legal sector in a couple of ways. Firstly, Deepfake technology casts doubt on the use of video evidence in trials. What used to be verifiable proof may no longer be trusted, or perhaps shouldn’t be. If digital sources that previously held up in court can be discredited, or at least regarded with suspicion, then other methods such as eyewitness testimony may have more weight placed upon them.

Another issue with creating laws to counteract Deepfakes is that the technology is moving too quickly to legislate effectively. The broader issue of how to legislate in the digital age is a question we are still looking to answer. There have, however, been examples of the law rising to the challenge. After a lengthy campaign started by Gina Martin, who was targeted at a music festival, “upskirting” became a criminal offence in April this year. Under the Voyeurism (Offences) Act 2019, offenders can face up to two years in prison for taking digital images or videos under someone’s clothing without their permission. This law is one of the first to acknowledge the dangerous effects of cyber harassment. So while the law doesn’t protect people against Deepfakes just yet, it’s a start.


Bringing it back to reality 

Deepfake technology is certainly one of the darker technological advances of recent times. In an age where people in power can spread misinformation (some would argue purposefully) knowing that at least some people will believe them, and where “fake news” is the automatic response to any differing opinion, Deepfakes are a dangerous threat to democracy. For individuals who are targeted, particularly those whose features are used in pornographic Deepfake images and videos, the impact can be devastating.

Some say it’s the duty of social media companies to identify Deepfakes and prevent them from being shared on their platforms. So far, Pornhub has banned Deepfake videos, although critics say the ban isn’t working: searching “Deepfake” on the site returns no results, but a Google search for “Pornhub Deepfake” surfaces playlists of pornographic Deepfake videos featuring celebrities within seconds. Twitter and Gfycat have placed similar bans, while Reddit closed down the primary subreddit hosting Deepfake porn, which had 90,000 users. Again, this effort may be largely ineffective, as users can simply migrate the material elsewhere.

Companies are realising that the issues surrounding Deepfakes are complex and need careful attention. In September 2019, Facebook Chief Technology Officer Mike Schroepfer released a post titled “Creating a dataset and a challenge for Deepfakes”. It lays out an attempt to find a solution by joining with the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY to build the Deepfake Detection Challenge (DFDC) and produce technology that can tell what is truth and what is fake.
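
Detection efforts like the DFDC typically frame the problem as binary classification: given a video frame, estimate the probability that it was synthetically generated. The minimal sketch below (assuming PyTorch, and purely illustrative; real competition entries are far more sophisticated) shows that framing:

import torch
import torch.nn as nn

# A minimal frame-level detector: a tiny CNN that outputs the probability
# that a 128x128 RGB frame is synthetic. Purely illustrative.
class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

detector = FrameDetector()
frame = torch.rand(1, 3, 128, 128)  # placeholder frame; model is untrained
print(detector(frame).item())       # p(frame is fake), meaningless until trained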

Meanwhile, others are taking more of a back-seat approach. In an interview with CBS, Instagram Head Adam Mosseri said the company hasn’t yet come up with an official policy on AI-altered video, saying: “we are trying to evaluate if we wanted to do that [delete Deepfakes] and if so, how you would define Deepfakes… if a million people see a video like that in the first 24 hours or the first 48 hours, the damage is done. So that conversation, though very important, currently, is moot."

There is no doubt about the urgent need to hold the makers of Deepfakes accountable, if only to deter others from following suit. But by the time charges are brought, serious damage may already have been done. Humans are impressionable, and social media can spread content like wildfire. Does it really matter if a Deepfaker gets prison time, when their creation planted a seed that changed the minds of millions?

