The Emergence of Deepfake Technology: A Review
Mika Westerlund
"This is developing more rapidly than I thought. Soon, it's going to
get to the point where there is no way that we can actually detect
[deepfakes] anymore, so we have to look at other types of solutions."
Deepfake Pioneer & Associate Professor
Novel digital technologies make it increasingly difficult to distinguish between real and fake media. One of the most recent developments contributing to the problem is the emergence of deepfakes: hyper-realistic videos that apply artificial intelligence (AI) to depict someone saying and doing things that never happened. Coupled with the reach and speed of social media, convincing deepfakes can quickly reach millions of people and have negative impacts on our society.
While scholarly research on the topic is sparse, this study analyzes 84 publicly available online news articles to examine what deepfakes are and who produces them, what the benefits and threats of deepfake technology are, what examples of deepfakes there are, and how to combat deepfakes. The results suggest that while deepfakes are a significant threat to our society, political system, and business, they can be combated via legislation and regulation, corporate policies and voluntary action, education and training, as well as the development of technology for deepfake detection, content authentication, and deepfake prevention.
The study provides a comprehensive review of deepfakes and points cybersecurity and AI entrepreneurs toward business opportunities in fighting against media forgeries and fake news.
In recent years, fake news has become an issue that is a threat to public discourse, human society, and democracy (Borges et al., 2018; Qayyum et al., 2019). Fake news refers to fictitious news-style content that is fabricated to deceive the public (Aldwairi & Alwahedi, 2018; Jang & Kim, 2018). False information spreads quickly through social media, where it can impact millions of users (Figueira & Oliveira, 2017). Presently, one out of five Internet users gets their news via YouTube, second only to Facebook (Anderson, 2018). This rise in the popularity of video highlights the need for tools to confirm the authenticity of media and news content, as novel technologies allow convincing manipulation of video (Anderson, 2018). Given the ease of obtaining and spreading misinformation through social media platforms, it is increasingly hard to know what to trust, which results in harmful consequences for informed decision making, among other things (Borges et al., 2018; Britt et al., 2019). Indeed, today we live in what some have called a "post-truth" era, characterized by digital disinformation and information warfare led by malevolent actors running false information campaigns to manipulate public opinion (Anderson, 2018; Qayyum et al., 2019; Zannettou et al., 2019).
Recent technological advancements have made it easy to create what are now called “deepfakes”, hyper-realistic videos using face swaps that leave little trace of manipulation (Chawla, 2019). Deepfakes are the product of artificial intelligence (AI) applications that merge, combine, replace, and superimpose images and video clips to create fake videos that appear authentic (Maras & Alexandrou, 2018). Deepfake technology can generate, for example, a humorous, pornographic, or political video of a person saying anything, without the consent of the person whose image and voice is involved (Day, 2018; Fletcher, 2018). The game-changing factor of deepfakes is the scope, scale, and sophistication of the technology involved, as almost anyone with a computer can fabricate fake videos that are practically indistinguishable from authentic media (Fletcher, 2018).
While early examples of deepfakes focused on political leaders, actresses, comedians, and entertainers having their faces woven into porn videos (Hasan & Salah, 2019), deepfakes in the future will likely be used more and more for revenge porn, bullying, fake video evidence in courts, political sabotage, terrorist propaganda, blackmail, market manipulation, and fake news (Maras & Alexandrou, 2019).
While spreading false information is easy, correcting the record and combating deepfakes are harder (Dekeersmaecker & Roets, 2017). In order to fight against deepfakes, we need to understand deepfakes, the reasons for their existence, and the technology behind them. However, scholarly research has only recently begun to address digital disinformation in social media (Anderson, 2018).
As deepfakes only surfaced on the Internet in 2017, scholarly literature on the topic is sparse. Hence, this study aims to discuss what deepfakes are and who produces them, what the benefits and threats of deepfake technology are, some examples of current deepfakes, and how to combat them. In so doing, the study analyzes a number of news articles on deepfakes drawn from news media websites. The study contributes to the nascent literatures of fake news and deepfakes both by providing a comprehensive review of deepfakes and by rooting the emerging topic in an academic debate that also identifies options for politicians, journalists, entrepreneurs, and others to combat deepfakes.
The article is organized as follows. After the introduction, the study explains data collection and news article analysis. The study then puts forward four sections that review what deepfakes are, the potential benefits of deepfake technology, the actors involved in producing deepfakes, and the threats deepfakes pose to our societies, political systems, and businesses. Thereafter, two sections provide examples of deepfakes and discuss four feasible mechanisms to combat them. Finally, the study concludes with implications, limitations, and suggestions for future research.