In an era of abundant online content, the emergence of deepfake technology presents a major concern for cybersecurity. Deepfakes, synthetic media created by artificial intelligence, have permeated many domains of our lives, from entertainment and media to politics.
Although deepfakes may appear intriguing, even fun, from a technological standpoint, their negative implications threaten the fundamental trust we place in the digital age.
The rapid advancement of AI technology has introduced a host of challenges, including the blurring of lines between real and fake content, leaving individuals and businesses susceptible to scams and deepfake-related fraud. Recent data from Sumsub reveals a staggering increase in deepfake fraud cases from 2022 to Q1 2023: Canada experienced a 4,500% rise, the U.S. 1,200%, Germany 407%, and the UK 392%.
Notably, in Q1 2023, Great Britain and Spain accounted for the highest shares of global deepfake fraud, at 11.8% and 11.2% respectively, while Australia, Argentina, and China also saw significant proportions. These statistics underscore the growing threat of deepfake-related fraud across multiple countries.
The Mechanics of Deepfake Technology
Deepfakes differ from other manipulated media, such as shallowfakes, mainly in how little human input they involve. Whereas shallowfake creators control each manipulation directly, deepfake users can only assess the generated content after the fact, with minimal influence over the model's creative process.
In contrast, traditional media creation usually involves humans at every step, with the exception of recent generative tools such as Adobe Firefly. Creating deepfakes primarily relies on deep neural networks and face-swapping techniques, using a target video along with assorted source clips of the person to be inserted.
Machine learning, specifically Generative Adversarial Networks (GANs), refines deepfakes over many training rounds: a generator network produces fakes while a discriminator network tries to spot them, and each improves against the other, making the output progressively harder to detect. While the underlying process is complex, user-friendly tools for deepfake generation exist, such as Zao, DeepFaceLab, FakeApp, and Face Swap, and many deepfake resources can be found on GitHub.
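The adversarial loop behind GANs can be illustrated in miniature. The sketch below is a hypothetical toy example, not code from any deepfake tool: a linear generator learns to imitate samples from a one-dimensional Gaussian, while a logistic-regression discriminator tries to tell real samples from generated ones. Real deepfake systems apply the same adversarial idea to deep convolutional networks operating on images.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must imitate: a 1-D Gaussian centered at 4.0.
def real_batch(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: a single affine transform of random noise (weights g_w, g_b).
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)

# Discriminator: logistic regression (weights d_w, d_b).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator step: label real samples 1, generated samples 0 ---
    z = rng.normal(size=(n, 1))
    fake = z @ g_w + g_b
    x = np.vstack([real_batch(n), fake])
    y = np.vstack([np.ones((n, 1)), np.zeros((n, 1))])
    p = sigmoid(x @ d_w + d_b)
    d_w -= lr * (x.T @ (p - y)) / (2 * n)   # cross-entropy gradient
    d_b -= lr * np.mean(p - y)

    # --- Generator step: push the discriminator's output toward "real" ---
    z = rng.normal(size=(n, 1))
    fake = z @ g_w + g_b
    p = sigmoid(fake @ d_w + d_b)
    dfake = ((p - 1) @ d_w.T) / n           # backprop through the discriminator
    g_w -= lr * (z.T @ dfake)
    g_b -= lr * dfake.sum(axis=0)

# Generated samples should drift toward the real mean as training progresses.
samples = rng.normal(size=(500, 1)) @ g_w + g_b
print(round(float(samples.mean()), 2))
```

Each round sharpens both sides: the discriminator gets better at spotting fakes, which forces the generator to produce more realistic output, which is exactly why mature deepfakes are so hard to detect.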
The Diverse Applications of Deepfakes
Media and Entertainment
In the entertainment industry, deepfake technology has given rise to digital actors and performers. A virtual version of an actor can be used to recreate scenes or even entire movies, potentially reducing costs and logistical challenges.
Actors worry that their highly detailed digital recordings on film sets could be exploited with AI, potentially turning their performances in one movie into characters in other productions or video games. This fear stems from the emergence of synthetic media techniques like deepfakes, voice cloning, AI-driven visual effects (VFX), and entirely synthetic image and video generation.
Hollywood actors went on strike in July 2023, marking the first strike in 43 years, with AI-related concerns playing a significant role. The Screen Actors Guild (SAG-AFTRA) union's inability to secure adequate AI protections for its members has prompted warnings that "artificial intelligence poses an existential threat to creative professions."
The dispute revolved around studios' requests for perpetual rights to scan background artists' faces for a single day's pay, without consent or compensation, raising concerns about "performance cloning."
Filmmaker Justine Bateman questions the need for AI in entertainment and suggests it primarily benefits corporations seeking to boost profits by eliminating expenses.
Social Engineering and Cybercrime
Deepfakes are a growing concern as they are increasingly exploited for cybercrime and deception. Malicious actors leverage deepfake technology to impersonate people and organizations, enabling phishing scams and financial fraud. These impersonations can inflict serious consequences on individuals and businesses alike, including identity theft and reputational damage.
Security experts have raised concerns about the growing interest among threat actors in Voice Cloning-as-a-Service (VCaaS) available on the dark web, facilitating deepfake-related fraud. Recorded Future's recent report highlights the threat posed by deepfake audio technology, which can replicate a target's voice to bypass multi-factor authentication, spread disinformation, and enhance social engineering attacks.
The report notes that dark web platforms are now offering out-of-the-box voice cloning tools, making it easier for cybercriminals to access these capabilities. Some are even available for free with a registered account, while others cost as little as $5 per month. Impersonation, call-back scams, and voice phishing are commonly discussed in relation to these tools.
Political and Social Influence
AI software, readily accessible online, can produce videos within minutes for a few dollars a month, simplifying content creation at scale. Synthesia, an AI company located in London, offers software to generate deepfake avatars.
These "digital twins" resemble hired actors, can speak 120 languages, and come in diverse appearances and fashion styles. Primarily used for HR and training videos, the software costs as little as $30 per month, streamlining video production. Synthesia's CEO has emphasized the need for clearer rules on AI tool usage and warned that identifying disinformation will become increasingly difficult as deepfake technology approaches Hollywood-level production quality on personal computers.
Deepfakes have also found their way into the world of politics and social influence. Political figures can have their speeches manipulated, and fake videos can be used to spread propaganda. The implications for democracy and public trust are profound, as deepfake technology can undermine the authenticity of digital content.
In mid-March 2022, amid the ongoing Russia-Ukraine war, a peculiar video circulated on social media and, thanks to hackers, even aired on the Ukraine 24 TV channel. The video seemingly featured Ukrainian President Volodymyr Zelenskyy, with an unusually robotic appearance, urging his citizens to stop fighting Russian soldiers and surrender their weapons, claiming he had left Kyiv.
However, this video was a deepfake, created using artificial intelligence to mimic real people convincingly.
The video was swiftly debunked, removed from major online platforms, and ridiculed for its poor quality. Even so, the incident highlights the significant threat deepfakes pose in a politically polarized world, where media consumers may embrace information that aligns with their biases regardless of its authenticity, cautions Don Fallis, a computer science and philosophy professor at Northeastern University.
Innovative Approaches that Combat Deepfake Threats
Detecting deepfakes is challenging because the technology itself keeps advancing. Deepfake creators continually refine their methods, producing new variations at an alarming pace.
Google has recently introduced a tool called SynthID to address the escalating challenge posed by AI-generated deepfake images. As AI-generated content becomes more prevalent, the tool aims to identify such images by embedding a digital watermark that can later be detected. However, experts emphasize the importance of a proactive, collaborative approach to stay ahead of bad actors in combating deepfakes.
While watermarking has traditionally been used to protect copyrights, there is a growing consensus that it may require standardization to counter AI-generated imagery. Some proponents suggest that a longer-term solution could involve techniques like cryptography and blockchain to verify the authenticity of content.
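To make the cryptographic side of that idea concrete, here is a minimal, hypothetical sketch using Python's standard library: a publisher tags content with a keyed hash at creation time, so any later tampering, such as a deepfake face swap, breaks verification. Real provenance schemes would use asymmetric signatures and signed metadata rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret for illustration only; a real deployment
# would use an asymmetric key pair so verifiers never hold signing keys.
SECRET_KEY = b"publisher-signing-key"

def sign_content(media_bytes: bytes) -> str:
    """Return a provenance tag: an HMAC-SHA256 over the raw media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str) -> bool:
    """True only if the bytes are exactly what the publisher signed."""
    return hmac.compare_digest(sign_content(media_bytes), tag)

original = b"frame-data-of-authentic-video"
tag = sign_content(original)

print(verify_content(original, tag))                         # True
print(verify_content(b"frame-data-with-swapped-face", tag))  # False
```

Because the tag is bound to the exact bytes, even a one-pixel alteration produces a verification failure; a blockchain or public log would then serve as a tamper-evident place to publish such tags.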
The rise of deepfake technology poses a significant cybersecurity challenge: as detection methods improve, deepfake generators become more sophisticated at evading them. In this era of digital manipulation and deception, staying vigilant is crucial. Keeping up with deepfake technology and actively engaging in efforts to combat it is essential to minimizing the cybersecurity threats it creates.
As deepfakes become more convincing and widespread, addressing these threats grows increasingly urgent. We need a multifaceted approach involving technology, updated laws and policies, and education to protect individuals, organizations, and society from digital impersonation. The future of cybersecurity hinges on our ability to adapt and stay ahead of these developing technologies.