Surging: Visual trust impacts every industry and aspect of society. In 2020, the number of deepfakes reached an estimated 145,227, nine times more than the previous year. By 2022, the number had likely become incalculable. Deepfakes, which use artificial intelligence to create fake or synthetic images and videos, are evolving daily. This evolution is pushing us toward a "zero-trust society," one in which people can no longer distinguish what is real from what is fake. The ability to differentiate fact from fiction is the bedrock of online trust and a healthy society.
What’s New? Accessibility: Recently, there has been an explosion of “text-to-image” deepfake technology, which creates synthetic images from text prompts. This technology marks the most significant democratization of synthetic media to date, making it accessible to any individual without prerequisites or specialized skills. OpenAI’s DALL-E 2 and Google Brain’s Imagen placed limitations on access and capabilities to curb misuse by bad actors. Other platforms, however, such as Stability AI, released their models and even source code to the public without any limitations. Unfortunately, within four days of its code release, pornographic images generated with Stability AI’s code reportedly emerged on social media platforms.
Weaponization - Fraud, Deception, Harm: As access increases, so will threats. The FBI issued a warning about scammers using deepfake technology to impersonate job candidates for remote positions, noting substantial growth in the number of bad actors pairing deepfake videos with stolen personal identification information to trick employers. The public is being targeted, too. Recently, scammers posted deepfakes of Elon Musk on YouTube to defraud unsuspecting victims; according to the BBC, YouTube accounts were hijacked and used to promote fraudulent cryptocurrency giveaways.
Thus far, synthetic media has been chiefly weaponized to victimize women. For example, some tools remove clothing from non-nude photos to create nonconsensual pornographic fakes. This disturbing trend, combined with the ease of accessibility, illustrates the increasing misuse of synthetic media. In addition, experts are warning about the growing use of real-time deepfake video for occupational fraud and corporate espionage.
Wider Implications: Beyond the immediate harm of any single piece of synthetic media, there is a larger and perhaps more concerning problem. The Liar’s Dividend will soon become a commonly known phrase, referring to the undermining of visual evidence under the pretext that it could be fake. The more synthetic media that is released, the more bad actors will use its existence as an excuse to dismiss genuine imagery that cannot be authenticated. Most recently, this defense has been raised by defendants filmed on video during the January 6th attack on the U.S. Capitol.
Developing: Deepfake technology has become increasingly convincing over the past two years. According to a VMware study, which polled 125 incident response and cybersecurity experts, email accounted for 78% of deepfake attacks last year, making it the top delivery method for synthetic media attacks. Differentiating between synthetic and authentic media is a critical foundation of online business and marketplace interactions. As the threat of malicious deepfakes continues to grow and advance, media authentication will become a crucial factor in risk mitigation.
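To make the idea of media authentication concrete, one simple building block is cryptographic signing of content at capture time, so that any later alteration is detectable. The Python sketch below is a minimal, hypothetical illustration (the key name and functions are assumptions for this example, not any specific industry standard such as C2PA); it signs an image's raw bytes with an HMAC and verifies them later.

```python
import hashlib
import hmac

# Hypothetical secret held by a trusted capture device or signing service.
SECRET_KEY = b"capture-device-secret"

def sign_media(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce a tamper-evident signature over the raw media bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Re-compute the signature and compare in constant time."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Example: authentic bytes verify; any single-byte alteration fails.
original = b"\x89PNG...raw image bytes..."
sig = sign_media(original)
assert verify_media(original, sig)
assert not verify_media(original + b"x", sig)
```

Real-world systems layer provenance metadata and public-key signatures on top of this basic pattern, but the core promise is the same: authenticated media can be distinguished from anything modified or synthesized after the fact.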