C2PA Chair, Director of Media Provenance at Microsoft
"Last year, C2PA marked a number of firsts that showed the technology was not just possible, but was real and here now. Authenticity of media took center stage in conversations far beyond the technical as Generative AI tools came to the forefront.
As we look forward to 2024, over 50 of the world's democracies are going to vote. Already this year, we have seen generative AI content used in advertising and voter disenfranchisement campaigns. Content Credentials have two roles to play in this process: marking the synthetic content produced by well-meaning actors, and marking the authentic content coming from those involved in elections. Doing so will help each of us make sense of the material we receive in this incredibly important process. There's tremendous work to do to bring Content Credentials to users around the world, but the time is right for the effort.
I could not be more excited about the role Content Credentials, built on the C2PA specification, have to play this year. While important, this use case is just one of hundreds that deserve authenticity and integrity protection. I look forward to seeing those come from innovators across the landscape over the course of the year."
Adam Fivenson,
Senior Program Officer for Information Space Integrity, International Forum for Democratic Studies at the National Endowment for Democracy
"We are already seeing authoritarian actors deploy generative AI to advance their efforts to undermine democracy at a global scale. Typically, these efforts leverage synthetic media to advance an authoritarian narrative, undermine a pro-democracy candidate, or amplify social distrust. Recent elections in Taiwan and Bangladesh offer ample illustrations of manipulative synthetic media’s potential to undermine the integrity of the information space at critical moments for democracies. These examples must serve as a clarion call to those working to secure the integrity of the information space for democracy—independent journalists, fact-checkers, narrative researchers, and their coalition partners—about the urgency of a forward-leaning democratic response. Civil society organizations are already taking the lead in that response, building awareness among the public, crafting new tools to detect synthetic media, using LLMs to train media monitoring algorithms, and creating efficiencies for journalists and fact-checkers."
Walter Pasquarelli, Expert & Advisor on Generative AI Policy
"News broke this week of an AI-generated voice clone of US President Joe Biden, discouraging Democrats from voting in the upcoming election. This incident is the latest in a series of similar events. In March 2023, a series of doctored images depicted Trump being arrested by the FBI. Before that, a video circulated showing Ukrainian President Zelensky urging his troops to surrender.
In light of last year's advancements in generative AI tools, now accessible to the wider public, the World Economic Forum has identified AI disinformation as the number one threat to the global economy. While AI-generated disinformation is not a new phenomenon, 2024 may be its most impactful year yet. This year, a record number of voters are expected to participate in national elections across at least 64 nations plus the European Union, together representing about 49% of the world's population.
As voters worldwide cast their ballots, ensuring the authenticity of content, data provenance, and trust will pose the most significant challenge for humanity."
The Latest News
Content Credentials offer a crucial defense against the rising threat of deepfakes.
AI has been capable of producing photorealistic faces for years, and as the systems have advanced, the output has only grown more convincing. Politicians are also exploiting the "liar's dividend," dismissing genuine but controversial photos, videos, and audio as AI fabrications. This trend is destabilizing the shared understanding of truth itself, especially in the context of elections. Ahead of elections around the globe this year, CISA has issued crucial election recommendations urging officials to adopt content authentication and provenance measures, and several media organizations have expressed interest in implementing C2PA Content Credentials, which use cryptographic methods to secure information about the origin and history of an image. OpenAI is the latest generative AI platform actively working to prevent potential abuse in this year's elections worldwide; the organization plans to use Content Credentials to add a layer of transparency to AI-generated content.
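To make that mechanism concrete, here is a minimal sketch of how a signed manifest can bind provenance claims to an asset's exact bytes. This is an illustration only: real C2PA manifests are serialized as JUMBF/CBOR structures with COSE signatures and X.509 certificate chains (and can be inspected with open-source tooling such as c2patool), while the manifest fields, Ed25519 keys, and JSON encoding below are simplifying assumptions chosen for brevity, not the specification's format.

```python
# Conceptual sketch of a Content Credential: a manifest carrying provenance
# claims is bound to the asset by a hash of its bytes and then signed.
# NOT the real C2PA format -- field names and encoding are illustrative.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def create_manifest(asset_bytes: bytes, claims: dict,
                    key: ed25519.Ed25519PrivateKey) -> dict:
    """Hash the asset, embed the hash in the manifest, and sign it."""
    manifest = {
        "claims": claims,  # e.g. who created the asset and with what tool
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = key.sign(payload).hex()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict,
                    pub: ed25519.Ed25519PublicKey) -> bool:
    """Check both the signature and the binding to the asset's bytes."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False  # manifest was altered or signed by a different key
    # Re-hash the asset: any pixel-level edit breaks the binding.
    return manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    image = b"...image bytes..."  # stand-in for a real image file's contents
    cred = create_manifest(image, {"generator": "example-model-v1"}, key)
    print(verify_manifest(image, cred, key.public_key()))         # True
    print(verify_manifest(image + b"x", cred, key.public_key()))  # False
```

The point worth noticing is the dual check on verification: the signature proves who made the claims, and the embedded hash proves which exact bytes the claims describe, so either an edited image or an edited manifest fails.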
In 2024, the evolution of AI regulation is taking center stage amidst escalating global risks.
AI policy and regulation made headlines in 2023. The year marked a significant milestone in policymaking, with the EU finalizing its first comprehensive AI law, the US holding Senate hearings and issuing executive orders, and China implementing specific regulations for recommendation algorithms. As we move into 2024, the focus will shift from establishing a vision to translating policy into tangible action. The World Economic Forum (WEF) Global Risks Report 2024 identifies misinformation and disinformation as the most significant short-term global risk, and AI regulation dominated this year's WEF annual meeting in Davos.
House legislators introduce federal safeguards for voice and likeness.
On January 10, U.S. House lawmakers from both sides of the aisle unveiled new legislation aimed at regulating the use of AI to replicate voices and likenesses. The bill, known as the "No AI FRAUD Act," seeks to establish a national framework for safeguarding individuals' voices and physical likenesses while preserving First Amendment rights. The legislation would curb the misuse of AI and ensure that everyone has the right to control the use of their image and voice. Recently, a robocall mimicking President Joe Biden misled Democrats in New Hampshire by advising them against participating in Tuesday's election; the call falsely suggested that voting would assist Republicans in their efforts to re-elect Donald Trump.
State-level legislation targeting deepfakes is an emerging trend.
With the increasing use of AI, concerns about deepfakes are growing. In response, state representatives recently proposed House Bill 367 at the Ohio Statehouse. Concurrently, the Idaho Legislature has proposed new legislation prohibiting the use of explicit AI-generated media for harassment or extortion. State lawmakers in Indiana introduced House Bill 1047, which would make it illegal to create and distribute AI-manipulated deepfakes that depict real people as nude. Meanwhile, bad actors increasingly use publicly available generative AI tools to mimic, mock, and target local officials online; The New York Times highlighted the work of a Columbia University researcher tracking such activity in Louisiana.
Deepfake scams involving celebrities like Taylor Swift and Oprah cost Americans billions.
According to the FTC, deceptive tactics on social media, including celebrity deepfakes, have resulted in $2.7 billion in losses. Scammers use manipulated images or videos of celebrities to trick people into handing over their financial information. For instance, one scam featured a convincing video of Taylor Swift offering free Le Creuset products; users were asked to provide bank details to cover shipping costs, only to be hit with additional hidden charges. Other scams included deepfakes of Luke Combs promoting weight-loss gummies and Tom Hanks endorsing dental plans.
George Carlin's daughter is distressed over her father's AI-generated clone.
An AI-generated comedy special featuring a synthetic version of the late comedian George Carlin has been widely criticized, especially by his daughter, Kelly Carlin. The hour-long show, "George Carlin: I'm Glad I'm Dead," was created by actor Will Sasso and podcaster Chad Kultgen; the AI behind it was reportedly trained to impersonate George Carlin without permission from his family. The legal path Carlin's estate could pursue in response is uncertain, as the First Amendment protects parody and deceased individuals typically do not have privacy rights under US law. A potential point of contention is whether the AI was unlawfully trained on copyrighted material, part of a broader ongoing debate between copyright holders and AI developers.