AI image deception concerns from elections to natural disasters
Debbie Cox Bultan, CEO of NewDEAL & the NewDEAL Forum
"Generative AI technology holds great potential to deliver results that make government work better for everyone. However, reaping the benefits of AI also requires safeguards. Transparency and authenticity are key.
We have observed that especially when it comes to maintaining a healthy democracy with safe, secure, and fair elections. With the ability to create realistic deepfakes, including images, videos, and voices of candidates, generative AI has the potential to supercharge nefarious efforts to spread mis- and disinformation. Voters have a right to know if what they are looking at is real or fake. Many states are enforcing transparency by requiring campaigns to disclose the use of generative AI in campaign ads or banning the distribution of deepfakes within 90 days of an election. Further, policymakers should support ways to verify content as quickly as possible.”
Ugonma Nwankwo, Senior Associate, Tech, Media & Telecoms, Global Counsel
“Imagine walking through a museum, where each piece of art tells a story, inviting you to understand its origins, the history of ownership, the artist’s intent, and the context in which it was created. You pause in front of a painting, captivated by the image and the detailed explanation provided, which deepens your appreciation and trust in its authenticity. Similarly, in the chaotic landscape of digital content, trust is paramount. Transparency and authenticity act like the detailed museum placard, offering clarity and allowing consumers to make informed decisions about what content to trust and engage with amidst the noise. To create such an ecosystem where trust and authenticity are front and center, policymakers and industry must collaborate to establish standards around responsible practices and expectations, ensuring that transparency and authenticity become the norm on platforms, not the exception.”
The latest news
In this issue, we will cover:
AI deepfakes could be weaponized on voters
Deepfakes undermine hurricane relief
YouTube displays first authentically produced video
French Embassy hosts AI harms briefing
Judge blocks California deepfake law
Study finds deepfake surge of Trump, Musk
Election officials worry that AI deepfakes could be weaponized on voters.
With U.S. voting underway, election officials in some states have cited AI deepfakes as a top concern. In battleground states, officials are training to handle threats like misleading AI-generated news or manipulated videos aimed at disrupting voting. The Department of Homeland Security warns that AI could be used to create fraudulent election records or impersonate election staff. More than 3 in 4 Americans believe AI will likely be used to influence the election outcome, according to an April 2024 Elon University poll, highlighting the urgent need for public awareness and digital literacy.
AI-generated deepfakes of hurricane victims undermine disaster relief efforts.
AI-generated media depicting distressed victims proliferated across social media platforms following Hurricanes Helene and Milton in the U.S. Harrowing images of children devastated by floodwaters fueled false narratives and complicated relief efforts. Some reports identified foreign state actors, including Russian troll farms, creating and promoting synthetic media across popular social media platforms. ABC News’ Emmanuelle Saliba reported that the level of synthetic media deception during these crises was a bit of a “turning point,” and that relief workers were slowed down by their inability to distinguish authentic media from synthetic media.
First authentic video uploaded and identified on YouTube.
Truepic captured and uploaded the first authentic video on YouTube, the world’s largest video-sharing platform. This breakthrough was made possible by the interoperability between Truepic’s technology and the YouTube platform through the Coalition for Content Provenance and Authenticity (C2PA) open specification. While Content Credentials on AI-generated creations have become standard practice, YouTube and LinkedIn have gone a step further and begun displaying Content Credentials on non-AI (authentic) material as well.
The Embassy of France hosted a briefing on combating AI harms using content authenticity.
On Oct. 2, the Embassy of France and the C2PA held a briefing on the risks and opportunities associated with AI and the critical need for digital content authenticity. The event focused on efforts to address AI-related challenges through transparency and authenticity in media, with participation from public- and private-sector stakeholders across the international community. The C2PA specification is emerging as a global standard with the aim of enhancing trust in the digital ecosystem. The discussion is set to continue at the AI Action Summit in Paris in February 2025.
Judge blocks California deepfake law, citing First Amendment concerns over free speech and satire.
A federal judge temporarily blocked California's new law aimed at curbing the use of deepfakes in political campaigns, ruling that it likely violates the First Amendment. The law, signed by California Governor Gavin Newsom, was meant to prevent the spread of misleading AI-generated digital content in the lead-up to the U.S. presidential election. However, the court determined that the law unconstitutionally suppressed free speech, such as satire and parody, which are protected forms of expression.
Study reveals deepfake surge of Trump and Musk ahead of 2024 presidential election.
A study by Kapwing reveals that Donald Trump and Elon Musk are the most frequently deepfaked figures ahead of the 2024 U.S. presidential election, with over 12,000 and 9,500 deepfakes, respectively. The study illustrates the growing presence of synthetic media in the political arena.