"We are living through an unprecedented technology revolution due to the advent of Artificial Intelligence. It holds the immense promise of advancing industries, improving productivity, and enriching human lives. However, it also brings a set of significant societal challenges when it comes to digital content. The concern that the development of AI is outpacing the creation of robust mitigation systems and safeguards is warranted.
For example, it is becoming increasingly difficult to distinguish between human-generated content and synthetic content, particularly as deceptive uses of technology are commodified, and certain images and videos are deliberately shared for political ends. To navigate these complexities and to demystify the AI boogeyman, we must place transparency and authenticity at the heart of our approach. Initiatives like the C2PA or Partnership on AI’s framework for Responsible Practices for Synthetic Media give us some essential harm-reduction building blocks. The Blue Owl Group is excited to contribute to the public conversation and to ensure that transparency and authenticity are at the forefront of this era-defining tech transformation."
Sandra Khalil, Associate Director, All Tech Is Human
"Transparency and authenticity are not merely best practices, but foundational pillars for the credibility and integrity of digital content and trustworthy dissemination in today’s information ecosystem. Without one or both, we run the risk of chipping away at our institutions' legitimacy and public faith in them over time. Therefore, as technology continues to evolve and our content integrity issues become increasingly complicated, so must our commitment to these principles. That includes robust policies at the standard-setting level, like those of the Coalition for Content Provenance and Authenticity (C2PA), and global, multistakeholder efforts in the trust and safety solution space."
The Latest News
In this issue, we cover:
Jumio Report: Americans Fear Political Deepfakes
Fake Media Could Influence Elections
Microsoft Warns of AI Election Risks
Senators Introduce Bill To Protect Creators
Senate Passes Act Combating Deepfake Pornography
Sensodyne Leads in Advertising Transparency
Jumio report reveals Americans fear AI deepfakes in elections and are increasingly skeptical of online content.
A recent report from identity verification platform Jumio highlights the growing threat of AI-generated deepfakes to elections: 72% of Americans are concerned about the influence of AI and deepfakes on upcoming elections, and 70% of U.S. consumers report increased skepticism toward online content compared with the last presidential election, saying deepfakes undermine their trust in politicians and the media.
Increased prevalence of well-timed fake media could influence elections.
In an interview with Politico, Professor Danielle Citron, who has long studied synthetic media and helped coin the phrase “the liar’s dividend,” expressed concern about the increasing prevalence of well-timed deceptive synthetic videos. She warned that even a small number of these videos, strategically released to sway public opinion, could cause significant damage, particularly during elections. A recent incident involving a deepfake video of Vice President Kamala Harris has raised serious concerns about the impact of AI on public perception and political stability: the clip, which circulated on social media, plays on a frequently mocked phrase the Vice President has repeated, and it is not genuine but appears to be an AI-generated imitation of a speech she delivered in 2023. In another instance, a digitally altered photo depicting Secret Service members smiling after an assassination attempt on Trump was widely shared online. These incidents underscore the urgent need for AI transparency to protect the integrity of political discourse.
Microsoft warns of deepfake risks to upcoming elections and advocates for AI transparency.
Microsoft highlighted the threat of synthetic video to democratic processes, particularly in the context of the upcoming elections in the EU and the U.S. The company has led multiple efforts to increase security and transparency, including helping establish Content Credentials through the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials attach provenance information to digital content so that consumers can review where it came from and how it was made. Microsoft has also deployed Content Integrity Tools, a platform that lets political campaigns, news organizations, and election officials add Content Credentials to their own content; the tools include a secure app built by Truepic to enable the authentic capture of images, videos, and audio with Content Credentials attached.
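To make the underlying idea concrete, here is a minimal conceptual sketch in Python of how a provenance manifest can bind origin information to a piece of content and make tampering detectable. This is an illustration of the general technique only, not the actual C2PA format: the real specification uses certificate-based signatures and a binary manifest embedded in the asset, and the names below (make_manifest, verify_manifest, SECRET_KEY) are hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing certificate


def make_manifest(content: bytes, producer: str) -> dict:
    """Bind a provenance claim to the content's hash and sign the claim."""
    claim = {
        "producer": producer,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the claim is authentically signed and the content unchanged."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"]) and (
        manifest["claim"]["content_sha256"]
        == hashlib.sha256(content).hexdigest()
    )


photo = b"...image bytes..."
manifest = make_manifest(photo, producer="Example Newsroom")
print(verify_manifest(photo, manifest))        # True: content intact
print(verify_manifest(b"tampered", manifest))  # False: content changed
```

Even in this toy form, the two properties the newsletter describes are visible: the manifest names the producer, and any edit to the content breaks the hash binding, so a consumer can tell the asset no longer matches its claimed origin.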
Senators introduce the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) to combat the rise of harmful deepfakes.
Senators Marsha Blackburn, Maria Cantwell, and Martin Heinrich have introduced the COPIED Act, a legislative measure designed to combat the challenges posed by deepfakes and protect the rights of content creators. This bill seeks to safeguard journalists, artists, and songwriters from the risks associated with AI-generated content. It establishes transparency standards for AI-generated material and empowers creators with greater control over their work. Additionally, the legislation prohibits the manipulation of provenance information. The COPIED Act has garnered support from various industry groups, including SAG-AFTRA and the Recording Academy.
Senate unanimously passes DEFIANCE Act to combat AI-generated pornographic deepfakes.
The Senate has unanimously passed the DEFIANCE (Disrupt Explicit Forged Images and Non-Consensual Edits) Act, which aims to provide legal recourse for victims of AI-generated pornographic deepfakes. Led by Senators Dick Durbin and Lindsey Graham, and Representative Alexandria Ocasio-Cortez, the bill amends the Violence Against Women Act to allow victims to sue those who produce, distribute, or receive such content without consent. If passed by the House, this legislation would be the first federal law to create a civil cause of action for deepfakes.
Haleon's Sensodyne leads the way in digital advertising transparency with Content Credentials.
Katie Williams, U.S. Chief Marketing Officer for Haleon, announced that Sensodyne is the company's first brand to incorporate C2PA Content Credentials in its U.S. digital advertisements. This initiative aims to verify that the dentists endorsing Sensodyne are genuine professionals rather than paid actors. Williams underscored the significance of transparency, citing a survey in which 75% of consumers expressed a desire for transparency indicators in advertisements. Additionally, research indicates an increase in brand equity for digital media incorporating Content Credentials. Although the use of Content Credentials in advertising is still in its early stages, Haleon’s positive experience suggests that more brands may benefit from adopting this technology.