Transparency Standards Becoming Essential in Countering AI Manipulation
Jennifer Brody, Deputy Director of Policy and Advocacy for Technology and Democracy, Freedom House
"Transparency and authenticity of digital content is important to help people assess the reliability of the information they consume. This is critical to empower society with the tools they need to combat false or misleading content, which can result in especially dangerous outcomes during elections. When determining how to appropriately enhance content provenance, companies should always be sure to prioritize privacy risks for those likely to be targeted for surveillance or attacks, like human rights defenders. It is also essential that civil society be consulted when formulating industry standards for content provenance documentation to best safeguard democracy and protect information integrity."
Henry Adams, Trust & Safety | Partnerships & Strategy, Resolver, a Kroll Business
"Whether we look at digital content through a macro lens, like mis and disinformation targeting the upcoming US Presidential Elections, or at deeply personal cases like the sharing of non-consensual intimate imagery, authenticity and transparency are increasingly central data points for Resolver’s work.
Answering “How” content came to be is increasingly as important as the Who and the Why in the insights our Trust & Safety intelligence & advisory services provide.
While bad actors aren’t generally users of emerging technologies like C2PA, we anticipate adversarial tactics that mask origin will need to be countered with an increasingly broad array of tools and techniques.
Our analysts’ ability to differentiate generative from hybrid, and hybrid from “real” content, is a material part of our ability to help our partners keep pace with the technologies being exploited by those seeking to cause or incite harm or distrust online."
Alexander Leschinsky, Co-Founder & CEO, G&L Geißendörfer & Leschinsky GmbH
"At G&L, we support leading broadcasting and news organizations that have cultivated trust with their audiences over decades. While misinformation and fraud have always posed challenges, the rise of generative AI has made it easier than ever to produce misleading or false content that is difficult to detect. C2PA offers robust cryptographic tools to establish verifiable links between digital content and trusted entities, providing a critical solution for maintaining audience trust in an era of disinformation. We are pleased to help our clients integrate C2PA into their workflows swiftly, utilizing Truepic’s proven SDKs for signing and validating digital assets."
The Latest News
In this issue, we cover:
Companies Showcase Media Transparency at IBC
Over Half of Businesses Hit by Deepfake Scams
iPhone 16 Event Targeted by Crypto Scam
Amazon and Meta Join C2PA Steering Committee
Growing Need for Media Literacy
New California AI Election Laws
State-Led Efforts to Combat Deepfake Porn
At IBC 2024, companies showcased media transparency advancements.
The International Broadcasting Convention (IBC) in Amsterdam spurred several announcements around media content transparency. At the EBU (European Broadcasting Union) booth, a group of companies including BBC, WDR, Adobe, G&L Systemhaus, netTrek, and Truepic presented an end-to-end C2PA publisher workflow demo. EZDRM showcased advanced media verification focused on content integrity and video provenance; G&L Systemhaus and Truepic presented a proof of concept for C2PA-compliant streaming for European broadcasters; and Media City Bergen discussed C2PA integrations from Norwegian tech companies in Project Reynir. On September 16, the International Press Telecommunications Council (IPTC) announced Phase 1 of the IPTC Verified News Publishers List.
Over half of businesses are hit by deepfake scams using AI-generated media for fraud.
Deepfake scams are rising rapidly, affecting 53% of businesses in 2023, according to a survey by Medius. Cybercriminals use advanced AI to create convincing synthetic video and audio, targeting companies for financial gain and data theft. These scams are part of a broader wave of AI-driven cybercrime, and businesses are struggling to maintain effective defenses. A May report by Deloitte projects that generative AI could drive fraud losses in the U.S. to $40 billion by 2027. The rise in deepfake incidents underscores the urgent need for stronger cybersecurity strategies and awareness to counter increasingly sophisticated threats.
Crypto scammers use deepfake of Tim Cook in iPhone 16 live stream scam.
During Apple's iPhone 16 launch event, scammers hijacked YouTube with deepfake videos of CEO Tim Cook, promoting a crypto scam. The AI-generated version of Cook urged viewers to deposit Bitcoin and other cryptocurrencies into a wallet, promising to double their money. These streams, made to look like official Apple content, attracted hundreds of thousands of views and were likely boosted by bots. YouTube has since removed the fraudulent videos, but the incident highlights the growing use of deepfake technology for crypto scams.
Amazon and Meta join the C2PA steering committee and adopt Content Credentials.
Amazon and Meta are strengthening their commitments to digital content transparency by joining the Coalition for Content Provenance and Authenticity (C2PA) as new steering committee members. With Amazon and Meta now on board, alongside Adobe, BBC, Google, Microsoft, Truepic, and others, the push for transparency and authenticity in digital content is stronger than ever. C2PA Content Credentials provide vital information on how and when digital content was created or modified. Meta's focus on labeling AI-generated content across Facebook, Instagram, and Threads is a significant step toward ensuring users know when AI has been involved in generating or editing images and video. Amazon will begin attaching Content Credentials to images produced by Titan Image Generator v1 and v2 and will work on incorporating them into AWS Elemental MediaConvert.
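For readers curious what "attaching Content Credentials" involves in practice, here is a hedged sketch using the open-source c2pa Rust crate's Builder API. It is not Amazon's or Meta's internal integration, and the manifest fields, certificate paths, and file names are all placeholders: it declares a small manifest, signs it with a local test certificate, and embeds the result into a copy of an image.

```rust
// Minimal sketch with the open-source `c2pa` crate's Builder API.
// Cargo.toml: c2pa = { version = "0.x", features = ["file_io"] }
use c2pa::{create_signer, Builder, SigningAlg};

fn main() -> c2pa::Result<()> {
    // A minimal manifest definition (values are placeholders).
    let manifest_json = r#"{
        "claim_generator": "example-app/1.0",
        "title": "example.jpg"
    }"#;
    let mut builder = Builder::from_json(manifest_json)?;

    // Load a signing certificate and private key from disk
    // (placeholder paths; production signing would use a managed key).
    let signer = create_signer::from_files(
        "certs/es256.pub",
        "certs/es256.pem",
        SigningAlg::Es256,
        None, // optional time-stamp authority URL
    )?;

    // Sign the manifest and embed it into a copy of the asset.
    builder.sign_file(signer.as_ref(), "input.jpg", "output-signed.jpg")?;
    Ok(())
}
```

The resulting file carries a cryptographically signed Content Credential that any C2PA-aware validator, including the Reader sketch earlier in this issue, can inspect.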
In the age of deepfakes, media literacy matters more than ever.
Media literacy has become essential in helping people discern authentic content from manipulated media. With advancements in AI, deepfakes are becoming increasingly convincing, making it harder to trust what we see online. Media literacy equips individuals with the tools to critically evaluate digital content, identify potentially misleading information, and recognize altered media. This skill set is vital in combating the spread of fake news and maintaining trust in media sources. As AI-generated content becomes more prevalent, businesses are using transparency to preserve consumer trust while ensuring a consistent and recognizable brand voice.
California Governor signs three laws regulating AI-generated election content.
California Governor Gavin Newsom signed three bills regulating AI-generated content and deepfakes related to elections. AB 2655 requires online platforms to block deceptive election content and to implement reporting mechanisms. AB 2355 requires political ads created with AI to disclose their AI-generated nature. AB 2839 prohibits the malicious distribution of materially deceptive ads within 120 days before an election. These laws aim to combat the spread of false information and protect election integrity ahead of the 2024 U.S. presidential election.
States are leading the charge on deepfake laws.
As national legislation on deepfake pornography moves slowly through Congress, many states are taking matters into their own hands. Thirty-nine states have introduced laws aimed at combating non-consensual deepfakes, and 23 have already passed legislation. Federal efforts include the DEFIANCE Act, introduced by New York Congresswoman Alexandria Ocasio-Cortez, which would allow victims to sue, and Senator Ted Cruz’s Take It Down Act, which would require platforms to remove such content. However, inconsistencies among state laws and the difficulty of enforcing them across state lines underscore the need for federal action.