“As a researcher focused on generative AI, I'm interested in building systems and training people to distinguish synthetic media from authentically recorded media. In order to evaluate and augment our own capabilities for identifying synthetic media, it becomes necessary to create a diverse array of high quality, potentially persuasive synthetic media on which to train humans and machines. However, these images, audios, and videos can be taken out of context and possibly be used to spread misinformation. C2PA offers an opportunity to address this problem by adding context on where perceptual media originated, which allows people to quickly check if a video is partially or wholly generated by AI.”
“Generative AI has enormous potential, with positive and negative effects on our world. The study of those effects is critical. Digital content provenance and transparency in synthetic content through the C2PA standard will prove incredibly useful in enabling safe and scalable research on the impact of gen AI on business, society, and government.”
- Mounir Ibrahim, EVP of Public Affairs and Impact, Truepic
The Latest News
Research process highlights ethical deployment of generative AI in research settings using the C2PA's open standard.
A study led by researcher Dr. Matt Groh used Truepic’s cryptographic signing solution and display library to add and display tamper-evident provenance data on over 30 AI-generated videos. Using the C2PA open standard, the researchers were able to disclose and attribute the AI-generated videos shown to participants as part of the debriefing process at the end of the experiment. This is the first time the C2PA specification has been used to debrief participants in an academic study involving deepfakes.
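The core idea behind tamper-evident provenance can be illustrated with a small sketch. This is NOT the C2PA format itself (C2PA embeds COSE-signed JUMBF manifests backed by X.509 certificates); here a stdlib HMAC over the media bytes plus a provenance claim stands in for the real asymmetric signature, and the key name and claim fields are hypothetical, just to show the bind-then-verify flow:

```python
# Illustrative sketch only -- not the C2PA wire format. An HMAC over
# (media hash + claim) stands in for C2PA's certificate-backed signature.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real C2PA signers use cert-backed keys


def attach_provenance(media: bytes, claim: dict) -> dict:
    """Bind a provenance claim (e.g. 'AI-generated') to the media bytes."""
    payload = json.dumps(claim, sort_keys=True).encode()
    digest = hmac.new(SIGNING_KEY,
                      hashlib.sha256(media).digest() + payload,
                      hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": digest}


def verify_provenance(media: bytes, manifest: dict) -> bool:
    """Return True only if neither the media nor the claim was altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY,
                        hashlib.sha256(media).digest() + payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


video = b"...video bytes..."
manifest = attach_provenance(video, {"generator": "AI", "disclosed": True})
assert verify_provenance(video, manifest)             # intact media passes
assert not verify_provenance(video + b"x", manifest)  # any tampering is flagged
```

Changing a single byte of the media, or editing the claim after signing, invalidates the signature, which is what makes the disclosure tamper-evident rather than a removable label.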
Schumer announces classified briefing on AI threats and national security for Senate members.
The United States Senate is scheduled to receive a classified briefing on the potential threats artificial intelligence poses to national security. The briefing, arranged by Senate Majority Leader Chuck Schumer, will also cover AI's impact on global security and foreign adversaries. It marks the first classified briefing of its kind for the Senate, as lawmakers seek to stay ahead of a rapidly advancing technology.
The Verge Surveys 2,000 people about AI usage and fears.
The Verge and Vox Media surveyed 2,000 people about how they use AI, which capabilities they want most, and which aspects of AI concern them most. The survey found that 78% of respondents believed AI-generated digital content should be required to explicitly disclose that it was created with AI. Notably, 76% said that using AI to mimic someone's voice or video content without their permission should be illegal.
Gen AI is Flooding the Internet with Garbage.
A team of researchers from Cambridge University and the University of Toronto is examining how LLMs will evolve as they train on data that is itself increasingly synthetic. Semafor reported that the researchers found programs relying on AI-generated content become vulnerable to a phenomenon they call “model collapse.” That dynamic would favor models trained on pre-generative-AI data, or models whose authentic content has been marked or kept separate.
The role of data will be crucial in determining success or peril as AI becomes increasingly ubiquitous in the marketing industry.
As AI becomes increasingly prevalent in marketing and media, there is growing recognition that this highly advanced technology carries inherent risks, and it remains to be seen whether sufficient measures are being taken to address them. In a move to promote the responsible use and management of AI, particularly generative AI, the agency holding company Publicis Groupe announced its membership in the Coalition for Content Provenance and Authenticity (C2PA).
A new bipartisan bill has been introduced that excludes AI from legal immunity under Section 230.
Senators Josh Hawley and Richard Blumenthal have introduced a new bipartisan bill clarifying that the internet's bedrock liability law, Section 230 of the Communications Decency Act, does not apply to generative AI. The bill would strip AI companies of immunity in civil claims and criminal prosecutions involving the use or provision of generative AI, allowing people to sue in federal or state court over harms allegedly caused by generative AI models.
Indian politician claims scandalous audio clips are deepfakes: experts warn of politicians using AI as cover
An Indian politician from the Hindu nationalist party released two audio clips allegedly showing an opposition leader admitting to corruption and praising his opponent. The opposition leader denied the accusations and claimed the clips were fabricated using artificial intelligence. Three deepfake experts were asked to analyze the authenticity of the clips, and they concluded that the second clip was authentic, but the first clip may have been tampered with.