“The rise of AI-generated content is shaping diverse sectors and spheres of society, from media to politics and education. While Generative AI presents exciting possibilities, it also raises alarming concerns, such as misleading content that can undermine citizens' ability to make informed decisions about what to believe. Trustworthy sources and digital transparency are becoming increasingly crucial for helping citizens navigate the information environment and become more discerning and well informed. Multi-stakeholder dialogue can help inform important efforts to reinvigorate a shared foundation of online trust.”
“In March, a call rang out for a halt to AI experiments over safety concerns. Yet there has been no slowdown, only a rush forward, heightening the risks to all who might be vulnerable. Those risks are particularly acute for children, and those of us in the child online safety field are deeply concerned. Advanced AI is now being manipulated to produce or disseminate child abuse material, generating images of fictitious victims or altering images of real ones. AI-enhanced chat platforms are becoming tools for predators, enabling them to groom or deceive children. The stakes will only rise with the advent of sophisticated voice, image, and video features. To counter this, we must be able to tell which content is AI-generated and which is not. Together with our partners, we are advocating for robust safeguards and transparency for AI-generated content, which is critical to protecting children's safety in an increasingly digital world.”
Veena McCoole, Head of Communications and Marketing, NewsGuard
“For all the extraordinary promise of generative AI, it has already presented a grave threat to trust in information. Our research has identified an alarming propensity for generative AI chatbots to respond to prompts about topics in the news with well-written, persuasive, and entirely false accounts of the news.
Our analysts have also identified a growing number of Unreliable AI-Generated News (UAIN) sites (467 as of 18 September 2023), which operate with little to no human oversight and publish articles written largely or entirely by bots.
Fortunately, there is proof that incorporating transparently sourced trust signals into AI output is feasible: Microsoft's Bing Chat can offer nuanced analysis, describing questions as “controversial and disputed,” citing NewsGuard’s assessments of source credibility as part of its answer, and encouraging users to be “cautious and critical” when reading information from “unreliable sources.” NewsGuard’s global team of misinformation experts continues to monitor the changing landscape of AI and its impact on the news industry.”
The Latest News
First attempt at combining Content Credentials and watermarking on a Gen AI platform.
Truepic brings two new spaces to Hugging Face: the first makes C2PA Content Credentials available for developer use, and the second integrates Content Credentials with Steg.AI’s digital watermarking algorithms. This marks the first time the two technologies have been combined on a major platform.
The relaunch of Content Credentials and a new icon promoting transparency for digital content.
Major brands, including Adobe, Microsoft, Leica Camera, Nikon, Publicis Groupe, and Truepic, have begun adopting an “icon of transparency,” the official symbol of Content Credentials. As the volume of digital content grows exponentially, this icon offers a reliable way to signal that an image or video carries Content Credentials.
More weaponization of Gen AI at the local level: Spanish town in turmoil as mobile apps are used to create explicit images of underage girls.
Spanish police report that mobile apps are being used to create AI-generated explicit images of underage girls. A town in Spain has made international headlines after several young women said they received fabricated images of themselves created using an easily accessible AI-powered “undressing app.” More than 30 victims between the ages of 12 and 14 have been identified.
Famous personalities are raising their voices against deepfakes used to impersonate them.
AI has been employed to create deepfake videos of notable personalities. These fabricated videos were used to advertise various products and services, ranging from weight loss solutions to investment opportunities, without the consent or involvement of the individuals depicted. Several celebrities have publicly disavowed the videos as fraudulent and objected to the unauthorized use of their likenesses.
Gen AI is equipping scammers with new tools.
From fraudulent texts to cloned voices and faces superimposed onto videos, generative AI has made it easier for malicious actors to create deceptive content that can fool both individuals and anti-fraud systems. Criminals have historically been early adopters of new technologies, often outpacing law enforcement's ability to respond. Companies focused on preventing fraud are innovating swiftly to stay ahead, progressively exploring new kinds of data to identify malicious individuals.
Bad actors spread propaganda and sow confusion by sharing fake and misleading content.
False and misleading claims fuel division and spread propaganda about Israel and Gaza. A viral video depicts a Hamas militant downing an Israeli helicopter, but it's a scene from the Arma 3 video game. A video seemingly showing an Israeli woman being assaulted in Gaza was recorded in Guatemala in 2015. Furthermore, counterfeit accounts impersonating a BBC reporter and the Jerusalem Post disseminated false information before X suspended them.
Oxford Generative AI Summit 2023
A multi-stakeholder event hosted at Oxford University to explore the use cases, implications, and future of GenAI and society. Register here.
Other Recommendations
Deepfakes of Chinese influencers are live streaming 24/7 ⇒
Hollywood Writers Win Promise: No Robots Will Get Screen Credits ⇒
New York’s Hottest Steakhouse Was a Fake, Until Saturday Night ⇒
PAI Welcomes Diverse Cohort of Partners to Synthetic Media Framework ⇒
Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy ⇒
Summit23 SCSP Tech Demo: Transparency & Authenticity Online: The Case for Content Provenance ⇒