“As generative AI enables the rapid development of synthetic content of all kinds, we believe it is increasingly likely that malicious actors will exploit these technologies to create new examples of disinformation and propaganda. Authentication apps would help information consumers better grasp the origins of suspicious imagery and other content, which could mitigate its harmful cognitive effects. We view authentication apps as one of the possible solutions to this issue. Correctly implemented, authentication technology is a rare ethical and measurable defense.”
Matt Boulos, Head of Policy & Safety, Generally Intelligent
“Advanced AI systems amplify the urgency around trust and transparency. We need to ask whether we have appropriate control over these systems, whether we know exactly what they're doing and how it's being done, and whether we even have the wherewithal to assess whether the things they're doing are consistent with our intentions.
Our approach to doing this responsibly is twofold. First, we require of ourselves, and expect of any frontier AI developer, a theoretical and practical understanding of the underlying principles governing every component of the system we build. Second, we live the values that we believe companies building these systems should follow, and we encourage a legal and regulatory framework that enforces that expectation.”
Tessa Sproule, Director, Metadata and Information Management, CBC/Radio-Canada
“Disinformation detection tools don't work well, at least not yet. We're playing whack-a-mole, and that's not sustainable if we want a healthy information ecosystem.
C2PA is a way for content creators, distributors and consumers to stand for truth. It shows readers, viewers and listeners what they need to understand the provenance of content, its chain of custody: where was this picture taken, who took it, and how was it edited (the things we all need to be thinking about when we encounter media).
I think C2PA's future should be in creating content credentials that integrate with communications tools and platforms. What if we could structure the authenticity signals of journalist-verified facts as secure metadata? I believe journalism will matter all the more as generative AI floods our information ecosystem.”
Digital Content Transparency with UC Berkeley Professor Dr. Hany Farid
The Latest News
Coalition of companies seeks to promote transparency in digital content online.
The increasing prevalence of GenAI is causing concern among experts as it becomes harder for people to distinguish between real and fake online. The Content Authenticity Initiative (CAI) aims to establish a digital standard that enables creators to display "content credentials" that provide information about the entire history of a piece of content. The C2PA complements these efforts by providing documentation and end-to-end open technical standards.
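The core idea behind a content credential, binding a piece of content to a tamper-evident record of its history, can be sketched in miniature. The toy below is illustrative only: it seals the record with an HMAC over a shared key, whereas real content credentials under the C2PA specification use certificate-based digital signatures and a far richer manifest format. All function names here are hypothetical.

```python
import hashlib
import hmac
import json

def make_credential(content: bytes, history: list, key: bytes) -> dict:
    """Build a toy provenance record: a hash of the content, its edit
    history, and an HMAC seal over both."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_credential(content: bytes, record: dict, key: bytes) -> bool:
    """Recompute the hash and seal from the content we were handed; any
    tampering with the content or the claimed history fails verification."""
    expected = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": record["history"],
    }
    payload = json.dumps(expected, sort_keys=True).encode()
    seal = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(seal, record["seal"])
```

The point of the sketch is the failure mode: if either the pixels or the claimed edit history changes after sealing, verification returns `False`, which is what lets a consumer trust the displayed history.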
Deepfake scams pose monumental challenges for the world’s banking industry.
Recent reports highlight the rise of deepfake imposter scams driven by AI, contributing to a new wave of fraud. These scams involve using generative technology to deceive individuals and financial institutions.
The potential interference of AI-powered deepfakes in elections.
With the 2024 presidential race approaching, there is heightened worry about the potential harm posed by deceptive generative AI political content. Attorneys are preparing for a fierce election season marked by a new wave of attack advertisements that remain barely regulated or even clearly defined.
New tools emerge for marking AI-generated content.
Companies and prominent industry leaders, including OpenAI, have committed to implementing technical measures such as digital watermarking or credentialing to aid in the identification of generated content. However, the success of this approach hinges on interoperability and on securing widespread participation in assigning and attaching labels to content.
AI feature adoptions spark privacy and security concerns among companies.
As companies rush to incorporate AI features into their software, concerns about privacy and security are mounting. With the use of GenAI becoming more widespread, providers are recognizing that transparency is essential to earning customer trust. In response, Twilio has announced that it will introduce "nutrition labels" for its AI services, while Salesforce is introducing an acceptable use policy that sets guidelines for how companies may use its GenAI technologies.
Copy Magazine, created by Carl-Axel Wahlström, is the world’s first AI-powered fashion magazine.
The debut edition of Copy Magazine is filled with uncannily flawless images: models with skin devoid of any imperfection and brilliant smiles of straight white teeth. Yet none of the models exist. Every image, and even the text, was generated by AI. Copy was designed to blur the boundary between real and fake.
A multistakeholder event hosted at Oxford University to dive into the use cases, implications, and future of GenAI and society. Register here.
Other Recommendations
AI & Information Integrity: A Conversation with Nina Schick - Sam Harris ⇒
AI-Generated Deepfakes Are Taking Over the World. Here's How | Between the Lines with Palki Sharma ⇒
Are you being catfished by AI on dating apps? – BBC News ⇒
Artificial intelligence creates new challenges ahead of 2024 elections ⇒
Event: Using AI to Protect Civilians in Conflict Zones ⇒