Daniel Abas, Founder & President, Creators Guild of America and Founder of Mosaic
"Who created that? Today, digital content is more than a form of entertainment. It is a dynamic canvas for entrepreneurship. At a time when AI further blurs the lines between human created and artificially generated content, provenance and authenticity is imperative. This is not just about safeguarding intellectual property—it's about preserving the integrity of our society, the truths we collectively agree on. We carry the responsibility for safeguarding our children, and ensuring that economic opportunities exist for future generations. We believe that better tools to claim ownership, verify authenticity, and monetize creative output will track towards a future where technology enhances human prosperity, rather than obscures it."
Philip Reiner, CEO, Institute for Security and Technology
"Digital technologies are intentionally built to affect us, allowing for cognitive offloading and freeing up mental space for other tasks. We have to be clear eyed about the challenges that this can introduce, such as how digital content often exacerbates the illusion of explanatory depth. That means that these technologies pose metacognitive challenges–affecting the way we ‘think about thinking.’ I’m particularly interested and focused on the impact of generative AI on cognition and metacognition–both on an individual and societal level. Transparency and the ability to ascertain authenticity could provide “productive friction” for the user, which could help ameliorate potentially negative cognitive impacts. Through IST’s Generative Identity Initiative, we’re convening stakeholders from across the GenAI landscape to discuss these effects and propose long-term solutions—of which transparency and steps to prove authenticity could go a long way."
The Latest News
In this issue, we cover:
OpenAI and TikTok Commit to AI Transparency
AI Deepfakes Take Over Met Gala
Politics Faces AI Information Challenge
Senate Takes on AI
Arizona Trains Workers Against Deepfakes
NATO Combats AI-Generated Media
OpenAI and TikTok adopt C2PA-backed content labeling for AI transparency.
OpenAI joined the C2PA Steering Committee and implemented Content Credentials in its synthetic outputs so their origin can be verified. Microsoft and OpenAI also launched the $2 million Societal Resilience Fund to bolster AI understanding and literacy among vulnerable communities and voting-age individuals ahead of the US election. The same week, TikTok announced a new feature that delivers transparency by labeling AI-generated content, making it the first video-sharing platform to adopt C2PA Content Credentials. Soon after, LinkedIn pledged to display Content Credentials for all digital content on its platform, becoming the first social media platform to implement credentials for both authentic and synthetic content.
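For readers curious what Content Credentials look like under the hood, here is a minimal sketch of inspecting a file's C2PA manifest with the open-source c2patool command-line utility from the Content Authenticity Initiative. The file name is hypothetical, and output fields can vary by tool version; treat this as an illustration rather than a definitive integration.

# A minimal sketch: read a file's C2PA Content Credentials by shelling
# out to c2patool (https://github.com/contentauth/c2patool), which
# prints an asset's manifest store as JSON. Assumes c2patool is
# installed and on PATH; "photo.jpg" is a hypothetical example file.
import json
import subprocess

def read_content_credentials(path):
    """Return the manifest store as a dict, or None if none is found."""
    result = subprocess.run(["c2patool", path],
                            capture_output=True, text=True)
    if result.returncode != 0:  # c2patool reports a missing or invalid manifest
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("photo.jpg")
if manifest is None:
    print("No Content Credentials found.")
else:
    # The manifest records provenance claims, e.g. which tool or AI
    # model produced the asset and what edits were applied.
    print(json.dumps(manifest, indent=2))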
Global audiences were captivated by this year's Met Gala coverage featuring AI-generated images.
This year's Met Gala coverage was flooded with deepfake imagery that captivated and confused audiences worldwide. The event became the backdrop for viral images, including Katy Perry in a whimsical, woodland-like gown and Rihanna in a regal, garden-themed dress, that turned out to be AI-generated. A mix of AI and conventional editing tools was used to achieve highly realistic images. Despite garnering millions of views and likes on social media, discrepancies in the photos' details and the absence of fashion-magazine coverage revealed them as fakes. The episode illustrates the growing impact of generative AI tools on celebrity culture and the public's perception of reality.
The rising issue of AI-driven information manipulation in politics.
Software analysis confirmed that an audio clip used to accuse the U.S. embassy's Chargé d'Affaires in Bolivia of interference was AI-generated. The fake was disseminated widely on social media and cited by Evo Morales at political events. Separately, the Election Commission of India advised political parties to use social media ethically in election campaigns and to remove any fake content, including deepfakes, within three hours of its detection. The commission's advisory responds to controversies over the misuse of AI in politics and emphasizes the need for transparency and accountability in digital campaigning. U.S. intelligence officials also recently identified a wave of videos targeting the 2024 U.S. elections.
Bipartisan Senate effort proposes regulatory and ethical measures to address AI's trust gap.
72% of consumers say they worry every day about falling for deepfakes, prompting calls for more robust government oversight of AI. The tech industry's drive to advance AI is met with a significant trust gap, borne of concerns over risks such as the spread of misleading information and ethical lapses, which highlights the need for transparency and human oversight as AI is integrated into society. A bipartisan group of senators, led by Senate Majority Leader Chuck Schumer, has unveiled a comprehensive plan that tackles the advancements and challenges posed by AI, with a strong emphasis on increased funding, law enforcement, workforce considerations, and the mitigation of deepfake risks.
Arizona educates election workers on identifying deepfakes to prepare for the 2024 elections.
In Arizona, a training session focused on preparing election workers for the 2024 elections by teaching them how to identify deepfakes. The training, which opened with a video message from Secretary of State Adrian Fontes, aimed to equip those on the electoral front line with the skills needed to tackle the new challenges posed by sophisticated AI-generated media. The intensive training left participants with a healthy skepticism about the media they encounter. The initiative underscores the state's proactive stance in combating digital threats to electoral integrity.
NATO will use a network approach to fight AI-generated misleading information.
NATO is considering a network approach to tackle the challenge of generative AI. This strategy involves partnering with civil society, NGOs, and private actors across member states to identify AI-generated synthetic media. Modeled on Estonia's successful approach, this collaborative method aims to leverage local expertise, fostering trust and credibility within communities. Such networks face challenges, however, including the evolving sophistication of AI and the varying capabilities of partner organizations. NATO's potential adoption of the strategy reflects a proactive effort to enhance digital literacy and media-verification processes.