Margaret Mitchell, Researcher and Chief Ethics Scientist, Hugging Face
"Transparency is fundamental to creating responsible and ethics-informed AI systems. This is because it is an “extrinsic value”, meaning that it is a mechanism through which we can realize so many other values that we care about. For example, transparency is a mechanism for accountability, as it makes clear the who, what, where, why, and how of AI system development. It’s a mechanism for inclusion, as it opens the door for diverse perspectives and inputs, combatting tech’s tendency towards exclusion. It’s a mechanism for honesty and truth about how well systems work. It incentivizes good practices, as we are much more likely to do a good job on things that we have to report — especially when we must transparently demonstrate due diligence, such as in creating “fair” systems that do not have disparate impact on some subpopulations."
Ray Lansigan, Executive Vice President, Digital Experience, Publicis Groupe
"Content provenance is at the heart of the global political economy. At least 20% of the $105T global economy transacts digitally, and humans, 2 billion of whom are voting in elections in the next 18 months, spend almost half their waking hours online. As a critical component of trust infrastructure, Content Credentials by the C2PA is a digital standard that displays provenance metadata providing transparency into the history of that content. The standard has established great momentum, but still needs ecosystem-wide adoption to establish parameters of trust in the digital behaviors that transcend sovereign borders."
The Latest News
The Royal Family image illustrates the intense scrutiny of digital content today.
Warnings from scholars and media professionals that deepfakes and generative AI could dismantle our shared understanding of reality are no longer hypothetical. The recent controversy over a royal photograph shows we are already navigating this era. An image of Kate Middleton and her children became the latest piece of content to face intense scrutiny from open-source investigators and the general public. The Associated Press distributed the photo, then retracted it a few hours later and instructed outlets to delete it, citing suspicions that the source had altered it. The photograph appears to be a "cheap fake": content manipulated with basic editing techniques such as cropping, filtering, and compositing existing images into new ones, rather than with AI. According to digital forensics expert Dr. Hany Farid, had the C2PA's Content Credentials been in use, newsroom photo editors could have reviewed the photograph's Content Credentials before publishing it, potentially averting the chaos that followed its retraction. Soon after the controversy, Princess Kate shared that she was facing significant health challenges.
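The newsroom check Farid describes rests on a simple cryptographic idea: provenance metadata binds a signature to a hash of the content, so any later edit to the pixels invalidates the signature. The sketch below is a deliberately simplified illustration of that idea using a toy HMAC scheme; real C2PA Content Credentials use X.509 certificate chains and structured manifests, and the key, metadata string, and function names here are hypothetical.

```python
import hashlib
import hmac

# Toy illustration of the provenance concept behind Content Credentials.
# NOT the C2PA scheme: C2PA uses certificate-based signatures over
# structured manifests, not a shared-secret HMAC like this.

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key, for illustration only

def sign_content(content: bytes, metadata: str) -> str:
    """Bind a hash of the content and its metadata with a keyed signature."""
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{digest}|{metadata}".encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_content(content: bytes, metadata: str, signature: str) -> bool:
    """Recompute the signature; any edit to the content breaks the match."""
    expected = sign_content(content, metadata)
    return hmac.compare_digest(expected, signature)

# A photo editor's check: the original verifies, an edited copy does not.
original = b"royal family photo bytes"
metadata = "capture_device=camera;date=2024-03-10"
sig = sign_content(original, metadata)

print(verify_content(original, metadata, sig))  # True
edited = original + b" (airbrushed)"
print(verify_content(edited, metadata, sig))    # False
```

The key property for a newsroom is that verification fails "closed": an edited file simply does not match its credentials, so the question shifts from "does this look manipulated?" to "does this still carry a valid history?"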
YouTube introduces a new feature for labeling content created with AI.
YouTube has introduced a feature that lets content creators flag their videos during upload if they include AI-generated or synthetic content. The initiative requires creators to disclose any "altered or synthetic" material that appears genuine, including making a real person appear to say or do something they didn't, modifying footage of real events and places, or depicting "realistic-looking scenes" that never occurred. YouTube says it remains committed to partnering with others in the industry to improve the transparency of digital content, an effort supported by its role as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).
US Secretary of State recognizes provenance system at Democracy Summit.
In his address at the Summit for Democracy, Secretary Blinken underscored the importance of using technology to promote and sustain democratic principles worldwide. The Secretary cited Project Providence as an example of a technological solution for verifying the authenticity of media from the moment of capture to the moment it is viewed. The recognition underscores the critical role that media integrity plays in bolstering democracy across the globe.
BBC implements Content Credentials to validate media authenticity and increase transparency.
BBC News has unveiled Content Credentials, a new feature designed to share information about the source and authenticity of photos and videos. Visitors to the BBC News site will notice a button labeled 'how we verified this' beneath images and videos on BBC Verify content; the button is part of the newly introduced content credentialing system. The feature is built on the open standard developed by the Coalition for Content Provenance and Authenticity (C2PA).
Inability to distinguish synthetic from authentic content a challenge ahead of elections.
The inability to distinguish synthetic from authentic content is a growing problem across industry and is especially pressing as elections draw near. The line between authentic and synthetic continues to blur as prominent individuals claim that genuine public imagery is actually generative AI. Former President Trump recently alleged that a video played during Congressional testimony was AI-generated, and outside the US, generative AI images were allegedly deployed in elections in Indonesia, Argentina, and elsewhere. In response, platforms are scrambling to mitigate the risks, for example by implementing and adopting Content Credentials and watermarks for generative outputs. Others, like Midjourney, have begun blocking users from generating images of US presidential candidates. A new bipartisan bill proposed in the House would address the dangers posed by deepfakes by mandating the identification of AI-generated content on the internet.
The alarming rise of AI-powered phishing attacks.
Deepfake phishing is an emerging cybersecurity threat that uses deep learning models to create highly realistic counterfeit content. In February 2024, a deepfake video conference call in which scammers impersonated a multinational company's Chief Financial Officer led to a staggering loss of $25.6 million at the company's Hong Kong office. According to a report by the Global Initiative Against Transnational Organized Crime, organized crime groups in Southeast Asia are exploiting deepfakes and AI-enhanced cyber scams, and such attacks are becoming increasingly prevalent. In 2023, some experts suggested that deepfake fraud surged by 3,000%, largely because advanced AI models became more accessible. Additionally, new research shows a 704% increase in deepfake "face swap" attacks from the first to the second half of 2023.