"Helping users make informed decisions about the trustworthiness of the content that they are consuming should be a North Star guiding the AI platform shift. This aspiration can only be realized through the deployment of provenance technologies such as C2PA and Google DeepMind's SynthID, both of which support this drive for transparency about how content was made."
Chase Teron, Co-founder & CMO of Overlai
"Transparency and authenticity in digital content are absolutely critical in today's digital age. As we navigate an online world increasingly saturated with information, the ability to verify the origin and history of content helps build trust and credibility. It protects the integrity of information and safeguards the rights of creators by ensuring their contributions are accurately represented and acknowledged. By prioritizing these values, we can foster a digital ecosystem that respects and upholds the truth, empowering both creators and consumers to make informed decisions."
The Latest News
In this issue, we cover:
Media Transparency in Video Streaming
EBU TC Supports Content Provenance Standards
AI Imagery Heightens Election Tensions
Google Cracks Down on Harmful Fakes
Scammers Exploit Elon Musk's Likeness
Deepfake Scams Target Scientists
Advancing media transparency in video streaming for European broadcast companies.
G&L Systemhaus integrates C2PA into existing production and publishing processes, enabling verification of the authenticity and integrity of images and videos. Each piece of content will be signed with C2PA metadata that communicates its origin and editing history. By adopting the C2PA standard, media companies can enhance content transparency and strengthen public trust in their brand. Truepic collaborated with G&L Systemhaus to lead the charge on C2PA streaming compliance for European broadcast companies.
The EBU Technical Committee urges industry collaboration on content provenance standards like C2PA.
The European Broadcasting Union (EBU) Technical Committee calls for collaboration across the media technology industry to support content provenance standards such as C2PA, and advocates for a unified approach to governance aimed at strengthening media integrity and combating misleading information. C2PA Content Credentials provide interoperable, tamper-evident metadata that explains how an image, video, or other piece of content was created. The initiative has already been embraced by platforms such as DALL-E, LinkedIn, and TikTok. The use of Content Credentials to disclose AI involvement reflects a growing trend toward provenance-based transparency in digital content.
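The "tamper-evident" property can be illustrated with a minimal sketch. This is not the real C2PA format, which uses JUMBF containers and X.509 certificate chains; it simply shows the underlying idea of binding provenance metadata to a content hash and signing the result, so that altering either the content or the metadata is detectable. All names and the HMAC scheme below are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Conceptual sketch only: real C2PA manifests use certificate-based
# signatures, not a shared HMAC key. This key stands in for a signer's
# private credential.
SIGNING_KEY = b"demo-signing-key"

def attach_credentials(content: bytes, provenance: dict) -> dict:
    """Bind provenance metadata to the content via its hash, then sign the claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "provenance": provenance,  # e.g. capture device, edit history
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Re-derive the signature and content hash; any mismatch indicates tampering."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata was altered after signing
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"\x89PNG...pixels"  # placeholder bytes standing in for an image file
manifest = attach_credentials(image, {"generator": "ExampleCam", "edits": []})
print(verify_credentials(image, manifest))           # True: content and metadata intact
print(verify_credentials(image + b"x", manifest))    # False: content no longer matches claim
```

In the actual standard, verification also checks the signer's certificate chain, which is how a viewer can display who issued the credentials, not just that they are intact.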
AI-generated imagery and claims of AI manipulation intensify during the US Presidential race.
Former President Trump claimed that Vice President Kamala Harris' campaign used AI to fabricate images of a crowd at a Michigan rally, alleging that an image showing a large gathering at the Detroit Metropolitan Airport was manipulated to exaggerate turnout. The claim was debunked; most experts believe the photo accurately depicted the event and was captured on an iPhone by a campaign staffer. In another instance, images suggesting that Taylor Swift endorsed the Trump presidential campaign went viral. Swift has not made any political endorsements for the 2024 election, and the images have been proven to be AI-generated.
Google strengthens efforts to combat non-consensual deepfakes.
Google is taking significant steps to combat non-consensual deepfakes by making it easier for victims to request the removal of explicit AI-generated imagery and preventing such harmful content from appearing in search results. Additionally, Google is working on more comprehensive solutions, including the development of proactive detection systems and improved protections against the creation and dissemination of deepfakes, especially in cases involving non-consensual or harmful uses of personal imagery.
Elon Musk’s likeness is used as part of a growing AI-powered scam industry causing billions in annual fraud losses.
Steve Beauchamp, an 82-year-old retiree, lost over $690,000 to scammers after being deceived by a fake investment video featuring a deepfake of Elon Musk. The scammers used AI tools to manipulate Musk's voice and mouth movements to make the video appear genuine, convincing Beauchamp and countless others to invest in what seemed like a legitimate opportunity. These deepfake videos, often circulated through social media ads, are part of a growing AI-powered scam industry expected to generate billions in fraud losses annually.
Deepfake scams target scientists, damaging reputations and misleading the public.
Scientists are becoming targets of deepfake scams, which damage their reputations and mislead the public. Cybercriminals are using realistic AI-generated videos to impersonate scientists and promote fake products, leading to financial scams and reputational risks. These deepfakes are hard to detect and debunk, posing significant threats to the credibility of scientists and their work.