"AI audio helps overcome language and communication barriers, creating a more connected and productive world, but it can also attract bad actors. To prevent misuse and promote responsible AI, we at ElevenLabs adopt AI safety strategies based on three principles: moderation, accountability, and provenance. To us provenance means that everyone should know whenever audio files or recordings they encounter are AI-generated. Providing this transparency through classifiers, AI detection systems, and cross-industry collaboration, such as with C2PA and CAI, is crucial for maintaining trust and safety."
Kellee Wicker, Director, Science and Technology Innovation Program, The Wilson Center
"We knew it before, but the pandemic made it starkly clear: modern life takes place online and we understand our world through a digital lens, increasing the importance of being able to trust what we see, hear, and read. We need to pair technical solutions on authenticating content with public education on digital literacy; and what's more, we need robust discussions with diverse stakeholders on how synthetic content can and should be used by good faith actors. To date, the discussion in the generative AI era has focused on how bad actors can wield deepfakes – we shouldn't demonize the technology, but rather have a clear understanding of the rules that define synthetic content as positive, harmless, or malicious. Finally, we are long overdue for a frank review of the data used to train generative systems, and this again is an area that requires broad stakeholder engagement to ensure we don't put vulnerable populations at risk."
The Latest News
In this issue, we cover:
Surgeon General Urges Social Media Labels
Marketers Urged to Adopt Provenance
AI Fraud Surge Hits Cryptocurrency
Deepfakes Target Celebs and Individuals
Pennsylvania Moves to Ban Deepfake Porn
Russians Threaten Information Weaponization
Surgeon General urges warning labels on social media to address youth mental health crisis.
Dr. Vivek H. Murthy, the Surgeon General of the United States, calls for warning labels on social media platforms due to the significant mental health risks for adolescents, including increased anxiety and depression. The Surgeon General notes that adolescents who spend over three hours daily on social media face double the risk of these symptoms, with current average use at 4.8 hours. Nearly half of adolescents reported that social media negatively impacts their body image. Murthy argues that, similar to tobacco warning labels, such measures could increase awareness and prompt behavioral changes and emphasizes that comprehensive legislation is also needed to protect young people from online harm.
Marketers urged to adopt content provenance for brand protection.
The use of generative AI in the US has surged from 7.8 million people in 2022 to 100.1 million in 2024. This staggering rise highlights generative AI's potential for productivity, but also for misuse and deception. Implementing a standard for content origin is essential for maintaining a healthy internet environment. According to EMARKETER, marketers should adopt content provenance to protect their brands from being associated with misleading media. Ensuring clarity about the origin of digital assets safeguards brand safety and prevents potential damage from controversial or fake content.
Tenfold surge in synthetic video, often referred to as deepfakes, with the cryptocurrency sector hit hardest.
Synthetic media-monitoring company Sensity warns that deepfakes are spreading through social media scams faster than expected. Fraudsters use advanced deepfake technology to deceive and exploit individuals, targeting politicians, celebrities, and businesses. Deepfakes detected worldwide increased tenfold from 2022 to 2023, and deepfake fraud is increasingly aimed at the cryptocurrency industry: the crypto sector accounts for 87.7% of detected cases, followed by fintech. The sector's digital nature, potential for high financial gain, and regulatory challenges make it especially vulnerable to fraudsters. This trend underscores the need for transparency and authenticity in digital content to prevent significant economic losses and reputational damage.
Deepfake abuse escalates, targeting public figures and private individuals.
Musical artist Megan Pete, known as "Megan Thee Stallion," is the most recent example of a prominent woman targeted with a non-consensual synthetic video. The artist is speaking out, warning others about this form of abuse against women. Her experience highlights how AI has been weaponized against women in recent years. Celebrities are not the only targets; private individuals are also being victimized globally. A teenage boy in Australia was arrested after creating and distributing explicit deepfakes of approximately 50 female students from a private school. The sexualized synthetic images were made using photos from the girls' social media profiles and were shared on platforms including Instagram and Snapchat. The incident has drawn significant attention to how AI is being weaponized against private individuals, especially at the local level, prompting calls for more robust legal measures.
Pennsylvania Senate passes bill to outlaw distribution of pornographic deepfakes.
The Pennsylvania State Senate passed legislation outlawing the distribution of pornographic deepfakes. This bill aims to eliminate a legal loophole and will make it a crime to distribute AI-generated deepfakes without the subject's consent. The bill now moves to the House for consideration, highlighting the state's effort to update laws to combat digital abuse and protect citizens.
In response to Western sanctions, Russians threaten to weaponize information.
Former Russian President Medvedev is promoting an aggressive campaign against Western society and infrastructure in response to sanctions. Outlining a strategy to counteract Western pressure, he called on Russian citizens to spread misleading information and create an environment in which Western audiences cannot tell reality from fiction. These geopolitical tensions underscore how information and digital content are being weaponized in the digital age.