Deputy Director, Cyber, Technology & Security, Australian Strategic Policy Institute (ASPI)
"Inauthentic media degrades democracy. We need to take a two-track approach for democratic resilience in the age of synthetic media. The first is policy, regulation and education – create the incentives for good behaviours, punish the bad and build awareness in populations so they are resilient against malicious synthetic media intended to deceive and influence. The second is technical – implement new technologies that can uncover malicious synthetic media and mitigate the risks. Scalability is critical here. People will always have a key role, but the volume, velocity and variety of inauthentic media means it must be tackled through human-machine teaming."
Dr. Courtney Radsch,
Director, Center for Journalism and Liberty, Open Markets Institute
"Improving the safety and reducing the harm of AI systems will require bold, meaningful actions to address the root causes of many of the problems posed by AI and a broader conception of what we think of as harm and safety. We need to adopt policies that mitigate concentrations of power, structurally separate dependent lines of business, and ensure that tech companies are no longer able to externalize the societal and environmental harms of their business models. AI as it is being developed and deployed today poses an existential threat to the future of journalism, a core institution of democracy. Ensuring that the provenance of content can be traced would allow us to not only implement important safeguards and protect against synthetic media but also ensure that we can determine rights and build these sociotechnical systems in ways that enable licensing and compensation for human creators like journalists. Ensuring that the economic gains of the AI revolution do not remain in the hands of a few companies will be the most important safeguard against the vast array of problems posed by AI."
The Latest News
Twenty major tech firms pledge to combat AI-driven election interference.
Tech companies are also doing their part, since generative AI could be exploited during elections to produce highly convincing deepfakes of political candidates. Ahead of the 2024 presidential race, twenty technology companies signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
Tech giants pledge to curb misleading content and adopt C2PA ahead of elections.
Following its adoption of the C2PA standard, OpenAI has incorporated Content Credentials into images generated by DALL-E 3 and into new products such as Sora, its advanced text-to-video model. Anyone can check an image created with OpenAI's tools on a service like Content Credentials Verify to confirm its authenticity and see which AI tool was used to create it. At the same time, Google joined the C2PA and will serve on its steering committee, collaborating with Adobe, Microsoft, Intel, Truepic, and others. Meta also announced that in the coming months it will begin labeling AI-generated images from OpenAI, Midjourney, and other AI tools.
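For readers curious what inspecting these Content Credentials looks like in practice, the C2PA community publishes an open-source command-line tool, c2patool, that prints an asset's manifest store as JSON. The sketch below shells out to it from Python; it assumes c2patool is installed and on your PATH, the filename is hypothetical, and the JSON field names reflect the tool's current output and may change.

```python
# Minimal sketch: inspect a file's C2PA Content Credentials via c2patool
# (https://github.com/contentauth/c2patool). Assumes the tool is installed;
# "dalle3_output.png" is a hypothetical filename.
import json
import subprocess

def read_content_credentials(path: str) -> None:
    try:
        # `c2patool <file>` prints the file's C2PA manifest store as JSON.
        result = subprocess.run(
            ["c2patool", path], capture_output=True, text=True, check=True
        )
    except subprocess.CalledProcessError:
        print("No Content Credentials found (or the tool could not read them).")
        return

    store = json.loads(result.stdout)

    # The active manifest records, among other things, which tool produced
    # the asset ("claim_generator"). These field names are assumptions based
    # on c2patool's current output format.
    label = store.get("active_manifest", "")
    active = store.get("manifests", {}).get(label, {})
    print("Produced by:", active.get("claim_generator", "unknown"))

read_content_credentials("dalle3_output.png")
```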
White House intends to use cryptographic verification for videos, and the FCC takes action on synthetic audio.
The White House is moving to address the challenges posed by the widespread use of generative AI, a concern that has grown significantly amid numerous documented election-related deepfakes. It is developing a method to cryptographically verify all content issued by the White House, whether a statement or a video, so audiences can confirm its authenticity in the face of misleading AI-generated material. Separately, the Federal Communications Commission (FCC) issued a ruling declaring AI-generated robocalls illegal, enabling it to fine companies that use AI voices in their calls, in response to recent misuse of the technology for voter manipulation and scams.
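The White House has not published implementation details, but the underlying idea is standard public-key signing: the publisher signs the content bytes with a private key, and anyone holding the matching public key can confirm the content is unaltered and came from the key holder. Below is a minimal sketch using Ed25519 from Python's cryptography package; all keys and content are illustrative, not a description of the White House's actual system.

```python
# pip install cryptography
# Illustrative only: a real deployment would distribute the public key
# through a trusted channel and typically sign a hash or manifest of the
# media rather than the raw bytes.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair and sign the published content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"raw bytes of the published video"  # placeholder content
signature = private_key.sign(video_bytes)

# Consumer side: verify the signature with the publisher's public key.
try:
    public_key.verify(signature, video_bytes)
    print("Valid: content is unmodified and signed by the key holder.")
except InvalidSignature:
    print("Invalid: content was altered or not signed by this key.")
```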
States take proactive measures to battle deepfakes ahead of elections.
As new technologies proliferate, states are introducing legislation to control the surge of deepfakes in election campaign materials. Wisconsin has joined the ranks of 20 other states that have proposed or enacted election laws mandating the disclosure of AI-generated content in campaign advertisements: a bipartisan group of Wisconsin assembly members approved two significant bills addressing the use of AI in election cycles.
AI Election Security Handbook emphasizes adopting content provenance technologies.
A growing number of politicians are using AI to create deepfakes, not just of their opponents but of themselves. The trend is already visible in countries such as Indonesia and Pakistan, offering a glimpse of how elections may evolve globally. In response, the Alliance for Securing Democracy at the German Marshall Fund of the United States has released a timely resource: the ASD AI Election Security Handbook. As AI-generated images and videos grow more sophisticated, new countermeasures are being developed, and the handbook highlights content authenticity and provenance technologies as among the most promising.
Deepfake scams escalate as a company loses $26 million, and fake ID generation enables bank fraud.
In Hong Kong, police reported that a finance employee at a multinational company was deceived into transferring $26 million to fraudsters who used deepfake technology to impersonate the company's Chief Financial Officer on a video call. Meanwhile, a service called OnlyFake uses neural networks to produce fake IDs for as little as $15, generating fraudulent documents almost instantly and potentially facilitating illicit activities ranging from bank fraud to money laundering, a significant disruption to the fake identity market and to cybersecurity more broadly.
Women continue to be adversely affected by sexually explicit deepfakes.
No charges were filed after AI-generated explicit photos of underage girls circulated at a school in Winnipeg, a case that highlights a gap in Canadian law around sexualized deepfakes. In the United States, public backlash persists over the fake, sexually explicit images of Taylor Swift that recently spread online, drawing the attention of lawmakers at both the federal and state levels. Ten states have enacted laws targeting individuals who produce and disseminate explicit deepfake content.