Impact of Executive Order on AI and Visual Transparency
Shanthi Kalathil, Principal, MDO Advisors, and former Deputy Assistant to the President and Coordinator for Democracy and Human Rights, National Security Council
“Synthetic media has become an inextricable and unpredictable component of elections, conflict, and other political flashpoints around the world. The Biden Administration's new Executive Order correctly highlights the global and national security risks potentially posed by AI and synthetic content. It moves in the right direction by calling for understanding and guidance on visual and other forms of transparency, including tools for tracking provenance, labeling/watermarking, and detection. Furthermore, the EO can serve to sharpen understanding of how adversaries and other foreign actors may utilize synthetic media in ways that pose risks to the security of the United States or its allies and partners.
At the same time, executive action by the Biden Administration can only go so far. Fostering a more trustworthy global information ecosystem is one of the most pressing needs of our current moment, and responsibility lies with all who are active in this space. And as technical and other solutions begin to arise, these actors must be deliberate about ensuring that their efforts are inclusive across different regions of the world, especially places without access to advanced technology.”
Ken Carbullido, Vice President of Election Product and Technology Strategy, Ballotpedia
“We’re in the age of misinformation, and now more than ever, it’s important to provide reliable, accurate, unbiased, and nonpartisan information online that voters can trust. For Ballotpedia, verifying the authenticity of images is an essential part of providing voters with trustworthy information about their candidates in our Sample Ballot Lookup tool and across our site.
Manipulation of content is easier than ever before, and voters must have confidence in the information they find online about their elections and candidates if we’re to maintain a healthy democracy. Providing access to high-quality, dependable, and verified information on candidates is mission critical.”
Megan Shahi, Director of Technology Policy, Center for American Progress
“AI continues to rapidly advance in sophistication and capture public attention, with generative AI from large language models quickly reaching 100 million commercial users and a spate of AI tools already available to federal agencies through leading cloud computing services. Biden's Executive Order on safe, secure, and trustworthy AI and the draft OMB guidance on AI usage in government that followed are commendable first steps toward securing Americans from the risks of these technologies while ensuring we can collectively harness their opportunities.
The EO includes many of the principles that the Center for American Progress (CAP) has been calling for since April, including an all-of-government approach to AI that encompasses the framework of the 2022 AI Bill of Rights. The EO is well-poised to specifically enhance transparency and authenticity online by outlining industry transparency requirements, promoting watermarking and other content authentication mechanisms for automated systems and synthetic media, and highlighting the importance of disclosure when AI systems are being used. Civil society, including CAP, is eager to support the crucial work across the administration and industry to implement the EO in an effective and future-proofed manner.”
The Latest News
Executive Order paves the way for robust AI guidelines focused on safety, security, and transparency.
The United States is making significant strides toward comprehensive rules and guidelines for AI. President Biden issued an Executive Order on Artificial Intelligence (AI) to establish a framework for AI safety and security. A key focus of the order is the need for transparency and authentication in generative AI (Section 4.5), which emphasizes content labeling, watermarking, and transparency as essential steps forward. The Administration is urging nations worldwide to back the creation and enforcement of global standards for identifying and tracking genuine government digital content and AI-created or manipulated media. Other parts of government are watching closely as well, with notable Congressional hearings and Senate AI forums focused on the emerging threats of generative AI.
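To make the watermarking and labeling ideas in Section 4.5 concrete, here is a minimal sketch of one classic technique: hiding a short label in the least significant bits of an image's pixel values. This is a toy scheme for intuition only; production watermarks for AI-generated content must survive compression, cropping, and editing, and the function names and the "AI-GENERATED" label below are illustrative assumptions, not part of any standard.

```python
def embed_label(pixels: list[int], label: str) -> list[int]:
    """Hide a UTF-8 label in the least significant bit of each pixel value."""
    data = label.encode() + b"\x00"               # NUL byte marks the end
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the label")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit              # overwrite only the lowest bit
    return out

def extract_label(pixels: list[int]) -> str:
    """Recover the hidden label by reading low bits until the NUL byte."""
    data = bytearray()
    for i in range(0, len(pixels) - 7, 8):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i + j] & 1)
        if byte == 0:
            break
        data.append(byte)
    return data.decode()

if __name__ == "__main__":
    image = [128] * 1024                          # stand-in for grayscale pixels
    marked = embed_label(image, "AI-GENERATED")
    print(extract_label(marked))                  # -> AI-GENERATED
```

Because only the lowest bit of each value changes, the labeled image is visually indistinguishable from the original, which is exactly why robust, detector-friendly schemes and signed provenance metadata are an active area of standardization.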
The malicious use of AI in the UK's political arena.
Deepfake audio of Sir Keir Starmer circulated online, coinciding with the start of the Labour Party conference. As members of the UK's main opposition party convened in Liverpool, an audio file with the potential to cause controversy began making the rounds on X. Separately, the Metropolitan Police confirmed that a counterfeit audio recording, which supposedly features the Mayor of London advocating for the rescheduling of Armistice Day in favor of a pro-Palestinian demonstration, does not constitute a criminal offence. These incidents highlight the potential risks AI could pose to the UK's political landscape. The controversies come only days after the UK hosted a notable global summit on AI, where leaders from 28 countries issued the “Bletchley Declaration,” calling for international cooperation to manage the risks associated with generative AI.
2024 elections underscore the need to defend democracy amid rising AI influence.
In 2024, elections will be held around the globe, with over 2 billion voters from 50 countries going to the polls, including the United States, India, and the European Union. The proliferation of generative AI will require technologists, civil society, and social media platforms to rise to the occasion to protect democracy. As fears escalate about AI's ability to amplify the distribution of false information, Microsoft is stepping forward with its plan of action, which includes content credentials to signify content produced by AI, all aimed at combating deceptive media and bolstering cybersecurity. Additionally, Meta stated that it would enforce a new rule requiring advertisers to transparently disclose when their political, electoral, or social issue ads include potentially misleading content generated or altered by AI. In Argentina, this month's presidential election has been widely dubbed the “first AI election,” as leading candidates have deployed synthetic media as critical pieces of their campaigns.
Content transparency moving to hardware devices.
Qualcomm announced that its newest premium smartphone chip, the Snapdragon 8 Gen 3, incorporates built-in capabilities for image labeling. This applies to images captured by the camera and those produced via AI, utilizing technology from Truepic. The same week, Leica unveiled the M11-P camera with built-in Content Credentials. These two initiatives highlight how content authenticity and transparency are quickly moving toward hardware devices.
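Conceptually, what a provenance-enabled chip or camera does at capture time can be sketched in a few lines: hash the sensor output, attach capture metadata, and sign the result with a key held in secure hardware. The Python sketch below uses the pyca/cryptography package for the signature; the manifest fields, function names, and in-memory key are illustrative assumptions, since real Content Credentials (C2PA) implementations use certified hardware keys and a standardized manifest format.

```python
import hashlib
import json
import time

# Requires the pyca/cryptography package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a key that would live in the device's secure hardware.
DEVICE_KEY = Ed25519PrivateKey.generate()

def sign_at_capture(sensor_bytes: bytes, device_model: str) -> dict:
    """Produce a signed capture manifest, as a provenance-enabled camera might."""
    manifest = {
        "content_sha256": hashlib.sha256(sensor_bytes).hexdigest(),
        "device": device_model,
        "captured_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = DEVICE_KEY.sign(payload).hex()
    return manifest

def verify_capture(sensor_bytes: bytes, manifest: dict) -> bool:
    """Verify the signature and confirm the image bytes are unmodified."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        DEVICE_KEY.public_key().verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    return claim["content_sha256"] == hashlib.sha256(sensor_bytes).hexdigest()

if __name__ == "__main__":
    photo = b"raw sensor output"
    manifest = sign_at_capture(photo, device_model="example-camera")
    print(verify_capture(photo, manifest))            # True: intact
    print(verify_capture(photo + b"!", manifest))     # False: tampered
```

In practice a verifier would check the signature against the manufacturer's published certificate chain rather than holding the private key, but the core guarantee is the same: any alteration of the image after capture breaks the signature.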
Famous actress fights back against AI app for illegally utilizing her image in an ad.
Scarlett Johansson has initiated legal proceedings against an AI app for illicitly using her name and image in an online ad. The actress's representatives confirmed that Johansson has no affiliation with the app. It is not just celebrities; ordinary individuals can also be harmed by deepfakes. Instances might include the use of a person's images, or those of family members, to create explicit content, for blackmail, or to bypass security measures and steal identities. At Westfield High School in New Jersey, for instance, students used AI to create and distribute fake nude photos of female students, the legality of which remains ambiguous due to the lack of federal laws regulating deepfakes.
Other Recommendations
AI Briefing: More companies are advertising AI as spending picks up ⇒
How Biden's executive order pushes tech to flag AI content ⇒
The Horrifying Images Are Real. But They’re Not From the Israel-Gaza War ⇒
The inside scoop on watermarking and content authentication ⇒