Tactics Ahead of Elections: Visual information, often spread through social media platforms, is central to voters’ decision-making and sentiment, especially ahead of next month’s midterm elections. Officials are expressing heightened concern about visual disinformation, and the FBI’s Cyber Crime Office is working with county officials to protect against deceptive visual and cyber attacks. This month, a deepfake of President Biden singing a children’s song went viral on TikTok and other social media platforms. The satirical video, made by a British civil servant using open-source voice-cloning and lip-syncing AI from a GitHub repository, was clearly disclosed as satire, yet it was still amplified by the President’s critics and garnered nearly 500,000 views within a few days.
Further, tech leaders such as Microsoft are working to collaborate and share information on bad actors and trends across both the cyber and influence domains. Others have advocated a proactive approach to moderation that flags deceptive posts as inaccurate before they have a chance to spread. Experts are closely watching the tactics bad actors may use for visual deception, including:
- Repurposed or rebroadcast images - real images presented with a false date, time, location, or context in order to deceive.
- Cheapfakes - real images or videos that are manipulated (edited, cropped, sped up, or slowed down) to deceive the public.
- Liar’s Dividend - invoking the existence of deceptive media to dismiss actual, authentic media as “fake.”
- AI-generated or synthetic images (deepfakes) - images that are completely fabricated by a digital system.
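The first tactic above, repurposed imagery, is often countered with perceptual hashing: a compact fingerprint that stays stable under recompression or light edits, so a "new" photo can be matched against previously published ones. The sketch below is a minimal, illustrative average-hash implementation, not any platform's actual system; it treats a grayscale image as a plain 2D list of pixel values rather than decoding real image files.

```python
# Minimal sketch of average-hash (aHash) perceptual fingerprinting, one way
# to spot a repurposed or rebroadcast image. Illustrative only: a real
# pipeline would decode and downscale actual image files first.

def average_hash(pixels):
    """Return a bit string with 1 wherever a pixel exceeds the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# Toy 2x2 "images" standing in for decoded grayscale data.
original = [[10, 200], [220, 15]]
slightly_edited = [[12, 198], [225, 14]]  # e.g. recompressed or brightened
unrelated = [[200, 10], [15, 220]]

h0 = average_hash(original)
print(hamming(h0, average_hash(slightly_edited)))  # 0 -> likely the same image
print(hamming(h0, average_hash(unrelated)))        # 4 -> likely different
```

Because the hash thresholds each pixel against the image's own mean, small global edits leave the fingerprint unchanged, which is why near-duplicate detection survives reposting and recompression.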
Weaponized Image Deception with Geopolitical Consequences: The use of user-generated content in world and geopolitical affairs is only growing as smartphone penetration and internet access increase globally. Today, both the conflict in Ukraine and widespread protests in Iran highlight the importance of images and videos captured in hard-to-reach or non-permissive areas. As deepfakes advance, it will become easier to misrepresent information and for bad actors to exploit the fog of information. If deepfakes like the one of Ukrainian President Volodymyr Zelenskyy that circulated earlier this year are perceived as real, they could negatively impact world events and trigger political crises. On September 20, 2022, during the UN General Assembly, UN Secretary-General António Guterres stated that “trust is crumbling” and expressed concern about the growing threat of digital misinformation, noting the resulting threats to the integrity of information, the media, and democracy.
Impact on Human Rights: User-generated content in the form of images and videos recorded by witnesses plays a critical role in accountability and legal proceedings globally. It has changed how we stay informed about large-scale human rights violations, especially in non-permissive environments. Though cheapfakes and the Liar’s Dividend (casting doubt on all media by claiming “deepfake”) remain the largest threats to user-generated documentation of human rights abuses, there is growing concern that the proliferation of synthetic media will have significant ramifications. The TRUE (Trust in User-generated Evidence) project was awarded funding to examine how public perceptions of deepfakes—AI-manipulated images, videos, or audio—affect trust in user-generated evidence of human rights violations.
Latest - Synthetic Identity Impersonation: Another trend is bad actors increasingly using synthetic media to create false identities or impersonate real ones online. Last month, former US Ambassador to Russia Michael McFaul warned that his identity was being impersonated through live synthetic media generation. Bad actors have already applied real-time deepfakes to commit fraud in virtual job interviews, but Ambassador McFaul’s experience highlights what many experts feared: their use to propel disinformation globally. Internet and social platforms often struggle to establish the identity of online personas and ensure that accurate information is being published.