We asked experts: How do generative AI and digital content manipulation affect democracies? What do governments, industry, and society need to maintain a shared sense of reality online?
"While many in the expert community have for years been raising questions about the potential negative externalities from the rise of generative AI and synthetic media, even the experts are surprised by the pace of change and adoption of new tools in the last six months. Adversarial behaviors typically lag a bit; but within months I suspect we will see many new applications of text, audio and video generators for scams, political manipulation and nonconsensual pornography. It should indicate the need for new approaches to forensics, as well as verification of content authenticity."
"Disruptive technologies like generative AI have the potential to exploit asymmetries in the way democracies and autocracies depend on, use, and misuse information. That there is an objective truth is also critical to the scientific enterprise. If generative AI is used to further blur the lines between fact and fiction; if quality information does not look meaningfully different from fakes, spoofs, or the patently false, at risk is a degradation of a common knowledge base, a shared sense of reality, and shared view of history as well. Democracy needs all of these. As a society, we need to shore up our repositories of knowledge and ensure that as generative AI finds its way into mainstream applications, we can continue to tell what’s real from what isn’t. For starters, that means designing authenticity architectures and building tools into products from the get-go."
"AI will be one of the most disruptive technologies of the next decade, and to realize its full potential we will have to address four essential cybersecurity requirements. How do we secure the infrastructure these systems reside on? How do we ensure the integrity of the training data AI relies on? How do we develop unbiased algorithms and understand the decisions these systems are making to build trust? How do we monitor for adversarial AI designed to negatively impact society, business, or national security?"
An unnamed concept by Stephen Coorlas looks as if it’s ready to be 3D printed. Source: Coorlas Architecture
Headlines to Note
Disinformation’s Newest Weapon? Generative AI is accelerating how synthetic media can be weaponized in disinformation campaigns. According to a Graphika report, pro-Chinese disinformation actors used commercially available deepfake technology to generate videos of two fictitious but realistic-looking people. These AI-generated avatars were positioned as news anchors for a fabricated media outlet, and the deepfake videos circulated on social media before being removed. This is one of the highest-profile examples to date of the weaponized deepfakes that authorities have long warned about.
AI Chatbots & Search Engines: Alphabet Inc, the parent company of Google, saw its share value plummet by $100 billion after its new chatbot Bard shared inaccurate information in a promotional video. Meanwhile, Microsoft’s release of its AI-powered chatbot in the Bing search engine received much fanfare, but its hyperrealistic, human-like language has some worried about its potential to persuade human users in harmful ways.
Risk to Democracy?: The proliferation of AI will have profound implications for politics. AI, amplified by social media, rose to No. 3 on Eurasia Group's list of top risks for 2023. The risk consultancy noted that the more sophisticated AI becomes at disseminating false information across social media platforms, the more dangerous it will be: by the time a fake piece of content goes viral, users have already moved on to the next story. These concerns have led several experts to urge governments to pay closer attention to these trends.
Brand Reputation: The rise of AI-generated content is stoking brand reputation fears, as Gartner predicts that 80% of marketers will deal with content authenticity issues by 2027. Unprecedented access to sophisticated synthetic-media and image-manipulation technologies leaves the digital economy more exposed than ever to visual misinformation and fraud, posing a threat to the global economy.
Media Verification: Deepfakes are becoming more sophisticated, and experts worry about how they will impact news and democracy. CBS's Sunday Morning featured how the emerging open standard from the Coalition for Content Provenance and Authenticity (C2PA) and the work of the Content Authenticity Initiative (CAI) will help protect democracy. Microsoft's Chief Scientific Officer, Eric Horvitz, and Adobe's EVP, General Counsel, & Chief Trust Officer, Dana Rao, explain the importance of verifying media authenticity by tracing a piece of content's origins and edit history.
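The provenance idea behind these standards can be illustrated with a toy sketch. This is not the actual C2PA manifest format (which uses signed, embedded manifests); it is a minimal, hypothetical hash chain showing the underlying principle that each edit records a hash of the content and of the previous record, so any undeclared change breaks the chain.

```python
import hashlib
import json

def record(content: bytes, action: str, prev_hash: str) -> dict:
    # Toy provenance record: a hash of the current content, the action
    # performed, and a link to the previous record, chained like a ledger.
    body = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "action": action,
        "prev": prev_hash,
    }
    body["record_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify(content: bytes, chain: list) -> bool:
    # Valid if every record's hash covers its own body and links to the
    # previous record, and the final record matches the current content.
    prev = "origin"
    for rec in chain:
        body = {k: rec[k] for k in ("content_hash", "action", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["record_hash"] != expected:
            return False
        prev = rec["record_hash"]
    return chain[-1]["content_hash"] == hashlib.sha256(content).hexdigest()

# A camera capture followed by a declared crop verifies; tampered bytes fail.
chain = [record(b"original pixels", "captured", "origin")]
chain.append(record(b"cropped pixels", "cropped", chain[-1]["record_hash"]))
```

In the real C2PA design the records are cryptographically signed by the capturing device or editing tool, so a verifier can also check who made each edit, not just that the history is internally consistent.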
AI in the Courtroom: After receiving stern warnings from State Bar prosecutors, Joshua Browder, CEO of DoNotPay, will refrain from using an AI chatbot to defend a man in an upcoming court hearing. The unrealized stunt would have involved the defendant wearing headphones and reciting the output of an AI chatbot during a court hearing held over Zoom. Separately, Judge Juan Manuel Padilla of Cartagena, Colombia, created a stir by citing ChatGPT in a ruling. Drawing on the AI tool and prior rulings, he determined that an autistic child's insurance should cover medical treatment and transportation costs that the parents could not afford.
Sky High Investments: Deep Voodoo, the AI entertainment startup founded by Trey Parker and Matt Stone, has secured a $20 million investment led by Connect Ventures. The capital will accelerate the development of its deepfake technology, visual effects services, and synthetic media projects. Google has invested $300 million in AI startup Anthropic, joining other tech giants pouring money into the rapidly growing field of generative AI. Anthropic is building a ChatGPT competitor called Claude. ChatGPT itself has seen tremendous growth since its launch two months ago: the app reached 100 million monthly active users and 590 million unique visits in January, making it the fastest-growing consumer app to date.
Nonconsensual Pornography: Several Twitch stars have had their likenesses used in deepfake pornography without consent, yet few laws exist to protect them. They are now calling for stronger regulations to support victims of this form of abuse.