The Fancy Aunties from The Bestie Series: AI-powered art by Malik Afegbua
Source: Slickcity Media
Experts Sound Alarm on Top Digital Risks
We asked experts what they are tracking concerning the veracity of digital content online and its impact.
"There are so many trends in the AI space, but on this topic, I will focus on disinformation and misinformation using AI. This can be very dangerous on many levels, from politics to even personal, because I have seen it's been used in both cases on detrimental levels, so this means transparency or tools to reveal transparency should be on the front line."
"Across every story I’ve covered in the last couple of months – whether it’s a fake profile extorting a teen on social media, non-consensual deepfakes, or a fake image of an explosion – the call for more transparency online is increasing. In our reporting we are seeing that users want the ability to detect a fake account before interacting with it. They want to know when a piece of content is created using artificial intelligence. They want the right to protect their digital image."
'Godfather of AI' discusses dangers the developing technologies pose to society
Source: PBS NewsHour
The Latest News
Microsoft commits to marking AI-generated images and videos with a watermark.
Microsoft has launched two web apps, Bing Image Creator and Designer. Both will feature media provenance capabilities aligned with the C2PA specification, allowing users to verify whether an image or video is AI-generated. Microsoft President Brad Smith also presented a blueprint for the future of AI in which the C2PA's digital content provenance specification was central to scaling transparency and authenticity.
An AI-generated hoax depicting an explosion at the Pentagon went viral, resulting in a temporary stock market drop.
The spread of a fabricated image depicting an explosion near the Pentagon on social media caused a momentary decline in the stock market. The image and its accompanying claim were subsequently debunked by the Arlington Police Department, which suggested that artificial intelligence tools could have been used in its creation. The Pentagon AI hoax has been described as a harbinger of what is to come.
South Carolina law targets “Sextortion” or the extortion of minors for explicit images.
South Carolina state Rep. Brandon Guffey's 17-year-old son Gavin died by suicide after falling victim to a "sextortion" scheme online. Sextortion is a form of blackmail in which victims are lured into sending explicit images and then threatened into doing something they don't want to do, such as sending additional photos or money. These sophisticated schemes often begin with fake profiles and images used to deceive potential victims on social platforms. Law enforcement officials report over 7,000 cases of teens being deceived into sending sexually explicit photos and subsequently blackmailed.
Adobe has introduced a content credential system that verifies if a piece of media has been altered by AI.
Adobe's new AI technology will include "nutrition labels" on images indicating whether they have been altered using AI, in an effort to increase transparency. The company is training its Firefly program only on licensed, high-resolution content that it owns the rights to, in order to avoid creating content based on other people's work or intellectual property.
Gen AI is prompting individuals to reevaluate the definition of authenticity.
Historical authenticity refers to the question of whether an object genuinely dates back to the claimed time, place, and person. Artificial intelligence was used to create a viral Drake and The Weeknd song, a synthetic "photograph" which won a competition, and a fake image of Pope Francis wearing a Balenciaga jacket. This technology has sparked concerns about an authenticity crisis and raises questions about AI’s impact on art, music, and other creative industries.
NewsGuard identifies 49 news and information sites written almost entirely by AI.
AI tools are being used to create content for low-quality websites, known as content farms, to increase advertising revenue. In April, NewsGuard identified 49 websites across seven languages that appear to be generated almost entirely by AI language models designed to mimic human communication. Most of the content features bland language and repetitive phrases, suggesting they are designed to generate revenue through programmatic ads. This increases concerns around the creation of AI-based news organizations that spread unverified information.
A WSJ columnist conducted an experiment, substituting her own presence with AI-generated voice and video.
The columnist experimented with two companies: Synthesia, which creates AI clones based on recorded audio and video, and ElevenLabs, used to make clones based on uploaded audio tracks. With these tools, she was able to trick her bank and even her family into believing her AI avatar was her. This experiment highlights the growing concern over AI being used to defraud individuals and organizations.
The FTC recently released guidance on Gen AI.
The FTC is closely examining the potential impact of AI technology on consumers. Because generative AI tools can tap into unearned human trust, many businesses are eager to deploy them. The FTC cautions that companies developing or implementing these tools should not dismiss or lay off their ethics and responsibility personnel. Its main concern is companies using these tools to steer people into harmful decisions.
The Content Authenticity Initiative (CAI) grows as Google integrates Adobe Firefly and Express into Bard.
Adobe and Google have partnered to use Adobe's Firefly technology to power Bard's AI art generation, with the Content Authenticity Initiative's open-source technology supporting the integration. The CAI has recently surpassed 1,000 members, including Google's Bard and new participants such as Universal Music Group (UMG), Stability AI, and Spawning.ai. Google's "About this image" feature provides important context, such as when the image and similar images were first indexed by Google, where it first appeared, and where else it has been seen online. This differs from the C2PA approach, which uses a digitally signed, tamper-evident data structure that allows viewers to see who created a piece of content and how, when, and where it was created or edited.
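The idea behind a "digitally signed and tamper-evident data structure" can be illustrated with a toy sketch. This is not the actual C2PA format (which uses JUMBF containers, X.509 certificate chains, and a detailed manifest schema); it is a minimal conceptual analogue in which a manifest records who made an asset and a signature binds the manifest to the asset's bytes, so that altering either one breaks verification. All names here are hypothetical.

```python
import hashlib
import hmac
import json

def make_manifest(image_bytes: bytes, creator: str, tool: str, signing_key: bytes) -> dict:
    """Build a simplified provenance manifest and sign it.

    Conceptual only: real C2PA manifests are embedded in the asset and
    signed with certificate-backed keys, not a shared HMAC secret.
    """
    manifest = {
        "creator": creator,
        "tool": tool,
        # Binding the manifest to a hash of the asset makes edits to the
        # image itself detectable, not just edits to the metadata.
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify(image_bytes: bytes, signed: dict, signing_key: bytes) -> bool:
    """Return True only if both the manifest and the asset are unmodified."""
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        return False  # manifest was altered after signing
    return signed["manifest"]["asset_sha256"] == hashlib.sha256(image_bytes).hexdigest()
```

With this scheme, changing one byte of the image or one field of the manifest causes `verify` to return False, which is the tamper-evidence property the C2PA specification provides at production scale.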
OpenAI CEO Sam Altman appears before Congress to testify about the potential dangers of AI.
OpenAI CEO Sam Altman called for government regulation in the Senate panel hearing, stating that the current boom of AI technology required safeguards. He emphasized the need for regulatory intervention and mitigation of risks posed by increasingly powerful models. Altman's appearance follows concerns from lawmakers about the potential risks of AI technology, which were sparked by the viral success of ChatGPT, OpenAI's chatbot tool.
Tristan Harris and Aza Raskin discuss the AI Dilemma.
Half of surveyed AI researchers believe there is a 10% or greater chance that humans will not be able to control AI. Tristan Harris and Aza Raskin discussed the catastrophic risks that existing AI capabilities already pose to society and the race among AI companies to deploy their products quickly without adequate safety measures.
The results are in...
We posted these four images to various social media platforms, and 63% of respondents could not identify which image was AI-generated. This aligns with past studies, such as the one conducted by Hany Farid and Sophie Nightingale, which found AI-synthesized faces to be indistinguishable from real faces and rated as more trustworthy.
Other Recommendations
A.I. Photoshopping Is About to Get Very Easy. Maybe Too Easy. ⇒
Adobe is adding AI image generator Firefly to Photoshop ⇒
ANALYSIS: Seeing Is Not Believing—Authenticating ‘Deepfakes’ ⇒
Another Side of the A.I. Boom: Detecting What A.I. Makes ⇒
Google to test ads in generative AI search results ⇒
Here are safeguards brands and agencies are tapping to prevent or mitigate AI fraud ⇒
How verified accounts helped make fake images of a Pentagon explosion go viral ⇒
USAID: At Diia in DC Event, U.S. Highlights U.S. - Ukraine Innovation Partnership and Launches First Phase of Countries in Digital Transformation Initiative ⇒
Open Questions on V.0 Shared Protocols for Responsible Frontier Model Deployment ⇒
The Deepfake Defense—Exploring the Limits of the Law and Ethical Norms in Protecting Legal Proceedings from Lying Lawyers ⇒