Trusted Future
The source for issues, trends, and news on visual trust today
Presented by Truepic

Welcome to the eighth edition of Trusted Future, a monthly update on news, trends, and issues related to visual trust. See past newsletter volumes here.

This month we look at the acceleration of text-to-image synthesis tools with top experts Andy Parsons, Hao Li, and Siwei Lyu.

The Growing Accessibility and Sophistication of Deepfakes

Top experts share their insights on the advancement of AI image generators taking the internet by storm.


"The advancement of synthetic media creation tools, both in output quality and ease of use, is hastening day-by-day. And with each incremental advancement it is becoming more challenging to discern the real from the fake. Transparency is a big component of the ultimate solution. Using powerful AI tools for creativity should not be discouraged, but rather, clarity about how a piece of content was created must become standard. We call this provenance, and I believe it will be the bedrock for a future of restored trust across news, AI creations, and all creative expression."

- Andy Parsons, Sr. Director, Content Authenticity Initiative, Adobe


"While we have also made several advancements in detecting AI manipulated and falsified content, many sophisticated tools are already in the public domain made by 3rd party individuals, and the technology will continue to evolve. The first malicious use cases are already on their way, and we have to implement more advanced security mechanisms for authentication."

- Hao Li, CEO, Pinscreen 


"The latest text-to-image synthesis methods are capable of synthesizing high-resolution realistic images from a single text prompt. They enable the creation of almost arbitrary semantic content at a level of quality and ease unimaginable a few years ago. We should take this challenge seriously, not only are such images harder to discern visually, but the open access to these tools will also flood a large number of manipulated or synthesized images online. Text-to-image synthesis services must authenticate the media created using these tools, with either robust watermarks or control capture techniques."

- Siwei Lyu, Professor of CSE, University at Buffalo
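Lyu's point about watermarking can be made concrete with a toy sketch. The Python code below is purely illustrative and not any vendor's actual scheme: it hides a short tag in the least-significant bits of pixel values. Real "robust" watermarks used for media authentication are designed to survive compression and editing, which this simple LSB toy does not.

```python
# Toy illustration of invisible watermarking: hide a short byte tag in the
# least-significant bits (LSBs) of pixel values. Hypothetical example only;
# production watermarks use far more robust signal-processing techniques.

def embed(pixels, tag):
    # Flatten the tag into bits, LSB-first within each byte.
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        # Overwrite the pixel's lowest bit with one watermark bit.
        out[idx] = (out[idx] & ~1) | bit
    return out

def extract(pixels, length):
    # Reassemble `length` bytes from the pixels' lowest bits.
    data = bytearray()
    for byte_i in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[byte_i * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pixels = [128] * 256            # stand-in for grayscale pixel values
marked = embed(pixels, b"AI")   # imperceptible: each pixel changes by at most 1
assert extract(marked, 2) == b"AI"
```

Because each pixel value changes by at most one intensity level, the mark is invisible to the eye, but any re-encoding of the image destroys it; that fragility is exactly why the robust watermarking Lyu describes is an active research area.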

Deceptive and Fraudulent Deepfakes: What Business and Society Need to Know

Surging: Visual trust impacts every industry and aspect of society. In 2020, the number of deepfakes reached an estimated 145,227, nine times more than the previous year. In 2022, it is fair to assume that the number is incalculable. Deepfakes, which use artificial intelligence to create fake or synthetic images and videos, are evolving daily. This evolution is linked to the creation of a zero-trust society, one in which people can no longer distinguish what is real from what is fake. The ability to differentiate fact from fiction is the bedrock of online trust and a healthy society.


What’s New? Accessibility: Recently, there has been an explosion of text-to-image synthesis technology, which creates deepfake images from text prompts. This technology marks the most significant democratization of synthetic media to date, making it accessible to any individual without prerequisites or special skills. OpenAI’s DALL-E 2 and Google Brain’s Imagen placed limitations on access and capabilities to limit misuse by bad actors. Other platforms, however, such as Stability AI’s Stable Diffusion, were released to the public, source code included, without any limitations. Unfortunately, within four days of the code release, pornographic images created with Stability AI’s code reportedly emerged on social media platforms.


Weaponization - Fraud, Deception, Harm: As access increases, so will threats. The FBI issued a warning about scammers using deepfake technology to impersonate job candidates for remote positions, noting substantial growth in the number of bad actors using deepfake videos coupled with stolen personally identifiable information to trick employers. The public is being targeted, too: scammers recently posted deepfakes of Elon Musk on YouTube to defraud unsuspecting victims, hijacking YouTube accounts to promote cryptocurrency giveaways, according to the BBC.

Thus far, synthetic media has chiefly been weaponized to victimize women, for example through tools that remove clothing from non-nude photos to create nonconsensual pornographic fakes. This disturbing trend, and the ease of access to such tools, illustrates the growing misuse of synthetic media. In addition, experts warn of the increased use of real-time deepfake video for occupational fraud and corporate espionage.


Wider Implications: Beyond the immediate harm of any single piece of synthetic media, there is a larger and perhaps more concerning problem. The Liar’s Dividend will soon become a commonly known phrase; it refers to the dismissal of genuine visual evidence under the pretext that it could be fake. The more synthetic media is released, the more bad actors will use its existence as an excuse to undermine authentic imagery that cannot be authenticated. Most recently, this defense has been used by defendants who were filmed during the January 6th attack on the U.S. Capitol.


Developing: Deepfake technology has become increasingly convincing over the past two years. According to a VMware study that polled 125 incident-response and cybersecurity experts, email accounted for 78% of deepfake attacks last year, making it the top delivery method for synthetic media attacks. Differentiating between synthetic and authentic media is a critical foundation of online business and marketplace interactions. As the threat of malicious deepfakes continues to grow and advance, media authentication will become a crucial factor in risk mitigation.

In Other News

1 big thing: AI-generated images open cans of worms

Celebrity deepfakes are all over TikTok. Here’s why they’re becoming common – and how you can spot them

Deepfakes - The Danger Of Artificial Intelligence That We Will Learn To Manage Better

Eerie Deepfake Tech Turns Random Guy Into Angelina Jolie

Is That Trump Photo Real? Free AI Tools Come With Risks

Political Deepfakes: social media trend or genuine threat?

Positive Thinking: An end to deepfakes

Ready or not, mass video deepfakes are coming

SAG-AFTRA: Deepfakes “Pose a Potential Threat to Performers’ Livelihoods”

Surreal or too real? Breathtaking AI tool DALL-E takes its images to a bigger stage

Synthetic Media: How deepfakes could soon change our world

The impact of deepfakes: How do you know when a video is real?

What is a deepfake? Everything you need to know about the AI-powered fake media

With Stable Diffusion, you may never believe what you see online again

You just hired a deepfake. Get ready for the rise of imposter employees

Have any comments, ideas, or opinions? Send them to us.
Share with a friend
© Truepic 2022   |   Visit   |   7817 Ivanhoe Ave, Suite 210, San Diego, California
Manage your email preferences   |   Unsubscribe