The increased use of rudimentary deception techniques in visual imagery and the growth of hyperreal synthetic media raise several significant and developing legal issues. Whether put to positive uses for entertainment or accessibility, or misused for abuse and deception, manipulated and synthetic media pose evolving questions for attorneys, legislators, and businesses. Here is an overview of some of these considerations:
Cheapfakes: Crudely edited, mislabeled, or decontextualized images, audio, and video, commonly called “cheapfakes,” are the most common form of manipulated media online today. Experience shows that propelling false narratives does not require cutting-edge synthetic media generation or alteration. The surge of cheapfakes will only deepen the distrust many feel toward all media, fueling the growth of the “liar’s dividend”: the benefit malefactors can reap by dismissing authentic media as fake, because the public will be primed to doubt the veracity of any inconvenient evidence. These challenges will only grow as advanced synthetic media becomes more widely available and more believable.
Deepfakes: The advent of artificial intelligence (AI)-generated synthetic media, or “deepfakes,” creates new and controversial challenges that legislators, attorneys, and businesses must now grapple with, such as:
Ownership: Who owns a deepfake? That is often an open question. The source data fed into an AI generator to create synthetic images may belong to one or more people, potentially giving those rightsholders copyright claims in the generated media. Whether the use of underlying source data to create a deepfake qualifies as “fair use,” or whether the output is sufficiently “transformative” of the source imagery to avoid infringing its copyright, will vary case by case. In the meantime, businesses that want to use deepfakes for commercial purposes will need to consider the provenance of their source data, secure appropriate licenses, and address the other intellectual-property implications of their use.
Deepfake-Specific Laws: Legislators around the country have moved with notable speed to address deepfakes. So far, eight states have passed laws barring deepfakes of some kind, and Congress has passed, and the President has signed, four laws related to deepfakes. Roughly thirty bills on deepfakes are under consideration in about twelve states and in Congress.
Matthew F. Ferraro (@MatthewFFerraro), a former U.S. intelligence officer, is a visiting fellow at the National Security Institute at George Mason University and a counsel at Wilmer Cutler Pickering Hale & Dorr.