Deepfakes, realistic-looking images and videos altered by AI to portray someone doing or saying something that never actually happened, have been around since the end of 2017, yet in recent months they have become a major focus of policymakers. Though image and video manipulation have posed challenges for decades, the threat of deepfakes is different. The early examples were created mostly by people editing the faces of celebrities into pornography, but in April 2018, comedian and filmmaker Jordan Peele worked with BuzzFeed to create a deepfake of President Obama, kicking off a wave of fears about the potential for deepfakes to turbocharge fake news. Congress has introduced a handful of bills designed to help address this threat, but preventing deepfakes from hurting people and society will require additional solutions.
The risks posed by deepfakes, a portmanteau of “deep learning” and “fake,” fall into two camps: that the technology will intrude on individual rights, such as using a person’s likeness for profit or to create pornographic videos without their consent; and that it could be weaponized as a disinformation tool. To address these risks, Senator Ben Sasse (R-NE) introduced the Malicious Deep Fake Prohibition Act of 2018 late last year, which would make it illegal to create, with the intent to distribute, or knowingly distribute, deepfakes that would facilitate criminal or “tortious conduct” (i.e., conduct that causes harm but is not necessarily unlawful, such as creating a deepfake that might damage someone’s reputation). And at a June House Intelligence Committee hearing, Representative Yvette Clarke (D-NY) introduced the DEEPFAKES Accountability Act, which would require anyone creating a deepfake to include an irremovable digital watermark identifying it as such.
These proposals could stop unscrupulous companies in the United States from selling software to produce some of the most offensive deepfakes, such as the “DeepNude” app briefly launched last week offering to manipulate photos of women to make them appear to be nude. They may also deter people in the United States from knowingly creating and distributing deepfakes. But the risk these bills do not address, particularly as it relates to disinformation, is bad actors using the technology to create and distribute this content regardless. After all, bad actors likely do not care whether what they do is legal, and many may live outside the United States, making prosecution unlikely. As Devin Coldewey of TechCrunch puts it, requiring watermarks “is akin to asking bootleggers to mark their barrels with their contact information. No malicious actor will even attempt to mark their work as an ‘official’ fake.” And since software to make deepfakes is freely available online, laws regulating commercial sale of the software will not be fully effective at stopping the spread of these tools. Moreover, removing watermarks is not difficult, so bad actors wishing to pass off a deepfake as real could simply strip them out.
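The fragility of a disclosure watermark is easy to illustrate. The sketch below is a deliberately minimal, hypothetical model (the format and field names are invented for illustration): if a compliance label lives in metadata separate from the media payload, a naive re-encode that copies only the payload silently strips it.

```python
# Toy model of a labeled media file (hypothetical format): the compliance
# watermark lives in a metadata field separate from the video payload.
def make_labeled_fake(payload: bytes) -> dict:
    return {"payload": payload, "metadata": {"synthetic": True}}

def reencode(media: dict) -> dict:
    # A naive re-encode copies only the payload; any metadata-based
    # disclosure label is silently dropped.
    return {"payload": media["payload"], "metadata": {}}

original = make_labeled_fake(b"fake-frames")
laundered = reencode(original)

print(original["metadata"].get("synthetic"))   # True
print(laundered["metadata"].get("synthetic"))  # None -- the label is gone
```

Robust watermarking schemes embed the signal in the pixel data itself, but even those are subject to an arms race with removal tools, which is the weakness critics point to.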
As deepfake technology matures and proliferates, policymakers should recognize that these tools will soon be commonplace. Though some rules restricting the creation and distribution of deepfakes, and of software to produce deepfakes, may be worthwhile, policymakers should not view this strategy as a silver bullet for stopping the threat deepfakes pose, no matter how strict these rules are.
Moreover, policymakers should be careful not to limit the useful applications of the technology underlying deepfakes. The technology will be especially useful in video production: imagine a much more natural-looking version of the scene from Forrest Gump where Forrest meets President Kennedy. In the coming years, deepfake technology will likely be integrated into most commercial video editing software, allowing editors to tweak a script or fix a blooper in a recording, or even letting stand-ins reshoot scenes for someone else, thereby saving amateur and professional producers time and money. The deepfake bills proposed thus far rightfully make exceptions for such applications, including fictionalized media, satire, historical reenactments, and other kinds of content, and these exemptions should be maintained in future bills as well.
However, there are still more opportunities for policymakers to productively address remaining concerns. First, policymakers should focus on gaining a much greater understanding of the threat of deepfakes to national security, including transparency on when and how the U.S. government uses these tools. Bipartisan members of the Senate AI Caucus introduced the Deepfake Report Act in June 2019 to do exactly that by directing the Department of Homeland Security to conduct an annual study of how deepfakes are made, used, and countered, as well as propose policy and regulatory changes based on this information.
Second, policymakers should encourage the development and adoption of technical solutions that can identify deepfakes for moderation purposes. For example, researchers at UC Berkeley and the University of Southern California have developed a machine learning technique that can spot deepfakes with 92 percent accuracy. Similarly, DARPA is investing heavily in developing deepfake-detection algorithms.
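The Berkeley and USC work builds a per-person profile of how facial movements correlate for a given speaker and flags footage that deviates from it. The sketch below is only a toy version of that general idea, with invented feature names and a made-up threshold: it measures the Pearson correlation between two hypothetical per-frame facial features and flags a clip whose correlation strays too far from the person's reference profile.

```python
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_fake(head_tilt, mouth_open, reference_corr, tol=0.3):
    # Flag the clip if the correlation between the two features deviates
    # from the person's known profile by more than `tol` (both hypothetical).
    return abs(pearson(head_tilt, mouth_open) - reference_corr) > tol

# Real footage: the features move together, matching the reference profile.
real_flagged = looks_fake([1, 2, 3, 4, 5], [2, 4, 6, 8, 10], reference_corr=1.0)
# Fake footage: the face swap breaks the person's characteristic coupling.
fake_flagged = looks_fake([1, 2, 3, 4, 5], [5, 1, 4, 2, 3], reference_corr=1.0)
print(real_flagged, fake_flagged)  # False True
```

Production detectors operate on far richer features (facial action units, head pose, GAN artifacts) and learned models rather than a single correlation, but the underlying logic of comparing a clip against a known behavioral signature is the same.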
Third, lawmakers should strengthen civil and criminal laws protecting against the use of an individual’s likeness without their consent. New York lawmakers, for example, are considering legislation that would prohibit using “digital replicas” of individuals without their consent. While such laws would not prevent all bad actors intent on causing harm from doing so—especially those in another country—they could provide legal recourse for victims to take action against people who knowingly, for example, use their likeness in pornography without their consent. Unfortunately, as drafted, the bill has a number of significant shortcomings and could prohibit legitimate uses of “digital replicas.”
Deepfakes are not going away, and new examples of their malicious potential continue to arise. Policymakers should study the issue closely to develop effective countermeasures and pursue innovative policy and technical solutions to keep the threat of deepfakes at bay.
Image: Gage Skidmore