Lvxiao Chen is a J.D. candidate, Class of 2021, at NYU School of Law.
Seeing used to be believing. Not so much in the era of Photoshop and computer-generated imagery (CGI). Even less so since late 2017, when Deepfake emerged: an image, audio, and video synthesis technology that generates persuasive counterfeits using machine learning algorithms. It works through a generative adversarial network (GAN), in which a generative model produces images based on existing images while a discriminative model judges whether the generated images are fake. The generative model thereby improves itself, producing ever more convincing graphics.
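To make that mechanism concrete, below is a minimal, illustrative sketch of a GAN training loop written in PyTorch. It is a toy example of the generator-versus-discriminator dynamic described above, not the pipeline of any actual Deepfake tool: the network shapes, the stand-in "real" data, and the training settings are all placeholder assumptions.

```python
# Toy GAN sketch (assumed/simplified): a generator learns to fool a discriminator,
# and the discriminator learns to separate real images from generated ones.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # e.g., flattened 28x28 grayscale images

# Generator: maps random noise to a synthetic "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: predicts whether an image vector is real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, image_dim) * 2 - 1   # placeholder for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Train the discriminator to tell real images from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to make the discriminator label its output as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

As the two models alternate, the generator's output gradually becomes harder to distinguish from the real data, which is the property that makes Deepfake forgeries so convincing.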
Shortly after its emergence, Deepfake became popular in online communities like Reddit and 4chan, where people used it to swap the faces of celebrities, politicians, friends, and family members and to create falsified videos. Although some of this content is benign, other uses raise serious misuse concerns. For example, fake pornographic videos of celebrities such as Scarlett Johansson circulate on the internet. This year, a video that misrepresented Speaker of the United States House of Representatives Nancy Pelosi as slurring her speech went viral. Less prominent people are also subject to the threat of Deepfake, as the technology has been used to generate revenge porn and scam calls.
Although graphics-editing software has been around for a long time, the problem of Deepfake misuse is more pressing because the technology is more accessible than ever. In the past, only skilled designers could use Photoshop to create convincing fake images, and ten minutes of CGI could cost a Hollywood studio $800,000. Nowadays, anyone with a consumer-grade graphics card can create fake videos at home using open-source libraries and online images. There is even free software that generates videos from a single image.
The current legal framework is ill-equipped to address the misuse of Deepfake. In some scenarios, victims can resort to civil litigation by claiming copyright infringement, intentional infliction of emotional distress, defamation, or false advertising. However, some of the elements of these claims are hard to prove in the case of computer-generated videos. Furthermore, because lawsuits are time- and resource-consuming, victims may be hesitant to bring private actions.
There have been several attempts, at both the state and federal levels, to use criminal statutes to address Deepfake misuse. Virginia has adopted a law banning involuntary pornography created with the intent to “depict an actual person and who is recognizable as an actual person by the person’s face, likeness, or other distinguishing characteristics.” New York, California, and Texas are also considering laws that would ban non-consensual, digitally created pornographic works and fake videos that might influence elections.
Nevertheless, state laws are probably not the best way to solve the impending problem of Deepfake. Because only a few states are considering regulating Deepfake-generated works, and on a rather narrow range of subject matter, many abusers may go undetected and unpunished. States may also lack the resources and expertise to tailor the law to misuses of a cutting-edge and rapidly developing technology. Further, because of the nature of the Internet, it can be hard for state courts to obtain jurisdiction over potential offenders. Federal regulation is therefore probably a more promising solution.
Two federal bills have been proposed to criminalize Deepfake creation and redistribution. The Malicious Deep Fake Prohibition Act of 2018 (S. 3805), which has since expired, would have made it unlawful to create and distribute Deepfake works “with the intent that the distribution of the deepfake would facilitate criminal or tortious conduct.” It exempted social media providers from liability if they made good faith efforts to restrict Deepfake content, which would likely have given tech companies an incentive to research Deepfake detection methods. It also carved out an exception for speech protected by the First Amendment. As Congress’s first attempt to regulate Deepfake, however, the bill did little more than restate what is already unlawful: aiding and abetting a crime.
Another bill under consideration is the DEEP FAKES Accountability Act (H.R. 3230). The bill imposes a watermark and disclosure requirement on Deepfake content. A defendant will be criminally liable if he or she:
- Fails to add, or removes, the watermark or disclosure;
- Does so knowingly; and
- Acts with one of the following intents:
  - to “humiliate or otherwise harass,” when the content is visual and sexual in nature;
  - to cause “violence or physical harm, incite armed or diplomatic conflict, or interfere in an official proceeding,” when the content constitutes a credible threat; or
  - to interfere with domestic public policy, when the actor is a foreign power.
- No intent is required if the actor acted in the course of criminal conduct involving fraud.
This bill also offers a private cause of action and makes exceptions where a reasonable person would not mistake the content for actual activity, such as “parody shows or publications, historical reenactments, or fictionalized radio, television, or motion picture programming.”
Like the previous bill, the DEEP FAKES Accountability Act is flawed. To start with, Deepfake creators who intend to cause harm, especially in cases of revenge porn, fraud, and election interference, are unlikely to comply with the watermark requirement and can easily hide their identities, making enforcement extremely hard. Enforcement is also difficult when the actor is outside United States jurisdiction. Furthermore, the Act offers no protection when the intent element is missing but the content is otherwise harmful or humiliating. Lastly, although there is an exception for creative works, the bill, if passed, would almost certainly face First Amendment challenges. For example, fake news and fake political speeches are likely to interfere with domestic public policy, but the consensus is that they should not be regulated.
A better way of resolving the Deepfake problem would be to make social platforms jointly liable in civil litigation when the plaintiff can show that there was no good faith effort to remove harmful Deepfake content. This solution has several advantages. First, it allows plaintiffs to recover when the actor is hard to identify or hard to reach. Second, it incentivizes tech companies to invest more in Deepfake detection technologies. Currently, 47 U.S.C. § 230(c) shields social platforms from civil liability for their users’ posts and allows internet service providers to experiment with their content filters. Because it pushes social platforms to restrict harmful content, a good faith effort requirement probably does not conflict with the purpose of Section 230. Third, while bringing civil actions is resource-consuming for individuals, litigation can more readily be financed when large companies are the defendants.
In conclusion, Deepfake can have a serious impact on society, and anyone can be subject to its harms if the technology is not properly regulated. Although the current bills are flawed in many ways, they indicate that legislators are aware of the problem and are actively trying to solve it.