While reading a New York Times article by Catharine A. MacKinnon for NYU Law’s Art Law Class with Professor Amy Adler, I was particularly struck by MacKinnon’s discussion of deepfakes in the porn industry and the potential bias in deepfake protection measures. MacKinnon, known for her critical stance on the porn industry, wrote in her 2021 article, “Deepfake laws aim only to protect the person who is falsely introduced, often a celebrity, not the person who is used for sex.” This observation prompted me to consider the many industries that will inevitably encounter this technology and the profound repercussions it may have for the individuals within them.
A quick search for deepfakes on YouTube yielded videos predominantly focused on manipulating the likenesses of celebrities and politicians, including Tom Cruise, Morgan Freeman, and Barack Obama. Delving into other platforms, however, revealed a shift. What was once perceived as a comical yet slightly concerning phenomenon among social media users has now reached the legal field, seeping into courtrooms and igniting heightened concern about deepfake technology and generative AI being used as evidence. Deepfakes are AI-generated videos in which a person’s face, and now even their voice, is superimposed onto that of another, as in MacKinnon’s concern about the faces of forced sex workers being obscured by those of celebrities.
In a 2023 article entitled “Courts Need To Brace Themselves For Deepfake Evidence,” Law360 legal tech reporter Sarah Martinson underscores the imperative for courts to prepare for the challenges that advanced deepfake technology may soon present. Debunking deepfakes is not always straightforward, and jurors and judges are poised to grapple with an unprecedented courtroom challenge. When interviewed by Law360 Pulse, Rebecca Delfino, a clinical law professor at Loyola Law School, Los Angeles, aptly remarked, “We no longer live in a world where seeing is believing… Most individuals are now questioning things they never before questioned.” Another looming concern is that genuine evidence may increasingly face claims of being a deepfake. Martinson fears that this issue could give rise to a “deepfake defense,” with lawyers invoking deepfakes excessively and thereby impeding and burdening the discovery process and the litigation landscape as a whole.
Martinson also highlights a disagreement among legal scholars over whether the current Federal Rules of Evidence are capable of dealing with this emerging issue. The likelihood of legislative updates to these rules appears remote unless a significant event compels action, Martinson notes. A research scholar and former practicing attorney cited in the article compares the situation to the emergence of Photoshop in the 1990s, which did not necessitate a change to the evidence rules. However, as someone with a background in graphic design, I find it difficult to equate a Photoshop edit with a manipulated video of the former President of the United States making statements he never made, deceiving an unsuspecting audience until they are informed otherwise. Regardless of whether the evidence rules undergo revision, the legal field, like many other industries, must proactively brace itself for a future in which what one sees can no longer be believed.