
Samantha Fink Hedrick*

Artificial intelligence (AI) has often been viewed as either an ally or an adversary—a powerful analytical system to be harnessed or a source of risk to be managed. In copyright law, AI has been treated much the same way, with academic debates focused primarily on whether AI-generated works should be owned by the AI itself, the human programmer who created the AI, or the end user. However, little attention has been paid to how the use of AI in the creative process can affect the validity of ownership claims asserted by any of these human actors in computer-generated works—a question that may have a far greater impact on creative industries.

In this article, I examine whether the use of AI as a tool of creation interferes with a human’s ability to claim copyright in the resulting works. First, I identify the various human actors who could plausibly own the copyright in the creative outputs of AI and evaluate the relative merits of their claims. Second, I analyze the doctrine of authorship to determine whether the use of AI presents a barrier to any human claiming authorship in these outputs, rather than which human should own the copyright in a computer-generated work. Finally, I explain how AI operates in the creative process and the various mechanisms of control available to humans to modify these outputs.

Ultimately, I argue that the humans who create and use AI retain sufficient control over the AI’s “decisions,” and that the use of AI therefore does not constitute a barrier to human ownership of copyrightable computer-generated works. The “original intellectual conceptions” represented in computer-generated works are still those of the humans creating and controlling the algorithms used in the creative process, not those of the AI itself. Like a camera, AI functions merely as a tool of creation, not as a sentient “author.”

 

Introduction

Artificial intelligence is taking over the world.[1] Some people mean that literally and would have you believe that the reign of humans in the world is swiftly coming to a close.[2] Others simply mean that nearly every object we interact with in the course of our day will soon be part of the networked universe of “smart,” internet-connected devices known as the Internet of Things.[3] Wherever we currently are on this spectrum, it is unarguable that this technology is becoming increasingly prevalent and has been steadily entering new areas of our daily lives, some predictable and some surprising. For example, AI is now being used in connection with medical diagnosis,[4] facial recognition,[5] smart assistants,[6] driverless cars,[7] imaging historical landmarks,[8] mastering games,[9] weather prediction,[10] online ad serving,[11] drafting form email responses,[12] creating music,[13] sculptures,[14] and literature,[15] and even helping the blind navigate the offline, physical world.[16] AI has also already been receiving tremendous scrutiny in areas like bail reform, sentencing, and employment decisions.[17]

As AI continues to infiltrate our daily lives more deeply, many people are understandably calling for increased transparency and accountability. That, however, has been difficult to achieve, partly due to the complexity of the technology and the public’s relative inexperience with AI, and partly because these algorithms tend to be proprietary and closely guarded by the companies that create and own them. Furthermore, as AI seemingly becomes more “human,” it is increasingly difficult to distinguish between works created by humans and those created by machines. Consequently, questions of ownership over works created with the aid of technology have become more difficult. While a discussion of transparency and accountability in algorithms generally is outside the scope of this article, these issues may guide how we view the claims of ownership that result from the use of such algorithms to create copyrightable works.

Previous scholarship has focused primarily on the push and pull between the claims of the AI and the claims of the humans by exploring arguments that would support a claim that the AI itself should be deemed the author of computer-generated works. In discussing the claims of the human actors, the debate has centered on which human should “win” the copyright instead. My focus in this article is not on which human, from among the choices identified below, should be deemed the author. Instead, I focus on whether the interposition of an algorithm between the programmer (or user) and the output should present a barrier to that human’s claim of authorship in the output. I conclude that it should not.

Control over the outputs is at the heart of this debate. Even with extremely complex deep-learning algorithms, it is the human programmers and users who write the algorithm’s code, decide what kinds of outputs are desired, set the objective functions and other parameters, or otherwise play an active role in shaping the products that result from the creative processes to which AI is applied.[18] These humans are exercising sufficient control such that the “original intellectual conceptions”[19] embodied in the resulting works are truthfully those of the human, not the algorithm. Like a camera in the hands of a photographer, the AI is merely a tool of creation employed by a human with a creative vision—not a sentient being developing “original intellectual conceptions” of its own.

Part I discusses possible options for the allocation of copyright in computer-generated works—to the algorithm,[20] the programmer, the user, the data owner, a combination of those entities via joint ownership, or no one (i.e., the public domain)—and summarizes the arguments for and against each option. Part II discusses the doctrinal underpinnings of authorship and creativity. Part III applies the doctrine to algorithms—deep learning algorithms in particular—by delving into their operations and addressing such issues as accountability and transparency.

I. Eeny Meeny Miny Moe: Who Owns Computer-Generated Works?

As AI technology has evolved to mimic more and more human capabilities, the question of how to allocate copyright in the works these programs create has become increasingly complicated. Copyrightable computer-generated works have long vexed scholars and legislators. As Professor Annemarie Bridy puts it, “we know that these works would be copyrightable if they were done by people, but we don’t know what to do with them if they’re done by computers.”[21] Both academics and non-academics generally seem willing to attribute some degree of agency, autonomy, or even intent to AI, particularly as the technology becomes more complex, less intuitively explainable, and more human-like in its abilities (or perhaps, in some situations, less human-like, as some AI appears to execute tasks that humans would be unable to perform).[22] As a result, the interposition of an algorithm between the human “author” and the creative output feels different from the presence of a tool such as a camera or a paintbrush. The question is: Who should own the copyright in computer-generated works? There are six possible answers to this question: the AI itself,[23] the programmer,[24] the user,[25] the data owner, some combination through joint authorship,[26] or no one.[27]

This debate has been raging for over fifty years, but no consensus has yet been reached. Indeed, the arguments supporting each outcome remain essentially unchanged from the beginning of the computer age. The Copyright Office was confronted with this precise dilemma as early as 1956, when it refused to register Push Button Bertha, a song composed by a Datatron computer, because it was not created by a human and there was no precedent for recognizing an authorship claim by a non-human.[28] In 1966, the Register of Copyrights explicitly noted this debate in the office’s 68th annual report, stating that:

The crucial question appears to be whether the “work” is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.[29]

In 1974, Congress entered the fray when it created the National Commission on New Technological Uses of Copyrighted Works (“CONTU”) to analyze this issue (along with several others related to the computer revolution, then in its infancy).[30] Interestingly, CONTU found that “existing statute and case law adequately cover any questions involved” in computer-aided creation.[31]

In 1986, eight years after CONTU released its final report, Pamela Samuelson observed:

When one thinks of how widespread are uses of computer programs to generate other works . . . one can see that the stakes of the allocation of ownership rights in computer-generated works are very high indeed. When the stakes are high and the statute ambiguous, the stage would seem to be set for a hot contest.[32]

That same year, Congress’ Office of Technology Assessment noted that “[computer-aided creation] greatly complicates the process of determining originality and authorship, and of assigning rights. Similarly, with advances in artificial intelligence, computer-aided design, and computer-generated software, it will become increasingly difficult to determine what creators have actually created.”[33]

Yet today, more than three decades after that stage was observed to be set, scholars and policymakers around the world are still grappling with these same questions.[34] The discussion has even made its way into pop culture.[35] Some countries have enacted laws that expressly address the issue of ownership in computer-generated works. For example, the copyright laws in the U.K. and New Zealand stipulate that the entity deemed to be the author of a computer-generated work is “the person by whom the arrangements necessary for the creation of the work are undertaken.”[36] The copyright laws in France, Germany, Greece, Switzerland, and Hungary are more explicit, expressly limiting authorship to “humans” or “natural persons.”[37] Although U.S. copyright law does not currently address this issue directly, the Copyright Office has expressly stated that it will not recognize non-human authors.[38]

My focus in this article is less about who the exact human author should be than about whether the interposition of an algorithm between the programmer or user and the output should present a barrier to that human’s (or corporate entity’s) claim of authorship in the output. I conclude that it should not. Even with extremely complex deep-learning algorithms, there are human programmers and users who write the algorithm’s code, set the objective functions and other parameters of the algorithm, and decide whether the algorithm is creating the desired outputs or whether it ought to be tweaked. These humans are masterminding the creative process; even complex AI models are simply following the humans’ commands (or at least creative guidelines, criteria, and rules).

General assertions about humans’ claims to AI-generated works cannot be made until the merits of each possible claim of authorship are evaluated. Only then can we examine how the use of AI might interfere with any or all of these claims of authorship—and, therefore, ownership.

A. I “Think,” Therefore I Am an Author: Computer as Author

When discussing computer-generated works, many scholars have focused on whether the algorithm itself ought to be recognized as the author of an AI-generated work. There is, of course, a colorable argument that AI is capable of meeting the explicit criteria for copyrightability in its outputs[39]: (1) a “work of authorship” that falls within the subject matter of the Copyright Act (including the categories listed in section 102);[40] (2) fixation in a tangible medium of expression;[41] and (3) originality,[42] which post-Feist has two elements of its own—(a) independent creation and (b) a “modicum of creativity.”[43]

However, deeming the AI to be the author for copyright purposes is nonsensical and impractical. First, the U.S. Copyright Office does not recognize non-human authors.[44] Remarking on courts in the United States, Bridy noted a “deep-seated . . . assumption that authors are necessarily human.”[45] As an example, Bridy highlights the District Court for the Northern District of California’s decision in Naruto v. Slater, which includes several quotations from Ninth Circuit decisions in which the terms “human” and “natural persons” are used in discussing the concept of authorship.[46]

CONTU also noted that “[t]he eligibility of any work for protection by copyright depends not upon the device or devices used in its creation, but rather upon the presence of at least minimal human creative effort at the time the work is produced.”[47] International law also generally agrees on this issue and, as noted above, a number of countries have laws explicitly stating that only human authors will be recognized. It is easy to say that these statutes and policies should simply be changed so that copyright can be granted to non-human authors; but in the United States, the reason for limiting authorship to natural persons (and corporate entities composed of humans) comes directly from the U.S. Constitution and the policy justifications it embodies. The IP Clause of the U.S. Constitution permits Congress to grant copyright protection to “Authors and Inventors” to “promote the Progress of Science and useful Arts.”[48] The purpose of copyright law, therefore, is to provide incentives for authors to create so that the public domain of creative works will continue to expand.[49] Machines, however, cannot be incentivized in the same way that humans can.[50] Algorithms follow the orders of their programmers and need no further incentives to create. Although it is likely that a human will ultimately benefit commercially from the outputs of AI algorithms—and would therefore be incentivized to create, use, and improve them—the incentives are, at the very least, less direct and their effects are less certain when provided to the machine instead of the human. The way to incentivize a robot to create is to incentivize its programmer to instruct it to create. Granting the copyright to the AI is therefore a roundabout way of serving the incentives of copyright law.

From a practical standpoint, allocating copyright to the algorithm would normally result in ownership of the copyright by the company or individual who owns the AI itself, since the owner of the AI would also own any of the AI’s “possessions.” In many cases, the owner would be the company that employed the programmer(s) who created the algorithm (as a work made for hire, or otherwise assigned through employment agreements or other contracts). In practice, the only situation where the allocation of the copyright to the AI would change the outcome is when no party holds the copyright in the algorithm’s code.[51] Additionally, given that allocating the copyright in the output in this manner also distorts the incentives for the human creators who could be influenced instead, it does not make any practical sense to go down this road.

In addition to rendering initial vesting of the copyright in the AI moot, the ability to transfer ownership of the copyright in the output by transferring ownership of the algorithm also undermines the Copyright Act’s protections (e.g., termination of transfers) for initial authors (e.g., the programmer—assuming his or her work on the algorithm was not considered a work made for hire). These protections are intended to ensure that authors are properly incentivized. Interrupting such protections and, therefore, incentives, ought to be accompanied by a serious consideration of the repercussions and whether modifications to existing law would be required in order to preserve the incentives in these situations.

One question on which previous scholarship has focused is whether the work made for hire doctrine can function as a justification for deeming the AI to be the legal author of an AI-generated work.[52] However, this stretches the doctrine to its breaking point. The factors relevant for determining whether someone is an employee include language that, at least as the technology exists today, solely applies to humans. For instance, such phrases as “the extent of the hired party’s discretion over when and how long to work,” “the provision of employee benefits,” and “the tax treatment of the hired party” only make sense when applied to humans.[53] The doctrine also requires that the conduct be “actuated, at least in part, by a purpose to serve the master.”[54] Applying those factors to AI would be illogical, as computers presently cannot exercise discretion over their working hours, have no need for retirement plans or health insurance, and cannot be taxed. Furthermore, these factors denote intentionality and choice, and it would be difficult to plausibly argue that an algorithm possesses either one.

Finally, although it is hotly disputed, a computer is simply not the type of creative “author” that copyright law contemplates. As CONTU concluded in its final report, a computer is more like an inert tool used by a human in the creative process, “completely lacking in creative capabilities while requiring human direction to bring about a creative result.”[55] Under this rationale, CONTU found “there is no reasonable basis for considering that a computer in any way contributes authorship to a work produced through its use.”[56]

Perhaps this is really just an issue of framing. If we focus on the bare minimum of sufficiency for meeting authorship requirements, AI might pass the test. However, if we look instead at the “human” elements of authorship, AI probably falls short. This could conceivably become a closer case if AI technology becomes more autonomous and “sentient” in the future, but the discussion of control in Part III below still resolves this issue in favor of a human author.

B. Pygmalion: Programmer as Author

There are two main arguments for allocating copyright in the outputs of algorithms to the programmer(s) of the algorithm itself: (1) the programmer’s creative choices in preparing the algorithm (e.g., designing the algorithm, selecting a type of model, setting the objective function and other key parameters, and training and adjusting the algorithm) substantially affect, if not completely determine, the resulting outputs;[57] and (2) the incentives provided to the programmer align with the fundamental goals of copyright.

David Lehr and Paul Ohm define eight “stages of machine learning”: (1) problem definition; (2) data collection; (3) data cleaning; (4) summary statistics review; (5) data partitioning; (6) model selection; (7) model training (including tuning, assessment, and feature selection); and (8) model deployment.[58] One of the key design decisions a programmer makes about an algorithm is which model[59] is best suited to produce the desired outputs.[60] The programmer also performs the critical task of defining the objective function. This component of the algorithm sets the “goals” of the algorithm and determines the general characteristics of the outputs (e.g., the format and what is being optimized).[61] After defining the objective function, the programmer sets other parameters (e.g., managing the tradeoff between bias and variance, which affects the accuracy of the algorithm and how well it generalizes beyond its training data)[62] and selects the datasets that will be used to “train” the algorithm (and decides how to divide the data for training and testing purposes).[63] The size of the dataset and representativeness of the data (i.e., how accurate extrapolations from sample data to a broader data set will be) both significantly affect the accuracy of the algorithm’s predictions and the usefulness of its outputs.[64] Before deciding that the algorithm is ready to “go live,” the programmer also makes myriad decisions concerning how and how much to adjust the parameters and data.[65] Only after the programmer has made all of those decisions is the algorithm set loose to create an output “on its own.”[66]
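
To make these decision points concrete, the following is a minimal sketch of a typical supervised-learning workflow in which each of the programmer’s choices appears as an explicit line of code. It assumes a scikit-learn-style pipeline; the file name, target column, model type, and parameter values are hypothetical illustrations rather than a description of any particular system.

```python
# A minimal sketch of the programmer's decision points described above.
# The data file, target column, model type, and parameter values are
# hypothetical; they stand in for the choices a real programmer would make.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Stages (2)-(3): data collection and cleaning -- the programmer chooses the corpus.
data = pd.read_csv("training_examples.csv").dropna()
X = data.drop(columns=["target"])
y = data["target"]

# Stage (5): data partitioning -- the programmer decides how to split training and test data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Stage (6) and parameter setting: the programmer picks the model type, its architecture,
# and hyperparameters that manage the tradeoff between bias and variance.
model = MLPRegressor(hidden_layer_sizes=(64, 32), alpha=1e-4, max_iter=500)

# Stage (7): model training and assessment -- the objective function (here, squared error)
# defines what the algorithm is "trying" to optimize.
model.fit(X_train, y_train)
print("test error:", mean_squared_error(y_test, model.predict(X_test)))

# Stage (8): deployment -- only after all of these human choices is the model
# set loose to generate outputs "on its own."
```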

In light of this substantial contribution to—and control over—the form and creative parameters of the outputs, it is easy to see why the programmer is a sensible choice to be the “author” of the algorithm’s outputs. Furthermore, even where the steps between the programmer’s final decisions and the actual moment of a work’s creation are so complicated that humans may not fully comprehend the exact processes (e.g., when using complex neural networks), the choices that the programmer made in the first phases of creation still strongly influence the characteristics of the algorithm’s outputs.[67] If the programmer (or end user) of the algorithm decides after an output is created that further changes are needed or desired, they can also adjust the parameters or data at that point in order to influence future outputs—even if they do not understand the intermediate steps between those changes and the moment of creation of the outputs. In other words, despite some work being done by the algorithm during the later stages of the creative process, the programmer or the user can still exercise control over the outputs by “tweaking” the parameters.

The idea of recognizing authorship in the programmer is more readily acceptable to many scholars if the algorithm is conceived of as a tool, like a camera.[68] A novice photographer can pick up a DSLR camera, put it in “sunset” mode, and effectively capture an autumn-hued landscape photograph, despite the fact that the photo is taken in broad daylight in spring.[69] The resulting photograph is not considered any less copyrightable when taken by that novice than it is when taken by a professional photographer who fully understands every special effect implemented by the camera’s software. Why, then, should the use of an algorithm be thought of any differently? Perhaps it is society’s romantic, anthropomorphic notions of humanoid robots in science fiction stories that make the automatic processes of an algorithm feel more intentional and thoughtful than they truly are, as though they were genuine “choices.”

If the idea to create something (even if reasonably specific, such as a 100-page romance novel set in Paris with a protagonist who owns a cafe) originates from the programmer, but the copyrightable expression of that idea is directly generated by the algorithm, can the programmer claim that AI-generated expression as his or her own? Because the programmer selects the parameters and training data that guide the algorithm in its choice of each word, plot twist, and style choice, I submit that the expression ultimately derives from the programmer. If an author is permitted to claim the accidental variation resulting from a clap of thunder as “his own,”[70] then certainly the product of the variation resulting from the narrow (or even broad) set of choices a programmer allows for should belong to him or her as well. Returning to the camera analogy, any randomness or rule-based “creativity” in an AI’s final output is produced in the same way as the randomness or creativity in a photograph taken using a pre-selected mode on a camera. The resulting image may not exactly match the photographer’s initial vision of what it would look like, but it nonetheless follows from his initial choices and parameters—just as the AI’s outputs follow from the programmer’s initial choices and parameters.

The programmer also breathes whatever life we perceive in AI into it. The programmer’s choices in designing and calibrating the algorithm provide the algorithm with all of its “creative” capabilities[71]—the algorithm has no ability to create outputs except that which the programmer provides. An algorithm is therefore more an extension of the human programmer’s own creative mind than it is an independent, autonomous being capable of originality and creativity. Even when an algorithm generates something H-creative (“historically creative,” i.e., never before created by humans),[72] such creativity is the result of the instructions and capabilities programmed by its creator and is therefore dictated by the (creative) choices of the programmer or user.[73]

A programmer may also respond to financial incentives in a way that an algorithm does not. Like writers, painters, composers, and other traditional creators, programmers are the very type of “Authors and Inventors” contemplated by the drafters of the Copyright Clause. While an algorithm will blindly follow the instructions given by its programmer (whether to create or to stop creating) and will not be swayed by the prospect of financial gain (unless it is instructed to be), the programmers themselves can be incentivized to create, use, and improve algorithms in order to generate additional works. This is true whether the output is a novel, a song, a painting, or even another AI program.

Furthermore, labor theory, although discredited by the Supreme Court in Feist as a basis for copyright protection, logically supports the allocation of copyright to the programmer.[74] The virtually endless choices described above amount to a substantial expenditure of time, resources, and creativity by the programmer. As Samuelson puts it, the programmer will always be, at the very least, a “substantial contributor to the production of any output.”[75] Samuelson also discussed—albeit pre-Feist—what she termed the “comparative sweat test.”[76] Although post-Feist, labor itself is not dispositive in granting copyright in the work, there is still some logic in comparing the relative creative contributions of various contributors to determine who should be granted ownership of the copyright (provided that the work, and perhaps also the contribution, meets the minimum threshold requirements of copyrightability). For example, the more modern “mastermind” doctrine of joint authorship[77] rewards the contributor who is deemed to have provided the largest creative contribution—the “original intellectual conceptions” or “vision” for the work.[78]

However, some scholars have argued against granting copyright in computer-generated works to the programmer. Samuelson argues that “[t]he programmer creates the potentiality for the creation of the output, but not its actuality.”[79] Bridy employs a highly formalistic application of the labor theory to argue that the programmer has not expended sufficient labor to create the outputs, noting that the programmer “doesn’t lift a finger to create them.”[80] Instead, she entirely separates the process (and labor) of creating the algorithm from that of creating the output (after the algorithm becomes operational).[81] CONTU also conceived of the creation of the algorithm and the creation of the ultimate work as distinct processes: “[i]t appears to the Commission that authorship of the program or of the input data is entirely separate from authorship of the final work.”[82] However, to say that the programmer has expended no “minimal human creative effort”[83] to create the work once the algorithm has been made operational is to discount not only all the previous labor expended in building and calibrating the algorithm, but also (and more important to current copyright doctrine) all of the programmer’s creative choices in model selection, parameter setting, data selection and allocation, calibration, testing, the remaining steps from the conception of the algorithm to its final execution, and the ongoing tasks of monitoring and modifying the algorithm once it is operational.[84]

Bridy also objects to granting the copyright in the outputs of an algorithm to its programmer because the algorithm, not the human, is the agent of fixation.[85] However, this view has been rejected by courts as an obstacle to copyright. Photographs have been deemed copyrightable despite the fact that the camera is the “agent of fixation,”[86] and novels (or articles like this very one) are still considered copyrightable despite the fact that a computer ultimately fixes the work. Furthermore, in Lindsay v. The Wrecked and Abandoned Vessel R.M.S. Titanic, the Southern District of New York held that Lindsay, the film director, was the author of a documentary even though the footage was actually captured by members of the film crew, who not only served as the agents of fixation but also presumably exercised at least some creative discretion with respect to framing, lighting, and focus.[87] The mastermind doctrine established in Lindsay and developed in Aalmuhammed v. Lee allows the human who “superintends” the process, or whose “original intellectual conceptions” the work embodies, to own the copyright, regardless of whether other sentient human beings actively make creative choices and add their own original and creative contributions to the work as a whole (unless there is an express intention to be considered joint authors).[88] If other humans cannot deprive the mastermind of his or her copyright, then surely an inert algorithm, just like an inert camera, should not either. David Nimmer agrees, stating that:

Given that copyright inheres only in works fixed in a tangible medium of expression, is the “author” to be construed as the party fixing the work? Important as fixation is, we have just seen that originality is the essence of authorship; accordingly, the originator, rather than the fixer, should be deemed the “author.” The distinction between one poet who brandishes a quill (or word processor) and another who dictates to a stenographer cannot call for a differing legal conclusion as to “authorship.” “Poets, essayists, novelists, and the like may have copyrights even if they do not run the printing presses or process the photographic plates necessary to fix the writings into book form.”[89]

As discussed above in Part I.A, one of the main arguments for granting copyright to the AI is the work made for hire doctrine. This, however, is at best an awkward fit for non-human entities. Another benefit of using the mastermind doctrine to allocate the copyright to the programmer or user is that the analysis does not require the AI to be or to act like a human. Specifically, there is no intentionality required on the part of the AI. There is room for creativity or even intent on the part of the AI, but unless the algorithm truly conceives of and executes the idea without human guidance (which is not possible with today’s technology, and unlikely to become possible in the near future), a human is still “masterminding” the process, even if the AI is responsible for intermediate steps and creative decisions. The AI in this scenario is simply executing the “original intellectual conceptions” of the programmer or user, just like the film crew in Lindsay[90] or the sound engineers, makeup artists, costume and set designers, writers, producers, actors, and consultants in Aalmuhammed.[91]

Bridy’s final argument against granting the copyright to the programmer is that unpredictability in the algorithm leaves the programmer with insufficient control over the output.[92] However, this, too, is a fallacy. As discussed, the fact that some steps in the creative process are not known or fully understood by the programmer does not negate the programmer’s contributions to the creative process, nor does it prevent the programmer from being the true mastermind of the creative process. A novice photographer who expects his photograph to come out looking like a sunset when he uses “sunset” mode, despite not understanding why or how this process works, nevertheless produces a copyrightable photograph. The same holds true even if the photographer has no idea what effect the “sunset” setting will have on the resulting photograph. Furthermore, even when unpredictability built into the algorithm results in randomness once the algorithm is set free to complete the creative process, the programmer can still adjust later iterations to change and shape future output(s).[93] The programmer typically reserves the power to tweak the algorithm later on, meaning that he or she may continue to exercise control over its outputs. Moreover, the Second Circuit rejected the idea that an unpredictable or accidental outcome is not copyrightable. Following its famous reference to a “clap of thunder” that jars a painter’s arm and changes the work, the court unequivocally stated that “[h]aving hit upon such a variation unintentionally, the ‘author’ may adopt it as his and copyright it.”[94]

A final, intriguing argument by Samuelson suggests that the very fact that the algorithm’s code is copyrightable is the reason why the process leading to the creation of an algorithm should be considered separate from the process leading directly to the creation of the output.[95] Samuelson argues that a programmer should only be allowed to commercialize one of those two creative processes—a form of election doctrine that forces the programmer to choose either to commercialize the software itself or to sell the outputs, but not both.[96] This idea, while intriguing, seems to bear more on the issue of whether the copyright should also, or instead, be allocated to the user when the programmer chooses to sell the software. It does not, however, present a compelling reason to deny the copyright to the programmer.

C. What Does This Button Do?: User as Author

The arguments for and against granting copyright in computer-generated works to the user largely track those for the programmer: the user (if the user and the programmer are different individuals) is likely to have made a substantial contribution to the creative process; the user exercises significant control over the inputs and parameters of the algorithm; and the user is generally responsive to the incentive mechanisms provided by copyright law. The same challenges made to the programmer’s claim could be applied to the user’s claim as well. Under Samuelson’s “comparative sweat test,”[97] the user has expended even less labor than the programmer did to create the output—although in some instances, the user’s labor may also be substantial, since many of the choices around setting the parameters, selecting the data, and calibration of the algorithm may also (or instead) be performed by the user. The algorithm still stands between the user and the output as the agent of fixation, and the same unpredictability exists for the user as for the programmer, perhaps even to a greater degree, since the user is more likely to be in the position of the novice photographer than an experienced code master.

However, users possess certain unique qualities. First, the user is best positioned to bring the outputs to market,[98] and may therefore be better positioned than the programmer to fulfill the goals of copyright.[99] After all, copyright is intended not simply to encourage the creation of more works, but also their dissemination.[100] If works were hidden away in secret private libraries, that would not “promote the Progress of Science and useful Arts,”[101] because no one else would be able to build off of the knowledge contained within those works or to find inspiration in them. Therefore, it may be better to allocate ownership to the person who can not only produce additional works but can also be motivated by the financial incentives of copyright to disseminate those works.

Second, in some instances, the user may set the parameters and provide data for the algorithm in ways that vastly change the output, and may even affect the way the algorithm operates.[102] In other words, the same software provided to two different users could result in two wildly different sets of outputs, depending on the creative choices made by the user, and regardless of the choices previously made by the programmer.

Third, although the algorithm still stands between the user and the outputs, the user is the human closest to the moment of fixation and therefore holds a stronger claim to being regarded as the agent of fixation. Samuelson, for example, compares the user to the person who records a jazz improvisation session (and therefore fixes the work).[103] In that sense, the user is fixing the work of both the programmer and the algorithm, and would have a claim to the copyright even if she did not mastermind the entire creative process. However, as discussed above in Part I.B, courts have not accepted the agent of fixation theory.

Finally, the user makes additional decisions regarding the selection and editing of outputs when determining which to bring to market and disseminate, and which to destroy or discard.[104] Since one of the advantages of algorithms is their ability to operate at scale (and therefore produce vast quantities of potentially copyrightable works), the user will typically need to curate the outputs rather than flood the market with large numbers of works of varying quality. These choices represent originality and creativity of their own.

One additional argument against the user as author centers on a line of cases holding that users of video games are not authors of the resulting audiovisual work, even when their interaction with the software influences the output.[105] In Midway v. Artic International, a prominent early video game case, the Seventh Circuit rejected the claim that the video game’s players became authors of the resulting audiovisual work. As the court noted:

The question is whether the creative effort in playing a video game is enough like writing or painting to make each performance of a video game the work of the player and not the game’s inventor.
We think it is not. . . . The player of a video game does not have control over the sequence of images that appears on the video game screen.[106]

In other words, if the programmer places sufficient limitations or constraints on the creative process of the end user—or the AI—it could be argued that the programmer should still be considered the author. The resulting works still represent the programmer’s “original intellectual conceptions”[107] because those works can only be conceived and created within the bounds of the creative environment established by the programmer.

D. You Say Tomato, I Say Tomahto: User vs. Programmer

As between the programmer and the user, the decision of to whom the copyright should be allocated is fact-dependent, and would likely differ based on the nature of the software.[108] For example, on the one hand, it would be extremely unfair if a piece of software’s terms of service demanded ownership of the copyright in all outputs of a word processing program, since the copyrightable expression clearly belongs to the user. The only hook for the programmer claiming the copyright would be as the agent of fixation, which was firmly rejected above.[109] On the other hand, if a program dispenses a story or a song at the mere press of a button by the user (such as the program that created Push Button Bertha),[110] there might be a stronger argument for the programmer to own that copyright, both on its own merits and relative to the argument for authorship by the user.[111] In situations where an algorithm produces very different outputs depending on the parameters and inputs selected by the user (e.g., Adolph Knipe’s Great Automatic Grammatizator[112]), the user’s claim to sole ownership of the ensuing work may be stronger than that of the programmer because, in this scenario, the algorithm functions just like any other machine, tool, or instrument that facilitates the creation of copyrightable works by human authors (e.g., a piano or a camera).

Furthermore, this issue is likely to be resolved ex ante through licensing agreements between these parties, thereby rendering these arguments moot.[113] However, it is worth questioning the fairness of such licensing arrangements, especially in light of the proliferation of contracts of adhesion in today’s increasingly online world. But that is a topic for another paper and another day.

Finally, substantial evidentiary issues are likely to further complicate this decision. It may be difficult to determine which algorithm created a particular work, thereby creating uncertainty as to which programmer may lay claim to the output. It might even be difficult to determine whether the work was created by any algorithm (as opposed to having been created solely by a human). As the “Turing test” for artwork becomes easier for AI to pass as technology improves, this will only become more difficult.[114]

Given the fact-dependency of this decision, blanket assumptions in favor of either the programmer or the user are unhelpful and misleading. Attempting to make this decision ex ante, without a specific case and fact pattern before us, is putting the cart before the horse. Therefore, I will refer to them collectively or nearly interchangeably throughout the remainder of this paper. This distinction is also unnecessary for the ultimate question this article seeks to resolve: not which human should own the copyright in a computer-generated work, but rather whether the use of AI presents a barrier to any human claiming authorship in the outputs.

E. The Proof Is in the Data: Data Owner as Author

Both the quantity and quality of the data used to train an algorithm play a crucial role in determining the accuracy and quality (and therefore the value) of the algorithm itself,[115] and the outputs of an algorithm can vary significantly based on the data on which the algorithm operates.[116] Therefore, it may make sense in certain situations for the owners of that data to receive at least partial ownership rights in the outputs created through the use of that data.[117] This author was unable to find any published articles arguing for ownership of the outputs of AI by the data owner.[118] However, this option would also likely be moot in practice, since such allocations of ownership almost certainly could and would be made through licensing agreements for the use of such data.

Furthermore, when data is being used pursuant to a claim of fair use, whether transformative or technological (e.g., a corpus of novels being used for the purposes of understanding language structure and patterns of conversation),[119] that use undermines any data owner’s claim of ownership in the outputs, just as an author or publisher owning the rights in a novel would have no claim to ownership in the search results or product features of Google Books, and a photographer would have none in the results of an image search engine.[120]

F. Two Great Authors, Better Together: Joint Authorship

Another option is to grant joint authorship to some combination of the categories discussed above. For example, assuming that they are not one and the same, both the programmer and user will have substantially contributed to the creative process. Similarly, if the AI, as an independent entity, is granted copyright in the ultimate work, there is a strong argument that the programmer and user will also have made substantial contributions to the work. Courts would have to decide whether such an arrangement would satisfy the Aalmuhammed test[121] in the absence of an expressed intent by the AI, and whether an intention by the programmer and user to merge their contributions with those of the AI into a unitary work would be sufficient. Finally, in the absence of a contract for the use of the data on which the algorithm was trained or operated, one could make an argument for joint authorship by the data owner and any of the other parties. However, the Aalmuhammed intent bar would be difficult to meet in this situation, unless joint authorship was expressly made a condition of a license or grant of access to the data.

G. If I Can’t Have It, No One Can: Computer-Generated Works as Belonging to the Public Domain

If none of the other actors discussed above are successful in arguing doctrinally that they are entitled to authorship over the work, dedicating the outputs of AI to the public domain might be a sensible solution. The ultimate goal of copyright law is to expand the public domain of creative works,[122] and this approach initially seems to further that goal.

However, the problem with this approach is that it undermines the utilitarian view of copyright law, which is the dominant view in the United States and suggests that copyright’s exclusive rights provide authors with economic incentives to create additional works, thereby (at least eventually) enriching the public domain.[123] If humans are not adequately incentivized to create AI in the first place, or to spend the requisite time and resources gathering data to train or improve it, then fewer works will be created, undermining the goal of increasing the public domain. Without financial incentives, it is likely that fewer companies and engineers would decide to create, improve, or use AI to generate creative works. There are other incentives, of course, such as fame, academic respect, commercial gain through sales to other users, and a pure desire to create, but they would likely not inspire the same type, quality, or scale of creation as traditional incentives would.[124] Even if such incentives were sufficient, there is no reason to treat AI’s outputs any differently from other means of creation.

II. I, Author: What It Truly Means To Be an Author

Perhaps even more intriguing than who should be deemed the author of a computer-generated work is the question of what it means to be an “author” in the first place, and how our existing doctrine is (or should be) applied in the age of AI. Although “author” is not defined in the Constitution or the Copyright Act,[125] caselaw has provided several answers. In Burrow-Giles Lithographic Co. v. Sarony, the Court defined an author as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.”[126] By this definition, an algorithm could be considered an author. However, the Court went on to say in the same case that “writings” refers to all forms of expression “by which the ideas in the mind of the author are given visible expression”[127] and that works are copyrightable “so far as they are representatives of original intellectual conceptions of the author.”[128]

In 1999, the District Court for the Southern District of New York reiterated the focus on the “original intellectual conceptions” of an author in a decision upholding a documentary film director’s claim to the film’s copyright, despite the actual footage having been shot by other members of his crew.[129] There, the Lindsay court concluded that

[W]here a plaintiff alleges that he exercised such a high degree of control over a film operation . . . such that the final product duplicates his conceptions and visions of what the film should look like, the plaintiff may be said to be an ‘author’ within the meaning of the Copyright Act.[130]

With respect to ownership of the outputs of algorithms, it is easy to draw an analogy to the Lindsay case: the algorithm functions as the film crew (or perhaps even the camera), while the programmer or user of the algorithm functions as the director and, therefore, the author. To be sure, someone claiming to be an author “must supply more than mere direction or ideas,”[131] but, in general, the extent to which a programmer or user exercises control over the operation of the algorithm is likely to meet this bar.

Even more apropos is the “superintendence” or “mastermind” doctrine formulated in Aalmuhammed v. Lee, which posits that a contributor must “superintend” the work in order to be considered an author.[132] The case addressed a claim of joint authorship by a consultant who made various contributions to the film, including writing two scenes. The Ninth Circuit found that the consultant “did not at any time have superintendence of the work”[133] and therefore could not be considered an author of the film. Together with Lindsay, these decisions suggest that even if the algorithm is deemed to have some creative ability and to have contributed to the copyrightable expression in the final work, the human who orchestrates the process—whose vision the algorithm brings to life—may still be considered the “mastermind.”[134]

This conclusion is further supported by Bridy’s “authorship-as-causation” concept, which suggests that the decisions in Burrow-Giles and other authorship cases are consistent with the view that the author is “the motive force without which [the work] could not have come into existence.”[135] Indeed, the Burrow-Giles Court referred to the author as “the cause of the picture.”[136] The effects of a programmer’s or user’s choices in designing and guiding an algorithm certainly support the concept of the programmer or user as the proximate “cause” of the work (including, most importantly, the underlying expression).

As the foregoing analysis makes clear, one way to determine whose creativity is represented in the expression of the final work is from the perspective of control (e.g., the mastermind doctrine). Another lens through which to analyze the process is creativity itself: if the decisions that inject the requisite originality or creativity into the output result from the choices made by a human programmer, then there should be no barrier to authorship vesting in that human. If, however, the creative elements of the output instead arise from decisions and learnings made by the algorithm alone, then perhaps its human programmer or user has no rightful claim to authorship after all.

One challenge to a human’s claim of authorship in computer-generated works is that an algorithm lies between the actions of the purported author and the expression itself. However, as discussed above, the programmer and the user both contribute substantially to the creativity and expression of the resulting work. As will be discussed in Part II.B, the parameters a programmer selects, the data on which he or she chooses to train the algorithm, the type of work he or she directs the algorithm to produce, and many more decisions in the process are all decidedly creative choices.[137]

Furthermore, the fact that a user does not mastermind every detail of the creative process does not undermine his or her claim of ownership, as analog examples demonstrate. For example, a photographer who manages to capture the perfect lighting without understanding how his or her camera operates would not forfeit the copyright in the resulting work. As Bridy put it, “[l]ike the photographer standing behind the camera, an intelligent programmer . . . stands behind every artificially intelligent machine.”[138] Similarly, while the camera crew in Lindsay and the other contributors to the film in Aalmuhammed certainly made some creative choices in the films’ creation, that does not undermine or interfere with the directors’ claims in the final work.

As between the creator or user of the algorithm and the algorithm itself, there should really be no debate. It is not the “mind”[139] of the algorithm that conceives of or creates a work. An algorithm simply follows the parameters that the programmer or user has programmed into it. The programmer or user therefore “superintends” and “masterminds” the work of the algorithm, providing it with parameters that guide its functionality and data that determines its trajectory. As James Grimmelmann astutely observed, “[a]nything an author does with a computer she could in theory do without it. . . . Computers make some kinds of creativity practically feasible, but they do not make anything newly possible.”[140]

Furthermore, these decisions to guide the algorithm on its course should overcome any unpredictability in the output of the algorithm. For example, imagine that Jackson Pollock, bored of flinging paint at the canvas, decided instead to build a machine with a little scoop that could hold paint and, when cranked, would fling the paint forward toward the canvas. Pollock would select the colors and load them up, and could decide to tilt, move, or rotate the canvas for the desired effect, but the actual painting would occur at the whim of physics, determined by factors such as the weight of the paint or the strength of the wind. One would be hard pressed to argue that Pollock’s use of the paint-flinging machine would interfere with his ownership of the resulting painting. Even if Pollock did not use the machine, his own act of flinging paint at, or spilling it onto, the canvas still contains an inherent degree of randomness. Therefore, this is simply an example of an algorithm or machine mimicking human behavior, or substituting for human labor.

Next, imagine that an engineer builds an algorithm that fills in a certain number of pixels on a screen at random. The number of pixels and the possible colors with which the pixels may be filled are selected by the user, but the actual selection of the pixels and pixel colors is done at random by the AI. Would anyone argue that the programmer or user should not own the resulting work? If a “clap of thunder” jarring one’s arm is sufficient to be considered “original,”[141] how then could this type of planned, intentional randomness (or intentional “unpredictability”) be any less original, or any less the “original intellectual conception” of the author?
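
For concreteness, the following is a minimal sketch of such a pixel program, written with the Pillow imaging library. The canvas size, pixel count, and color palette are the human’s choices; the particular pixels and colors are left to the program’s random number generator. All names and values are hypothetical illustrations.

```python
# A minimal sketch of the hypothetical random-pixel program described above.
# The human supplies the canvas size, the number of pixels, and the palette;
# the program chooses which pixels to fill, and with which colors, at random.
import random
from PIL import Image

def random_pixel_art(width, height, num_pixels, palette, seed=None):
    rng = random.Random(seed)
    img = Image.new("RGB", (width, height), "white")
    for _ in range(num_pixels):
        x, y = rng.randrange(width), rng.randrange(height)  # random pixel location
        img.putpixel((x, y), rng.choice(palette))           # random palette color
    return img

# The human's creative choices: a 200 x 200 canvas, 5,000 pixels, and a
# three-color palette of red, blue, and gold.
art = random_pixel_art(200, 200, 5000, [(255, 0, 0), (0, 0, 255), (255, 200, 0)])
art.save("random_pixels.png")
```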

As algorithms become more complex and more decisions are made “by” the algorithm rather than the programmer, there is a stronger argument to be made that the resulting work is no longer the “original intellectual conception” of the programmer. However, the strength of this argument is mitigated by the fact that the programmer or user can still manipulate the outputs by adjusting the algorithm’s parameters, or by feeding the algorithm different data. So long as the programmer or user retains that type of control, it seems the process is still analogous to the pixel program or the paint-flinging machine, albeit at a larger scale and with a greater degree of programmed “randomness.” Unpredictability within selected parameters, or even inherent randomness throughout the process, should not hinder the human programmer’s right to claim copyright in the work created—especially when the randomness is intentionally included. This is true even of unintended randomness, just as the result of the happy coincidence of a clap of thunder was considered copyrightable.

A. What Is Creativity? Creativity, Originality, Novelty, and Intent

Although there are many definitions of creativity, several key elements have consistently been identified across different perspectives and definitions.[142] In the context of copyright, the Supreme Court has only required a finding of “originality,”[143] without defining that term clearly. The only guidance offered by the Court is a requirement that the expression contain “more than a de minimis quantum of creativity”[144] (modifying its initial suggestion that original simply meant independently created[145]) and a definition of “originality” as “the personal reaction of an individual upon nature . . . something irreducible, which is one man’s alone.”[146]

The Seventh Circuit, however, has provided a framework that breaks down creativity into three distinct elements of originality, creativity, and novelty:

A work is original if it is the independent creation of its author. A work is creative if it embodies some modest amount of intellectual labor. A work is novel if it differs from existing works in some relevant respect. For a work to be copyrightable, it must be original and creative, but need not be novel.[147]

It is worth noting that, unlike patent law, copyright does not require novelty. In Alfred Bell, the Second Circuit firmly rejected novelty as a requirement of copyright, holding that originality (at least under copyright law) does not mean “startling, novel or unusual, a marked departure from the past . . . [or] highly unusual in creativeness.”[148] The legislative history of the Copyright Act of 1976 clearly shows that Congress agreed with the Second Circuit’s view: “This standard [of originality] does not include requirements of novelty, ingenuity, or esthetic merit . . . .”[149]

An algorithm can easily satisfy this low bar for originality. Because an algorithm relies only on the data on which it is trained and the rules it is given, it is possible to verify that the output does not duplicate the expressive content of those inputs, i.e., that it was independently created rather than copied. Novelty is also easily met because an algorithm is capable of creating something H-creative (new to the world).[150] The difficult question is whether an algorithm exhibits sufficient “intellectual labor,” or whether we would deem an algorithm to be capable of exhibiting any intellectual labor, or true creativity, at all.
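
As a purely illustrative sketch, such a check might look like the following for a text-generating system: the output is compared against the training corpus for long verbatim overlaps. The file names, the eight-word window, and the overall approach are assumptions for the example, not a description of any particular system.

```python
# A rough proxy for "independent creation": flag any long verbatim passages
# that a generated text shares with the corpus the algorithm was trained on.
# File names and the eight-word window are hypothetical.
def shared_ngrams(output_text, corpus_text, n=8):
    """Return the word n-grams that appear in both texts."""
    def ngrams(text):
        words = text.split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(output_text) & ngrams(corpus_text)

with open("training_corpus.txt") as f:
    corpus = f.read()
with open("generated_output.txt") as f:
    output = f.read()

overlaps = shared_ngrams(output, corpus)
print(f"{len(overlaps)} verbatim 8-word passages shared with the training data")
```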

In addition to the three elements of creativity identified by the Seventh Circuit, there appears to be another factor that has been present throughout the history of copyright law but has not received much attention. That unspoken requirement is intent. In 1884, the Supreme Court noted that the low bar for copyrightability meant that in an infringement claim, the author must prove “facts of originality, of intellectual production, of thought, and conception on the part of the author.”[151] Even Feist’s “minimal degree of creativity”[152] and “some creative spark”[153] suggest that the author must actually intend for a work to be creative (if only minimally), or at least for it to be the type of work that it is (i.e., intend the work to have the characteristics it does, with the court deciding whether it is actually “creative” after the fact).

Nearly seventy years after Burrow-Giles, however, the Second Circuit flatly rejected any intentionality requirement when it suggested that “bad eyesight or defective musculature, or a shock caused by a clap of thunder”[154] could produce sufficient originality to make the work copyrightable. The court went on to explicitly state that originality could be achieved by the author “unintentionally.”[155] Despite Bell’s explicit rejection of intent as a requirement, the language from the other cases just discussed—including the later-decided case of Feist—seems to support the idea that an author must act with some degree of intentionality during the creative process. Furthermore, this reasoning does not necessarily conflict with the holding of Bell, since the painter intended to paint. Perhaps intent applies to the decision to create in the first place, or to the decision to bring the creative work to market, but not to the specific expression or the mode of creation.

Although not explicitly endorsed as a requirement for copyrightability, the language used by scholars discussing the originality requirement has also invoked the idea of the author’s intent to create. Samuelson argues that “[c]onceiving a work is part of what traditional copyright doctrine has meant by authorship and creativity, without which rights should not inure in the programmer.”[156] Bridy also rejects Bell’s accidental creation standard and interprets Burrow-Giles to mean that “creativity must be purposive or intentional.”[157] Therefore, identifying the source of this intention (presumably a human) could affect the determination of whose creativity a work represents.

B. Programmed to Be Creative: Oxymoron or Truth?

There are many examples of highly “creative” AI today, including AARON, a program that creates paintings,[158] and BRUTUS, a program that writes short stories.[159] However, the debate over whether AI can ever truly be creative has been raging for decades, ever since science fiction writers conceived of the idea of a “creative” robot.[160]

One side of the debate posits that creativity is an “intrinsically human space,”[161] and that no computer will ever truly be able to achieve it no matter how good the AI gets at imitating it. Ada Lovelace perhaps said it best when she observed that “the analytical engine has no pretensions whatever to originate anything. It can do only whatever we know how to order it to perform.”[162] CONTU, in its Final Report, echoed this sentiment when it firmly stated that:

[T]here is no reasonable basis for considering that a computer in any way contributes authorship to a work produced through its use. The computer . . . is an inert instrument, capable of functioning only when activated either directly or indirectly by a human. When so activated it is capable of doing only what it is directed to do in the way it is directed to perform.[163]

CONTU further noted that “[i]n every case, the work produced will result from the contents of the data base, the instructions indirectly provided in the program, and the direct discretionary intervention of a human involved in the process.”[164] One can argue that the language in the Compendium of U.S. Copyright Office Practices also supports this position. Section 306 states that “[b]ecause copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.”[165] In other words, only a human being can form “original intellectual conceptions,” and non-human creators (e.g., monkeys and dolphins—or AI) cannot. Finally, CONTU further asserted that no matter how “complex and powerful” computers may be, “it is a human power they extend.”[166] Thus, even when computers exceed the capacity of humans to create in a certain way, they are still merely tools amplifying their human users’ capabilities.

Furthermore, Lovelace adherents emphasize that it is the programmer who creates the algorithm’s capacity to create.[167] An algorithm does not think on its own. Any capacity for “thought” comes from its code and can be controlled by the programmer.[168] For example, even as Bridy praises AARON as an example of an extremely creative AI, she also discusses how Harold Cohen, AARON’s inventor, altered AARON’s artistic style over time. As Bridy notes, “[i]ndeed, it was Cohen, through AARON’s changing code, who redefined the outer bounds of AARON’s artistic capacity.”[169] Even the most sophisticated forms of AI may be refined by engineers to adjust the outcomes.[170] Finally, as discussed in greater detail in Part II.C below, algorithms can be programmed to exhibit apparent creativity as the result of built-in randomness and other rules, including commands to break certain rules in order to create more unique works. However, that creativity is still the result of those rules and of the creative choices made by the programmer and the user.

The other side of the debate compares human thought to algorithms and code. Proponents posit that creativity is entirely programmable and that the language of AI reflects this. We speak of artificial intelligence and neural networks because algorithms are capable of mimicking human thought processes so accurately that we perceive AI as being able to “think” just as we do. Alan Turing himself suggested that “the only way by which one could be sure that a machine thinks is to be the machine and feel oneself thinking.”[171] This line of reasoning tends to raise existential questions about whether we humans are just computers ourselves. Indeed, the word “computer” originally referred to humans performing mechanical mathematical tasks.[172] John Haugeland considered the fact that an algorithm owes its existence and capabilities to a programmer nearly irrelevant to whether it should be considered the creative force behind its outputs, asking why “an entity’s potential for inventiveness [should] be determined by its ancestry . . . and not by its own manifest competence.”[173] He further derided the notion that “when we’re creative, it’s all our own, but when a computer printout contains something artistic, that’s really the programmer’s artistry, not the machine’s,” implying that AI deserves credit for its own “creative” work.[174]

Bridy invokes the concept of algorithmic creation (where works are created by following a precise set of rules, with little or no discretion exercised in the process of creation), pointing out that because humans could produce the same works in the same way by hand, computers are shortcuts for the labor, but not for the creative choices.[175] When this view is taken to its extreme, true creativity ends where the rules and parameters governing the creative process have been determined and the process of production begins, without the exercise of any further discretion or choice.[176] If neither pure randomness nor pure obedience to predetermined rules is creativity (both propositions, of course, are debatable), then algorithmic creation is not creative. The resulting works still exhibit creativity, and the choices of parameters, forms, and rules are unquestionably creative, but the same cannot be said of the steps between finalizing the rules and completion of the work. If Samuelson and Bridy are correct that the creation of the algorithm and the creation of the outputs are entirely separate processes,[177] then the AI has exhibited no creativity.

One interesting consequence of taking this view is that it undermines the arguments set out above for why copyright is limited to human authors. Many authorities have limited authorship to humans, but the reasons provided tend to invoke a requirement of sentience. If AI can truly “think” in the same way humans can, then these arguments might be weakened. For example, Bill Patry states that “a work owing its form to the forces of nature . . . is not registrable.”[178] The Copyright Office similarly refuses to register works created by non-human authors “[b]ecause copyright law is limited to ‘original intellectual conceptions of the author.’”[179] A work made by an AI would not “ow[e] its form to the forces of nature”[180] any more than would a human-generated work. Furthermore, if we accept that human thought is algorithmic and can be imitated by AI, then perhaps AI is also capable of generating “original intellectual conceptions.”

The final missing piece would be incentives, because copyright aims not only to encourage creation, but to incentivize it financially. If we accept that AI can be trained to think like a human, as Turing suggests, then we might posit that it can be trained to respond to financial incentives as well. Setting the objective function to maximize revenue might be one way to achieve this—if the AI’s strength is producing creative works and it discovers (or is told) that copyright is one way to maximize profits from those works, then it could be trained to be “motivated” by similar incentives to humans.[181] However, this once again depends on the control that the human programmers are exerting over the functionality of the AI.

AI is unquestionably capable of producing “creative” works. AARON’s paintings and BRUTUS’ short story[182] would likely pass Bridy’s “Turing Test for creativity,”[183] as many people would have difficulty telling the computer-generated works apart from human-generated works. However, whether the AI is legally creative is a different question, and a much more difficult one. This is especially true with respect to the type of creativity required in order for the creator to have sufficient “original intellectual conceptions” to be deemed the “author” under copyright law. As Bridy put it, “[w]e might not say that AARON is creative, but we can say that AARON’s painting exhibits creativity.”[184] Likewise, if we think of an algorithm as a tool (like a camera), the works created “by” that tool unquestionably meet the Feist bar of independent creation plus a modicum of creativity. We do not question whether the human who pressed the button is the author; it is assumed that the requisite modicum of creativity came from the human and not the machine. On the one hand, although it is easy to say that the works exhibit originality, creativity, and novelty, it is very difficult to plausibly demonstrate intentionality on the part of the AI (as opposed to the programmer or user). On the other hand, it is also clear that the operations performed by the algorithm are the source, if not the proximate cause, of the work. In this sense, the algorithm is also the agent of execution of the idea. The key question is therefore whether it is the machine that takes the concept from an idea to copyrightable expression, or whether the programmer or user exercises sufficient “control” to be considered the mastermind of the process and claim the expression as well as the idea.

Thus, the question is really whose “original intellectual conceptions” are represented in the resulting work when a human programmer or user interacts with a complex algorithm to generate a copyrightable work. If creativity is programmable—if novelty, randomness, and independent creation are sufficient—then it is possible for AI to be creative in the sense recognized by copyright doctrine. It is also then possible to make a colorable argument that the work in fact represents the “original intellectual conceptions” of the AI and not the human—or those of both. These questions, however, are unlikely to be resolved any time soon. In the meantime, control is perhaps our best proxy for determining whose conceptions (and creativity) the expression represents.

C. The Gift of Creativity: Intentional Unpredictability and Randomness

One of the biggest hurdles to a human claiming copyright in the outputs of an algorithm is the concept of unpredictability, including both randomness and the ability of computers to exceed human capabilities (e.g., in speed, scale, and discrete skills such as pattern recognition).[185] As a practical concern, if the human claiming authorship cannot show that he conceived of and controlled the output, it would be difficult to establish that it truly represents his “original intellectual conceptions.” Deep neural networks and other complicated AI are capable of breathtakingly complex computations, and perhaps in some circumstances even exceed the abilities of their human programmers. The outputs—and the process for creating them—may even become more complicated than the human brain is able to comprehend, predict, or intend. However, this is simply a difference in degree, not a difference in kind.

The language used by engineers and scholars to describe AI reflects this view. CONTU noted that it is “a human power [AI] extend[s].”[186] Grimmelmann states that “[a]nything an author does with a computer she could in theory do without it. . . . Computers make some kinds of creativity practically feasible, but they do not make anything newly possible.”[187] Jeff Dean holds a similar view, suggesting that “[a]nything humans can do in 0.1 sec, the right big 10-layer network can do too.”[188] Jason Tanz goes even further, claiming that “[s]oon we won’t program computers. We’ll train them like dogs.”[189] While it is certainly possible that computers in the future will be unmoored from the capabilities of humans and able to accomplish things that are truly different in kind from what a human is capable of, that day is not yet upon us.[190] Even if (or when) it is, the reality is that the AI will remain responsive to programmers’ or users’ adjustments to the parameters, data, variable weights, and other components, which allows those humans to retain control over the outputs, if not the exact steps of the creative process itself. The programmer also makes the decision to use those particular capabilities in the first instance.

Since the novice photographer discussed in Part I.B and the thunderstruck painter discussed in Alfred Bell are no less authors than a creator who fully understands how to execute their vision and does so flawlessly, we can also dismiss the notion that an unknown or unknowable result undermines copyright in traditional forms of creation. Works of accidental or random creation are nonetheless recognized as copyrightable, whether the work results from paint flung at the canvas (whether by a machine or by Jackson Pollock himself) or from the random selection and coloring of pixels by a simple algorithm.

One specific form of unpredictability, however, has greatly troubled scholars and has received a lot of attention in the context of AI: randomness. It is common to program randomness into an algorithm’s choices, particularly when the output is a creative work. There are certainly creative software programs that do not utilize randomness—a camera behaves the same way each time you take a photograph with the same settings, and a word processor inserts the precise letter that corresponds to the key you press.[191] However, many other programs are intentionally coded to include randomness. For example, in 1956, Martin Klein built an algorithm to compose music. He adopted six rules—three from Mozart and three from his own observations of music.[192] The algorithm started the process by selecting a note at random, and then followed a clear set of steps until all six rules of composition were satisfied. The decision to begin the song with a randomly selected note helps make the body of resulting works more interesting. If, alternatively, every song started with a G, the possible number and variety of outputs would be severely reduced.
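
The following sketch illustrates the structure of such a rule-based composer. It does not reproduce Klein’s actual six rules; the constraints below are hypothetical stand-ins. The point is the randomly selected opening note, which multiplies the variety of outputs that a single, fixed set of rules can produce.

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def compose(length=16, seed=None):
    rng = random.Random(seed)
    melody = [rng.choice(SCALE)]            # the randomly selected opening note
    while len(melody) < length:
        prev = SCALE.index(melody[-1])
        step = rng.choice([-2, -1, 1, 2])   # hypothetical rule: small melodic steps
        melody.append(SCALE[(prev + step) % len(SCALE)])
    melody[-1] = "C"                        # hypothetical rule: end on the tonic
    return melody

print(compose(seed=7))
```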

BRUTUS and other literary machines are doing something similar, albeit on a far larger scale and in a far more complicated manner than the computer that generated Push Button Bertha. These AI are following rules of creation. The apparent creativity in their outputs comes from the variety of rules from which the machines are allowed to choose and the vast vocabulary they are given. However, the output is still precisely what their human creators intended: a story of a particular format and genre that mimics the language structure of human storytelling. The rules may be drawn from other human creations (e.g., human-generated stories), but the choices among those rules, possible data sets, and other parameters are the true creative choices that determine the end result.

Another reason for intentionally introducing randomness into an algorithm’s choices is to increase the likelihood of discovering something H-creative.[193] For example, imagine an algorithm that tells a football coach what play to call next. Presumably, the coach wants the play call that will maximize the chances of his team winning. The data on which the algorithm would be trained would likely be play calls from actual past games, along with the results (labeled data). However, you could also allow the algorithm to test options and decide which would lead to more positive outcomes (reinforcement learning).[194] Particularly in the latter scenario, to ensure that the algorithm is able to find the “best” play call, it should consider all possible play calls. Limiting the algorithm’s choices to those that have actually been made in the past restricts the algorithm’s options. For example, if no coach in the history of football has ever chosen to punt on second down, and the algorithm is restricted to play calls present in the data set, the algorithm will never recommend punting on second down. However, if it is programmed such that it is allowed to learn by choosing a play from the full panoply of play calls available, it may discover that punting on second down would be sensible in certain situations.[195]
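
A simple way to picture this exploration is an epsilon-greedy learner, sketched below with hypothetical plays, rewards, and a toy simulator: most of the time it exploits the best play it has seen so far, but occasionally it samples from the full set of legal plays, so that options absent from the historical data (such as punting on second down) can still be discovered.

```python
import random

ALL_PLAYS = ["run", "short_pass", "deep_pass", "punt_on_second_down"]

def simulate_reward(play, rng):
    # Toy stand-in for game outcomes; a real system would learn from
    # play-by-play data or a game simulator.
    base = {"run": 0.4, "short_pass": 0.5, "deep_pass": 0.3,
            "punt_on_second_down": 0.6}
    return base[play] + rng.uniform(-0.1, 0.1)

def train(episodes=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {p: 0.0 for p in ALL_PLAYS}
    counts = {p: 0 for p in ALL_PLAYS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            play = rng.choice(ALL_PLAYS)        # explore the full panoply of plays
        else:
            play = max(values, key=values.get)  # exploit the best play seen so far
        reward = simulate_reward(play, rng)
        counts[play] += 1
        values[play] += (reward - values[play]) / counts[play]  # running average
    return values

print(train())
```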

Some argue that introducing randomness or other forms of unpredictability divests the human programmer or user of the requisite control over the resulting work. For example, in 1964, the Copyright Office refused to register a design for a tile floor because it had been generated by a machine using random geometric patterns. The Register of Copyrights asserted that “the ‘design’ does not constitute the ‘writing of an author’”[196] because it had been created by a machine and not by a man. Bridy also interprets Ada Lovelace’s famous quote[197] as supporting a definition of creativity as “the ability to do the unexpected or to deviate from rules. Some think computers can do this if their code incorporates elements of randomness, so that they make choices about composition that are governed at least in part by chance.”[198] However, even if we accept this definition of creativity, accidental creation is not a bar to copyrightability.[199] The fact that an accident was an intentional one rather than a truly unexpected “clap of thunder” only buttresses the conclusion that the programmer’s “original intellectual conceptions” are represented. Had randomness or unpredictability been a bar to creativity, Jackson Pollock would have been unable to claim copyright in any of his works, as he could not have known precisely where each drop of paint would fall on the canvas, or the shape that every splatter would take upon contact. To claim copyright, control over a work must be sufficient, but not complete.

III. A Journey to the Center of the Algorithm: Demystifying the “Black Box”

AI is often referred to as a “black box” because it is difficult to access or understand,[200] which leads to two major concerns. First, AI can be very complicated. In fact, as deep learning and neural network technology advances, we may reach a point where AI is so complex that human beings are incapable of fully understanding every step of the process between creation of the algorithm and creation of the algorithm’s output.[201] Second, the proprietary nature of algorithms and, accordingly, their tendency to be protected as trade secrets[202] makes it difficult for anyone other than the owner to understand and challenge any aspect of an algorithm’s operation, from bias and discrimination in employment or sentencing decisions[203] to copyright infringement. This lack of transparency also interferes with the ability to parse out which elements of the decision come from the algorithm, which come from the data, and which come from the programmer’s choices in setting the parameters (e.g., the relative weights of the variables). These are valid concerns, and both must be addressed by developers and users of AI technology in order for AI to continue to advance and flourish.

However, these arguments do not logically support withholding copyright ownership from the programmers and users of algorithms. With respect to proprietary algorithms and claims of trade secrecy, one option is to allow social and political pressure to shape laws (or self-regulatory frameworks) around transparency and accountability. Another would be to allow economic pressure from consumers to incentivize companies to voluntarily provide the transparency and accountability that users desire. Either option would be better aligned with the purposes of copyright law than withholding copyright from the programmer or user of the algorithm. Choosing to allocate copyright to the AI itself (or to the public domain) simply because the public does not fully understand how it functions would disincentivize human programmers and users from creating both the AI and AI-generated works, resulting in fewer works being disseminated to the public, inhibiting AI development, and forfeiting the tremendous benefits to society that AI makes possible.

However, if the human “mastermind” is truly unable to understand or exercise sufficient control over the creative process due to the sophistication of the technology itself, that could undermine their claim to ownership in the expression of the resulting work. After all, if “the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine,”[204] then the expression could not be said to duplicate the “conceptions and visions”[205] of the human claiming authorship. Therefore, the real question is whether humans are capable of sufficiently controlling the creative outputs of the algorithms that they create and use.

Deep learning is one form of machine learning and among the most complex forms of AI that exist today. Jeff Dean describes it as “[a] collection of simple trainable mathematical units, which collaborate to compute a complicated function.”[206] Deep learning is compatible with many algorithmic models, including supervised, unsupervised, and reinforcement learning.[207] It can be used for tasks such as pattern recognition in modeling human speech, vision, language understanding, online user behavior prediction, and translation.[208] Deep learning requires massive amounts of data and tremendous computing power.[209] One common form of deep learning is the neural network, which consists of multiple layers of algorithms. Each layer performs a mathematical function on the data, and the layers are connected to one another.[210]
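
Dean’s description can be illustrated with a minimal sketch: each layer is a simple mathematical unit (a matrix multiplication followed by a nonlinearity), and stacking the layers lets them collaborate to compute a more complicated function. The shapes and values below are illustrative only, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # One "simple trainable mathematical unit": a linear map plus a nonlinearity.
    return np.maximum(0, x @ weights + bias)

x = rng.normal(size=(1, 8))                      # input data
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)  # first layer's parameters
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)   # second layer's parameters

hidden = layer(x, w1, b1)        # the first layer's output feeds into...
output = hidden @ w2 + b2        # ...the next layer, composing a more complex function
print(output)
```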

When enlisting algorithms in the creative process, the first steps include such decisions as setting the objective function and other parameters (e.g., variance and bias) and training the algorithm on one or more data sets.[211] There is, however, a conceptual gap between the decision that the algorithm is ready to go live and the actual creation of output(s). For example, if a user purchases software that writes music on demand, this gap would be the set of steps between clicking the “create” button and seeing the sheet music the software produces. With respect to the hypothetical algorithm discussed earlier that fills in pixels on a screen according to instructions the user selects, the conceptual gap would include the steps after the user chooses the number of pixels and the colors, but before the final artwork appears on the screen. The crucial question is whether the ability to understand those intervening steps—or at least to control them—is a prerequisite to claiming authorship over the copyrightable expression in that work.

How much conceptual distance between the initial instructions provided by the programmer and the output of the algorithm is too far a leap? Does “learning” by a machine in the interim increase that distance? What is truly “unpredictable,” as opposed to being the intended (if only vaguely planned or conceived) result of the programmer’s instructions? What transforms the AI from an inert tool into an intentional, creative being capable of authorship?[212]

Admittedly, the mere setting of guidelines and rules for creation does not provide us with clear answers to any of these questions.[213] For example, the person who organizes a writing competition will set the length of submissions, the topic, and other creative constraints, but, in the absence of a voluntary contract to the contrary, he or she would not own the works written and submitted by other human authors. In contrast, the choices made by a programmer in creating, configuring, and training an algorithm that would produce these same stories go far beyond the rules of a simple contest. The computer must follow the rules set by its programmer, and it can only learn from the data fed to it by the programmer or user. It cannot bring a tremendous wealth of inexact, volatile, and unintentional human experiences to the creative process the way a human author does. Even if it has been trained for hundreds of years on vast quantities of data, and even if it far exceeds in scale what a human would be capable of in hundreds of lifetimes, it is still beholden to that universe of data and cannot exceed the capabilities granted to it by its programmer(s) and the knowledge or data provided to it by its user(s).

A. Peeking Behind the Curtain: Mechanisms of Control

It is important to note that creative control does not require full and complete understanding of the operations of the algorithm. For example, the novice photographer selecting a setting without understanding what it does or how it works will still be able to use those settings to manipulate the output (perhaps through trial and error, or through sheer luck). The same holds true for extremely complicated deep learning algorithms—a programmer can still maintain control over such an algorithm even without a complete understanding of its operations. The programmer can, for instance, adjust the variable weights,[214] provide the algorithm with different training data to correct perceived bias,[215] or adjust the objective function (i.e., the metric that the algorithm is trying to maximize).[216]
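
The sketch below is a rough, hypothetical analogue of those levers, using scikit-learn as a stand-in for whatever system a programmer actually employs: swapping the training data, changing the objective (loss) function, and reweighting examples to counteract a perceived bias. The levers available in a deep learning system (such as directly adjusting variable weights) differ in detail, but the point is the same: each adjustment steers the outputs without the operator tracing every internal computation.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor

X_a, y_a = make_regression(n_samples=300, n_features=5, random_state=1)
X_b, y_b = make_regression(n_samples=300, n_features=5, random_state=2)

# Lever 1: choose, or later swap out, the training data.
model = SGDRegressor(random_state=0).fit(X_a, y_a)
model_retrained = SGDRegressor(random_state=0).fit(X_b, y_b)

# Lever 2: adjust the objective (loss) function the algorithm tries to optimize.
model_different_loss = SGDRegressor(loss="huber", random_state=0).fit(X_a, y_a)

# Lever 3: reweight examples, e.g., to counteract a perceived bias in the data.
weights = [2.0 if target > 0 else 1.0 for target in y_a]
model_reweighted = SGDRegressor(random_state=0).fit(X_a, y_a, sample_weight=weights)
```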

Furthermore, the criticism that algorithms are opaque is unpersuasive when one considers the alternative: a volatile and unpredictable human being. Between the finalization of parameters and the actual creation of the work, the actions of a human who makes similar decisions or creates similar works are equally obscure. In fact, when a human is the creator, it is less possible to interrogate the results and determine which variables influenced the decision or creation. The doctrine of subconscious copying[217] illustrates this point. With an algorithm, on the one hand, one can examine its inputs and see exactly what “inspired” the output, as well as verify that no copyrightable expression was duplicated from its inputs. A person, on the other hand, brings to the process a lifetime of experiences and unmeasurable inputs, with no practical way to determine whether the creation was truly independent, making the author more vulnerable to an accusation of “subconscious” copying. Nor is there an obvious way to adjust the inputs if desired—a person cannot delete memories at will, or avoid incorporating an input to which they have already been exposed. Similarly, with respect to bias and discrimination, an algorithm has no malicious or moral responses that influence the outputs—it simply follows rules. The rules themselves, or the data inputs, could contain bias, but that is caused by human and not algorithmic error.[218] Furthermore, many other criticisms or flaws of algorithms can be found in human behavior as well. For example, overfitting (when an algorithm learns a rule that is too specific and makes predictions that are not generalizable to other sets of data) could be analogized to some forms of PTSD, where innocuous loud noises or sudden movements may be perceived as serious and imminent threats (as a result of a “rule” learned from a single negative experience or set of experiences).
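
The overfitting analogy can be made concrete with a small sketch: a rule fit too tightly to a handful of noisy observations (here, a degree-9 polynomial through ten points) will typically predict far worse on new data than a simpler rule, much as a single bad experience can harden into an unhelpfully broad reflex. The data below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=10)
x_new = np.linspace(0, 1, 100)
y_new = np.sin(2 * np.pi * x_new)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)              # learn a "rule"
    error = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)  # test it on unseen data
    print(f"degree {degree}: mean squared error on unseen data = {error:.3f}")
```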

Finally, and perhaps most significantly, there are methods of accountability that can identify, for example, which variables are most important to an individual outcome of the algorithm, or which variables are most important to all decisions across the board. To be effective, accountability measures must keep up as algorithms become more complex over time, but encouraging companies and individuals to create responsibly is still preferable to not encouraging them to create. Failures of explainability or accountability are not excuses to deny programmers and users copyright in the outputs of the algorithms they create and use; they will neither make the technology any more transparent nor advance the goals of copyright law.

B. It’s All Greek to Me: The “Black Box” and Explainability in Artificial Intelligence

Without understanding how an algorithm operates and how it interacts with human programmers and users, we cannot determine whether the AI has done so much to generate the creative expression in the work that a human can no longer be considered the author. To determine whether this line exists and where it might lie, it is necessary to dissect the ubiquitous “black box” arguments, which suggest that no human can truly understand the inner workings of an algorithm between the setting of parameters and the creation of output.[219] This leap from inputs to outputs is a critical step but has not been addressed in legal literature in great depth.[220] In the future, one obstacle for potential authors of computer-generated works will be their inability to understand and describe to others how the algorithm analyzes its inputs, makes decisions, and creates its outputs.

Lehr and Ohm refer to this as the “explainability” of the algorithm and define it as “the ability of machine learning to give reasons for its estimations.”[221] They suggest two viable ways in which programmers can currently explain an algorithm: they can either “describe how important different input variables are to the resulting predictions,” or “describe how increases or decreases in the various input variables translate to changes in the outcome variable.”[222] In other words, one approach identifies the most important variables for the algorithm’s individual decisions and outputs, and the other looks at the relationship between the variables, comparing them to each other as well as to the outcome. The first provides “partial dependence or individual conditional expectation plots,”[223] and focuses on identifying those variables that were most important to a particular decision or prediction. The other includes options such as “variable importance plot[s],”[224] which provide insight into which variables were most significant across the data set. However, Lehr and Ohm acknowledge that these approaches may not work for deep learning algorithms.[225] Thus, additional methods will need to be developed for more complex models.
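
The two strategies Lehr and Ohm describe correspond roughly to tools that already exist in common machine learning libraries. The sketch below is only an approximation of their framework, not their own tooling: it uses scikit-learn’s permutation importance (which inputs matter most across the data set) and partial dependence (how changes in an input translate into changes in the outcome) on a hypothetical model and data set.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance

# Hypothetical data and model standing in for whatever system is being explained.
X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# "Variable importance": which inputs mattered most across the whole data set.
importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(importance.importances_mean)

# "Partial dependence": how changes in one input translate into changes in the outcome.
pd_result = partial_dependence(model, X, features=[0])
print(pd_result["average"])
```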

There are also a number of methods being developed to help make AI—and deep neural networks in particular—more explainable. The field is referred to as XAI—explainable AI.[226] David Gunning of DARPA optimistically notes that:

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. . . . These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user.[227]

Katherine McTole describes five specific methods for achieving XAI: learning semantic associations; generating visual explanations; local, interpretable, model-agnostic explanations; rationalizing neural predictions; and explainable reinforcement learning.[228] An article in Science Magazine suggests that “[j]ust as the microscope revealed the cell . . . researchers are crafting tools that will allow insight into the [sic] how neural networks make decisions” and describes three approaches to achieving explainability: building in a “transparent layer” that helps control the neural networks, “probing” the network by varying the inputs in an attempt to understand which variables are most important to a particular decision, and using more neural networks to understand how other neural networks are operating (for example, by exposing knowledge gaps in the AI’s logic).[229] Ultimately, the hope is that these XAI methods will result in the equivalent of an fMRI for the AI’s artificial “brain,” allowing us to see how it operates while it is “thinking.”
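
The “probing” approach mentioned above can be illustrated very simply: perturb one input at a time around a baseline and observe how the model’s output moves. The model below is a hypothetical stand-in for a trained network, and the probe is deliberately crude; real probing methods are far more elaborate, but they rest on the same idea.

```python
import numpy as np

def black_box_model(x):
    # Hypothetical stand-in for a trained network: the output depends heavily
    # on x[0], moderately on x[1], and only weakly on x[2].
    return 3.0 * x[0] + np.sin(x[1]) + 0.1 * x[2]

baseline = np.zeros(3)
for i in range(3):
    probe = baseline.copy()
    probe[i] += 1.0                                       # vary one input at a time
    delta = black_box_model(probe) - black_box_model(baseline)
    print(f"input {i}: change in output = {delta:+.2f}")
```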

In addition, programmers are facing mounting pressure to explain how their algorithms work in many areas of law and life. Lawyers and advocates call for increased explainability and human oversight in automated bail and sentencing decisions;[230] medical patients clamor for increased transparency in automated diagnostic processes;[231] and Gunning emphasizes the importance of XAI in allowing the military “to understand, trust, and effectively manage this emerging generation of artificially intelligent partners.”[232]

Another example of public calls for transparency came in August 2017, when New York City Councilman James Vacca, chair of the Council’s technology committee, introduced a bill proposing that the source code of any algorithm that a city agency uses to make automated decisions be made available to the public. Vacca stated, “[i]f we’re going to be governed by machines and algorithms and data, well, they better be transparent.”[233] While that bill did not pass in its original form, New York City has now created a task force to make recommendations on “which types of algorithms should be regulated, how private citizens can ‘meaningfully assess’ the algorithms’ functions and gain an explanation of decisions that affect them personally, and how the government can address ‘instances in which a person is harmed’ by algorithmic bias.”[234] Similar calls for transparency are being made across the globe. For example, the European Union’s General Data Protection Regulation mandates that a data subject has the right to request human intervention in automated decisions that have a substantial or legal effect on the data subject.[235]

As these pressures increase, programmers will find new ways of improving explainability for AI. As the use of AI becomes increasingly commonplace and the public becomes better acquainted with how algorithms work, what seems incomprehensible today will make more sense in the future. Programmers will find new ways to translate the AI’s “thoughts” into a language we can understand. Programmers might even find ways to have the algorithm explain itself to us, thus obviating the need for humans to analyze formulas and decipher patterns themselves.[236] Consequently, the rules that algorithms create from their training data sets will become easier to discover and understand, and the “black box” will become increasingly transparent.

Conclusion

AI is getting closer and closer to passing the Turing test for creative works every day. As AI continues to approximate human capabilities, the question of who should own the copyright in computer-generated works will only become more complex. The crux of the issue is whether there is any point at which the programmer and user have yielded so much control over the creative process to the AI that the human programmer or user can no longer claim copyright in the expression of the resulting work. After all, if the idea is the programmer’s, but the expression is the “original intellectual conception”[237] of the AI—that is, “conceived and executed not by man but by a machine”[238]—then it is difficult to justify a programmer’s claim of ownership.

Given the current state of AI technology, I conclude that such a threshold does not exist. Even with the most complex deep neural networks, human programmers and users still retain sufficient control over the creative process such that the resulting work can be said to embody their “original intellectual conceptions.” Even when the process includes unpredictability (e.g., due to the complexity of the technology or the relative inexperience of the user) or randomness (intentional or otherwise), the programmer and user retain the ability to adjust the algorithms’ parameters, variable weights, and other factors in order to exercise control over the output. AI is also more a glass box than a black box, and it will only continue to become more transparent as societal pressure and technological demands spur the development of XAI.

Furthermore, the incentives inherent in the copyright bargain—and the very rationale for the existence of copyright law—are only advanced when copyright is allocated to a human, whether that is the programmer, user, data owner, or a combination of them. Otherwise, human programmers and users will not be incentivized to create, improve, and use “creative” AI. Thus, even if or when AI does reach a point where it could truly be developing “original intellectual conceptions” of its own, granting copyright to an algorithm would not further the purposes of copyright law; nor does it fit well with its incentive structure. AI has already changed the world, and it will continue to do so in the future—the question is whether we will properly harness its potential for creativity.


Footnotes

* For helpful comments and conversations, I thank Shyam Balganesh, Barton Beebe, Mala Chatterjee, Mariano-Florentino Cuellar, Jeanne Fromer, Jared Greenfield, Luke Hedrick, Thomas Kadri, Ari Lipsitz, Giuseppe Mazziotti, Ken Rubenstein, Jason Schultz, Scott Smolka, Christopher Sprigman, Fred von Lohmann, Ari Waldman, and Amy Whittaker. This article also benefited from feedback at the Engelberg Tri-State Region IP Workshop.

[1] AI Takeover, Wikipedia, http://en.wikipedia.org/wiki/AI_takeover (last visited May 16, 2018). See also Adam Rogers, The Way The World Ends: Not With A Bang But A Paperclip, Wired (Oct. 21, 2017, 7:00 AM), https://www.wired.com/story/the-way-the-world-ends-not-with-a-bang-but-a-paperclip/ (using a game by Frank Lantz as an example of how extremely intelligent AI asked to optimize a specific output could quickly run amok in its pursuit; that is, “maybe at first it does stuff that looks helpful to humanity, but in the end, it’s just going to turn us into paperclips”). Interested readers can play the paperclip game here: http://www.decisionproblem.com/paperclips/index2.html.

[2] See, e.g., Rory Cellan-Jones, Stephen Hawking Warns Artificial Intelligence Could End Mankind, BBC News (Dec. 2, 2014), http://www.bbc.com/news/technology-30290540; Matt Chessen, Artificial Intelligence Will Be the End of Humanity, But Not for the Reasons You Think, Medium (May 24, 2016), https://medium.com/short-bytes/artificial-intelligence-will-be-the-end-of-humanity-but-not-for-the-reasons-you-think-482fbfa6858f; Samuel Gibbs, Elon Musk: Artificial Intelligence Is Our Biggest Existential Threat, Guardian (Oct. 27, 2014, 6:26 AM), https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat; Terminator (Orion Pictures 1984); Westworld: Journey into Night (HBO television broadcast Apr. 22, 2018).

[3] See, e.g., Daniel Burrus, The Internet of Things Is Far Bigger Than Anyone Realizes, Wired (Nov. 2014), https://www.wired.com/insights/2014/11/the-internet-of-things-bigger/ (discussing “smart cement” and suggesting that the Internet of Things is “going to make everything in our lives from streetlights to seaports ‘smart’”); Shane Greenstein, The Expanding Internet of Things Creates Significant Challenges for Telecom Companies, Forbes (Apr. 13, 2017, 1:30 PM), https://www.forbes.com/sites/quora/2017/04/13/the-expanding-internet-of-things-creates-significant-challenges-for-telecom-companies/#75bb95b8c24e (discussing the burden on telecommunications companies resulting from the proliferation of sensors in the Internet of Things); Scott Stephenson, No Place Like Home: The Internet of Things and Its Promise for Consumers, Forbes (Dec. 18, 2017, 11:41 AM), https://www.forbes.com/sites/scottstephenson/2017/12/18/no-place-like-home-the-internet-of-things-and-its-promise-for-consumers/#66ab4fcb5fe2 (describing the existing elements of the “connected home”).

[4] Wojciech Samek, Thomas Wiegand & Klaus-Robert Muller, Explainable Artificial Intelligence: Understanding Visualizing and Interpreting Deep Learning Models, ARXIV (Aug. 28, 2017), https://arxiv.org/pdf/1708.08296.pdf.

[5] Tim Macuga, Austl. Ctr. for Robotic Vision, What Is Deep Learning and How Does It Work?, Cosmos Mag. (Aug. 24, 2017), https://cosmosmagazine.com/technology/what-is-deep-learning-and-how-does-it-work.

[6] Cortana, Microsoft, https://www.microsoft.com/en-us/AI/cortana (last visited May 14, 2018); What Is Deep Learning? 3 Things You Need to Know, MathWorks, https://www.mathworks.com/discovery/deep-learning.html (last visited May 16, 2018).

[7] MathWorks, supra note 6.

[8] Vanessa Ho, ‘Heritage Activists’ Preserve Global Landmarks Ruined in War, Threatened by Time, Microsoft (Apr. 23, 2018), https://news.microsoft.com/transform/heritage-activists-preserve-global-landmarks-ruined-in-war-threatened-by-time/?utm_source=Direct (last visited May 16, 2018).

[9] AlphaGo, Deep Mind, https://deepmind.com/research/alphago/ (last visited May 16, 2018); Watson, IBM, https://www.ibm.com/watson/ (last visited May 16, 2018); Macuga, supra note 5.

[10] Radu Raicea, Want to Know How Deep Learning Works? Here’s a Quick Guide for Everyone., Medium (Oct. 23, 2017), https://medium.freecodecamp.org/want-to-know-how-deep-learning-works-heres-a-quick-guide-for-everyone-1aedeca88076.

[11] See, e.g., AI-Powered Advertising: From Personalization to Hyper Relevance, Criteo (Mar. 12, 2019), https://www.criteo.com/insights/hyper-relevant-ai-powered-advertising/; Deepa Naik, Buying Ads Online — Programmatic Advertising and AI, Medium (Jan. 9, 2019), https://medium.com/@humansforai/buying-ads-online-programmatic-advertising-and-ai-59df20e49b85.

[12] Tim Moynihan, How Google’s AI Auto-Magically Answers Your Emails, Wired (Mar. 17, 2016, 6:23 AM), https://www.wired.com/2016/03/google-inbox-auto-answers-emails/.

[13] Annemarie Bridy, The Evolution of Authorship: Work Made by Code, 39 Colum. J.L. & Arts 395, 397 (2016) (discussing AARON, a music-writing AI); Will Knight, This AI-Generated Musak Shows Us the Limit of Artificial Creativity, MIT Tech. Rev. (April 26, 2019), https://www.technologyreview.com/s/613430/this-ai-generated-musak-shows-us-the-limit-of-artificial-creativity/; James Vincent, This AI-Written Pop Song Is Almost Certainly a Dire Warning for Humanity: Let’s Not Rule It Out, Anyway, Verge (Sept. 26, 2016, 7:21 AM), https://www.theverge.com/2016/9/26/13055938/ai-pop-song-daddys-car-sony.

[14] See, e.g., Ben Snell, Dio, https://www.phillips.com/detail/BEN-SNELL/NY000219/10 (noting that the sculpture was not only designed by the AI, but also that it was made from the AI, in that the physical computer was ground up and used as a raw material in the work).

[15] See, e.g., Selmer Bringsjord & David A. Ferrucci, Artificial Intelligence and Literary Creativity: Inside the Mind of BRUTUS, a Storytelling Machine (1999) (discussing BRUTUS, a short-story-writing AI); Annemarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 5 Stan. Tech. L. Rev. 1, 16–18 (2012) (discussing BRUTUS).

[16] Heather Kelly, Google’s Plans to Use AI to Help the Blind, CNN (May 11, 2018, 3:13 PM), http://money.cnn.com/2018/05/11/technology/google-lookout-app/index.html.

[17] See, e.g., Issie Lapowsky, One State’s Bail Reform Exposes the Promise and Pitfalls of Tech-Driven Justice, Wired (Sept. 5, 2017, 7:00 AM), https://www.wired.com/story/bail-reform-tech-justice/; Julia Powles, New York City’s Bold, Flawed Attempt to Make Algorithms Accountable, New Yorker (Dec. 20, 2017), https://www.newyorker.com/tech/elements/new-york-citys-bold-flawed-attempt-to-make-algorithms-accountable; Christopher Bavitz & Kira Hessekiel, Algorithms and Justice: Examining the Role of the State in the Development and Deployment of Algorithmic Technologies, Berkman Klein Ctr. for Internet & Soc’y Harv. Univ. (July 11, 2018), https://cyber.harvard.edu/story/2018-07/algorithms-and-justice; Vincent Sutherland, With AI and Criminal Justice, the Devil Is in the Data, ACLU (Apr. 9, 2018, 11:00 AM), https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data.

[18] See, e.g., David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653 (2017).

[19] Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884).

[20] In this Article, “AI,” “algorithm,” “program,” “computer,” and other related terms are used interchangeably. While there are clear differences among them, this Article discusses whether any of these varieties of non-human, digital tools of creation are capable of undermining a human’s claim to their outputs. For the purposes of this Article, there is no difference between them; they are all referring to code that is capable of generating a creative (and potentially copyrightable) work.

[21] Bridy, supra note 13, at 400 (citing U.S. Office of Tech. Assessment, Intellectual Property Rights in an Age of Electronics and Information 69 (1986) [https://perma.cc/XUV3-E979]).

[22] Cf. Pamela Samuelson, Allocating Ownership Rights in Computer-Generated Works, 47 U. Pitt. L. Rev. 1185, 1205 n.90 (1986) (quoting John Haugeland, Artificial Intelligence: The Very Idea 4, 9–12 (1985)). Literary works and films have also invoked the idea of autonomous, sentient AI, and this (for now) fictional possibility deserves some attention. See, e.g., Star Wars (Lucasfilm 1977); Her (Warner Brothers Pictures 2013).

[23] See, e.g., Bridy, supra note 13, at 395–401; Kalin Hristov, Artificial Intelligence and the Copyright Dilemma, 57 IDEA 431 (2017); Yvette Joy Liebesman, The Wisdom of Legislating for Anticipated Technological Advancements, 10 J. Marshall Rev. Intell. Prop. L. 153 (2010); Karl F. Milde, Jr., Can a Computer Be an “Author” or an “Inventor”?, 51 J. Pat. Off. Soc’y 378 (1969). But see James Grimmelmann, There’s No Such Thing as a Computer-Authored Work – And It’s a Good Thing, Too, 39 Colum. J.L. & Arts 403 (2016).

[24] Samuelson, supra note 22, at 1205–09.

[25] Id. at 1200 n.67 (quoting Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harv. L. Rev. 281, 284–93 (1970); Ralph S. Brown, Eligibility for Copyright Protection: A Search for Principled Standards, 70 Minn. L. Rev. 579, 596 (1985)).

[26] Samuelson, supra note 22, at 1221–24.

[27] Daniel Schönberger, Deep Copyright: Up- and Downstream – Questions Related to Artificial Intelligence (AI) and Machine Learning (ML), 10 Zeitschrift für geistiges Eigentum (ZGE)/Intell. Prop. J. (IPJ) 35 (2018); Samuelson, supra note 22, at 1224–28.

[28] Bridy, supra note 13, at 395; Alex di nunzio, Push Button Bertha, YouTube (Jan. 18, 2014), https://www.youtube.com/watch?v=V-XZKS4BItI (originally written in 1956, facilitated by Martin Klein and Douglas Bolitho).

[29] U.S. Copyright Office, Register of Copyrights, Sixty-Eighth Annual Report of the Register of Copyrights 5 (1966).

[30] Samuelson, supra note 22, at 1212.

[31] Nat’l Comm’n on New Tech. Uses of Copyrighted Works, Final Report 46 (1979) [hereinafter CONTU Final Report].

[32] Samuelson, supra note 22, at 1187 n.4.

[33] U.S. Cong., Office of Tech. Assessment, Intellectual Property Rights in an Age of Electronics and Information 301 (1986) [hereinafter OTA Report].

[34] See, e.g., Schönberger, supra note 27; Grimmelmann, supra note 23; Bridy, supra note 13; Bridy, supra note 15.

[35] Dan Brown, Origin 66 (2017) (“Langdon had recently read about . . . teaching computers to create algorithmic art—that is art generated by highly complex computer programs. It raised an uncomfortable question: When a computer creates art, who is the artist – the computer or the programmer? At MIT, a recent exhibit of highly accomplished algorithmic art had put an awkward spin on the Harvard humanities course: Is Art What Makes Us Human?”).

[36] Copyright, Designs and Patents Act, 1988, c. 48, § 9(3) (U.K.) (emphasis added); see also Copyright Act 1994 cl 5(2)(a) (N.Z.); Bridy, supra note 13, at 400 (noting that Hong Kong and India (also common law countries) take a similar approach). This language does not choose ex ante between the programmer and the user (where they are different people); for reasons discussed in Part I.C infra, this is a wise choice by the legislators.

[37] Bridy, supra note 13, at 400–01 (noting that all of these are civil law countries); Schönberger, supra note 27, at 45.

[38] Compendium of U.S. Copyright Office Practices § 306 (3d ed. 2014) [hereinafter Compendium] (the Copyright Office “will register an original work of authorship, provided that the work was created by a human being. . . . Because copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.”) (quoting Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884)); see also Naruto v. Slater, 2016 U.S. Dist. LEXIS 11041 (N.D. Cal. Jan. 28, 2016).

[39] There are many different types of outputs for an algorithm (ranging from a simple prediction or number to a full novel). In this article, “outputs” refers to creative works that would be eligible for copyright protection, such as poems, novels, images, music, or even other software.

[40] 17 U.S.C. § 102 (2012).

[41] Id.

[42] U.S. Const. art. I, § 8, cl. 8; Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345 (1991) (“The sine qua non of copyright is originality.”).

[43] Feist, 499 U.S. at 346, 362.

[44] Naruto v. Slater, 2016 U.S. Dist. LEXIS 11041, at *10 (N.D. Cal. Jan. 28, 2016) (“In section 306 of the Compendium, entitled ‘The Human Authorship Requirement,’ the Copyright Office relies on citations from Trade-Mark Cases, and Burrow-Giles to conclude that it ‘will register an original work of authorship, provided that the work was created by a human being.’ Similarly, in a section titled ‘Works That Lack Human Authorship,’ the Compendium states that, ‘[t]o qualify as a work of ‘authorship’ a work must be created by a human being. Works that do not satisfy this requirement are not copyrightable.’”) (citations omitted); Compendium, supra note 38, §§ 306, 313.2; Id. at § 802.5(C) (addressing human authorship of musical works) (“To be copyrightable, musical works, like all works of authorship, must be of human origin. . . . [M]usic generated entirely by a mechanical or an automated process is not copyrightable. For example, the automated transposition of a musical work from one key to another is not registrable. Nor could a musical composition created solely by a computer algorithm be registered.”).

[45] Bridy, supra note 13, at 395.

[46] Naruto, 2016 U.S. Dist. LEXIS 11041, at *8–9; Bridy, supra note 13, at 399 n.30.

[47] CONTU Final Report, supra note 31, at 45 (emphasis added).

[48] U.S. Const. art. I, § 8, cl. 8.

[49] 1 Melville B. Nimmer & David Nimmer, Nimmer on Copyright § 1.03[A][1] (Matthew Bender & Co., 2018) (“[T]he authorization to grant copyright to individual authors is predicated on the dual premises that the public benefits from the creative activities of authors and that the copyright protection is a necessary condition to the full realization of those creative activities.”).

[50] See generally OTA Report, supra note 33, at 76 (“When the element of human labor involved in the processing of information is replaced by automation, the incentive of copyright protection may become entirely disconnected from the authorship that it seeks to inspire. Information that is automatically generated by a computer is ‘authored, if at all, by a program that is indifferent to legal incentives.’”); James Grimmelmann, Copyright for Literate Robots, 101 Iowa L. Rev. 657 (2016); Samuelson, supra note 22, at 1199 (“The system has allocated rights only to humans for a very good reason: it simply does not make any sense to allocate intellectual property rights to machines because they do not need to be given incentives to generate output. All it takes is electricity (or some other motive force) to get the machines into production.”); Schönberger, supra note 27, at 46 (“Robots do not need protection, because copyright’s incentives for creativity will and naturally must remain entirely unresponded to by them.”); Mike Masnick, Another Dumb Idea Out of the EU: Giving Robots & Computer Copyright, TechDirt (June 28, 2016, 3:20 AM), https://www.techdirt.com/articles/20160624/17260834817/another-dumb-idea-out-eu-giving-robots-computers-copyright.shtml.

[51] It is also worth noting that software and computer code is at this point indisputably copyrightable. Apple Comput., Inc. v. Franklin Comput., Inc., 714 F.2d 1240 (3d Cir. 1983), cert. dismissed, 464 U.S. 1033 (1984); Nimmer & Nimmer, supra note 49, § 2A.10[E] (“Regardless of one’s perspectives, there would seem to be no turning back: Congress enacted CONTU’s recommendations into law in the 1980 amendment . . . . In addition, copyright protection for software has become far too embedded in the world trade order to permit any realistic prospect of its abandonment in the foreseeable future.”); Samuelson, supra note 22, at 1187 n.5.

[52] See, e.g., Bridy, supra note 13, at 400 (Bridy, however, uses the work made for hire doctrine as a means of enabling the programmer to retain rights in the work, finding the ultimate grant of copyright to AI to be “impracticable”); Bridy, supra note 15, at 3, 26–28.

[53] Cmty. for Creative Non-Violence v. Reid, 490 U.S. 730, 751–52 (1989).

[54] Rouse v. Walter & Assocs., L.L.C., 513 F. Supp. 2d 1041, 1056 (S.D. Iowa 2007) (listing whether an employee’s conduct “is actuated, at least in part, by a purpose to serve the master” as one element in determining whether the work was created within the scope of employment, which is itself an element in determining whether the work in question is a work made for hire by an employee).

[55] Samuelson, supra note 22, at 1195 (summarizing CONTU Final Report).

[56] CONTU Final Report, supra note 31, at 44.

[57] Lehr & Ohm, supra note 18, at 669–702.

[58] Id.

[59] There are many types of models (including supervised and unsupervised models, or reinforcement learning) of varying levels of complexity (from simple computational algorithms to deep learning models (e.g., deep neural networks) that integrate multiple layers of algorithms).

[60] Lehr & Ohm, supra note 18, at 688–95.

[61] Id.

[62] See id. at 696–97.

[63] Id. at 683–84.

[64] Id. at 677–81.

[65] Id. at 695–701.

[66] Id.

[67] See generally id.; see also infra Part III.

[68] See, e.g., Bridy, supra note 15, at 5–6, 10 (explaining the causation theory of authorship by referencing Burrow-Giles and the justification for copyright in photographs, and further analogizing to computer programmers: “[l]ike the photographer standing behind the camera, an intelligent programmer . . . stands behind every artificially intelligent machine. People create the rules, and machines obediently follow them . . . .”); Samuelson, supra note 22, at 1195 (discussing CONTU’s comparison of a computer to a camera); CONTU Final Report, supra note 31, at 45 (“The computer may be analogized to or equated with, for example, a camera, and the computer affects the copyright status of a resultant work no more than the employment of a . . . camera . . . .”).

[69] This author has done just this many times using both her digital point-and-shoot and DSLR cameras.

[70] Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 103 (2d Cir. 1951) (quoting Chamberlin v. Uris Sales Corp., 150 F.2d 512, 513 (2d Cir. 1945)).

[71] Samuelson, supra note 22, at 1194–96.

[72] See Margaret Boden, Creativity: How Does It Work?, Creativity East Midlands *1 (2007); see also Bridy, supra note 15, at 12–14.

[73] See infra Part II.C for a detailed discussion of this issue.

[74] Samuelson, supra note 22, at 1201 n.74, 1205 n.87. But see Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 349–50 (1991). Samuelson’s arguments in favor of copyright ownership by the programmer are based on the programmer being a “substantial contributor to the production of any output.” She argues that the programmer deserves to be rewarded (impliedly, through at least partial ownership of copyright) because the work of programming is “intellectually demanding, as well as time-consuming and expensive for the programmer.” She also notes that “[t]he effort that is put into creation of a copyrightable work is sometimes said to be among the things the copyright laws intend to protect.” It should be noted, however, that that article was written prior to the seminal opinion in Feist, which dismissed the idea of using Lockean labor theory as a basis for granting copyright. Samuelson, supra note 22, at 1205, 1205 n.87.

[75] Samuelson, supra note 22, at 1205.

[76] Id. at 1205 n.74.

[77] Aalmuhammed v. Lee, 202 F.3d 1227, 1234 (9th Cir. 2000).

[78] Id.; Lindsay v. The Wrecked & Abandoned Vessel R.M.S. Titanic, 52 U.S.P.Q.2d 1609 (S.D.N.Y. 1999).

[79] Samuelson, supra note 22, at 1209 (first emphasis added).

[80] Bridy, supra note 13, at 398.

[81] Id. at 397–98.

[82] CONTU Final Report, supra note 31, at 45. Interestingly, the analogy the Commission made to drive this point home was to compare the outputs of an algorithm to a translation of a book—thereby implying that the outputs are actually, in some sense, derivative works of the algorithm or of the data.

[83] Id.

[84] See generally Lehr & Ohm, supra note 18.

[85] Bridy, supra note 13, at 398.

[86] Id.; see also Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58–59 (1884).

[87] Lindsay v. The Wrecked & Abandoned Vessel R.M.S. Titanic, 52 U.S.P.Q.2d 1609, 1614 (S.D.N.Y. 1999).

[88] Id.; Aalmuhammed v. Lee, 202 F.3d 1227, 1234 (9th Cir. 2000).

[89] Nimmer & Nimmer, supra note 49, § 1.06[A] (quoting Andrien v. S. Ocean Cty. Chamber of Com., 927 F.2d 132, 135 (3d Cir. 1991)).

[90] Lindsay, 52 U.S.P.Q.2d at 1614.

[91] Aalmuhammed, 202 F.3d at 1234. But see Garcia v. Google, Inc., 743 F.3d 1258 (9th Cir. 2014) (holding that actors may own a copyright in their own performance within a larger motion picture), rev’d en banc, 786 F.3d 733 (9th Cir. 2015).

[92] Bridy, supra note 13, at 398.

[93] Jeff Dean, Keynote Address on Large Scale Deep Learning at Conference on Information and Knowledge Management (“CIKM”) (Nov. 2014), https://static.googleusercontent.com/media/research.google.com/en//people/jeff/CIKM-keynote-Nov2014.pdf.

[94] Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 105 (2d Cir. 1951).

[95] Samuelson, supra note 22, at 1207–09.

[96] Id.

[97] See supra note 76 and accompanying text.

[98] Samuelson, supra note 22, at 1200 n.67 (“Machines may not need rights to be induced to generate output, but that, of course, does not mean that no one needs incentives in order for products of generator programs to be made available.”); Schönberger, supra note 27, at 51; OTA Report, supra note 33, at 158 (“In the marketplace for printed works, governed by copyright, the incentive to produce was linked to the incentive to disseminate printed copies as widely as possible; for selling copies was how producers generated income.”).

[99] Samuelson, supra note 22, at 1227 (arguing that publishers are the true creators of value by bringing works to market, and therefore deserve (and usually receive) the lion’s share of the profits).

[100] See Nimmer & Nimmer, supra note 49, § 1.03(A); see also Eldred v. Ashcroft, 537 U.S. 186, 244 (2003) (Breyer, J., dissenting) (“The Copyright Clause and the First Amendment seek related objectives—the creation and dissemination of information. When working in tandem, these provisions mutually reinforce each other, the first serving as an ‘engine of free expression,’ the second assuring that government throws up no obstacle to its dissemination.” (citation omitted)); Harper & Row Publishers v. Nation Enters., 471 U.S. 539, 558 (1985) (“[I]t should not be forgotten that the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.”); Acuff-Rose Music, Inc. v. Campbell, 754 F. Supp. 1150, 1153 (M.D. Tenn. 1991) (“To foster the widespread dissemination of ideas, the copyright system is ‘designed to assure contributors to the store of knowledge a fair return for their labors.’”) (quoting Harper & Row, 471 U.S. at 546). While publication is no longer required by copyright law in order to receive protection, dissemination remains one of the primary motivations behind offering copyright incentives to authors.

[101] U.S. Const. art. I, § 8, cl. 8.

[102] Lehr & Ohm, supra note 18, at 677–81.

[103] Samuelson, supra note 22, at 1202.

[104] See, e.g., id. at 1216–19 (suggesting that the user’s claim to the copyright would actually be as a derivative work of the raw outputs of the algorithm). This formulation of the right trivializes the user’s contribution and does not sufficiently recognize the elements of control discussed below in Part II.

[105] See, e.g., Midway Mfg. Co. v. Artic Int’l, Inc., 704 F.2d 1009, 1011–12 (7th Cir. 1983).

[106] Id. at 1011–12.

[107] Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884).

[108] For a detailed analysis of this issue, see Grimmelmann, supra note 23, at 409–12.

[109] As a more specific example, a programmer (or, more likely, a massive team of programmers) created both Microsoft Word and Google Docs, but that does not mean that they own or should own the copyrightable expression in, say, this article.

[110] Bridy, supra note 13, at 395.

[111] One version of this argument can be seen in cases that allow the programmer to retain copyright in randomly generated levels of video games, or even in the version of the game that is produced by the user’s interaction with the software. See, e.g., Micro Star v. FormGen Inc., 154 F.3d 1107, 1111–14 (9th Cir. 1998); Midway Mfg. Co. v. Artic Int’l, Inc., 704 F.2d 1009, 1011–12 (7th Cir. 1983).

[112] Roald Dahl, The Great Automatic Grammatizator, in The Umbrella Man and Other Stories (1996).

[113] Samuelson, supra note 22, at 1187 n.3.

[114] Bridy, supra note 13, at 399.

[115] Dean, supra note 93, at 4; Lehr & Ohm, supra note 18, at 664–81 (“[A]n algorithm is, at the end of the day, only as good as its data.”).

[116] Lehr & Ohm, supra note 18, at 664–78, 677–81.

[117] But see CONTU Final Report, supra note 31, at 45 (“It appears to the Commission that authorship of the program or of the input data is entirely separate from authorship of the final work . . . .” (emphasis added)).

[118] For example, neither Samuelson, supra note 22, nor Grimmelmann, supra note 23, mentioned the possible claim of the data owner in their reasonably thorough discussions of the range of potential authors.

[119] Richard Lea, Google Swallows 11,000 Novels to Improve AI’s Conversation, Guardian (Sept. 28, 2016, 5:00 AM), https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-ais-conversation.

[120] See Authors Guild v. Google Inc., 804 F.3d 202 (2d Cir. 2015); Perfect 10, Inc. v. Amazon.com, Inc., 508 F.3d 1146 (9th Cir. 2007).

[121] See supra note 88 and accompanying text.

[122] See Nimmer & Nimmer, supra note 49, § 1.03[A].

[123] See, e.g., Harper & Row Publishers v. Nation Enters., 471 U.S. 539, 558 (1985). Cf. Jeanne Fromer, Expressive Incentives in Intellectual Property, 98 Va. L. Rev. 1745 (2012); Christopher Jon Sprigman, Lecture: Copyright and Creative Incentives: What We Know (and Don’t), 55 Hous. L. Rev. 451, 465 (2017).

[124] See, e.g., Eric E. Johnson, Intellectual Property and the Incentive Fallacy, 39 Fla. St. U. L. Rev. 623, 628–31 (2012) (summarizing the incentive theory).

[125] Russ Versteeg, Defining “Author” for Purposes of Copyright, 45 Am. U. L. Rev. 1323, 1326 (1996) (“Who is an author? In other words, what does a person have to do in order to be characterized as an ‘author’ for purposes of copyright? This seemingly simple question is actually complex.”).

[126] Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884).

[127] Id.

[128] Id.

[129] Lindsay v. The Wrecked & Abandoned Vessel R.M.S. Titanic, 52 U.S.P.Q.2d 1609, 1613 (S.D.N.Y. 1999) (citing Burrow-Giles, 111 U.S. at 58).

[130] Id. at 1613.

[131] Erickson v. Trinity Theatre, Inc., 13 F.3d 1061, 1071 (7th Cir. 1994); see also 17 U.S.C. § 102(b) (2012).

[132] Aalmuhammed v. Lee, 202 F.3d 1227, 1234 (9th Cir. 2000) (“[A]n author ‘superintends’ the work by exercising control.”) (citing Thomson v. Larson, 147 F.3d 195, 202 (2d Cir. 1998)); Burrow-Giles v. Sarony, 111 U.S. 53, 61 (1884) (“Lord Justice Cotton said: ‘In my opinion, “author” involves originating, making, producing, as the inventive or master mind, the thing which is to be protected.’”).

[133] Aalmuhammed, 202 F.3d at 1235.

[134] It is interesting to note that Aalmuhammed also held that joint authors must “intend their contributions be merged into inseparable or interdependent parts of a unitary whole.” Id. at 1231. To meet that requirement in this context, the AI would have to be seen as possessing the capacity for true “intent” and would have to actually intend that its contributions be fused into a whole with those of its human creators or users. However, if the algorithm is seen instead as a tool, or even as a helpful crew member, then the analysis might be more like that in Lindsay, where the human’s “original intellectual conceptions” have been embodied in the work, and the human is therefore the author—just as Lindsay was for that documentary film. See Lindsay, 52 U.S.P.Q.2d at 1613–14.

[135] Bridy, supra note 15, at 5.

[136] Burrow-Giles, 111 U.S. at 61.

[137] Grimmelmann, supra note 23, at 408.

[138] Bridy, supra note 15, at 10.

[139] Burrow-Giles, 111 U.S. at 58 (“By writings in that clause is meant the literary productions of those authors . . . by which the ideas in the mind of the author are given visible expression.”).

[140] Grimmelmann, supra note 23, at 407; see also Bridy, supra note 15, at 10–12 (discussing algorithmic composition by humans).

[141] Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 105 (2d Cir. 1951).

[142] See Bridy, supra note 15, for a thorough discussion.

[143] Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345–46 (1991); Bleistein v. Donaldson Lithographing Co., 188 U.S. 239, 250 (1903); Burrow-Giles, 111 U.S. at 58–60.

[144] Feist, 499 U.S. at 363.

[145] See Burrow-Giles, 111 U.S. at 57 (“An author . . . is ‘he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.’”).

[146] Bleistein, 188 U.S. at 250 (in the context of an artist drawing something from the physical world, such as a nature landscape).

[147] Baltimore Orioles, Inc. v. Major League Baseball Players Ass’n, 805 F.2d 663, 668 n.6 (7th Cir. 1986); see also Burrow-Giles, 111 U.S. at 59 (“[T]he remainder of the process is merely mechanical, with no place for novelty, invention or originality.”).

[148] Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 102 (2d Cir. 1951).

[149] H.R. Rep. No. 94-1476, at 51 (1976), reprinted in 1976 U.S.C.C.A.N. 5659, 5664.

[150] Boden, supra note 72, at *7; see also Bridy, supra note 15, at 12–14.

[151] Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 60 (1884).

[152] Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345, 348, 362 (1991).

[153] Id. at 345.

[154] Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 105 (2d Cir. 1951).

[155] Id.

[156] Samuelson, supra note 22, at 1209.

[157] Bridy, supra note 15, at 8.

[158] Id. at 21–22, 24.

[159] Id. at 16–18 (including a story that certainly comes close to passing the Turing test, if not clearing it with flying colors).

[160] See, e.g., Schönberger, supra note 27, at 39, 47 (discussing Isaac Asimov’s works).

[161] Id. at 47.

[162] Bridy, supra note 13, at 398 (citing Richard Taylor, Note G., in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and Learned Societies, and from Foreign Journals 722 (1837)). Lovelace was a collaborator with Charles Babbage in developing the Analytical Engine and is recognized by many as one of the first computer programmers. Ada Lovelace, Wikipedia, https://en.wikipedia.org/wiki/Ada_Lovelace (last visited May 16, 2018).

[163] CONTU Final Report, supra note 31, at 44.

[164] Id.

[165] Compendium, supra note 38, § 306.

[166] CONTU Final Report, supra note 31, at 45.

[167] See, e.g., Bridy, supra note 15, at 10 (“Like the photographer standing behind the camera, an intelligent programmer . . . stands behind every artificially intelligent machine.”). Bridy also explains that:

According to the Court’s reasoning in Burrow-Giles, the machine taking the picture mediated but neither negated nor co-opted the process of artistic production, which could be traced quite directly back to the governing consciousness and sensibility of the photographer, the person behind the lens who posed the subject just so and altered the lighting just so. The camera functioned merely as an instrument, a means to the end of realizing the human operator’s creative vision, which is the basis for copyright in the resulting photograph.

Id. at 5–6.

[168] See also David Shultz, Which Movies Get Artificial Intelligence Right, Sci. Magazine (July 17, 2015, 8:30 AM), http://www.sciencemag.org/news/2015/07/which-movies-get-artificial-intelligence-right (“All the experts are quick to point out that robots do not change their programming, and the notion that they could spontaneously develop new agendas is pure fiction. Hutter says the underlying goals programmed into the machine are ‘static.’ ‘There are mathematical theories that prove a perfectly rational goal-achieving agent has no motivation to change its own goals.’”).

[169] Bridy, supra note 13, at 397. It is worth noting that Bridy ironically then concluded that Cohen was not the author of AARON’s outputs because he didn’t fix the works (AARON did), because the outputs were unpredictable, and because Cohen “d[id]n’t lift a finger to create them.” See also Knight, supra note 13 (suggesting that AI-generated music is not creative, despite reflecting and approximating existing creative works like the music of the Beatles). But see supra Part I.B for a rejection of each of these points.

[170] For example, engineers can adjust the weights and connections of the layers in deep neural networks in order to adjust the outcomes. See Jeff Dean, supra note 93, at 14–23.
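
To make the adjustment described in note 170 concrete, the following is a minimal sketch, in Python using the PyTorch library, of how an engineer might inspect and deliberately alter a layer’s learned weights (or freeze them against further training) in order to change a network’s outputs. The network, its layer sizes, and the scaling factor are hypothetical examples, not details drawn from the cited sources.

    import torch
    import torch.nn as nn

    # A small, hypothetical two-layer network standing in for a trained model.
    model = nn.Sequential(
        nn.Linear(8, 4),   # first layer: 8 inputs -> 4 hidden units
        nn.ReLU(),
        nn.Linear(4, 1),   # second layer: 4 hidden units -> 1 output
    )

    first_layer = model[0]
    print(first_layer.weight)  # the engineer can inspect the learned weights

    # The engineer can also deliberately alter those weights (here, damping
    # every connection by half), which changes every output the model produces.
    with torch.no_grad():
        first_layer.weight *= 0.5

    # Or "freeze" the layer so that any later training leaves it untouched.
    for param in first_layer.parameters():
        param.requires_grad = False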

[171] Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950), http://cogprints.org/499/1/turing.html.

[172] See Bridy, supra note 13, at 397.

[173] Samuelson, supra note 22, at 1205 n.90 (quoting John Haugeland, Artificial Intelligence: The Very Idea 4, 9–12 (1985)).

[174] Id.

[175] See Bridy, supra note 13, at 397.

[176] Id.

[177] See infra Part I.B.

[178] 2 William F. Patry, Patry on Copyright § 3:19 n.1 (2019) (emphasis added).

[179] Compendium, supra note 38, § 306 (quoting Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884)).

[180] Patry, supra note 178.

[181] The creator of the algorithm, however, would be wise to closely cabin the means of maximizing the objective function. See, e.g., Universal Paperclips, DecisionProblem.com, http://www.decisionproblem.com/paperclips/ (last visited Apr. 13, 2019) (illustrating the potential dangers of setting objective functions without further supervision of the AI).

[182] See Bridy, supra note 15, at 16–18.

[183] Bridy, supra note 13, at 399.

[184] Id.

[185] See, e.g., Deep Mind, supra note 9; IBM, supra note 9; Macuga, supra note 5.

[186] CONTU Final Report, supra note 31, at 45.

[187] Grimmelmann, supra note 23, at 407.

[188] Jeff Dean, supra note 93, at 26.

[189] Jason Tanz, Soon We Won’t Program Computers. We’ll Train Them Like Dogs, Wired (May 17, 2016, 6:50 AM), https://www.wired.com/2016/05/the-end-of-code.

[190] See, e.g., Ron Miller, Artificial Intelligence Is Not as Smart as You (or Elon Musk) Think, TechCrunch (July 25, 2017), https://techcrunch.com/2017/07/25/artificial-intelligence-is-not-as-smart-as-you-or-elon-musk-think/.

[191] Note that either one could be programmed to inject randomness into the user’s creations—the programmers have simply chosen not to do so.
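
As a brief illustration of this point, the sketch below (in Python; the word list, function names, and probability are invented for the example) shows how a programmer could choose to inject randomness into a user’s text, and how the same program is purely deterministic when that choice is not made.

    import random

    WORDS = ["melody", "shadow", "harbor", "ember"]  # hypothetical vocabulary

    def record_text(user_text):
        # Without the programmer's intervention, the program simply preserves
        # the user's creation exactly as typed.
        return user_text

    def record_text_with_noise(user_text, noise_probability=0.1):
        # The programmer's deliberate design choice: occasionally swap in a
        # random word, injecting machine-originated variation into the output.
        return " ".join(
            random.choice(WORDS) if random.random() < noise_probability else word
            for word in user_text.split()
        )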

[192] Martin Klein, Syncopation in Automation, Radio-Electronics, June 1957, at 36, http://www.americanradiohistory.com/Archive-Radio-Electronics/50s/1957/Radio-Electronics-1957-06.pdf; see also Bridy, supra note 13, at 395–96.

[193] See Schönberger, supra note 27, at 42 (“Another attempt to approximate creativity tested against the criteria of ‘response uniqueness’ and understood as ‘the ability to do the unexpected or to deviate from rules’ is the introduction of randomness into the algorithmic process.”).

[194] These choices may be represented in the model selected for the algorithm. Feeding the algorithm data that is labeled as a positive outcome or a negative outcome and having it learn from the sheer scale of the data would be a form of supervised learning, and allowing it to test options and learn by winning or losing would be a form of reinforcement learning. See Lehr & Ohm, supra note 18, at 673, 676–77, 676 n.83; Dean, supra note 93, at 10.
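
The distinction between these two approaches can be illustrated with a short, self-contained Python sketch; the play data, labels, reward probabilities, and option names below are invented placeholders rather than anything drawn from the cited sources.

    import random

    # Supervised learning: a human labels each example in advance
    # (1 = positive outcome, 0 = negative outcome), and the model learns
    # from the accumulated labeled data.
    labeled_plays = [(3.2, 1), (0.4, 0), (2.8, 1), (0.9, 0)]  # (yards gained, label)

    def train_supervised(examples):
        # Learn a simple threshold separating "good" plays from "bad" ones.
        positives = [x for x, label in examples if label == 1]
        negatives = [x for x, label in examples if label == 0]
        return (min(positives) + max(negatives)) / 2

    # Reinforcement learning: no labels at all; the program tries options and
    # learns only from whether each attempt "wins" (reward 1) or "loses" (reward 0).
    def train_reinforcement(trials=1000):
        results = {"run": [0, 0], "pass": [0, 0]}  # [wins, attempts] per option
        for _ in range(trials):
            choice = random.choice(list(results))
            win_chance = 0.6 if choice == "pass" else 0.4  # hidden from the learner
            reward = 1 if random.random() < win_chance else 0
            results[choice][0] += reward
            results[choice][1] += 1
        return max(results, key=lambda k: results[k][0] / max(results[k][1], 1))

    print(train_supervised(labeled_plays))   # e.g., 1.85
    print(train_reinforcement())             # usually "pass"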

[195] If the data set included all past NFL games, this play call would in fact be available to the algorithm, as this example is based on a real NFL game where the Philadelphia Eagles (in)famously punted on second down against the Washington Redskins in 1986. The punt came on second-and-40 after four penalties and resulted in a blocked kick and a turnover for a touchdown. See 2nd Down Punt, Eagles-Redskins 1986, YouTube (Mar. 7, 2015), https://www.youtube.com/watch?v=kO2ILLMWEKs&feature=player_embedded (commenters uniformly denouncing the play as one of the worst plays (and worst drives) in NFL history).

[196] U.S. Copyright Office, Register of Copyrights, Sixty-Seventh Annual Rep. of the Register of Copyrights 7–8 (1964) (discussing the then-pending mandamus suit of Armstrong Cork Co. v. Kaminstein). Armstrong brought a suit to compel registration, but it was dismissed when Armstrong refused to reveal details about the way the machine operated, which it considered a trade secret.

[197] See supra note 162 and accompanying text.

[198] Bridy, supra note 13, at 399 (citing David Levy, Robots Unlimited: Life in a Virtual Age (2005)).

[199] See, e.g., Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 105 (2d Cir. 1951).

[200] See, e.g., Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harv. Univ. Press 2015); Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014) (defining black boxes as algorithms that transform data sets (inputs) into outputs without giving the user any information about how they do so); Roger Allan Ford & W. Nicholson Price II, Privacy and Accountability in Black-Box Medicine, 23 Mich. Telecomm. & Tech. L. Rev. 1, 11 n.38 (2016) (describing some algorithms as being either “unavoidably opaque” or “deliberately opaque”); Lehr & Ohm, supra note 18; see also John Searle, Minds, Brains, and Programs, 3 Behav. & Brain Sci. 417 (1980) (discussing his famous “Chinese Room” experiment and the possibly false assumptions we draw when we can’t access or can’t understand the steps the algorithm is taking).

[201] See, e.g., Kalev Leetaru, In Machines We Trust: Algorithms Are Getting Too Complex to Understand, Forbes (Jan. 4, 2016, 10:18 AM), https://www.forbes.com/sites/kalevleetaru/2016/01/04/in-machines-we-trust-algorithms-are-getting-too-complex-to-understand/#5c5b55d633a5; Marianne Lehnis, Can We Trust AI if We Don’t Know How It Works?, BBC News (June 15, 2018), https://www.bbc.com/news/business-44466213. But see Phil Wainewright, Why Humans Will Always Be Smarter Than Artificial Intelligence, Diginomica (Feb. 15, 2018), https://diginomica.com/why-humans-will-always-be-smarter-than-artificial-intelligence/.

[202] See, e.g., U.S. Copyright Office, supra note 196, at 7 (discussing Armstrong Cork Co. v. Kaminstein, No. 119-64 (D.D.C. filed Jan. 16, 1964), later dismissed because Armstrong did not wish to disclose how the machine operated, which it considered a trade secret).

[203] For a detailed discussion of how copyright law affects access to data sets that could mitigate bias in algorithms, see Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579 (2018).

[204] U.S. Copyright Office, supra note 29, at 5.

[205] Lindsay v. The Wrecked & Abandoned Vessel R.M.S. Titanic, 52 U.S.P.Q.2d 1609, 1613 (S.D.N.Y. 1999).

[206] Dean, supra note 93, at 12.

[207] Id.

[208] Id. at 2, 10, 24.

[209] MathWorks, supra note 6 (“When choosing between machine learning and deep learning, consider whether you have a high-performance GPU and lots of labeled data. If you don’t have either of those things, it may make more sense to use machine learning instead of deep learning.”).

[210] See Chris Woodford, Neural Networks, Explain That Stuff! (last updated April 4, 2019), https://www.explainthatstuff.com/introduction-to-neural-networks.html; see also Nikhil Buduma, Deep Learning in a Nutshell—What It Is, How It Works, Why Care?, KDnuggets (Jan. 2015), https://www.kdnuggets.com/2015/01/deep-learning-explanation-what-how-why.html.

[211] Lehr & Ohm, supra note 18, at 696–701.

[212] See, e.g., OTA Report, supra note 33, at 69 (“The proportion of the work that is the product of the machine, and the proportion that is the product of a human may vary. In many cases, as with word processing programs, the machine contributes little to the creation of a work; it is ‘transparent’ to the writer’s creativity. But with some programs, such as those that summarize (abstract) written articles, the processing done by the computer could constitute ‘an original work of authorship’ if it were done by a human being.”); Samuelson, supra note 22, at 1195–96 (questioning “whether interactive computing employs the computer as a co-creator, rather than as an instrument of creation”); Schönberger, supra note 27, at 41, 44 (“[S]ome of these systems have alienated themselves from human creatorship to a degree of autonomy where the contribution of the robot is substantial enough to acknowledge the artificial agent as co- or even main creator. . . . [I]t remains to be seen whether the initial programming of an artificial agent will keep sufficient legal proximity to the resulting work, even if the program has further developed possibly on its own account and to a degree of autonomy not predicted at its launch.”).

[213] Erickson v. Trinity Theatre, Inc., 13 F.3d 1061, 1071 (7th Cir. 1994) (stating that an author “must supply more than mere direction or ideas”).

[214] See Dean, supra note 93, at 21–23; Raicea, supra note 10.

[215] Lehr & Ohm, supra note 18, at 665, 684, 696, 698–700.

[216] Id. at 671–77.

[217] See, e.g., Selle v. Gibb, 741 F.2d 896 (7th Cir. 1984).

[218] See, e.g., Executive Office of the President, Big Data: Seizing Opportunities, Preserving Values 60 (2014) (recommending that “the federal government’s lead civil rights and consumer protection agencies should expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law.”).

[219] See, e.g., Citron & Pasquale, supra note 200; Pasquale, supra note 200; Lehr & Ohm, supra note 18, at 706 n.193; Ford & Price, supra note 200.

[220] See, e.g., Lehr & Ohm, supra note 18, at 704–05.

[221] Id. at 705–06.

[222] Id. at 708–09.

[223] Id. at 710.

[224] Id. at 708.

[225] Id. at 709–10.

[226] Explainable Artificial Intelligence, Wikipedia, https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence (last visited May 16, 2018).

[227] David Gunning, Explainable Artificial Intelligence, Def. Advanced Res. Projects Agency, https://www.darpa.mil/program/explainable-artificial-intelligence (last visited May 18, 2019) (providing a useful visual representation of the effect that explainable AI can have on the creative process in Figure 2).

[228] Katherine McTole, Bonsai Speaks on Explainability of Deep Learning at SF Meetup, Medium (Jan. 27, 2017), https://medium.com/@BonsaiAI/bonsai-speaks-on-explainability-of-deep-learning-at-sf-meetup-bef4c8a4e14e.

[229] Paul Voosen, How AI Detectives Are Cracking Open the Black Box of Deep Learning, Sci. Magazine (July 6, 2017, 2:00 PM), http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning.

[230] Ben Buchanan & Taylor Miller, Belfer Ctr. for Sci. & Int’l Aff., Machine Learning for Policymakers 32–43 (2017), https://www.belfercenter.org/sites/default/files/files/publication/MachineLearningforPolicymakers.pdf.

[231] See Samek, Wiegand & Muller, supra note 4.

[232] David Gunning, supra note 227.

[233] Powles, supra note 17.

[234] Id.

[235] Regulation (EU) 2016/679 of the European Parliament and the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119/1) 13–14.

[236] Lehr & Ohm, supra note 18, at 706.

[237] Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884).

[238] U.S. Copyright Office, supra note 29, at 5.