Abstract

It sounds like a judge. It presents the facts of a case like a judge. It summarizes the evidence like a judge. It states the law like a judge. It offers a legal opinion like a judge. Artificial Intelligence seems to tick all the boxes when it comes to judicial decision-making. And yet: Are we persuaded by AI? Can we rely on AI? Should we let AI judge us? Some scholars think so.

This article challenges the notion that AI systems can effectively assume the role of judges. Despite AI’s impressive advancements, particularly in natural language processing, it shows that the essence of judicial decision-making extends far beyond the mere production of legal texts. The judicial function encompasses complex cognitive processes, evaluative skills, and a nuanced understanding of societal contexts, as well as legal knowledge, that AI systems fundamentally lack.

Drawing on prevailing normative frameworks governing the role of the judge, this article demonstrates that AI falls short of all core capabilities required to fulfill the judicial task: social skills for fact-finding, legal skills for assessing the established facts, and the ability to provide reasoned justifications. Contrary to a popular narrative in scholarship, these failings are not merely quantitative; they reflect qualitative differences between human and machine intelligence. This, in turn, results in insurmountable structural incompatibilities between AI and the judicial decision-making process.

The article concludes that continuously tweaking AI in the hope that it will eventually become “good enough” to overcome these structural incompatibilities merely sets such efforts up for failure. Instead, any attempt to recalibrate legal systems to accommodate AI judges would require nothing short of a radical reconceptualization of the judiciary’s role in democratic societies, and thus of law itself.