
The Mechanical Clerk? A Federal Judge’s “Modest Proposal” on AI

Written by: Daniel Grant


Landscaping and Trampolines
Is an in-ground trampoline “landscaping”? Reasonable minds may differ. Some may think this is not a very important question, and the 11th Circuit treated it that way in deciding Snell v. United Specialty Insurance Company. The appeal involved a contractor whose liability insurance carrier refused to defend him in a lawsuit arising out of his installation of an in-ground trampoline, on the theory that the work fell outside the “landscaping” operations covered by his policy. The contractor countered that installing a trampoline is landscaping work, but his argument ultimately failed to sway the trial court. On appeal, the 11th Circuit’s opinion noted that both sides “expend significant energy” on this question before declining to answer it at all. Finding for the insurance company, the court held that the case could instead be decided on certain statutory technicalities of Alabama insurance law.

While he agreed with the majority’s reasoning, Judge Kevin Newsom, as a “self-respecting textualist,” was unsatisfied with the punt on definitions, and he set out to write a concurrence resolving the trampoline question once and for all. Almost immediately, however, he hit a snag: he did not know what “landscaping” meant either. He consulted several dictionaries and measured their definitions against his vague intuitions about landscaping, but found them lacking. He looked at pictures of the installation and asked his gut for the answer, but found his gut could not put words to why the pictures did not feel like landscaping.

But he had an idea. He had his clerk ask ChatGPT, along with a few other large language models (LLMs), for the ordinary meaning of “landscaping,” and then pose the ultimate question of whether installing an in-ground trampoline is landscaping. The models gave a suitable definition and answered the trampoline question in the affirmative, with explanations that made sense to him. The exercise did not change the outcome, since the appeal was resolved on the Alabama-law ground either way, but it moved him to write a 29-page concurrence imploring judges everywhere to “consider—consider” the use of AI in legal decisions.

Caution and Humility
In his 2023 Year-End Report on the Federal Judiciary, Chief Justice Roberts predicted that the work of judges would “be significantly affected by AI,” but warned readers that “any use of AI requires caution and humility.” Judge Newsom’s concurrence takes the former as a tacit recognition that “AI is here to stay,” and the latter as his cue to start figuring out how to use AI responsibly. Responses from the legal community have been mixed, with some judges calling the concurrence “brave and brilliant” and others describing it as “brave” but “misleading.” Newsom, for his part, spends much of his opinion weighing the benefits and risks of the project he has undertaken.

As a textualist, Newsom is primarily concerned with the plain meaning of texts, so his first argument in favor of LLMs is that they are trained on text written in ordinary language by people of varied backgrounds. As a result, he asserts, they could serve as a useful baseline for what “ordinary people” think words mean, and thus as a starting point for textualist applications of the plain meaning rule. He recognizes, however, that the backgrounds of the people whose writing is sampled for a model’s training data may not be as varied as one would hope, a well-known problem with LLMs in general, though he can respond only by noting that there is room for improvement and hoping the problem is not too severe. Crucially for Newsom, tools like ChatGPT are quite accessible to the bar and the public, far more so than the convoluted empirical methods, such as corpus linguistics, that struggling textualists have turned to for plain meanings in the past.

Newsom acknowledges the risk that now-infamous LLM “hallucinations” could infect decisions, but he expects the models to improve and observes that “flesh-and-blood lawyers hallucinate too.” He also recognizes that models could be maliciously altered to manipulate the justice system, though he thinks this risk can be reduced by querying multiple models and comparing their answers, an idea sketched below. Finally, he deals with fears that this could lead to a “dystopia” of “robo judges” by citing Chief Justice Roberts’s report for the proposition that weighing and consideration by human judges will always be necessary. Newsom is careful to note that he is “not, not, not” arguing for judges to rely blindly on ChatGPT, but rather making the “pretty modest [proposal]” that LLMs be one of many sources consulted in deciding the meaning of terms. This, he believes, significantly mitigates each of the risks described above.
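To make the cross-checking idea concrete, here is a minimal sketch in Python. Nothing in it comes from the opinion: the model names and canned answers are hypothetical stand-ins for real LLM API calls, and a real implementation would need far more care.

```python
from collections import Counter
from typing import Callable

# A "model" here is any function mapping a prompt to a short answer.
# The stubs below stand in for calls to independently operated LLMs.
Model = Callable[[str], str]

def cross_check(prompt: str, models: dict[str, Model]) -> str:
    """Pose the same question to several models, report the majority answer,
    and flag any disagreement for human review."""
    answers = {name: ask(prompt).strip().lower() for name, ask in models.items()}
    majority, votes = Counter(answers.values()).most_common(1)[0]
    if votes < len(models):
        print(f"Disagreement, take a closer look: {answers}")
    return majority

if __name__ == "__main__":
    question = "Is installing an in-ground trampoline 'landscaping'? Answer yes or no."
    stubs: dict[str, Model] = {
        "model-a": lambda p: "Yes",
        "model-b": lambda p: "Yes",
        "model-c": lambda p: "No",  # an outlier, perhaps tampered with
    }
    print(cross_check(question, stubs))  # prints the warning, then "yes"
```

The point of the design is simply that a single corrupted model cannot silently dictate the answer; an adversary would have to compromise several independently operated systems at once.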

Yet, while some onlooking judges have called his argument “brilliant,” others have suggested it displays several popular misunderstandings of how LLMs work. For instance, the utility of LLMs for establishing plain meanings of words is undercut not just by their hallucinations but by their tendency to give different answers to the very same question, a byproduct of the sampling step in text generation, as illustrated below. Meanwhile, his references to LLMs “understanding context” seem to contradict the technological reality that an LLM is incapable of understanding anything at all. His use of AI resources also arguably falls, alongside Wikipedia and other “unconventional sources,” outside the bounds of evidentiary rules and judicial guidelines.
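That inconsistency is easy to see first-hand. Below is a minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the environment; the model name and prompt are illustrative, not those used in Snell.

```python
# A small demonstration that identical prompts need not yield identical
# answers, because generation samples from a probability distribution over
# next tokens. Assumes the OpenAI Python SDK (openai >= 1.0) with an
# OPENAI_API_KEY set in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, what is the ordinary meaning of 'landscaping'?"

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling enabled: wording and substance may vary
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")

# Setting temperature=0 (near-greedy decoding) narrows the spread, but in
# practice it is still not a hard guarantee of identical outputs.
```

Lowering the temperature narrows the variation, but as these systems are deployed today it does not eliminate it, which sits uneasily with the idea of a single, stable “plain meaning.”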

If AI is here to stay, then Judge Newsom’s call for AI to be used in courtrooms is, if nothing else, a wake-up call. Readers who support judicial use of ChatGPT will find that Newsom boldly raises important questions about how to use AI tools professionally. Readers who do not will find that he draws attention to the need for rules around AI use, lest other judges follow his lead. Either way, his opinion is a shot of adrenaline to the heart of an already contentious cultural conversation about the use of AI in the law and in society at large.

Sources:

Isha Marathe, Judges React to 11th Circuit’s Gen AI Use: “Creative,” Occasionally “Misleading,” and “Brave”, Legaltech News (June 6, 2024, 3:42 PM).

John Roberts, 2023 Year-End Report on the Federal Judiciary, Supreme Court of the United States (December 31, 2023).

Scott Schlegel, The 11th Circuit’s Experiment with AI: Balancing Innovation and Judicial Integrity, Substack: [Sch]legal Tech (June 5, 2024).

Snell v. United Specialty Ins. Co., No. 22-12581 (11th Cir. May 28, 2024).
