From Dictionaries to AI: The Evolution of Statutory Interpretation
Austin Gergen*
Snell v. United Specialty Ins. Co. is a relatively routine case arising from an insurance dispute in the Eleventh Circuit.[1] However, Judge Newsom wrote a concurring opinion in which he proposed a thought-provoking idea: the potential use of AI-powered large language models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude in the interpretation of legal texts.[2] This proposal, while unconventional, opens a fascinating discussion about the future of legal interpretation and the role of technology in the judicial process.
A. Leveraging AI for Legal Interpretation
While it may seem out of the blue, the idea stems from the challenges of determining the ordinary meaning of terms not explicitly defined within legal documents. Traditionally, courts rely on dictionaries and other textual sources to ascertain the plain meaning of such terms. However, Judge Newsom suggests that LLMs, trained on vast amounts of data from the internet, could provide valuable insights into how ordinary people use language in their everyday lives. Indeed, he claims those “who believe that ‘ordinary meaning’ is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models . . . might—might—inform the interpretive analysis.”[3]
B. The Strengths of Using LLMs
LLMs’ primary strength is their ability to process and analyze large datasets, which include a wide range of linguistic inputs from various sources. This capability allows LLMs to offer statistical predictions about the ordinary meaning of words and phrases based on how they are commonly used in everyday language. According to Newsom’s understanding, “LLMs train on ordinary-language inputs.”[4] “Because they cast their nets so widely, LLMs can provide useful statistical predictions about how, in the main, ordinary people ordinarily use words and phrases in ordinary life.”[5]
Additionally, LLMs can understand context, which is crucial in legal interpretation. They can distinguish between different meanings of the same word based on the surrounding text, making them potentially more reliable than traditional dictionaries in specific contexts. Newsom recognizes that “LLM predictions about how we use words and phrases have gotten so sophisticated that they can . . . produce full-blown conversations, write essays and computer code, draft emails to co-workers, etc.”[6] Using an LLM to ask which of a word’s meanings is most applicable to the controverted sentence, for example, could help determine what a lay person would understand that word to mean.
C. Potential Drawbacks and Concerns
The advantages of using LLMs do not come without drawbacks. There are significant concerns about using this technology in any application, let alone in a legal context. One important problem is that of “hallucinations,” where the model generates incorrect or fabricated information.[7] Newsom recognizes that oftentimes, an LLM “generates facts that, well, just aren’t true—or at least not quite true.”[8] Indeed, attorneys have been caught using hallucinated cases in their briefs.[9] Accordingly, sole reliance on LLMs at this stage in their development is inappropriate. This does not mean, however, that they should not be used as one of the tools available in interpreting a contract or a statute.
As with all technologies, another concern is who is left behind. Indeed, depending on their training set, LLMs may not fully capture the linguistic nuances of underrepresented populations, as their training data come primarily from online sources and books, which may not be fully representative of all demographics. Newsom notes that because “[p]eople living in poorer communities . . . are less likely to have ready internet access [they] may be less likely to contribute to the sources from which LLMs draw in crafting their responses to queries.”[10]
D. Integrating LLMs into Legal Interpretation
To maximize the utility of LLMs in legal interpretation, refining how they are used is essential. Newsom provides several options to counteract some of these problems. His recommendations include carefully crafting queries (or “prompts”) to ensure the models offer relevant and accurate information. Additionally, using multiple models and comparing their outputs can help mitigate the risk of relying on any single model’s potentially flawed response. Indeed, he goes so far as to recommend that users “report the prompts they use and the range of results they obtain.”[11]
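Newsom’s suggested safeguards amount to a simple protocol: pose the same interpretive question to several models, disclose the exact prompt used, and report the full range of answers rather than a single output. A minimal sketch of that protocol might look like the following; the model names and their answers here are invented stand-ins, not real API calls.

```python
# A hypothetical sketch of Newsom's suggested practice: query multiple
# models with one prompt, record the prompt, and report the range of
# results. The "models" are stand-in functions, not real API clients.

def survey_models(prompt, models):
    """Ask each model the same prompt and collect every response."""
    results = {name: ask(prompt) for name, ask in models.items()}
    return {
        "prompt": prompt,                        # disclose the exact query
        "responses": results,                    # every model's answer
        "unanimous": len(set(results.values())) == 1,
    }

# Invented stand-ins for three different LLMs.
models = {
    "model_a": lambda p: "physical damage to property",
    "model_b": lambda p: "physical damage to property",
    "model_c": lambda p: "harm to tangible property",
}

report = survey_models(
    "In ordinary usage, what does 'physical injury to property' mean?",
    models,
)
print(report["unanimous"])  # False: the models diverge, so report the range
```

The point of the design is transparency: because any single model’s response may be flawed, the prompt and the spread of answers, agreement or disagreement included, are what a court or counsel would disclose.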
Furthermore, it is crucial to consider the temporal aspect of language. If one believes that legal texts must be interpreted based on the meaning of words when they were written, developing LLMs that can focus on specific timeframes would enhance their applicability in legal contexts. Newsom notes, “[i]f LLMs are to be deployed to aid more broadly in the search for ordinary meaning, it would be enormously helpful . . . for AI engineers to devise a way in which queries could be limited to particular timeframes.”[12]
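At the data level, one way to imagine the timeframe-limited queries Newsom calls for is usage evidence drawn from a dated corpus and restricted to the years surrounding a text’s enactment. The toy corpus below is invented purely for illustration.

```python
# A hypothetical illustration of timeframe-limited usage evidence:
# restrict a dated corpus to a window around enactment, so that only
# period-appropriate usages inform the interpretation. Corpus invented.

corpus = [
    (1952, "the landlord restored the premises"),
    (1953, "the premises of the argument were flawed"),
    (2021, "she logged the incident in the premises system"),
]

def usage_in_window(corpus, word, start, end):
    """Return sentences from the window that use the given word."""
    return [s for (year, s) in corpus if start <= year <= end and word in s]

hits = usage_in_window(corpus, "premises", 1950, 1960)
print(len(hits))  # 2: only mid-century usages bear on a 1950s-era text
```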
Conclusion
The integration of AI-powered large language models into legal interpretation holds immense promise. These advanced tools could significantly enhance our understanding of statutory language, providing a more nuanced and comprehensive approach to determining the plain meaning of legal texts. By leveraging LLMs’ vast linguistic data and sophisticated contextual analysis capabilities, we may stand on the brink of a new era in legal interpretation—one that is more aligned with the everyday language and understanding of ordinary people.
Newsom notes, “[a]t the very least, it no longer strikes me as ridiculous to think that an LLM like ChatGPT might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts.”[13]
While challenges and ethical considerations remain, the potential benefits of incorporating AI into the judicial process are too significant to ignore. The journey towards integrating AI in the courtroom is just beginning, and it promises to be exciting and transformative.
*J.D. Candidate, University of Tennessee College of Law, Class of 2025.
[1] Snell v. United Specialty Ins. Co., 102 F.4th 1208, 1211 (11th Cir. 2024).
[2] Id. at 1221–34 (Newsom, J., concurring).
[3] Id. at 1221.
[4] Id. at 1226.
[5] Id.
[6] Id. at 1228.
[7] Id. at 1230.
[8] Id. (citing Yonathan A. Arbel & David A. Hoffman, Generative Interpretation, 99 N.Y.U. L. Rev. 451, 502–03 (2024)).
[9] Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, N.Y. Times (May 27, 2023), https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.
[10] Snell, 102 F.4th at 1231.
[11] Id. at 1233 (citing Arbel & Hoffman, supra note 8, at 460).
[12] Id. at 1233–34.
[13] Id. at 1234.

