You Practice How You Play: How Over-Relying on AI in Law School Can Lead to Diminished Critical Thinking Skills

By: Riley Walker

Our greatest skill as attorneys will be problem solving. Our clients will pay us to solve their legal problems, and law school teaches us to do that through understanding the facts and issues, identifying possible solutions, and determining the best course of action. Problem solving is inextricably tied to and impacted by our ability to think critically. The sum of our critical thinking skills is the value we will bring to the client. To maximize our value, we need to maximize our critical thinking skills.

Our critical thinking skills are under attack, and we cannot be idle. These skills are like muscles that need to be worked in the gym: the only way to make them stronger is by exercising. The easiest way to make them weaker is by leaving them alone. And it is the latter that we all can fall victim to—consciously or subconsciously—by finding shortcuts and allowing AI to think for us in law school.

Neil Postman, author of Amusing Ourselves to Death, a book that contrasts two bleak visions of a future overrun by technology, notes that as “[Aldous Huxley] saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.”[1] Huxley wrote Brave New World, a dystopian warning that our human love of technology will be our demise. Of the two visions, Huxley’s seems the more prescient in our modern relationship with the internet and generative AI. In the educational context, students can become infatuated with AI because of the mental energy, time, and critical thinking it saves them.

This blog post addresses our desire to take the AI shortcut in school. It is a warning that over-dependence on AI can lead to diminished critical thinking skills. Let me be clear: AI is not necessarily the enemy in and of itself; rather, our decision as students to over-rely on AI and prioritize it above our own learning is. The purpose of this blog post is to identify the issue this generation of law students is facing, show how students are complicit, and suggest how students may mitigate their AI usage and exercise those critical thinking skills.

The Problem

According to one report, American adults spend an average of 7 hours and 3 minutes per day looking at their phones.[2] It isn’t hard to convince the average person that this is a problem, one that pervades every aspect of our lives. AI presents a new risk because of its “thinking” capabilities. On a very broad scale, generative AI can receive input and generate tailored output that mimics human conversation and thinking. A study by Carnegie Mellon and Microsoft researchers found that human critical thinking efforts have shifted in three ways because of AI use. First, there is a shift from information gathering to information verification.[3] Second, there is a shift from problem-solving to response integration.[4] Third, there is a shift from task execution to oversight.[5]

As the next section discusses, these shifts can be applied to a law student’s use of AI, and I argue that they are not to our benefit. They provide a framework for understanding how AI use can weaken our critical thinking skills as we engage AI to do the critical thinking for us. Before AI entered our legal problem solving, clients received the full value of our own ability to think critically. Without that human element, allowing AI to shift our critical thinking efforts makes our value only as good as the AI tool we use.

How We Are Complicit

The Shift from Information Gathering to Information Verification

We gather information when we read a textbook, filled with cases and notes for further analysis. This is an important aspect of being a student because it prepares and trains us for taking in information in practice. Information gathering is the first step of a legal problem-solving analysis: it is this step where we must determine the high points of a fact pattern, the issues presented, the current legal landscape, and so on. As every 1L knows all too well, learning how to outline a case is critical to gathering the information necessary to understand the textbook reading.

When students use AI to summarize the textbook or cases, the student’s active role in their own learning becomes diminished. Instead of taking in the information first-hand, the student must (or at a minimum should) review the textbook or case to ensure that the information gathered is accurate. Gathering information becomes a degree removed—rather than gathering the information using our own reading comprehension skills, we gather it from a source that regurgitates the information to us. We abdicate our role as information gatherers and depend on the computer to do it for us.

The more we allow AI to do this first step, the more removed we become. If we allow another thinker to tell us what is important, our ability to recognize important facts falls to the background. We become more passive. And while we may be the victims of our own actions, if this methodology persists into practice, our clients will become victims of our own acquiescence, too.

The Shift from Problem Solving to Response Integration

Our problem-solving skills are in part built by our ability to apply the law to a set of facts, synthesize cases to see how they fit together, make comparisons between cases, and so on. A good lawyer knows how to argue that a favorable case is similar to theirs and that a not-so-favorable case is different from theirs. The better one is at these skills, the better the client is served.

Students can also use AI in place of their own active involvement in these tasks. When we allow AI to take charge of these activities, letting it tell us how facts are different or the same between cases, our role changes from active problem solver to response integrator. As response integrators, we allow AI to do the problem-solving for us, and the only thinking left to us is figuring out where the AI’s response fits into the solution we are navigating.

When we take an active role in problem solving, we are in the driver’s seat for analysis, synthesis, and comparison. Our clients will only be served as well as the AI can analyze, compare, and synthesize. It is no longer our personal value we bring to the table, but rather the value of AI’s ability to problem solve.

The Shift from Task Execution to Oversight

As lawyers, we should be in charge of completing our own problem-solving tasks. This means that we assemble the final product—whether that is a draft of a contract, a plaintiff’s complaint, or a demand letter. At the end of the day, we are the ones responsible for our finished product. A finished product that reflects our own time, effort, and skills will be a finished product that serves the client well.

Allowing AI to execute our tasks puts us merely in charge of overseeing those tasks and the finished product. Again, we allow ourselves to move away from an active, participatory role and into a passive, idle one. When we let AI take the driver’s seat and execute our tasks for us, AI, as the executor of those tasks, becomes fully in charge.

Our job will be to create products that bring value to our clients. Our ultimate goal should be to serve our clients well with this mission in mind. If we over-rely on AI to execute tasks for us, then our value is found in the products AI creates, rather than the products we create.

The Way Forward

As the old saying goes, “the way you practice is the way you play.” There is a certain reward to be gained in knowing that our own skills make us better advocates, counselors, and attorneys. The path to becoming better attorneys begins as students. When we practice being lazy and allowing AI to take the driver’s seat, we are preparing for a career of passivity.

To be better, we need to mitigate our reliance on AI. It is important to read actively, to practice synthesizing and analyzing case law, and to prioritize our own critical thinking skills. To heed Huxley’s warning, we need to love our future selves and clients more than we love the present, short-term benefits of the glittering technology before us.


[1] Neil Postman, Amusing Ourselves to Death, at xix (20th anniversary ed. 2005).  

[2] Revealing Average Screen Time Statistics, Backlinko (Jan. 30, 2025), https://backlinko.com/screen-time-statistics.

[3] Hao-Ping (Hank) Lee et al., The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers, in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Article No. 1121, § 6.2, https://dl.acm.org/doi/10.1145/3706598.3713778.

[4] Id.

[5] Id.

