1

Don’t Panic

Kim Jones Penepacker is a lawyer in Dallas, Texas, who often spends the better part of her workday in the company of AI. She’s not just using AI; she’s working alongside it. It monitors her constantly and gives her feedback on her job performance.
That might sound dystopian, even Orwellian, but Kim wouldn’t have it any other way. And I’m willing to bet that were you in her shoes, you’d feel the same. Or at least I hope so, because whatever your job may be, it’s likely that your experience of work will someday resemble hers.
In the not-too-distant future, almost all of us will do our work both better and faster as a result of AI. There will be holdouts, people who remain convinced that the usurpation of their cognitive labors by AI will compromise the quality of their work or could lead to their own irrelevance. But for most of us, delegating to AI the parts of our jobs that are tedious and time-consuming will be liberating.
Kim is a character—maybe even a bit of a caricature. She’s a personal injury lawyer at Aulsbrook Car & Truck Wreck Injury Lawyers, which bills itself as “The Home of the Texas Law Dog.” (The “law dog” in question is the firm’s founder, Matthew E. Aulsbrook, who never appears in court without his trademark jet-black Stetson cowboy hat.) Kim got her law degree at Baylor, a Christian university in central Texas, met her husband when volunteering at an animal shelter, dotes on her two dogs, and is the brainy, slightly nerdy counterpoint to Matthew’s twangy braggadocio.
On its face, using AI in the legal field feels like a recipe for disaster. And Kim is well aware of the pitfalls. One of the first things she told me when we talked about her work was how different her approach to using AI is from that of certain other attorneys, whose attempts to cut corners with it have ended in very public disaster.
Ever since the debut of ChatGPT, less-than-careful attorneys have gotten into trouble for attempting to use it to write legal briefs, producing citations of made-up cases and nonsense legal reasoning, and even drawing sanctions from judges. In one infamous case, a New York judge fined a pair of lawyers $5,000 for making “false and misleading statements to the court” in a ChatGPT-written brief that included a half-dozen fictitious cases. Their response, which would be laughable if it weren’t typical of the way many Americans think modern AI works, was that they had “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
People who use AI well know you can’t simply ask AI to do your job for you, whether in law or any other field. Yet there are many, many tasks in which AI can help lawyers—as long as they use it appropriately. From legal discovery, in which varieties of AI that allow for new kinds of fuzzy or “semantic” searches are tremendously helpful, to the appropriate use of AI in writing briefs, it’s possible that no field will be more thoroughly transformed by AI in the coming years than the law.
Taking depositions is a prime example. It’s part of the research and fact-gathering part of a legal confrontation, and it happens before two parties settle or go to trial. A deposition usually takes place in an office or over Zoom and is a kind of structured interview, a more sedate version of the questioning of a witness we’ve all seen in courtroom dramas.
For almost all of history, a deposition required nothing more technologically sophisticated than pen and paper. Lawyers would come up with a list of questions relevant to the case and try to pin down the person they were deposing in the hope that they would say things that would help their client’s case. Lying in a deposition is perjury, so this process isn’t about tricking someone so much as it’s about asking direct questions and getting clear answers.
The pre-AI version of this process demanded a kind of mental gymnastics that could cause even experienced lawyers to come away without what they needed from the person they were deposing. They had to listen closely to what someone was actually saying as it might be rendered in a transcript, improvise appropriate follow-up questions, and all the while verify that they were getting what they needed out of the interview.
This is where AI comes in. Kim uses an AI “copilot” from the legal tech startup Filevine. Before the deposition, she uploads a list of all of her questions. During the deposition, Filevine’s copilot records the conversation and transcribes it in real time. This is itself a minor miracle of AI technology, but not a new one. AI transcription systems have existed for decades but have only in recent years come close to the accuracy of an experienced human who is familiar with the specialized vocabulary of a field.
As the deposition proceeds, Filevine’s copilot feeds the transcript to a large language model—the same thing that powers cutting-edge chatbots such as the ones behind OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and the like. That model has been primed with plain-English written instructions. Those instructions tell the model to compare all of Kim’s questions to the transcript of the deposition as the deposition is happening.
This is where the magic of modern AI kicks in. Unlike a conversation with a chatbot, which plods along at a pace dictated by a human’s ability to read and respond, this is a conversation between an AI agent and itself. And the information being fed to the AI agent and compared, over and over again, to Kim’s initial list of deposition questions is the transcript of the conversation between two humans. Filevine’s AI is, in short, left to complete a task on its own. That task is relatively simple for a human but until quite recently was utterly impossible for a machine: to determine whether or not Kim has gotten sufficiently clear answers to all the questions she brought to the deposition.
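For readers who want to see the shape of that loop, here is a toy sketch, not Filevine’s actual system. Every name in it is hypothetical, and a crude keyword check stands in for the language model’s judgment (a real copilot would send each question and the running transcript to an LLM); the point is only the structure: every time new testimony arrives, each prepared question is re-checked against the transcript so far.

```python
# Toy sketch of a deposition-copilot loop. Not Filevine's system;
# the "LLM" here is a stub so the example is self-contained.

def llm_stub_is_answered(question: str, transcript: str) -> bool:
    """Stand-in for an LLM call: does the transcript appear to
    address the question? A real system would ask a language model."""
    keywords = {w.lower().strip("?,.") for w in question.split() if len(w) > 4}
    text = transcript.lower()
    return bool(keywords) and all(k in text for k in keywords)

def review_transcript(questions: list[str], transcript: str) -> dict[str, bool]:
    """Re-check every prepared question against the running transcript,
    returning a covered/uncovered flag for each."""
    return {q: llm_stub_is_answered(q, transcript) for q in questions}

questions = ["Did the defendant change lanes?"]
transcript = "A: I, the defendant, changed lanes before the crash."
status = review_transcript(questions, transcript)  # flags the question as covered
```

In a real copilot, the judgment call would be far subtler than keyword overlap, which is exactly why the LLM sits at the center of the loop.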
In some ways, the deposition AI copilot is worse than a human at this task. It can be pedantic, simplistic, too literal; its judgment isn’t always great. But even if it has its limitations, its mind doesn’t wander and it can catch things that Kim would otherwise miss.
“We are attorneys, we’re biased,” said Kim, recalling a deposition she had been conducting for a case following a car accident. “Broadly, my goal was, I wanted this defendant to say my client did not cause the crash.” She got the defendant to admit that her client had never left their lane, and in her mind, that was the answer she needed. But the deposition AI copilot wasn’t satisfied. It stubbornly refused to check off the question she’d entered into it before the deposition had begun: Was her client not responsible for the crash? “Logically, I know when someone says that my client never left their lane, then the defendant must have left their lane and caused the crash. But the deposition copilot said that while I did get them to admit they changed lanes, I didn’t complete the goal.”
This is the kind of subtlety that’s easy to miss, especially when an attorney is working by themselves and facing a reluctant witness. Were one present, a second attorney on Kim’s side might catch the moment a defendant answers a question in a way that’s unsatisfactory. But in this case, the AI is that second attorney. It’s not perfect and it can’t do her job for her, but it gives Kim a leg up on the competition.
Kim said that having a deposition copilot listen in on her interviews and automatically flag when she hasn’t met her goals means that she’s that much more likely to get the information she needs to leverage a better deal in a settlement—or to get exactly the testimony she needs to win if the case goes to trial. She uses it almost every time she does a deposition. The first time we talked, that meant she’d used it every day the week before, a mark of how it had become indispensable to her. “I feel like a lot of lawyers are looking at AI as ‘How can it do my job for me?’ and not ‘How can I enhance my job performance with it?’ ” she said. “At the end of the day, it’s a tool. As exciting as AI is, using it to enhance what you are already doing is key for me.”
Copyright © 2026 by Christopher Mims. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.