Before You Ask ChatGPT About Your Legal Case, Read This

Here's a situation I've seen play out more than once: a client has an accident, a contract dispute, or a legal problem. Before they call a lawyer, they spend an hour talking to an AI. They describe what happened, ask whether they have a case, vent about how unfair the situation is, and maybe ask the AI to research their options. By the time they walk into my office, they've already built a whole framework in their head about their situation, one the AI helped construct.
Some of what the AI told them was helpful. Some of it was wrong. This isn't even a new phenomenon. I used to beg clients to stop Googling things because it never helped their case. Now I make the same plea about ChatGPT. The main difference is that their search history was almost certainly never going to be relevant in discovery; their AI use potentially is.
That last point is what I want to focus on here. AI tools are genuinely useful, and I'm not writing this to tell you to avoid them entirely. But before you type anything related to a legal matter into a consumer AI tool, you need to understand what you're actually doing, and what it could cost you.

Understand: AI Is Not Your Lawyer
When you talk to your attorney, those conversations are protected by the attorney-client privilege. That protection exists because the law recognizes that people need to be able to speak candidly with their lawyers without fear that those conversations will be used against them later. It is one of the oldest protections in American law, and it is genuinely robust.
When you type into ChatGPT, Claude, or any other consumer AI tool, that protection does not exist. Not even close.
The AI company's terms of service (the long document you clicked through without reading) typically permit the company to store your conversations, use them to train its models, and disclose them to government authorities when legally required. To put it concretely: one major AI provider's published privacy policy states that it collects what you type in ("inputs") and what the AI generates back ("outputs"), that it uses this data to train and improve the AI, and that it reserves the right to share this data with "third parties" including "governmental regulatory authorities." I am not speculating about what might happen. The leading AI providers have published privacy policies that explicitly say these things.

In a federal criminal case in New York earlier this year, a defendant tried to use attorney-client privilege to block the government from using his AI chat records. The judge issued first a bench ruling and later a 12-page written opinion rejecting that argument. The AI company's own privacy policy, which permitted the company to collect user inputs, use them for training, and share them with government authorities, destroyed any claim of confidentiality. A Harvard Law professor had recently published an article explaining why courts are unlikely to ever treat AI conversations as privileged: privilege exists to protect human relationships of trust with licensed professionals, not interactions with commercial software. The judge cited that article directly.
The rule is straightforward: if you typed it into a consumer AI tool, you should assume someone else can read it.
Pause Before You Prompt
The period between when something happens and when you retain an attorney is the most dangerous time to be using AI for anything case-related. There is no attorney-client privilege yet. There is no work product protection (a doctrine that shields the things attorneys do to prepare for litigation). There is just you, a commercial software product, and a stored record of everything you said.
♦ The "venting" problem. Many people use AI as a sounding board after a difficult event. That is natural. If you just had an accident, a bad business deal, or a dispute with a contractor, the impulse to process it is real, and AI is available at midnight in a way that your friends aren't. But those conversations become a written record. If you described the accident and said something like "I should have been paying more attention," or described the dispute and acknowledged the other party had a point, those statements exist, they are stored, and they are potentially available to opposing counsel.
Think of it this way: if you wrote all of that in a letter and mailed it to a stranger, you would not expect it to be confidential. A consumer AI tool is, legally speaking, that stranger.
♦ The research problem. Researching your situation is also risky in a way people don't anticipate. Not because research is bad, but because the process of researching creates a record. The specific questions you ask reveal what you are thinking, and what you already know, about your own case. "I had a hole in my parking lot for the last six months I've been meaning to fix, someone just fell in it, what happens next?" or "I'm really struggling with going to those doctors my attorney wanted me to go to, would stopping those appointments affect my case?" Those prompts, those questions, tell the other side a great deal about your concerns, your knowledge, and your thinking. That is information you would not volunteer in a deposition, but you may have already handed it over.
Avoid Legal Hiccups & Hallucinations
Once you retain an attorney and litigation is underway, the calculus changes somewhat, but the risk does not go away. Here are the three specific problems I see most often.
♦ Problem 1: Consumer tools still have the same confidentiality problem. Having an attorney does not retroactively make your AI conversations privileged, and using AI on your own, separate from your attorney's work, still creates the same disclosure risk. If you take it upon yourself to research your case, draft a timeline of events in ChatGPT, or describe your situation to an AI tool to "prepare" for a deposition, those records are still potentially discoverable. Your attorney's work is protected. Your independent AI use is probably not.
There is an important distinction here. Courts have always allowed your attorney to bring in a specialist (an accountant, a consultant, an investigator) to help the attorney understand your situation, and those communications can be protected. That specialist is essentially an extension of your attorney. The same principle may, but will not always, apply to AI: if your attorney specifically directs you to use an AI tool to prepare materials for their review, using a secure platform with proper confidentiality protections, that use may be protected. But the key words are "your attorney directs you" and "secure platform." If you are using AI on your own initiative, on a consumer tool, without your attorney's involvement, that is not protected, regardless of whether you later share the results with your lawyer.
♦ Problem 2: AI filters what you share with your attorney, and that filter costs you. This is the problem I worry about most as a litigator, not because it is more damaging than the discovery issue, but because it is so insidious and invisible. When a client uses AI to research their case before meeting with me (and this applies just as much to Google or your best buddy Bob), they arrive having already decided what matters. The AI told them "in a case like this, the key issues are X, Y, and Z," so they tell me about X, Y, and Z.
What they don't tell me is A, B, and C. Not because they're hiding anything, but because the AI didn't flag those things as important. The problem is that I needed to know about A, B, and C. Maybe there's a prior incident that affects our statute of limitations. Maybe there's a prior relationship between the parties that creates a contractual issue. Maybe something the client said to a witness right after the incident is critical. Maybe there is another solution out there that better fits the client's goals, but I can't find it if they don't tell me everything. The AI, working with general knowledge and no understanding of the specific legal and factual context of this particular matter, made judgment calls about relevance that were wrong, and because of that the client was never fully open with their attorney.
Your attorney needs the unfiltered version of what happened. The AI-filtered version leaves gaps, and those gaps become problems. Sometimes serious ones. It's a high-tech game of telephone where a computer decides what matters and what to pass on.
♦ Problem 3: AI hallucinations create work, not savings. AI tools sometimes state false things confidently. Research on hallucinations is ongoing, and I've seen a variety of statistics for how often they occur, so the exact rate is hard to pin down. That rate is going down as these tools get better, but it will likely never be zero. Hallucinations can also be more, or less, likely depending on the context you give the AI and the prompt you use. Your attorney won't necessarily know what prompt or context you provided to the AI tool you used, so they can't assess that risk. What we do know is that any error rate, particularly without accountability or a reliable way to discover that a mistake was made, can be costly in litigation.
When a client arrives with an AI-generated understanding of their case that is partially wrong, I have to figure out what is accurate, correct the misconceptions, and explain why the AI was wrong. That takes time. Your attorney bills by the hour. The AI was supposed to save you money; instead, it created a set of incorrect assumptions that now need to be dismantled and rebuilt.
Worse, these issues can create miscommunication between you and your attorney, and it may take many meetings, even the passage of months, before you realize what the source of the miscommunication is.
I have had clients come in convinced of a legal principle that does not exist in Texas, or certain that a particular outcome is likely, based on AI research that simply was not accurate. Correcting course after the fact sometimes takes longer than starting from scratch.
AI works best as a tool for organizing and preparing, not as a substitute for legal expertise. AI is unbelievably helpful for a lot of things in the law, but it isn't at the point where it can replace the counseling role of an attorney.
Use AI for Strategy & Efficiency
None of this means AI has no role in your legal matter. It can genuinely help, in the right hands, used the right way. AI can serve as a force multiplier for your attorney. Your attorney may be using AI tools to research, draft, and analyze materials more efficiently on your behalf. Done correctly, with enterprise-grade tools that have proper confidentiality protections, that work is protected. An attorney's use of AI to develop legal strategy, analyze case law, and draft arguments is shielded from disclosure under the work product doctrine, one of the strongest protections in litigation. You benefit from that efficiency without the exposure. That efficiency can mean decreased cost or, even more likely, increased effectiveness.
Your attorney may also ask you to use AI as part of the legal process. Maybe to organize a timeline of events, prepare a list of key documents, or compile information before a meeting. When your attorney gives you that direction and points you to a secure tool, the law can treat that the same way it treats your conversations with your lawyer. It works the same way as when your attorney asks you to use other software or an outside vendor; that work is protected too. What matters is that your attorney told you to do it and pointed you to a tool that keeps things confidential.

Communicate with Your Attorney IRL
If you are involved in litigation or anticipate that you might be, here is specifically what to address with your attorney:
- TELL YOUR ATTORNEY WHAT AI YOU HAVE ALREADY USED. If you used any AI tool in connection with this matter before you retained counsel, or after, tell your attorney. Provide details. Explain what you discussed and when. Remember that your attorney is on your side, and the better the information you provide, the better they can help you. Your attorney needs to know what exists before opposing counsel asks for it in discovery.
- ASK WHAT AI USE IS APPROPRIATE GOING FORWARD. Your attorney can tell you what, if anything, you can appropriately use AI for during the representation. The answer will depend on the nature of your matter, what tools you have access to, and what your attorney's own workflow looks like. If your attorney does ask you to use AI for something specific (preparing a timeline, organizing documents, compiling facts), follow their instructions on which tool to use and what to include. That direction from your attorney is what makes the difference between AI use that is protected and AI use that is not.
- ASK WHETHER YOUR ATTORNEY USES AI AND HOW. This is a fair question. Attorneys who use AI are required to use tools with proper confidentiality protections; the consumer versions are not appropriate for client work under Texas ethics rules. Ask whether your attorney has a policy, and what tools they use. Ask them about their competency in using those tools. Get an understanding of how that may affect their billing and what benefits they see from their use.
- COMMIT TO THE UNFILTERED VERSION. Tell your attorney everything relevant, without prescreening it through an AI lens. Let your attorney decide what matters. That is what you are paying for.
The Bottom Line
AI is not your lawyer, it is not your therapist, and for legal purposes it is not your confidant. Consumer AI tools store what you type, their terms permit disclosure, and their outputs are sometimes simply wrong. The risks of unsupervised AI use in connection with a legal matter, before and during litigation, are real and concrete.
Used correctly, with your attorney's guidance and with the right tools, AI can make legal representation more efficient and more thorough. The path to that outcome runs through your attorney, not around them.
So go ahead and use AI to choose your outfit for the day. Get your horoscope, generate silly pictures of your cat, think through that letter to your mom, or use it for the million other things this amazing technology can bring you. Just don't unknowingly damage your case before it gets started.
If you have a legal matter and questions about how AI fits into it, I'm glad to have that conversation.
DISCLAIMER: The information in this article is for educational purposes only and does not constitute legal advice. If you have specific questions about your situation, consult an attorney licensed in your jurisdiction.
Mark Altman is an attorney at Naman, Howell, Smith & Lee, PLLC in Waco, Texas. His practice focuses on personal injury litigation in Texas state and federal courts. He is also a reserve Navy JAGC officer. To learn more about Mark Altman, please visit his website bio HERE.
For more information about Naman Howell's legal capabilities, please visit HERE.
About Naman Howell
Since 1917, Naman Howell Smith & Lee has provided individuals and businesses throughout Texas with the personal attentiveness and expertise they need on their legal matters when they need it most. We pride ourselves on our heritage, vision, and exceptional results. For more information about Naman Howell, please visit namanhowell.com.