Is your AI chat discoverable?

On February 17, 2026, Judge Jed Rakoff of the Southern District of New York ruled in United States v. Heppner that a criminal defendant's written exchanges with an AI platform were not protected by either the attorney-client privilege or the work product doctrine, and therefore had to be turned over to federal prosecutors. The court described the question as one of first impression nationwide.

The practical takeaway is simple. If you type something into a public AI tool, assume the government, a regulator, or an adversary in litigation can read it later. This will be an especially hard lesson for self-represented litigants, who may not appreciate the protection that privileged communications with an attorney would otherwise provide.

What Happened in the Case

Bradley Heppner was indicted for securities fraud and related charges arising out of his role at GWG Holdings, Inc. After he received a grand jury subpoena and learned he was a target of the investigation, but before he was indicted, Heppner used Claude to prepare approximately thirty-one documents analyzing his potential defenses, the facts, and the law. He later shared those documents with his lawyers. The FBI seized them during a search of his home.

Heppner's counsel argued the documents were privileged because Heppner had prepared them to help his lawyers give him legal advice, and because he later shared them with counsel. Judge Rakoff rejected both theories.

Why the Privilege Claim Failed

The attorney-client privilege protects confidential communications between a client and attorney made for the purpose of obtaining legal advice. Judge Rakoff found the AI exchanges failed on at least two of those elements.

First, Claude is not an attorney. Discussions between a client and a non-attorney are generally not privileged, and AI platforms have no licensed professional on the other end who owes fiduciary duties or is subject to professional discipline.

Second, the communications were not confidential. Judge Rakoff pointed specifically to Anthropic’s Consumer Terms of Service in effect at the time, which disclosed that the company collects user inputs and outputs, can use them to improve its models, and reserves the right to share data with third parties, including governmental authorities. That disclosure, in the court's view, defeated any reasonable expectation of confidentiality. Heppner was using the consumer version of Claude, where the terms expressly contemplated this kind of data handling. By contrast, Anthropic's Commercial Terms of Service expressly prohibit training on customer data and the sharing of customer data.

The work product claim failed for a related reason. The work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Heppner's lawyers conceded they had not directed him to use Claude. He did it on his own. The court held that documents a client generates without counsel's direction, even when the client intends to share them with counsel later, do not become work product.

What This Means for You

Heppner is a district court opinion, so it is not binding on any other court. But the reasoning is straightforward, rests on well-settled privilege principles, and will almost certainly be followed elsewhere. Expect to see it cited in civil litigation and regulatory matters, not just criminal cases.

A few practical consequences for business owners, executives, and individuals:

If you are facing any kind of investigation, lawsuit, regulatory inquiry, or potential dispute, do not use a public AI tool to analyze it. Once you type the facts into ChatGPT or Claude, you have likely waived any claim to confidentiality over that information. The platform provider, not you, controls what happens to it next.

If your lawyer uses AI to help prepare your case, that is a meaningfully different situation, but only if the lawyer is using it the right way. The distinction is worth unpacking, because it drives the whole analysis. If you are unsure whether your lawyer knows how to use AI safely, ask them. They should be able to explain how they use AI while working on your case.

There are two versions of most major AI products. The consumer version (Claude Free, Pro, Max; ChatGPT Free, Plus, Pro; and similar) is governed by consumer terms of service. The enterprise or commercial version (Claude for Work, Claude Enterprise, the Anthropic API, ChatGPT Enterprise, Microsoft Copilot for Business, and similar) is governed by commercial terms of service, which are materially more protective. On the enterprise tiers, the providers contractually commit not to train models on customer inputs and outputs, treat the customer's organization as the data controller, and generally operate more like a traditional SaaS vendor handling confidential information.

That distinction matters under Judge Rakoff's reasoning. His confidentiality analysis turned on what Anthropic's consumer privacy policy told users about data handling. A lawyer using an enterprise-tier AI product under a business contract is operating under a fundamentally different set of representations, closer in structure to how lawyers have long used outside vendors such as document review platforms, e-discovery processors, and transcription services, all of which can fall within the privilege when used in the course of legal representation.

The other half of the equation is direction. Heppner turned on the fact that the defendant used Claude on his own, without counsel's direction. Judge Rakoff expressly distinguished that situation from one where a lawyer directs the use of a tool as part of preparing the case, which has long been recognized as potentially falling within the privilege under cases like United States v. Kovel, 296 F.2d 918 (2d Cir. 1961). A lawyer using an enterprise-grade AI tool, under the lawyer's direction, as part of the actual work of representation, is a far stronger candidate for privilege and work product protection than a client freelancing on a consumer account.

None of this is guaranteed. The law here is brand new, and courts will work out the edges case by case. But the short version is this: a client alone on a Free or Pro account is Heppner. A lawyer using an enterprise tool in the course of representing that client is not.

If you run a business and your employees use AI tools in their work, assume anything they put into those tools is discoverable. Sensitive business strategy, personnel matters, legal exposure, confidential client information, and trade secrets should not be entered into public AI platforms. Consider an enterprise-tier AI product with contractual data protections, or an internal policy limiting what employees can input.

If you are already using AI to draft or analyze anything touching on a potential legal problem, stop, save what you have, and call your lawyer before you type another word.

A Warning About Policies That Change

The privacy policy Judge Rakoff cited was the version in effect in early 2025. In September 2025, Anthropic revised its consumer terms so that Free, Pro, and Max user conversations are, by default, retained for up to five years and used to train future models, unless the user actively opts out. Business and enterprise customers were not affected. This is the pattern across the industry. AI companies adjust their terms frequently, and the defaults usually tilt toward more data collection over time, not less.

Therefore, the terms you agreed to when you first signed up are probably not the terms governing your account today. If you are using an AI tool for anything sensitive, check your privacy settings, and do not assume today's defaults match what you consented to last year.

The Bigger Picture

Judge Rakoff closed his opinion by noting that AI's novelty does not exempt it from longstanding legal principles. That framing is going to matter. Courts are not going to invent new privileges to protect AI conversations. If anything, Heppner suggests the opposite trend: privilege rules are going to be applied strictly, and the widespread assumption that "my chat history is private" is legally wrong.

AI is a genuinely useful tool for brainstorming, drafting, research, and thinking through problems. But it is not a lawyer, it is not a confidant, and under current law, it is not confidential. Use it accordingly.

If you have questions about how to use AI responsibly in your business, or about any pending legal matter, contact Long Law, P.C.

This post is for general informational purposes only and does not constitute legal advice or create an attorney-client relationship. Consult a licensed attorney about your specific situation.
