Your AI chats can now be used against you in court

Every prompt you have ever typed into ChatGPT, Claude, or Gemini exists on a server you do not control, governed by a privacy policy you almost certainly did not read. On February 17, 2026, a federal judge made clear exactly what that means.

Your AI is not your lawyer, and that changes everything

Judge Jed Rakoff of the Southern District of New York ruled that AI-generated documents are not protected by attorney-client privilege or the work product doctrine. The case involved Bradley Heppner, a former CEO who used Anthropic's Claude to prepare roughly 31 documents after learning he was the target of a grand jury investigation. Heppner argued the exchanges were privileged because he discussed them with his lawyer.

The court dismantled that argument in three strokes. First, an AI model is "obviously not an attorney." Second, Heppner did not use Claude to obtain legal advice. Third, the conversations were never confidential to begin with: Anthropic's own privacy policy states that prompts and outputs may be used for model training and disclosed to regulatory authorities and third parties.

That last point is the one most people miss. The moment you type into a public AI tool, you are sharing information with a third party. In legal terms, that is a waiver of privilege, making everything you typed fully discoverable by adversaries, regulators, and opposing counsel.

The fabrication epidemic courts cannot ignore

The Heppner ruling did not emerge in a vacuum. Courts have been grappling with AI in legal proceedings for years, and the results are alarming. More than 300 cases of lawyers submitting AI-fabricated citations have been documented since mid-2023, with over 200 occurring in 2025 alone. In one Wyoming case, eight of nine cited cases were entirely fictional. A California attorney was fined $10,000 after 21 of 23 quotes in a brief turned out to be fabricated.

Courts have declared that monetary sanctions alone are failing to deter these submissions. Judges are now revoking attorneys' right to practice in their courtrooms and referring cases to state bar associations for disciplinary action.

What this means if you are not a lawyer

Here is the part that should concern the other 330 million Americans using AI tools: this ruling's logic does not stop at legal professionals. If your AI conversations can be subpoenaed in a criminal case, they can be subpoenaed in civil litigation, divorce proceedings, employment disputes, and regulatory investigations.

Consider what you have asked an AI chatbot in the past month. Tax strategy questions. Medical concerns you did not want on your search history. Business plans involving competitors. Relationship problems. Every one of those conversations is a timestamped record of your beliefs, knowledge, and intentions, sitting on a corporate server.

As legal analysts at Ward and Smith note, AI chat discovery is moving from "novel issue" to routine. Litigators are already drafting discovery requests that specifically target AI platform usage, and judges are compelling production where adequate privacy safeguards are absent.

The privacy shield that does not exist

Most AI platforms' terms of service contain language that undermines any expectation of privacy. OpenAI's CEO has acknowledged there is no legal confidentiality for users' conversations with ChatGPT. Anthropic, Google, and other providers maintain similar policies. Your prompts may be reviewed by human trainers, flagged by automated systems, and produced in response to legal process.

The GDPR in Europe and state privacy laws in the US provide some data protection rights, but none of them create a legal privilege. There is a critical difference between "a company should protect my data" and "a court cannot force disclosure of my data." The Heppner ruling makes that distinction painfully clear.

What actually protects you

Enterprise AI deployments with contractual confidentiality agreements may preserve privilege, according to Crowell & Moring's analysis of the ruling. The key factors: the tool must be non-public, prompts and outputs cannot be used for training, and the platform must not reserve rights to disclose data to third parties.

For individuals, the calculus is simpler. Treat every AI conversation as if it could appear in a courtroom, because after February 2026, it can. Do not share information with a chatbot that you would not put in an email to a colleague. If you need actual legal advice, talk to an actual lawyer, not a large language model that is statistically confident but legally meaningless.

The legal shield millions of people assumed existed between them and their AI assistant was never there. Judge Rakoff simply said it out loud.

Sources and References

  1. Gibson Dunn (SDNY Court Analysis): Judge Jed Rakoff ruled on February 17, 2026, that AI-generated documents created by defendant Bradley Heppner using Claude are not protected by attorney-client privilege, as the AI is obviously not an attorney and conversations with public AI tools constitute waiver of privilege.
  2. Jones Walker LLP: More than 300 documented cases of lawyers submitting AI-fabricated citations have been recorded since mid-2023, with over 200 in 2025 alone. Three separate federal courts issued sanctions in just the first two weeks of August 2025.
  3. Tyson & Mendes LLP: AI chatbot communications lack attorney-client privilege protection. OpenAI's CEO acknowledged there is no legal confidentiality for ChatGPT conversations. AI conversations are timestamped records of beliefs, knowledge, and intentions that are fully discoverable.
  4. Crowell & Moring LLP: To preserve privilege when using AI, organizations must use non-public, closed AI tools where prompts and outputs are not subject to training and not exposed to third parties via privacy policies.
  5. Ward and Smith PA: AI chat discovery is moving from novel issue to routine litigation practice. Sharing privileged content with a public AI tool constitutes waiver, making information fully discoverable by adversaries and regulators.
