Pasting Legal Advice into AI Can Destroy Its Protection
A US court denied privilege over 31 AI-processed documents in February 2026. Australian courts and regulators are aligning. Here’s what to do now.
The shortcut that costs you your best legal defence
You receive a 15-page legal opinion from your lawyer. It cost $8,000. You paste it into ChatGPT to get a plain-English summary. In that moment, you may have just destroyed the very protection that opinion carried.
In February 2026, a US federal court in New York ruled in United States v Heppner that 31 documents processed through a public AI platform lost legal professional privilege entirely. The court’s reasoning was straightforward: the platform’s terms allowed third-party disclosure, the user had no expectation of confidentiality, and an AI tool cannot stand in for a lawyer. The same month, a second US court in Warner v Gilbarco treated AI as “a tool, not a person” — reinforcing that AI-assisted work can retain protection only when a lawyer directs it within a confidential system.
These aren’t hypothetical risks. They’re rulings. And Australian courts are moving in the same direction.
Australian courts and regulators are aligning
In Mastercard Asia/Pacific v Australian Competition and Consumer Commission [2026] FCAFC 37, the Federal Court confirmed the principle that privilege can be lost when a client “behaves inconsistently with its maintenance.” The case wasn’t about AI specifically, but the principle applies directly: feeding privileged material into a tool whose terms permit third-party access is conduct inconsistent with confidentiality.
The Federal Court’s Practice Note on Generative AI explicitly warns against inputting confidential and privileged material into public AI tools. Queensland’s courts went further in a September 2025 guideline, stating that information entered into public chatbots “should be seen as published.” The Law Council of Australia warned in June 2025 of “inadvertent waiver of client legal privilege.” And in December 2024, the Law Society of NSW, Legal Practice Board of WA, and Victorian Legal Services Board issued a joint statement: lawyers “cannot safely enter confidential or privileged client information into public AI chatbots.”
In the UK, the Upper Tribunal ruled in Munir v Secretary of State (November 2025) that uploading confidential documents to an open-source AI tool “places them in the public domain, breaching confidentiality and waiving privilege.” The international consensus is forming fast.
This isn’t just a law firm problem
The most common privilege waiver risk isn’t inside law firms. It’s inside the businesses they advise.
A managing director receives a legal opinion on an unfair dismissal claim and pastes it into ChatGPT for a summary to share with the management team. A board secretary uses an AI transcription tool during a meeting where legal advice is discussed — the transcript uploads to a third-party server. An accountant feeds a client’s privileged tax advice into a public AI tool to cross-reference a compliance issue. In every case, the intent is reasonable. The outcome is the same: confidential legal advice, processed through a tool whose terms permit third-party access, may no longer be privileged. And privilege, once waived, cannot be restored.
Hamilton Locke flagged a particularly dangerous scenario for board governance. AI-generated draft minutes that incorporate the substance of legal advice may give a regulator like ASIC — exercising compulsory information-gathering powers — grounds to argue that privilege has been waived. Even if the final approved minutes are carefully drafted, an AI-generated draft sitting on a third-party server is discoverable as evidence.
The fix is about which AI, not whether AI
The critical distinction — emphasised by Clayton Utz, Hamilton Locke, and the Federal Court alike — is between public and enterprise AI systems.
Public or consumer AI tools (free-tier ChatGPT, free Gemini, consumer Claude) typically include terms that permit data retention, model training, and third-party disclosure. Those terms are fundamentally incompatible with the confidentiality that privilege requires. Enterprise AI systems (ChatGPT Enterprise, Claude Enterprise, Microsoft 365 Copilot with Business Premium licensing) can maintain confidentiality through contractual safeguards, encrypted environments, and access controls, provided the contracts are reviewed and the systems are properly configured.
The answer isn’t to stop using AI with legal content. It’s to know which tools your business is authorised to use, and to treat the line between public and enterprise as hard, not grey.
Three things to do this week
First, ask your team one question: “Has anyone pasted legal advice into a free AI tool?” You’ll get honest answers faster than you expect. The result tells you whether you have a privilege exposure right now.
Second, adopt a clear policy: no privileged or confidential legal content goes into any AI tool that isn't on an approved list. That list should include only enterprise-grade tools with reviewed terms (a rough sketch of what it might look like follows after these three steps). If you don't have a list yet, everything is off-limits until you do.
Third, talk to your lawyer about it. Ask what their firm’s AI policy says about handling your information — and tell them what tools your team actually uses. The Queensland Law Society has published a client warning template specifically for this risk. If your lawyer hasn’t raised it, you should.
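To make the second step concrete, here is a minimal sketch of how an IT team might keep that approved list in machine-readable form, so it can be checked during onboarding or referenced by internal tooling. Everything in it is illustrative: the tool names simply echo the examples above, and the review dates and the is_approved helper are hypothetical, not a product recommendation or a substitute for reviewing the contracts yourself.

```python
# Illustrative sketch only: a machine-readable version of the approved-tools
# list an IT team might keep alongside the written AI policy. Tool names and
# review dates are hypothetical examples; your own list comes from your own
# contract reviews.

APPROVED_AI_TOOLS = {
    # Enterprise tiers only; consumer and free tiers are excluded by default.
    "chatgpt enterprise": "terms reviewed 2026-02",
    "microsoft 365 copilot": "terms reviewed 2026-02",
}

def is_approved(tool_name: str) -> bool:
    """Return True only if the named tool is on the reviewed, approved list."""
    return tool_name.strip().lower() in APPROVED_AI_TOOLS

if __name__ == "__main__":
    for candidate in ["ChatGPT Enterprise", "free Gemini", "consumer Claude"]:
        verdict = "approved" if is_approved(candidate) else "not approved: keep legal content out"
        print(f"{candidate}: {verdict}")
```

The point isn't the code; it's that the list is explicit, dated, and defaults to "not approved" for anything it doesn't name.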
Key takeaways
- Pasting privileged legal advice into a public AI tool can waive legal professional privilege, and once waived it cannot be restored.
- Courts in the US and UK have already treated material processed through public AI tools as losing protection, and Australian courts and regulators are moving the same way.
- The safe line runs between public and enterprise AI: keep privileged and confidential legal content out of any tool that isn't on an approved, enterprise-grade list with reviewed terms.
▶Assumptions & methodology
- United States v Heppner (S.D.N.Y., February 2026) and Warner v Gilbarco (E.D. Mich., February 2026) are US federal court decisions. They are not binding in Australia but address the same foundational principle — that confidentiality is required for privilege to attach — which Australian courts apply under both the uniform Evidence Acts and common law.
- Mastercard Asia/Pacific v ACCC [2026] FCAFC 37 establishes that conduct inconsistent with maintaining privilege can constitute waiver under Australian law. The case does not specifically address AI use, but the principle applies directly to scenarios where privileged material is processed through tools whose terms permit third-party access.
- The distinction between “public” and “enterprise” AI is based on contractual terms, not the underlying technology. Enterprise tiers of ChatGPT, Claude, and Microsoft Copilot include contractual commitments not to train models on customer inputs, along with tighter controls over data retention. Whether these contractual safeguards are sufficient to maintain privilege in all circumstances has not yet been tested in Australian courts.
Field Notes are general commentary on AI trends for Australian businesses. They don’t constitute professional advice. Talk to your accountant, lawyer, or IT adviser before acting on anything specific to your situation — or talk to us if you want help working out where AI fits.
Worried about how your team uses AI with sensitive information?
A short conversation can identify where your AI usage creates risk — and where it’s already safe. Book a call to talk it through.
Book a call →