How Artificial Intelligence Is Reshaping Attorney-Client Privilege: A Global Analysis for 2026

By Tika R. Basnet. Tika R. Basnet is the founder of Global Law Lists.org, an international legal network connecting verified lawyers and law firms across 240+ jurisdictions worldwide. With a background in law and technology, he writes on the intersection of legal practice, regulatory developments, and innovation in the global legal industry.

Introduction: The Day a Judge Told a Client His AI Chats Were Not Privileged

On February 10, 2026, something happened in a federal courtroom in Lower Manhattan that sent shockwaves through every law firm, corporate legal department, and bar association on the planet. Judge Jed S. Rakoff of the Southern District of New York ruled, from the bench, that documents a criminal defendant had created using a commercially available AI tool were not protected by attorney-client privilege. Not partially protected. Not conditionally protected. Not protected at all.

The case was United States v. Heppner, and it arrived like a thunderclap. For years, lawyers had debated whether conversations with AI tools could fall under the protective umbrella of privilege. Some argued that if a client used AI to prepare materials for their attorney, those materials should be treated no differently than handwritten notes on a legal pad. Others warned that feeding confidential information into a third-party AI platform was the digital equivalent of shouting your legal strategy from a rooftop.

Judge Rakoff settled the debate with the subtlety of a sledgehammer. And in doing so, he created a roadmap that lawyers around the world now ignore at their peril.

This article is a deep dive into the collision between artificial intelligence and one of the oldest, most fundamental protections in the legal profession. We will travel across jurisdictions, from the courtrooms of New York to the regulatory corridors of London, Brussels, Singapore, and Hong Kong. We will examine what courts have actually said, what regulators are demanding, and what practical steps law firms must take right now to protect their clients in an age where AI is no longer optional but omnipresent.

Think of this as your field guide to a legal landscape that is being rewritten in real time. Because the question is no longer whether AI will reshape privilege. It already has. The question is whether you are prepared for what comes next.

Part I: Understanding the Foundations Before the Earthquake

What Attorney-Client Privilege Actually Protects

Before we can understand how AI is disrupting privilege, we need to understand what privilege actually is and why it matters so much. Attorney-client privilege is not some bureaucratic technicality. It is the bedrock of the entire legal system's ability to function.

Here is the basic idea: when you talk to your lawyer, you need to be able to speak freely. You need to tell them the ugly truth, the embarrassing details, the facts that make you look bad. Because a lawyer who does not know the full picture is like a surgeon operating blindfolded. They might get lucky, but the odds are not in your favor.

To encourage that kind of radical honesty, the law created a shield. Communications between a client and their attorney, made in confidence for the purpose of seeking legal advice, are protected from forced disclosure. Your opponent in a lawsuit cannot demand to see those communications. The government cannot compel your lawyer to reveal what you told them. The privilege belongs to the client, and only the client can waive it.

But privilege is not absolute. It comes with conditions. Think of it like a lock with three tumblers, all of which must click into place:

First, the communication must be between a client and an attorney (or someone acting as the attorney's agent). Second, the communication must be made in confidence, meaning the client had a reasonable expectation that no one else would see or hear it. Third, the communication must be for the purpose of seeking or providing legal advice.

Remove any one of those tumblers, and the lock does not open. The communication is not privileged.
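For readers who think in code, the all-or-nothing structure of the test can be sketched as a simple conjunction. This is a toy illustration of the logic described above, not a substitute for jurisdiction-specific legal analysis:

```python
def is_privileged(between_client_and_attorney: bool,
                  made_in_confidence: bool,
                  for_legal_advice: bool) -> bool:
    """Toy model of the three-element attorney-client privilege test.

    All three elements must be satisfied; failing any single one
    defeats the privilege. Real analysis is far more nuanced and
    varies by jurisdiction -- this is illustration only.
    """
    return (between_client_and_attorney
            and made_in_confidence
            and for_legal_advice)

# A chat with a consumer AI tool typically fails every element:
print(is_privileged(
    between_client_and_attorney=False,  # the AI is not counsel
    made_in_confidence=False,           # no confidentiality guarantee
    for_legal_advice=False,             # the tool disclaims legal advice
))  # False
```

The point of the sketch is the `and`: unlike a balancing test, there is no partial credit. One `False` anywhere and the lock stays shut.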

Now here is where AI creates problems. Massive, structural, keep-your-general-counsel-awake-at-night problems. Because when a client or a lawyer feeds information into an AI tool, each of those three tumblers is suddenly in question.

The Work Product Doctrine: Privilege's Close Cousin

Alongside privilege sits the work product doctrine, which protects materials prepared in anticipation of litigation. If a lawyer creates a memo analyzing the strengths and weaknesses of a case, that memo is generally protected from discovery. The doctrine exists to prevent one side from free-riding on the other side's legal analysis and strategy.

Work product protection comes in two flavors. Ordinary work product, which includes factual information gathered in anticipation of litigation, can be discovered if the opposing party shows substantial need and an inability to obtain the equivalent information without undue hardship. Opinion work product, which reflects the attorney's mental impressions, conclusions, and legal theories, receives near-absolute protection.

AI complicates the work product doctrine in subtle but important ways. When a lawyer uses AI to analyze case law and generate strategic recommendations, whose mental impressions are reflected in the output? The lawyer's? The AI's? Some hybrid of the two? And when a client independently uses AI to research their own legal situation, is that preparation for litigation, or is it something else entirely?

These questions might sound academic. They are not. They are being litigated right now, in courtrooms around the world, with real consequences for real people.

Part II: The Heppner Earthquake and Its Aftershocks

The Facts That Made History

Bradley Heppner was indicted on October 28, 2025, on charges of securities fraud, wire fraud, conspiracy, obstruction, and making false statements. Federal agents searched his mansion and seized electronic devices containing thirty-one documents that Heppner had generated using Anthropic's AI assistant, Claude.

Here is what happened: after learning he was under investigation and engaging defense counsel at Quinn Emanuel, Heppner took matters into his own hands. He fed information he had learned from his defense lawyers into the public version of Claude. He asked the AI about the government's investigation and possible defenses. He generated thirty-one documents of prompts and responses. And then he sent those documents to his lawyers.

When the government sought to use these documents at trial, Heppner's defense team asserted attorney-client privilege and work product protection. The government moved to compel production, and the case landed on Judge Rakoff's desk.

What followed was a masterclass in how privilege doctrine applies to emerging technology.

The Three-Part Privilege Test: All Three Legs Collapse

Judge Rakoff applied the standard privilege test and found every element lacking. His analysis was methodical and, for the defense, devastating.

On the first element, the court held that the communications were not between Heppner and his counsel. This might seem obvious in retrospect, but the defense had crafted a clever argument. They contended that because Heppner was preparing materials to share with his lawyers, the entire process was part of the attorney-client relationship. Judge Rakoff rejected this reasoning. The AI tool was not an attorney. Heppner was consulting it on his own. The fact that he later shared the results with his lawyers did not retroactively transform a conversation with a machine into a conversation with counsel.

Think of it this way: if you walk into a public library, pull books off the shelf, take notes on your legal problem, and then hand those notes to your lawyer, those library notes are not privileged. The library is not your lawyer. The books are not your lawyer. And the fact that you eventually gave the notes to your lawyer does not change what they are. Judge Rakoff essentially said that an AI tool is the digital equivalent of that library, only with the added problem that the library might be keeping copies of everything you wrote.

On the second element, the court found no confidentiality. Heppner had used the public, consumer version of Claude. He had not used an enterprise version with enhanced privacy protections. The platform's terms of service and privacy policy did not guarantee that his inputs would remain confidential. In fact, consumer AI platforms routinely reserve the right to use customer inputs for model training and improvement. Judge Rakoff found that Heppner could not have had a reasonable expectation of confidentiality when communicating through a platform that made no such promise.

This is a critical distinction that lawyers everywhere need to internalize. Using a consumer AI tool is not like having a private conversation in a soundproof room. It is more like having a conversation in a crowded restaurant, speaking loudly enough for the waiter, the busboy, and the couple at the next table to overhear. The information might not be broadcast to the world, but you have no control over who hears it or what they do with it.

On the third element, the court examined whether the communications were for the purpose of obtaining legal advice. Defense counsel conceded that they had not directed Heppner to use AI. He acted on his own initiative. The court then asked the logical follow-up question: was Heppner seeking legal advice from the AI platform? Given that Claude's own terms of service include a disclaimer that it cannot provide formal legal advice, the answer was no. Heppner was not obtaining legal advice. He was conducting independent research using a tool that explicitly disclaimed any ability to provide what he was looking for.

The Kovel Doctrine: A Creative Argument That Fell Flat

The defense team tried one more approach. They invoked the Kovel doctrine, named after a 1961 Second Circuit case, United States v. Kovel. Under this doctrine, privilege can extend to communications with third parties, such as accountants, translators, or experts, when those third parties are necessary for the attorney to provide effective legal representation.

The argument was creative: Claude functioned as a kind of digital translator or analyst, helping Heppner communicate more effectively with his lawyers. If an accountant hired by a lawyer to interpret financial records is covered by privilege, why not an AI tool that helps a client organize and analyze legal issues?

Judge Rakoff found this argument unpersuasive for two reasons. First, the AI was not necessary for counsel to understand Heppner's communications. Unlike a foreign-language translator without whom communication would be impossible, or an accountant without whom complex financial documents would be incomprehensible, the AI was simply a convenience. Heppner could have communicated directly with his lawyers without the AI intermediary.

Second, and perhaps more importantly, Heppner engaged the AI entirely on his own initiative. The Kovel doctrine typically applies when the attorney engages the third party or directs the client to communicate through the third party. Here, the lawyers neither selected the AI tool nor instructed Heppner to use it. The entire interaction was the client's unilateral decision.

Work Product: Denied on Different Grounds

The work product argument fared no better. Judge Rakoff acknowledged that Heppner undoubtedly prepared the AI documents in anticipation of litigation. He knew he was under investigation. He was actively preparing his defense. But the work product doctrine protects materials that reflect the mental impressions, conclusions, and legal strategies of an attorney. The AI documents reflected Heppner's own thinking, not the work product of his counsel.

The court also addressed the question of whether sharing the documents with counsel could retroactively transform them into work product. It could not. As Judge Rakoff noted, materials that would not be privileged if they remained in the client's hands do not acquire protection merely because they were transferred to an attorney.

The Privilege Waiver Problem: The Gift That Keeps on Taking

Perhaps the most troubling aspect of the Heppner decision for practitioners is the privilege waiver analysis. The government argued that by feeding information learned from his defense counsel into a third-party AI platform, Heppner had waived the privilege over the original attorney-client communications themselves.

Judge Rakoff agreed. This is the nightmare scenario that every lawyer needs to understand. When Heppner input information from his privileged conversations with Quinn Emanuel into Claude, he was not just creating new, unprotected documents. He was potentially waiving the privilege over the original communications from which that information was drawn.

Imagine the implications. A client has a two-hour strategy session with their lawyer. Later that evening, the client opens ChatGPT or Claude and types: "My lawyer told me that the prosecution's case is weak on the following three points..." That single prompt could waive the privilege over the entire strategy session. The client has voluntarily disclosed the substance of privileged communications to a third party without any confidentiality protections.

This is not hypothetical. This is happening right now, in living rooms and home offices around the world, every single day. And most clients have no idea they are doing it.

Part III: The American Regulatory Response

ABA Formal Opinion 512: The Ethical Framework

On July 29, 2024, the American Bar Association's Standing Committee on Ethics and Professional Responsibility released Formal Opinion 512, its first comprehensive guidance on generative AI in legal practice. While the opinion does not carry the force of law, it serves as an influential framework that state bar associations across the country have used as a template for their own guidance.

The opinion addresses four core ethical obligations that intersect with AI use: competence, confidentiality, communication, and fees.

On competence, Opinion 512 reinforces what many lawyers have been slow to accept: technological competence is no longer optional. Model Rule 1.1 requires lawyers to provide competent representation, and in 2026, competent representation requires understanding how AI tools work, what their limitations are, and how they can go wrong. A lawyer who blindly relies on AI-generated legal research without understanding that the tool might fabricate citations is not meeting the standard of competence. Period.

On confidentiality, the opinion is particularly pointed. Under Model Rule 1.6, a lawyer must keep confidential all information relating to the representation of a client, regardless of its source. The opinion warns that many AI tools are "self-learning," meaning they may incorporate user inputs into future outputs. This creates a risk that confidential client information entered into an AI tool could appear, in some form, in responses generated for other users.

The practical implication is significant: a client's informed consent is required before inputting their confidential information into a self-learning AI tool. And the consent must be genuinely informed, not a boilerplate clause buried in an engagement letter. The lawyer must explain, in plain language, what the risks are. They must describe how the AI tool processes data. They must give the client a realistic understanding of what could go wrong.

On communication, the opinion requires lawyers to keep clients informed about how AI is being used in their matters. This does not mean sending a detailed technical memo about neural network architectures. But it does mean telling the client, in substance: "We are using AI tools in your matter. Here is how we are using them. Here are the safeguards we have in place. Here are the limitations."

On fees, the opinion addresses the thorny question of how to bill for AI-assisted work. If an AI tool completes in thirty seconds a research task that would have taken an associate three hours, can the firm bill for three hours? The answer, unsurprisingly, is no. Fees must be reasonable, and billing a client for time that was not actually spent is not reasonable, regardless of whether the savings came from a new associate, a paralegal, or a machine learning model.

State-Level Guidance: The Patchwork Expands

Following the ABA's lead, state bar associations have issued their own guidance, creating a patchwork of rules that varies significantly from jurisdiction to jurisdiction.

Texas was among the first movers. In February 2025, the Texas State Bar Professional Ethics Committee issued Opinion No. 705, providing specific guidance on lawyers' use of generative AI. The Texas opinion emphasizes that lawyers must understand the technology they use, exercise caution when inputting confidential information, remain responsible for verifying accuracy, refrain from charging clients for time saved by AI, and consider informing clients when generative AI is being used in their matters.

The New York City Bar Association followed with Formal Opinion 2025-6, which tackled a specific and increasingly common scenario: the use of AI tools to record, transcribe, and summarize conversations between attorneys and their clients. The opinion concludes that attorneys must obtain client consent before recording calls, even when the recording is handled by an AI tool rather than a traditional recording device. The opinion also addresses the confidentiality and privilege implications of having AI tools process the content of attorney-client conversations.

Oregon's State Bar issued Formal Opinion No. 2025-205, requiring lawyers to understand how any AI tool they utilize stores information and responds to prompts. This might sound like a modest requirement, but it has significant practical implications. It means lawyers cannot simply accept an AI vendor's marketing claims at face value. They need to understand, at a functional level, what happens to the data they input and how the system generates its outputs.

California, true to form, has been the most active state on AI regulation, enacting twenty-four AI-related laws across the 2024 and 2025 legislative sessions. While not all of these are specific to legal practice, they create a regulatory environment that affects how California lawyers can use AI tools and how AI-generated evidence is treated in California courts.

Part IV: Across the Atlantic: The UK Approach

The SRA's Evolving Framework

The Solicitors Regulation Authority in England and Wales has taken what might charitably be called a measured approach to AI regulation. Less charitably, one might call it slow. While the ABA issued its formal opinion in mid-2024, the SRA has been slower to produce concrete guidance, relying instead on the existing Standards and Regulations to cover AI-related issues.

In February 2026, the SRA delivered a webinar titled "AI Policy and Regulation," outlining its developing framework for enabling firms and individuals to safely and ethically incorporate AI tools into their practice. The SRA has announced that it will release two resources in the coming months: an FAQ document called "GenAI FAQ" and a Good Practice Note on AI use and client data.

The SRA's current expectations, while not codified in specific AI rules, are nonetheless substantial. Firms are expected to appoint a senior individual with overall oversight of AI system use. Compliance officers for legal practice are expected to be responsible for regulatory compliance when new technology is introduced. The SRA expects firms to set up committees with responsibility for training staff and monitoring AI usage, to carry out regular audits, and to ensure that AI-specific risks are reflected in firm-wide risk assessments.

The Mazur Ruling and Its Implications

One of the most significant developments in the UK has been the fallout from the Mazur ruling, which has raised fundamental questions about whether AI can perform activities that constitute the "conduct of litigation" under the Legal Services Act 2007. The Law Society has called on the SRA to provide urgent advice on this point, and as of early 2026, the Law Society's guidance on the issue has been updated four times since its first publication.

The core issue is this: under the Legal Services Act, certain activities, including the conduct of litigation, are reserved to authorized persons. If an AI tool makes key decisions in a case, such as which arguments to pursue, which evidence to present, or how to respond to procedural motions, is that AI tool "conducting litigation"? And if so, is the law firm that deployed it operating unlawfully?

The Law Society has noted that this question represents a novel development that was clearly not within the contemplation of the drafters of the 2007 Act. The answer will have profound implications not just for how AI tools are used in UK litigation, but for the entire structure of legal services regulation.

Hallucinations in the Courtroom: The Ndaryiyumvire Warning

If American lawyers needed the Mata v. Avianca case as their wake-up call about AI hallucinations, UK lawyers got their own version in Gloriose Ndaryiyumvire v Birmingham City University and Others. In that case, a wasted costs order was made against a firm that filed pleadings citing fictitious cases produced by generative AI. The judge found the administrative failures to be "improper, unreasonable and negligent."

The case illustrates a principle that transcends jurisdictions: AI tools do not understand truth. They generate plausible-sounding text based on statistical patterns. They are extraordinarily good at producing content that looks right, even when it is completely fabricated. A lawyer who submits AI-generated work to a court without independent verification is not just risking their client's case. They are risking their career.

Legal Professional Privilege in the UK: Similar but Not Identical

UK legal professional privilege operates on principles similar to, but distinct from, US attorney-client privilege. It encompasses two main categories: legal advice privilege, which protects confidential communications between a lawyer and client made for the purpose of giving or receiving legal advice, and litigation privilege, which protects documents created for the dominant purpose of pending or contemplated litigation.

The confidentiality requirement is just as central in the UK as in the US. If a solicitor or client inputs privileged information into a consumer AI platform, the confidentiality requirement may be breached, potentially destroying the privilege. The UK has not yet had a case directly analogous to Heppner, but the legal principles point in the same direction. Consumer AI platforms, by their nature, do not provide the confidentiality guarantees necessary to maintain privilege.

The SRA-commissioned research, due to be published in April 2026, has already revealed a concerning trend: roughly a third of the UK public has used generative AI to help identify legal issues, with many using a hybrid approach of consulting both a solicitor and an AI tool. This means that privilege waiver risks are not limited to sophisticated corporate clients. They extend to everyday individuals who may not understand the legal consequences of asking an AI tool about their pending divorce, employment dispute, or criminal charge.

Part V: The European Union and the AI Act's Shadow

The World's Most Ambitious AI Regulation

The European Union's AI Act represents the most comprehensive regulatory framework for artificial intelligence ever enacted. It follows a risk-based approach, classifying AI systems into four categories: unacceptable risk (banned outright), high risk (subject to extensive compliance requirements), limited risk (transparency obligations), and minimal risk (largely unregulated).

For the legal profession, the high-risk classification is where the action is. When a general-purpose AI model is integrated into a system used for legal work, that system's risk classification is determined by its use. Legal research and document analysis in connection with court proceedings qualifies as high risk under Annex III of the Act.

The practical implications are sweeping. By August 2, 2026, when the high-risk provisions become fully enforceable, AI systems used in legal services will need to comply with requirements covering risk management, technical documentation, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity. Organizations must complete conformity assessments, finalize technical documentation, affix CE marking, and register their high-risk AI systems in the EU database.

The penalty structure is designed to get attention. For the most serious violations, fines can reach 35 million euros or 7% of global annual turnover, whichever is higher. For non-compliance with high-risk obligations, penalties can reach 15 million euros or 3% of global turnover. These numbers exceed even the GDPR's already substantial fines, sending an unmistakable message about the EU's seriousness.
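The "whichever is higher" mechanic means the effective ceiling scales with revenue rather than stopping at the fixed figure. A back-of-the-envelope sketch of the headline caps described above (illustrative only; actual fines depend on the violation, mitigating factors, and regulator discretion):

```python
def max_ai_act_fine(global_turnover_eur: float, tier: str) -> float:
    """Illustrative ceiling on EU AI Act fines using the headline figures.

    'prohibited' tier: up to EUR 35M or 7% of global annual turnover,
    whichever is higher. 'high_risk' tier: up to EUR 15M or 3%.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "high_risk": (15_000_000, 0.03),
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * global_turnover_eur)

# For a company with EUR 1 billion in turnover, the percentage
# prong dominates the fixed amount:
print(max_ai_act_fine(1_000_000_000, "prohibited"))  # 70000000.0
```

For small organizations the fixed euro amounts bite first; for large multinationals the turnover percentage quickly dwarfs them, which is precisely why the structure "gets attention."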

The Privilege Dimension of the AI Act

The AI Act does not directly address attorney-client privilege, but it creates indirect pressures that could affect privilege in significant ways. Consider the transparency requirements. High-risk AI systems must be sufficiently transparent to enable users to interpret the system's output and use it appropriately. If a law firm uses a high-risk AI system to analyze case strategy, the transparency obligation could require the firm to document how the system processed privileged information, what inputs were provided, and what outputs were generated.

Now imagine an opposing party or a regulator demands access to that documentation. The firm claims privilege. But the transparency documentation was created not for the purpose of legal advice, but for the purpose of regulatory compliance. Is it privileged? The answer is far from clear, and European courts have not yet addressed this question.

Similarly, the AI Act's data governance requirements could create tension with privilege. High-risk AI systems must use training data that meets specified quality criteria, and deployers must maintain records of how data flows through the system. If privileged client data flows through a high-risk AI system, the logging and documentation requirements could create records that are themselves discoverable, even if the underlying data is privileged.

The Digital Omnibus Wrinkle

Adding to the complexity, the European Commission proposed a "Digital Omnibus" package in late 2025 that could postpone the high-risk obligations for Annex III systems until December 2027. This has created uncertainty in the compliance timeline, with some organizations using the potential delay as an excuse to defer preparation. This is a dangerous gamble. Prudent organizations are treating August 2026 as the binding deadline and viewing any extension as a bonus rather than a baseline assumption.

Part VI: Asia-Pacific: A Mosaic of Approaches

Singapore: Privilege Meets Pragmatism

Singapore recognizes legal professional privilege in broadly the same manner as the UK and Hong Kong, with two limbs: legal advice privilege and litigation privilege. Singapore's Ministry of Law has been developing guidelines for lawyers' use of generative AI tools, driven by concerns about both accuracy and security.

The privilege analysis in Singapore follows familiar principles. Input by lawyers into generative AI tools and the resulting output may be protected by legal advice privilege or litigation privilege if sufficiently connected to the provision of legal advice or made for the predominant purpose of litigation. However, the confidentiality requirement remains determinative. If the AI tool does not maintain confidentiality, the privilege may be lost.

Singapore has chosen not to pursue a comprehensive AI statute, instead relying on a sector-specific regulatory model that addresses risks through existing frameworks. The city-state has positioned itself as a hub for AI governance innovation, launching the AI Verify testing framework and signing interoperability agreements with the United States, Australia, and the EU AI Office.

For law firms operating in Singapore, the practical guidance is similar to other common law jurisdictions: use enterprise AI tools with robust confidentiality protections, ensure that any AI use in connection with legal matters is directed or supervised by counsel, and maintain clear documentation of how AI tools are deployed in client matters.

Hong Kong: Tradition Meets Technology

Legal professional privilege in Hong Kong is safeguarded under both common law and the Basic Law, covering legal advice and litigation communications. Hong Kong applies a dominant purpose test to litigation privilege and has a well-developed body of case law on the confidentiality requirement.

The Hong Kong Privacy Commissioner for Personal Data (PCPD) issued a "Checklist on Guidelines for the Use of Generative AI by Employees" in March 2025, emphasizing the importance of internal policies and clear guidelines for AI use. These guidelines recommend that organizations define permissible AI tools, limit data input and sharing, mandate data retention and deletion protocols, and ensure compliance with the Personal Data (Privacy) Ordinance.

For legal privilege specifically, Hong Kong faces the same fundamental challenge as every other common law jurisdiction: the use of public AI tools risks destroying the confidentiality that privilege requires. The PCPD's guidance implicitly acknowledges this risk by emphasizing that privileged and confidential communications may be inadvertently leaked when input into open, public generative AI platforms.

The cross-border dimension is particularly acute in Hong Kong, given its position as a bridge between common law and Chinese legal traditions. Documents prepared in Hong Kong may be subject to privilege rules in mainland China, the UK, or other jurisdictions, making it essential for lawyers to understand how privilege operates across all relevant legal systems when deciding whether to use AI tools.

Japan: Innovation with Guardrails

Japan took a significant step in May 2025 with the enactment of the Act on Promotion of Research and Development and Utilization of AI-related Technology (the AI Promotion Act), which came into full effect in September 2025. Unlike the EU's prescriptive approach, Japan's law is designed primarily to support and accelerate AI development while implementing transparency measures and risk mitigation.

Japan's approach to legal privilege in the AI context is shaped by its civil law tradition, which treats privilege differently from common law systems. Japan's Attorney Act provides for a duty of confidentiality rather than a privilege doctrine, and the scope of protection varies depending on the legal proceeding.

Japan's amended Copyright Act, which permits the use of copyrighted works for AI development and training purposes, also has indirect implications for legal privilege. If AI training data includes materials that were originally privileged, the copyright exception does not create a privilege exception. The two doctrines operate independently, and firms must ensure that privilege considerations are addressed separately from copyright analysis.

Part VII: The Enterprise vs. Consumer AI Divide

Why the Platform Matters More Than the Prompt

If there is one lesson that emerges from every jurisdiction we have examined, it is this: the distinction between enterprise and consumer AI platforms is not a minor technical detail. It is the single most important factor in determining whether privilege survives.

Consumer AI tools (the public versions of ChatGPT, Claude, Gemini, and their competitors) operate on terms of service that prioritize the platform's interests over the user's privacy. They may retain user inputs. They may use those inputs for model training. They may share data with third-party service providers. And their disclaimers explicitly state that they do not provide legal advice and cannot guarantee confidentiality.

Enterprise AI platforms, by contrast, are designed for organizational use and typically offer contractual guarantees regarding data handling. Enterprise agreements commonly include provisions stating that customer data will not be used for model training, that data will be stored in specified geographic regions, that access will be limited to authorized personnel, and that the platform will comply with specified security standards.

These contractual protections matter enormously for privilege analysis. When a lawyer uses an enterprise AI platform under a contract that guarantees confidentiality, they have a much stronger argument that the communication was made in confidence. The platform is functioning more like a secure research tool within the firm's infrastructure than like a public forum accessible to anyone with an internet connection.

The early court decisions, including Heppner, support this distinction. Judge Rakoff's analysis focused heavily on the fact that Heppner used the public version of Claude, noting the platform's privacy policy and the absence of confidentiality guarantees. The clear implication is that the result might have been different if Heppner had used an enterprise version with contractual confidentiality protections, although the other elements of the privilege test would still need to be satisfied.

The "Shadow AI" Problem

Even firms that have carefully selected and deployed enterprise AI tools face a persistent challenge: shadow AI. This is the use of unauthorized AI tools by employees who find the approved enterprise tools too slow, too limited, or too cumbersome. A junior associate who needs a quick answer at 11 PM might bypass the firm's approved AI platform and type their question into the free version of ChatGPT. A partner traveling internationally might use a personal AI app on their phone because they cannot access the firm's enterprise platform from their mobile device.

Every instance of shadow AI use is a potential privilege breach. And the problem is widespread. According to the 2025 Clio Legal Trends Report, 79% of legal professionals used AI tools, but 44% of law firms had not yet implemented formal governance policies. That gap between usage and governance is where privilege goes to die.

Addressing shadow AI requires a multi-pronged approach: clear policies that define approved tools and prohibited alternatives, technical controls that limit access to unauthorized platforms on firm devices, training programs that help employees understand why the rules exist, and a culture that makes it easy for people to use approved tools and hard for them to use unapproved ones.
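The technical-controls prong above can be sketched in code. The following is a minimal, illustrative egress filter of the kind a firm might run on managed devices or at a network proxy: approved enterprise endpoints are allowed, known consumer AI domains are blocked, and anything unrecognized is flagged for review. The domain names and the internal hostname are assumptions for illustration only, not statements about any real firm's policy or any vendor's infrastructure.

```python
# Sketch of a domain-based egress filter for firm-managed devices.
# All domains below are illustrative assumptions, not a real allow/deny list.

APPROVED_AI_DOMAINS = {
    "ai.internal.examplefirm.com",  # hypothetical enterprise AI deployment
}

BLOCKED_AI_DOMAINS = {
    "chatgpt.com",        # consumer ChatGPT (illustrative)
    "chat.openai.com",
    "claude.ai",          # consumer Claude (illustrative)
    "gemini.google.com",  # consumer Gemini (illustrative)
}

def classify_request(host: str) -> str:
    """Return 'allow', 'block', or 'review' for an outbound request."""
    host = host.lower().strip(".")
    if host in APPROVED_AI_DOMAINS:
        return "allow"
    if host in BLOCKED_AI_DOMAINS:
        # Block, log the attempt, and redirect the user to the approved tool.
        return "block"
    # Unknown endpoints go to manual review rather than silently through.
    return "review"

print(classify_request("claude.ai"))                    # block
print(classify_request("ai.internal.examplefirm.com"))  # allow
```

The design choice worth noting is the "review" default: a pure blocklist cannot keep pace with new consumer AI tools, so unrecognized endpoints are escalated rather than permitted.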

Part VIII: Practical Frameworks for Protecting Privilege

The Five-Layer Defense Model

Based on the regulatory guidance, court decisions, and best practices emerging across jurisdictions, law firms can implement what we call the Five-Layer Defense Model to protect privilege in the age of AI.

Layer One: Platform Selection and Vendor Due Diligence. Every AI tool used in connection with client matters must be evaluated for its data handling practices, confidentiality protections, and compliance with applicable regulations. This is not a one-time exercise. It must be repeated whenever the vendor updates its terms of service, changes its data processing practices, or introduces new features. Vendor contracts must include explicit provisions regarding data confidentiality, restrictions on data use for model training, data residency requirements, breach notification obligations, and audit rights.

Layer Two: Client Consent and Communication. Following ABA Opinion 512 and its international equivalents, firms must obtain informed consent from clients before using AI tools in their matters. The consent must be genuine, not a boilerplate clause. It must describe the specific tools being used, the types of data that will be processed, the safeguards in place, and the residual risks. Firms should update their engagement letters to address AI use, but the engagement letter alone is not sufficient. There must be an actual conversation with the client.

Layer Three: Data Classification and Access Controls. Not all client data should be treated the same way. Firms should implement data classification systems that distinguish between highly sensitive privileged communications, ordinary confidential information, and non-sensitive materials. AI tool access should be calibrated to these classifications. Highly sensitive privileged materials should be processed only through the most secure, enterprise-grade AI tools with the strongest confidentiality protections. Ordinary confidential information may be processed through a wider range of approved tools. Non-sensitive materials may be processed through any approved tool.
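The classification-to-tool mapping described in Layer Three can be expressed as a simple access-control check: each approved tool carries a clearance level, and a tool may only process data at or below that level. This is a minimal sketch; the tool names and the three-tier scheme are hypothetical, mirroring the classifications in the text rather than any real product catalog.

```python
from enum import Enum

class DataClass(Enum):
    """Data sensitivity tiers, highest number = most sensitive."""
    NON_SENSITIVE = 1   # non-sensitive materials
    CONFIDENTIAL = 2    # ordinary confidential information
    PRIVILEGED = 3      # highly sensitive privileged communications

# Hypothetical tool registry: each tool is tagged with the highest
# classification it is approved to process.
TOOL_CLEARANCE = {
    "enterprise_secure_llm": DataClass.PRIVILEGED,     # illustrative names
    "approved_research_tool": DataClass.CONFIDENTIAL,
    "general_drafting_aid": DataClass.NON_SENSITIVE,
}

def may_process(tool: str, data: DataClass) -> bool:
    """A tool may only process data at or below its clearance level.

    Unknown tools are denied outright (deny-by-default).
    """
    clearance = TOOL_CLEARANCE.get(tool)
    return clearance is not None and data.value <= clearance.value
```

Deny-by-default matters here: an unregistered tool is treated as shadow AI and refused, rather than falling through to a permissive default.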

Layer Four: Usage Policies and Training. Every firm needs a comprehensive AI use policy that specifies which tools are approved for which purposes, who is authorized to use them, what types of information can and cannot be input, how outputs must be verified, and how usage must be documented. Training must be mandatory and ongoing, not a single session that employees sit through and immediately forget. The training should include concrete examples of how privilege can be waived through careless AI use, drawn from actual cases like Heppner.

Layer Five: Monitoring, Auditing, and Incident Response. Firms must implement systems to monitor AI usage, audit compliance with policies, and respond to incidents. This includes maintaining logs of which AI tools were used on which matters, conducting periodic reviews to identify unauthorized AI use, and having a clear incident response plan for situations where privileged information may have been compromised. The plan should address not just the technical response (e.g., requesting data deletion from the AI provider) but also the legal response (e.g., assessing whether a privilege waiver has occurred and whether disclosure obligations are triggered).
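The logging requirement in Layer Five implies a concrete record schema: which tool, which matter, which user, what classification of data, and whether a lawyer verified the output. The sketch below shows one way to structure such a record as an append-only JSON-lines audit log. The field names and the sample matter number are illustrative assumptions, not a standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One entry per AI interaction on a client matter (illustrative schema)."""
    timestamp: str
    matter_id: str
    user: str
    tool: str
    data_classification: str
    purpose: str
    output_verified: bool  # did a lawyer review the output before use?

def log_usage(record: AIUsageRecord) -> str:
    """Serialize a record as one JSON line for an append-only audit log."""
    return json.dumps(asdict(record), sort_keys=True)

entry = AIUsageRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    matter_id="M-2026-0042",  # hypothetical matter number
    user="associate.j.doe",
    tool="enterprise_secure_llm",
    data_classification="confidential",
    purpose="first-draft summary of deposition transcript",
    output_verified=True,
)
print(log_usage(entry))
```

A structured log like this serves both prongs of the incident response plan: the technical response (identifying exactly what was sent to which provider) and the legal response (reconstructing the record needed to assess whether a waiver occurred).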

The Engagement Letter Revolution

Engagement letters are being rewritten across the profession to address AI use. The best examples include clear descriptions of the AI tools the firm uses, explanations of how client data is processed and protected, disclosures of any risks associated with AI use, provisions for obtaining informed consent, mechanisms for clients to opt out of AI processing for their matters, and commitments regarding fee adjustments when AI reduces the time required for a task.

Some firms are going further, creating separate AI disclosure agreements that supplement the engagement letter with more detailed information about the firm's AI governance framework, data handling practices, and quality assurance processes.

The Governance Board Model

The most forward-thinking firms have established AI governance boards: cross-functional bodies that bring together partners, technology leaders, ethics and compliance professionals, and risk managers to oversee all aspects of AI adoption and use. According to recent industry data, 80% of AmLaw 100 firms have now established such boards, signaling a shift from experimental AI adoption to enterprise-wide governance.

An effective AI governance board typically handles vendor evaluation and approval, policy development and enforcement, training program design and implementation, incident review and response, regulatory monitoring and compliance, and strategic planning for future AI adoption. The board should have real authority, including the power to approve or reject AI tools, mandate compliance with policies, and impose consequences for violations.

Part IX: Looking Ahead: The Privilege Landscape in 2027 and Beyond

Predictions for the Next Wave of Cases

The Heppner decision answered some questions but raised many more. As AI adoption accelerates, courts will increasingly face scenarios that go beyond Heppner's relatively straightforward facts. Several categories of cases are likely to emerge.

First, enterprise AI privilege challenges. What happens when a lawyer uses an enterprise AI tool, under the direction of the firm, with contractual confidentiality protections in place? The privilege argument is much stronger, but it is not ironclad. Opposing parties may argue that the AI provider's employees had access to the data, that the provider's subprocessors in foreign jurisdictions created additional risks, or that the enterprise agreement's confidentiality provisions were insufficient.

Second, AI-assisted work product disputes. When a lawyer uses AI to draft a brief, analyze case law, or develop litigation strategy, the resulting work product reflects a blend of human and machine analysis. Courts will need to develop frameworks for evaluating the extent to which AI-assisted work product reflects the attorney's mental impressions versus the AI's pattern matching. The outcome of these cases could significantly affect how firms document and present AI-assisted work.

Third, cross-border privilege conflicts. As firms use AI tools that process data across multiple jurisdictions, privilege questions will increasingly involve conflicts of law. A document created in the US using an AI tool whose servers are in the EU, for use in litigation in Singapore, presents a three-way privilege analysis. The law governing privilege may differ in each jurisdiction, and the AI tool's data processing practices may satisfy confidentiality requirements in one jurisdiction but not another.

Fourth, privilege in internal investigations. Companies routinely use privilege to protect the findings of internal investigations. As AI tools are increasingly used to review documents, analyze communications, and identify potential misconduct in these investigations, questions will arise about whether AI-generated investigation findings are privileged and whether the use of AI tools in an investigation affects the privilege analysis for human-generated investigation materials.

The AI Literacy Imperative

Across every jurisdiction we have examined, one theme is constant: the legal profession needs AI literacy. Not every lawyer needs to understand transformer architectures or reinforcement learning from human feedback. But every lawyer needs to understand, at a functional level, how AI tools process information, what the risks are, and how to mitigate them.

The EU AI Act's Article 4 literacy requirement, enforceable since February 2025, makes this explicit: providers and deployers of AI systems must take measures to ensure that their staff have a sufficient level of AI literacy. While this requirement applies broadly, it has particular force in the legal profession, where the consequences of misusing AI can include privilege waiver, malpractice liability, regulatory sanctions, and harm to clients.

Bar associations around the world are beginning to integrate AI literacy into continuing legal education requirements. Some jurisdictions may eventually require specific AI competency certifications for lawyers who use AI tools in their practice. Whether these requirements emerge through regulation or market demand, the direction of travel is clear.

Part X: Conclusion: The Privilege Is Not Dead, But It Must Be Earned

Attorney-client privilege has survived for centuries because it serves a purpose that transcends any particular technology. The need for clients to communicate freely with their lawyers is as pressing in 2026 as it was in 1826. AI does not eliminate that need. If anything, as legal matters become more complex and data-intensive, the need for confidential attorney-client communication grows stronger.

But privilege in the AI age is no longer something that lawyers can take for granted. It must be actively constructed, maintained, and defended. The three tumblers of the privilege lock (communication with counsel, confidentiality, and the purpose of seeking legal advice) still apply. But satisfying them requires deliberate choices about which AI platforms to use, how to configure them, what data to input, and how to document the entire process.

The firms that thrive will be those that treat AI governance not as a compliance burden but as a competitive advantage. Clients will increasingly demand assurance that their lawyers are using AI responsibly, that their privileged communications are genuinely protected, and that the efficiency gains from AI are not coming at the cost of confidentiality.

Judge Rakoff's decision in Heppner was not the end of the story. It was the opening chapter. The next chapters will be written by courts in London, Brussels, Singapore, Hong Kong, and a hundred other jurisdictions around the world. They will be written by regulators, by bar associations, by law firms, and by the technology companies that build the AI tools lawyers use every day.

The lawyers who have read this article already have a head start. But a head start is only valuable if you keep moving. The landscape is shifting beneath your feet. The question is not whether you will adapt, but how quickly.

References and Citations

1. United States v. Heppner, No. 25-cr-XXX (S.D.N.Y. Feb. 10, 2026) (Rakoff, J.).
2. ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512: Generative Artificial Intelligence Tools (July 29, 2024).
3. New York City Bar Association, Formal Opinion 2025-6: Ethical Issues Affecting Use of AI to Record, Transcribe, and Summarize Conversations with Clients (2025).
4. Texas State Bar Professional Ethics Committee, Opinion No. 705: Lawyers' Use of Generative Artificial Intelligence (Feb. 2025).
5. Oregon State Bar, Formal Opinion No. 2025-205: Artificial Intelligence Tools (2025).
6. SRA, Compliance Tips for Solicitors Regarding the Use of AI and Technology (2025).
7. Gloriose Ndaryiyumvire v Birmingham City University & Others (wasted costs order, AI-fabricated case citations).
8. United States v. Kovel, 296 F.2d 918 (2d Cir. 1961).
9. Regulation (EU) 2024/1689 of the European Parliament and of the Council (EU AI Act).
10. Hong Kong PCPD, Checklist on Guidelines for the Use of Generative AI by Employees (March 2025).
11. Singapore Ministry of Law, Guidelines on Use of Generative AI Tools for Lawyers (forthcoming 2026).
12. Japan, Act on Promotion of Research and Development, and Utilization of AI-related Technology (May 2025).
13. Clio, 2025 Legal Trends Report.
14. Gibson Dunn, AI Privilege Waivers: SDNY Rules Against Privilege Protection for Consumer AI Outputs (2026).
15. Morgan Lewis, When AI Meets Privilege: Early Court Decisions (Feb. 2026).
16. Arnold & Porter, The Attorney-Client-Machine Relationship: When AI Use Jeopardizes Privilege (Feb. 2026).
17. Baker McKenzie, Global Attorney-Client Privilege Guide: Artificial Intelligence (2025-2026).
18. International Bar Association, Digital Strangers in Litigation: Does Sharing with AI Breach Privilege? (2025).
19. Law Society of England and Wales, AI Action Plan for Justice (2025).
20. Norton Rose Fulbright, Privilege Challenges in the Era of Generative AI (March 2026).
Published March 24, 2026