The 10 Most Consequential Legal Rulings on AI in 2025-2026: What Every Lawyer Must Know

By Global Law Lists

Introduction: The Legal Profession Meets Its Technological Reckoning



The legal profession has long prided itself on precedent, on the careful, deliberate process of building frameworks from the decisions that came before. But between 2025 and early 2026, courts around the world have been forced to confront a category of questions for which there is almost no precedent at all. Can an artificial intelligence system be named as the inventor on a patent? Who owns the copyright to an image that a human being prompted but a machine generated? What happens when a lawyer submits fabricated case citations that were hallucinated by a large language model? And when an AI company trains its systems on millions of copyrighted works without permission, does that constitute fair use or wholesale theft?

These are not hypothetical questions posed in law school seminars. They are the live controversies that have produced binding judicial opinions, regulatory frameworks, and enforcement actions across multiple jurisdictions in the span of roughly eighteen months. The velocity of these developments is without modern parallel. In the time it typically takes for a single piece of complex litigation to move from filing to trial, entire bodies of AI jurisprudence have begun to crystallize in the United States, the European Union, the United Kingdom, Germany, China, and beyond.

For practicing lawyers, the implications are immediate and practical. Compliance obligations under the European Union's AI Act are already in force, with the full suite of high-risk system requirements arriving in August 2026. Federal courts in the United States have imposed sanctions on attorneys who failed to verify AI-generated research, and those sanctions are growing steeper with each new case. Patent and copyright offices around the world have drawn firm lines around the question of AI authorship, while at least one jurisdiction, China, has moved in a strikingly different direction.

This article examines the ten most consequential legal rulings and regulatory developments affecting artificial intelligence between January 2025 and March 2026. Each section provides factual analysis of the decision itself, places it within the broader trajectory of AI governance, and identifies the specific compliance obligations and strategic considerations that flow from it. The goal is not merely to catalog these developments but to help legal professionals understand what they mean in practice, right now, for the advice they give and the work they do.

The rulings covered here span intellectual property, professional responsibility, data protection, and regulatory compliance. They involve courts in New York, Delaware, Munich, London, and Beijing. They implicate individual solo practitioners who submitted fabricated citations and multinational technology companies whose business models depend on the legality of training AI systems on copyrighted material. Taken together, they represent the first comprehensive wave of judicial and regulatory engagement with artificial intelligence, and they will shape the legal landscape for years to come.

1. Mata v. Avianca and the Expanding Sanctions Regime for AI Hallucinations



The Original Case and Its Aftermath



The case that launched a thousand standing orders began with a routine personal injury claim. In 2022, Roberto Mata filed suit against Avianca Airlines in the United States District Court for the Southern District of New York, alleging that he was injured when a metal serving cart struck his knee during an international flight. The legal issues were straightforward. The consequences were not.

Attorney Steven Schwartz of the firm Levidow, Levidow & Oberman, faced with a motion to dismiss grounded in the Montreal Convention's two-year statute of limitations, turned to ChatGPT for legal research rather than traditional databases like Westlaw or LexisNexis. The chatbot produced six case citations that appeared plausible on their face, complete with realistic case names, docket numbers, and summaries of judicial reasoning. All six cases were entirely fabricated.

When opposing counsel challenged the citations, Schwartz compounded the problem by asking ChatGPT to confirm the cases were real. The AI system obligingly affirmed its own fabrications. In June 2023, Judge P. Kevin Castel of the Southern District of New York imposed sanctions, ordering a $5,000 fine and finding that the attorneys had acted in bad faith. Judge Castel described one of the fabricated legal analyses as gibberish and held that the conduct warranted sanctions under Federal Rule of Civil Procedure 11.

The 2025-2026 Wave of Escalating Sanctions



If the Mata decision was the warning shot, 2025 was the year courts began firing with live ammunition. The number of documented cases involving AI-hallucinated citations has exploded. Researcher Damien Charlotin, a Paris-based law lecturer, maintains a database tracking legal decisions addressing hallucinated content. By early 2026, that database contained 712 decisions, with roughly 90 percent of them issued in 2025 alone.

Several cases from this period stand out for the severity of their sanctions and the new legal principles they establish.

In Johnson v. Dunn, decided in July 2025 by the United States District Court for the Northern District of Alabama, a well-regarded law firm submitted a motion containing hallucinated legal citations. Instead of imposing monetary sanctions, which the court suggested were proving ineffective as a deterrent, Judge Maze disqualified the offending attorneys from representing the client for the remainder of the case and directed the court clerk to notify bar regulators in every state where the responsible attorneys held licenses. The decision marked a significant escalation, sending the message that monetary fines alone could not address the problem.

In August 2025, the Eastern District of Louisiana sanctioned attorney Hamilton for violating Rule 11(b)(2) by citing fabricated, AI-generated cases without verification. Hamilton was ordered to pay $1,000 personally and complete one hour of continuing legal education specifically focused on generative AI. The court also referred the matter to the Disciplinary Committee.

The California Court of Appeal added another dimension in September 2025 in Noland v. Land of the Free, imposing a $10,000 sanction on an attorney who filed briefs with fake citations. But the court also declined to award attorneys' fees to opposing counsel because of their failure to detect the fabrications. This ruling may represent the first judicial suggestion that lawyers have a professional duty not only to verify their own citations but also to identify fabricated authorities in their opponents' filings.

In February 2025, Wadsworth v. Walmart in the District of Wyoming revealed that hallucination problems are not limited to consumer AI tools. The fabricated citations in that case were generated by an AI system trained on the law firm's own proprietary database of case materials, undermining the widespread assumption that domain-specific training prevents hallucinations. The court revoked the drafting attorney's pro hac vice admission and imposed monetary fines on supervisory attorneys as well.

The most recent significant development came in March 2026, when the Sixth Circuit Court of Appeals imposed steep sanctions on two lawyers for filing briefs containing fabricated citations and misrepresentations. The court ordered the attorneys to reimburse appellees for their full reasonable attorney fees across all three consolidated appeals, marking one of the highest financial penalties yet imposed in an AI hallucination case.

The Institutional Response: Standing Orders and Court Rules



As of early 2026, more than 40 federal district courts have adopted standing orders or local rules addressing the use of artificial intelligence in legal filings. Judge Brantley Starr of the Northern District of Texas was among the first, requiring all attorneys to file a certificate confirming that any AI-generated text was verified for accuracy by a human being. Many courts have followed with similar requirements.

The American Bar Association issued its first formal ethics opinion on generative AI, Formal Opinion 512, in July 2024, a 15-page document outlining how the Rules of Professional Conduct apply to AI tools. The opinion makes clear that lawyers cannot reasonably rely on the accuracy, completeness, or validity of AI-generated content without independent verification.

Despite these measures, the problem persists. According to the 2025 ABA TechReport, 79 percent of lawyers report using AI tools in some capacity. The gap between adoption rates and verification practices remains wide, and courts are signaling that the era of leniency is over. Looking ahead, legal commentators predict that courts will adopt mandatory hyperlink rules requiring every cited case, statute, or regulation to link to a verified legal database, effectively making unverifiable citations procedurally deficient on their face.

2. Thaler v. Vidal and the Supreme Court's Refusal to Recognize AI Inventorship



The Federal Circuit's Statutory Interpretation



Stephen Thaler is perhaps the most persistent litigant in the short history of AI intellectual property law. His quest to have an artificial intelligence system named as the inventor on a patent application has taken him through patent offices and courts on multiple continents. The AI system in question, known as DABUS (Device for the Autonomous Bootstrapping of Unified Science), was listed as the sole inventor on two patent applications filed with the United States Patent and Trademark Office.

The USPTO determined that the applications were incomplete because they did not list a human inventor. Thaler challenged that determination in federal court. In 2022, the Federal Circuit held in Thaler v. Vidal that the Patent Act requires an inventor to be a natural person and that an AI system cannot be listed as the inventor on a patent. The court's reasoning was grounded in statutory text. The Patent Act uses the word individual to describe an inventor, and the Supreme Court has previously held in Mohamad v. Palestinian Authority that when used as a noun, individual ordinarily means a human being. The Patent Act further reinforces this reading by using personal pronouns such as himself and herself to refer to an individual. It does not use itself.

The Supreme Court denied certiorari in April 2023, and because the Federal Circuit has exclusive jurisdiction over appeals in patent cases, the holding became settled law throughout the United States.

The 2025 USPTO Guidance Shift



While the Thaler decision settled the question of whether AI alone can be an inventor, it left open the more practically important question of how much human contribution is necessary when AI plays a significant role in the inventive process. The USPTO has addressed this question through a series of evolving guidance documents.

In February 2024, under Director Kathi Vidal, the USPTO published guidance establishing that patents could be obtained for AI-assisted inventions so long as a natural person made a significant contribution to the invention. This framework required patent applicants to demonstrate meaningful human involvement in the conception of each claimed element.

In November 2025, newly appointed USPTO Director John Squires issued updated guidance that took a markedly different approach. The new policy established what commentators have described as a do not ask, do not tell framework for AI-assisted inventions. Under this approach, the USPTO would create a presumption of human inventorship so long as any natural person is willing to sign the inventor's oath. The practical effect is to lower the evidentiary burden on applicants, making it significantly easier to obtain patents for inventions in which AI played a substantial role, provided that a human being is willing to attest to their inventorship.

This policy shift has drawn criticism from IP scholars who argue that it effectively circumvents the holding of Thaler by allowing AI-assisted inventions to receive patent protection without meaningful inquiry into the extent of human contribution. Supporters counter that the previous framework was unworkable in practice because it required patent examiners to make subjective assessments about the degree of human involvement in increasingly complex AI-assisted invention processes.

The Parallel Copyright Battle: Thaler v. Perlmutter



Thaler's campaign extended to copyright law with similar results. In Thaler v. Perlmutter, he sought copyright protection for a piece of visual art that he claimed DABUS autonomously created. The Copyright Office refused registration, and the district court upheld that refusal. The D.C. Circuit affirmed in 2025, and on March 2, 2026, the Supreme Court denied certiorari, effectively ending Thaler's quest.

The common thread across both patent and copyright law is now clear in the United States: only human beings can be inventors or authors under existing law. AI, regardless of its sophistication, is treated as a tool rather than a creator. Whether Congress will eventually amend these statutes to account for increasingly autonomous AI systems remains an open question, but for now, the judiciary has spoken with unusual unanimity.

3. Thomson Reuters v. ROSS Intelligence: The First Major Fair Use Ruling on AI Training Data



The Dispute



On February 11, 2025, Judge Bibas of the Third Circuit, sitting by designation in the United States District Court for the District of Delaware, issued the first major federal court ruling on whether using copyrighted material to train an AI system constitutes fair use. The decision in Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc. sent a clear signal that courts will not automatically accept fair use as a blanket defense for AI training practices.

The underlying facts were straightforward. Thomson Reuters owns Westlaw, the dominant legal research platform, including its proprietary headnote system, which provides editorial summaries and classifications of judicial opinions. ROSS Intelligence, a startup developing a competing AI-powered legal research tool, sought to license Westlaw's headnotes for training purposes. Thomson Reuters declined. ROSS then engaged a third-party company, LegalEase Solutions, to create bulk memoranda that substantially incorporated Westlaw headnotes, and used those memoranda to train its AI search system.

The Fair Use Analysis



Judge Bibas conducted a detailed four-factor fair use analysis. On the first factor, purpose and character of the use, the court found that ROSS's use was commercial and not transformative. The court reasoned that using copyrighted headnotes as AI training data does not transform the works into something new; rather, it uses them for their original purpose of summarizing and organizing legal principles. The court distinguished this from prior cases involving intermediate copying of software code, where the copying was necessary to access uncopyrightable elements.

The second factor, the nature of the copyrighted work, favored ROSS because Westlaw headnotes involve only minimal creativity, though the court found that they clear the low threshold required for copyright protection.

The third factor, amount and substantiality of the portion used, also favored ROSS because the AI system's output to end users did not include Thomson Reuters headnotes directly.

The fourth factor, effect on the market, favored Thomson Reuters decisively. ROSS was developing a product intended to compete directly with Westlaw, and its use of the headnotes harmed both the primary market for legal research platforms and the potential licensing market for AI training data.

Weighing these factors together, the court rejected ROSS's fair use defense as a matter of law and granted Thomson Reuters partial summary judgment on the question of direct copyright infringement.

The Third Circuit Appeal and Broader Implications



In May 2025, the district court certified two questions for interlocutory appeal to the Third Circuit: whether Westlaw headnotes are sufficiently original to merit copyright protection, and whether ROSS's use constitutes fair use. This appeal will make the Third Circuit the first federal appellate court to address the application of fair use principles to AI training, setting a precedent that will influence dozens of pending cases.

The ruling is significant for what it does and does not decide. It does not hold that all use of copyrighted material for AI training constitutes infringement. The court was careful to note that ROSS's system was not generative AI and that its ruling may not apply to cases involving different types of AI systems or different training methodologies. But it does establish that the fair use defense is not automatically available simply because copyrighted material is used at an intermediate stage of AI development, particularly when the resulting product competes with the copyright holder's market.

4. The New York Times v. OpenAI: Discovery Battles and the Road to Trial



The Highest-Profile AI Copyright Case in the World



When The New York Times filed suit against OpenAI and Microsoft in December 2023, it launched what has become the most closely watched AI copyright case in the world. The Times alleged that OpenAI trained its GPT models on millions of Times articles without permission, and that ChatGPT could reproduce substantial portions of that copyrighted content in its outputs.

In March 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI's motion to dismiss, allowing the case's central copyright infringement claims to proceed toward trial. The court dismissed the Times' Digital Millennium Copyright Act and unfair competition claims but preserved the direct and contributory infringement claims and a trademark dilution claim. The ruling meant that the fundamental question, whether training AI on copyrighted news articles constitutes fair use, would be decided at trial rather than on the pleadings.

The Battle Over 20 Million ChatGPT Logs



The most consequential developments in the case during 2025 and early 2026 involved discovery. In May 2025, Magistrate Judge Ona T. Wang issued a sweeping preservation order directing OpenAI to preserve and segregate all output log data that would otherwise be deleted, regardless of whether deletion was requested by users or required by privacy regulations.

The Times initially demanded access to 1.4 billion private ChatGPT conversations. After negotiation, the parties agreed to a sample of 20 million logs. But OpenAI then reversed course, proposing instead to run keyword searches and produce only conversations that specifically implicated the plaintiffs' works. In November 2025, Judge Wang rejected that approach, and in January 2026, Judge Stein affirmed her ruling in full, ordering OpenAI to produce the entire 20 million-log sample.

The court's reasoning on the privacy question was notable. OpenAI had argued that producing user conversations would violate privacy rights. Judge Stein found that ChatGPT users voluntarily submitted their communications to OpenAI and therefore could not claim the same privacy protections as subjects of government wiretaps. This ruling has significant implications not only for the Times case but for discovery practice in AI litigation more broadly, establishing that user interactions with AI chatbots may be discoverable in copyright disputes.

The Fair Use Question Remains Open



No trial date has been set, and summary judgment motions on the fair use question are not expected before summer 2026 at the earliest. Across the broader landscape of AI copyright litigation, three federal judges have ruled on fair use to date, with two finding in favor of AI companies and one against. The legal landscape remains unsettled, and the Times case, given the prominence of the parties and the volume of evidence, is likely to produce the most detailed judicial analysis of fair use in the AI training context when it eventually reaches resolution.

5. GEMA v. OpenAI: Germany Delivers Europe's First AI Copyright Ruling



The Munich Regional Court Decision



On November 11, 2025, the Munich I Regional Court (Landgericht München I) issued the first European court decision directly addressing whether training AI models on copyrighted works constitutes copyright infringement. The case, GEMA v. OpenAI (Case No. 42 O 14139/24), involved Germany's music collecting society, GEMA, which alleged that OpenAI had used protected song lyrics to train its GPT-4 and GPT-4o models without obtaining a license.

The dispute centered on the lyrics of nine well-known German songs. GEMA demonstrated that ChatGPT could reproduce these lyrics, in some cases nearly verbatim, when prompted by users. The question before the court was whether this reproduction constituted copyright infringement under German law.

The Court's Analysis



The court's reasoning rested on several key findings. First, it accepted GEMA's argument that copyrighted material can become embedded in AI model weights during training and remain retrievable, a phenomenon known in the technical literature as memorization. The court held that it is irrelevant that the content exists in the model only as probability values distributed across parameters. What matters is that the model is capable of reproducing the content in a recognizable form.
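The memorization phenomenon the court relied on can be illustrated with a toy model. Even a system that stores nothing but probability values can reproduce a training string verbatim when those probabilities are sharply peaked. The sketch below is a deliberately minimal character-level bigram model, not a description of how GPT-4 actually works, and the training string is a stand-in:

```python
from collections import defaultdict, Counter

def train_bigram(text):
    # Count which character follows each character in the training text.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    # Normalize the counts into probabilities.
    # The "model" stores only these numbers, not the text itself.
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def generate(model, start, max_len=50):
    # Greedy decoding: always emit the most probable next character.
    out = start
    while out[-1] in model and len(out) < max_len:
        probs = model[out[-1]]
        out += max(probs, key=probs.get)
    return out

lyric = "copyright law"        # stand-in for a memorized training string
model = train_bigram(lyric)
print(generate(model, "c"))    # reproduces "copyright law" verbatim
```

In a real large language model the same effect arises at vastly greater scale: when a sequence is frequent or distinctive enough in the training data, the learned probabilities along that sequence become sharply peaked, and ordinary decoding can emit it near-verbatim, which is the behavior GEMA demonstrated to the court.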

Second, the court found that when ChatGPT outputs copyrighted lyrics in response to user prompts, this constitutes public communication under German copyright law because the chatbot makes the content accessible to an unlimited public. OpenAI, as the operator of the system, bears direct liability for these outputs, not the end users whose prompts trigger the reproduction.

Third, the court rejected OpenAI's attempt to invoke the text and data mining exception under European copyright law. The court reasoned that this exception applies only to the initial analytical phase of AI model training and does not extend to the memorization and reproduction of entire copyrighted works.

Finally, the court rejected OpenAI's argument that it should benefit from an exemption available to nonprofit research institutes. While OpenAI's parent entity was originally organized as a nonprofit, the court found that OpenAI would need to demonstrate that it reinvests 100 percent of its profits in research and development or operates under a governmental mandate in the public interest, neither of which it could show.

Remedies and Next Steps



The court issued an injunction requiring OpenAI to cease storing unlicensed German song lyrics on infrastructure located in Germany. It also ordered publication of the judgment in a local newspaper, a traditional remedy in German copyright law that carries symbolic weight. The court denied OpenAI a six-month grace period for compliance, finding that the company had acted with at least negligence.

OpenAI has announced plans to appeal. The case may eventually reach the Munich Higher Regional Court, and a reference to the Court of Justice of the European Union remains possible, which could produce the first CJEU ruling on AI and copyright. A related case, GEMA v. Suno (an AI music generator), is expected to produce a ruling in June 2026.

The decision has drawn both praise and criticism. Supporters argue it correctly applies existing copyright principles to new technology. Critics contend that the court conflated targeted regurgitation under engineered conditions with the default behavior of AI systems, and that the ruling mischaracterizes how machine learning works at a technical level. Regardless of the merits of these arguments, the decision provides the clearest statement yet from a European court on the copyright implications of AI training.

6. Getty Images v. Stability AI: The UK's First AI Copyright Judgment



The High Court Decision



On November 4, 2025, the UK High Court delivered its judgment in Getty Images (US) Inc and Others v. Stability AI Limited, the first UK court decision to address copyright infringement arising from generative AI model training. Getty Images alleged that Stability AI had scraped millions of Getty photographs from the internet to train Stable Diffusion, its text-to-image generation model, without authorization. Reports indicated that Stable Diffusion was trained on more than 12 million Getty images.

The result was mixed, and in important respects, it disappointed copyright holders. Justice Joanna Smith rejected Getty's secondary infringement claim, ruling that an AI model does not become an infringing copy of the works it was trained on merely because those works shaped its weights during training. Under the UK Copyright, Designs and Patents Act 1988, an AI model that does not retain training data in a directly extractable form is not itself an infringing copy.

Getty's primary infringement claim was never adjudicated because Getty offered no evidence that Stability AI's training and development occurred within the United Kingdom, a necessary element of the claim. The court did find limited and historical infringement related to the reproduction of Getty's trademarks in certain generated outputs.

The Appeal and Legislative Context



Getty was granted permission to appeal in December 2025, and the appellate proceedings will provide an opportunity for a higher court to address the questions the High Court left unanswered, including whether the act of training itself constitutes reproduction under UK copyright law.

The decision arrived during a period of intense legislative activity in the United Kingdom. The Data (Use and Access) Act received Royal Assent in June 2025, but Parliament declined to include the controversial provisions on AI and copyrighted works that had been debated throughout the legislative process. Instead, the Act required the government to publish a report on copyright and AI by March 18, 2026, along with an economic impact assessment.

The government published that report on schedule in March 2026. Among its key conclusions were recommendations to remove copyright protection for computer-generated works under Section 9(3) of the Copyright, Designs and Patents Act, and to create a new text and data mining exception with opt-out provisions for rights holders and transparency requirements for AI developers. New draft legislation could reach Parliament by late 2026, potentially reshaping the legal framework for AI and copyright in the UK.

7. The Beijing Internet Court and China's Divergent Approach to AI Copyright



Li v. Liu: Granting Copyright to AI-Generated Images



While courts in the United States, the United Kingdom, and Germany have largely focused on the rights of copyright holders whose works are used to train AI systems, the Beijing Internet Court has been addressing a different question: whether the outputs of AI systems can themselves receive copyright protection. China's answer, in sharp contrast to the United States, has been a qualified yes.

In November 2023, the Beijing Internet Court ruled in Li v. Liu that an AI-generated image is copyrightable and that the person who prompted the AI to create the image is entitled to authorship under Chinese Copyright Law. The plaintiff had generated an image of a woman using Stable Diffusion, published it on the social media platform Xiaohongshu, and discovered that the defendant had used the same image to illustrate an article on a different website without permission.

The court ruled out the possibility that the AI model itself could be an author because Chinese Copyright Law restricts authorship to natural persons or legal entities. It also declined to attribute authorship to the model's designers. Instead, the court focused on the plaintiff's deliberate selection and arrangement of multiple prompts and attributed authorship based on this direct intellectual contribution. The court found that the resulting image met the originality requirement of Chinese copyright law because it reflected the plaintiff's original intellectual investment.

This approach directly contradicts the position taken by the United States Copyright Office, which has denied registration to AI-generated images in cases like Zarya of the Dawn on the grounds that sufficient human authorship was not demonstrated.

The September 2025 Refinements



In September 2025, the Beijing Internet Court issued two significant decisions that refined and, in some respects, tightened its approach. In a case involving a content creator identified as Zhou and an unnamed Beijing technology company, the court upheld the principle that copyright can exist in AI-generated images but imposed stricter evidentiary requirements. The party claiming copyright must demonstrate creative effort by documenting their creative thinking, the specific prompts they used, and the process of selecting and modifying the generated content. This documentation must be supported by evidence.

The court ruled against the plaintiff in this case, finding insufficient evidence of creative input, and the decision was upheld on appeal. The practical implication is significant: the Beijing Internet Court recommended that AI platform developers implement features that automatically preserve generation logs, prompts, and iterative processes so that users can meet the evidentiary burden if they later need to assert copyright.

The court also published a set of eight model AI cases in September 2025, including the first Chinese case addressing personality rights infringement through AI-generated content. In one model case, a defendant used AI to transform a plaintiff's photograph into an anime-style image with revealing clothing and posted it in a group chat where members made vulgar comments. The court held that the defendant had infringed both the plaintiff's image rights and reputation rights.

In 2026, the Beijing Internet Court released additional model cases addressing virtual human figures, holding that such figures created by production teams with unique aesthetic choices meet originality requirements for artistic works. These cases reflect the court's growing role as a specialized forum for AI-related disputes, with the volume of such cases increasing year over year.

8. The EU AI Act: From Theory to Enforcement



The Implementation Timeline



The European Union's Artificial Intelligence Act entered into force on August 1, 2024, establishing the world's first comprehensive regulatory framework for AI systems. The Act follows a phased implementation schedule, with different categories of obligations taking effect at different times. Rules prohibiting certain AI practices, such as social scoring and manipulative AI systems, along with AI literacy requirements, became applicable on February 2, 2025. Obligations for providers of general-purpose AI models, including transparency requirements and systemic risk assessments, took effect on August 2, 2025. The full suite of requirements for high-risk AI systems, including conformity assessments, technical documentation, CE marking, and EU database registration, will become applicable on August 2, 2026.

The Penalty Structure



The AI Act's penalty provisions are among the most aggressive in the history of technology regulation, exceeding even those established under the General Data Protection Regulation. Violations involving prohibited AI practices can trigger fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher. Other violations can result in fines of up to 15 million euros or 3 percent of global annual turnover. Supplying incorrect, incomplete, or misleading information to public authorities carries fines of up to 7.5 million euros or 1 percent of turnover.
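The tiered "whichever is higher" structure can be made concrete with a short calculation. In the sketch below, the function and tier names are our own shorthand for illustration, but the amounts and percentages are those summarized above:

```python
def aia_fine_cap(violation: str, global_turnover_eur: int) -> int:
    """Maximum administrative fine under the EU AI Act's tiered penalties.

    Each tier caps fines at the HIGHER of a fixed amount and a
    percentage of worldwide annual turnover.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 7),  # banned AI practices
        "other_violation":     (15_000_000, 3),  # most other obligations
        "misleading_info":     (7_500_000,  1),  # false info to authorities
    }
    fixed_eur, pct = tiers[violation]
    return max(fixed_eur, global_turnover_eur * pct // 100)

# A provider with 2 billion EUR global turnover using a prohibited practice:
print(aia_fine_cap("prohibited_practice", 2_000_000_000))  # 140000000
# A smaller provider (100M EUR turnover) breaching another obligation,
# where the fixed floor exceeds the percentage:
print(aia_fine_cap("other_violation", 100_000_000))        # 15000000
```

The turnover-linked cap is what makes the regime bite for large technology companies: for any provider with global turnover above 500 million euros, the percentage, not the fixed amount, sets the ceiling for prohibited-practice violations.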

Enforcement is divided between the European AI Office, which has exclusive jurisdiction over general-purpose AI model provisions, and national market surveillance authorities appointed by each member state. Each member state is required to designate at least one market surveillance authority and one notifying authority to monitor AI systems and certify conformity assessment bodies.

Early National Implementation



Finland became the first EU member state to establish full AI Act enforcement powers when its national supervision laws took effect on January 1, 2026. The Finnish Transport and Communications Agency became the first active national enforcer under the Act. Italy moved even earlier with its own national implementation, enacting Law No. 132/2025 in October 2025, which established fines of up to 774,685 euros for certain violations and introduced criminal penalties for the unlawful dissemination of AI-generated deepfakes, including imprisonment of one to five years.

The European Commission has made clear that the implementation timeline will not be delayed or extended, despite calls from some industry groups for transition periods. Organizations that have not yet begun their compliance programs face an increasingly compressed timeline, as conformity assessments alone typically require six to twelve months.

The Prohibited Practices Already in Force



Since February 2025, the following AI practices have been banned throughout the European Union: AI systems that deploy subliminal, manipulative, or deceptive techniques to distort behavior; systems that exploit vulnerabilities related to age, disability, or socioeconomic circumstances; social scoring systems that evaluate individuals based on social behavior or personal characteristics; predictive policing based solely on profiling; untargeted scraping of facial images from the internet or CCTV to build facial recognition databases; emotion recognition in workplaces and educational institutions; biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation; and real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.

9. Emerging AI Jurisprudence in Australia, Canada, and India



Australia: Voluntary Standards and a Coming Regulatory Framework



Australia has taken a measured approach to AI regulation, relying primarily on voluntary standards and existing regulatory frameworks rather than comprehensive standalone legislation. In August 2024, the Australian Department of Industry, Science and Resources released the Voluntary AI Safety Standard, which was updated and simplified in October 2025 with the publication of the Guidance for AI Adoption, outlining six essential practices for safe and responsible AI governance.

In December 2025, Australia published its National AI Plan, setting out the government's strategy for building an AI-enabled economy. The plan focuses on capabilities and opportunities rather than restrictive regulation. The government also announced the establishment of an AI Safety Institute, which became operational in early 2026.

Australia currently has no AI-specific requirements in its data protection laws, though new requirements are scheduled to take effect in December 2026. The government has rejected calls for immediate AI-specific workplace regulation, but the trajectory suggests that binding requirements will follow the current period of voluntary standards development.

The absence of comprehensive AI legislation does not mean that AI use is unregulated in Australia. Existing laws governing consumer protection, anti-discrimination, privacy, and professional responsibility all apply to AI-assisted activities. The Australian Competition and Consumer Commission has indicated that it will take enforcement action against businesses that make misleading claims about AI capabilities or that use AI in ways that cause consumer harm, even in the absence of AI-specific legislation.

Canada: AIDA's Uncertain Future



Canada has been pursuing AI governance primarily through its proposed Artificial Intelligence and Data Act (AIDA), which was part of the broader Bill C-27 digital charter legislation. However, AIDA died on the order paper when Parliament was prorogued in early 2025, and its future remains uncertain. In the interim, Canada has relied on standards development through the AI and Data Standardization Collaborative and voluntary frameworks to guide responsible AI use.

The Canadian approach reflects a tension between the desire to maintain the country's position as a leader in AI research and development and the recognition that binding regulation may be necessary to address risks. The government has established ongoing consultations with industry, civil society, and academic stakeholders, but the absence of enacted legislation means that Canada currently lacks the enforcement mechanisms available in the EU or the sector-specific regulatory authority exercised by agencies in the United States.

India: Light-Touch Governance with Existing Law Enforcement



India has adopted what officials describe as a light-touch approach to AI governance, prioritizing innovation while addressing harms through existing legal frameworks rather than comprehensive new legislation. In November 2025, the Ministry of Electronics and Information Technology released the India AI Governance Guidelines under the IndiaAI Mission. These guidelines are not enforceable law but serve as a reference framework for responsible AI adoption.

In December 2025, a Private Member's Bill, the Artificial Intelligence (Ethics and Accountability) Bill, 2025, was introduced in the Lok Sabha. The bill proposes the establishment of a statutory Ethics Committee for AI, mandatory ethical reviews for surveillance and high-risk systems, bias audits, and penalties of up to 5 crore rupees (approximately $600,000). As a Private Member's Bill, its chances of passage are uncertain, but it signals growing legislative interest in AI governance.

India's Digital Personal Data Protection Act, enacted in 2023, is moving from policy design to active enforcement, with implementing rules released in November 2025 and a phased rollout over twelve to eighteen months. Penalties under the Act can reach 2.5 billion rupees (approximately $27 million) per breach, giving regulators significant enforcement power even in the absence of AI-specific legislation.

The Reserve Bank of India and the Securities and Exchange Board of India have both issued sector-specific guidance on AI use in financial services, reflecting India's preference for regulating AI through existing institutional frameworks rather than creating new standalone agencies.

10. The Clearview AI Settlement and the Future of Biometric Privacy Litigation



The Illinois Class Action



On March 20, 2025, a federal judge in the Northern District of Illinois granted final approval to a settlement in the class action lawsuit against Clearview AI, the facial recognition startup that built its database by scraping billions of facial images from publicly available websites and social media platforms without individuals' consent. The case, brought under Illinois' Biometric Information Privacy Act (BIPA), alleged that Clearview's practices violated the Act's requirements for informed consent before collecting biometric identifiers.

The settlement's structure was unusual, reflecting the financial realities of a startup defendant that lacked the resources for a traditional cash payout. Instead of a lump sum payment, the court approved an arrangement granting class members a 23 percent equity stake in Clearview AI, valued at an estimated $51.75 million. Payment to class members would be triggered by an initial public offering or liquidation event. Alternatively, Clearview could pay 17 percent of its GAAP revenue from the date of final approval through September 30, 2027, or the settlement class could sell its equity stake.

The settlement was approved over objections from a bipartisan group of state attorneys general who argued that the equity-based structure did not adequately compensate victims. Two objectors have appealed to the Seventh Circuit Court of Appeals, and the case remains in litigation as of early 2026.

Vermont Dismissal and the Limits of State Enforcement



In December 2025, a Vermont state court dismissed a lawsuit brought by the state against Clearview AI. The court found that Clearview conducted no substantial business in Vermont and had no significant contacts with the state, illustrating the jurisdictional challenges that state regulators face when attempting to enforce privacy laws against technology companies that operate primarily through the internet.

Broader Implications for Biometric Privacy



The Clearview case represents a landmark in biometric privacy litigation. BIPA, which was enacted in 2008 and provides for penalties of up to $5,000 per willful violation, has become the primary statutory basis for facial recognition privacy claims in the United States. The statute's combination of a private right of action and statutory damages has produced substantial settlements, including a $650 million settlement with Facebook (now Meta) in 2021 over its photo-tagging feature.

Looking ahead, 2026 is expected to bring increased government enforcement of existing biometric privacy laws, even without additional federal legislation. Several states have enacted or are considering their own biometric privacy statutes, and the intersection of these laws with the growing use of AI-powered facial recognition in both commercial and law enforcement contexts will continue to generate significant litigation.

Compliance Guidance: Practical Steps for Legal Professionals in 2026



AI Use in Legal Practice



The sanctions cases discussed in this article make clear that courts are applying a strict liability standard in practice, even if not in doctrine, to AI-hallucinated citations. Every citation produced by an AI tool must be independently verified against a reliable legal database before inclusion in any filing. Law firms should establish written policies governing the use of AI tools in legal research, require attorneys to disclose the use of AI in compliance with applicable court rules, and provide regular training on the capabilities and limitations of generative AI systems.
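The verification step described above reduces to a simple rule: no citation reaches a filing unless it is confirmed in a trusted source. A minimal sketch of that gate, where `verified_reporter` is a hypothetical stand-in for a lookup against a real legal research database:

```python
# Illustrative sketch of the pre-filing citation check described above.
# `verified_reporter` stands in for a real legal database lookup; in
# practice each citation must be confirmed in Westlaw, Lexis, or an
# official reporter, not a local set.

def unverified_citations(draft_citations: list[str],
                         verified_reporter: set[str]) -> list[str]:
    """Return the citations that could not be confirmed in the trusted
    source and therefore must not appear in any filing."""
    return [c for c in draft_citations if c not in verified_reporter]
```

The point of the sketch is the workflow, not the data structure: a nonempty return value blocks the filing until a human has resolved every flagged authority.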

The emerging duty suggested by the Noland decision, that lawyers may be obligated to detect fabricated citations in opposing counsel's filings, adds another layer of professional responsibility that firms should address in their quality control procedures.

Intellectual Property Strategy



For clients developing AI-assisted inventions or creating works with AI tools, the legal landscape requires careful documentation of human contribution at every stage of the creative or inventive process. The USPTO's current guidance creates a permissive environment for AI-assisted patents, but the underlying legal requirement of human inventorship has not changed, and future guidance could impose stricter requirements. Patent applicants should maintain detailed records of human decision-making, design choices, and contributions to claimed inventions.

On the copyright side, the divergence between the United States (where AI-generated works generally cannot receive copyright protection) and China (where they can, with sufficient documentation of human creative input) creates strategic considerations for companies operating in both jurisdictions. The Beijing Internet Court's emphasis on documented creative processes suggests that companies should implement systems for preserving generation logs, prompts, and iteration histories.
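The record-keeping the Beijing Internet Court's reasoning rewards can be as simple as an append-only audit log of prompts, settings, and outputs. A hypothetical sketch, assuming a JSON-lines log format (the field names are illustrative, not any court-mandated schema):

```python
# Sketch of generation-history preservation: each entry captures the human
# input (prompt, parameters) behind one AI output, as evidence of creative
# contribution. Field names and the JSONL format are assumptions.

import json
import time

def generation_entry(prompt: str, params: dict, output_id: str) -> dict:
    """Build one audit-log entry recording the human input behind a generation."""
    return {
        "timestamp": time.time(),
        "prompt": prompt,
        "params": params,        # e.g. model, seed, iteration count
        "output_id": output_id,  # reference to the stored output file
    }

def append_log(logfile: str, entry: dict) -> None:
    """Append the entry to a JSON-lines log file (one entry per line)."""
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```

An append-only format matters here: iteration history showing successive prompt refinements is precisely the documentation of human creative choices that the court's originality analysis looks for.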

AI Training and Data Licensing



The Thomson Reuters v. ROSS decision, the GEMA v. OpenAI ruling, and the ongoing New York Times v. OpenAI litigation all point in the same direction: AI companies that train models on copyrighted content without licenses face substantial legal risk. The fair use defense is not a guaranteed shield, and courts in different jurisdictions are reaching different conclusions about its applicability. Organizations developing AI systems should build documented data provenance strategies, track how training data was obtained and under what licenses, and maintain clear records that can be produced in litigation or regulatory inquiries.

EU AI Act Compliance



For organizations subject to the EU AI Act, August 2, 2026, is the critical deadline. By that date, high-risk AI systems must have completed conformity assessments, finalized technical documentation, affixed CE marking, and registered in the EU database. Organizations that have not yet begun this process face a compressed timeline, as conformity assessment processes typically require six to twelve months. Companies should classify all AI systems according to the Act's risk categories, conduct gap analyses against the applicable requirements, and establish AI governance committees with representation from legal, compliance, security, and product development functions.
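The gap analysis recommended above can be tracked per system against the four high-risk artifacts the text lists. A minimal sketch, assuming a simple set-based checklist (the artifact identifiers are this sketch's naming, not the Act's):

```python
# Minimal compliance-tracking sketch for the August 2, 2026 high-risk
# deadline. The four artifacts mirror the list in the text; the set-based
# structure is an illustrative assumption.

HIGH_RISK_ARTIFACTS = (
    "conformity_assessment",
    "technical_documentation",
    "ce_marking",
    "eu_database_registration",
)

def gap_analysis(completed: set[str]) -> list[str]:
    """Return the high-risk obligations still outstanding for one AI system,
    in the order they are listed in the Act's requirements."""
    return [a for a in HIGH_RISK_ARTIFACTS if a not in completed]
```

Given that conformity assessments alone can take six to twelve months, any system whose gap list still contains `conformity_assessment` in early 2026 is already on a critical path.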

Conclusion: The First Wave Is Only the Beginning



The ten developments examined in this article represent the first comprehensive wave of judicial and regulatory engagement with artificial intelligence. They establish foundational principles that will shape AI law for years to come: AI cannot be an inventor or author under current intellectual property statutes in most jurisdictions; the fair use defense for AI training is not automatic and depends heavily on the specific facts; AI-hallucinated citations will be sanctioned with increasing severity; and comprehensive regulatory frameworks like the EU AI Act are moving from theory to enforcement with penalties that exceed even those under the GDPR.

But these are first-generation rulings addressing first-generation questions. The next wave of litigation will involve more complex scenarios: AI systems that make autonomous decisions affecting individuals' rights, liability allocation when AI-assisted medical diagnoses prove wrong, the enforceability of contracts negotiated by AI agents, and the application of anti-discrimination law to algorithmic decision-making.

The legal profession is simultaneously a subject and an actor in this transformation. Lawyers are being sanctioned for misusing AI tools while also being called upon to develop the frameworks that govern how society uses those same tools. The cases and regulations discussed here are not abstract developments to be monitored from a distance. They are the immediate operating environment for every lawyer advising clients on AI development, deployment, or governance.

The pace of technological change in AI is accelerating. The pace of legal change, remarkably, is accelerating as well. For legal professionals, the imperative is clear: understand these developments, adapt practice accordingly, and prepare for a future in which the questions will only grow more complex.

Citations and References



1. Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y. 2023). Sanctions order by Judge P. Kevin Castel.

2. Johnson v. Dunn, No. 2:20-cv-01182 (N.D. Ala. July 2025). Attorney disqualification for AI-hallucinated citations.

3. Noland v. Land of the Free, California Court of Appeal, September 2025. $10,000 sanction with opposing counsel duty implications.

4. Wadsworth v. Walmart, D. Wyoming, February 2025. Sanctions for domain-specific AI-generated fabrications.

5. Sixth Circuit sanctions order, March 2026. Full attorneys' fees reimbursement for fabricated appellate citations.

6. Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), cert. denied, 143 S. Ct. 1783 (2023).

7. Thaler v. Perlmutter, No. 22-1564 (D.D.C. 2023), aff'd D.C. Circuit 2025, cert. denied March 2, 2026.

8. USPTO Guidance on AI-Assisted Inventions, February 2024 (Director Vidal) and November 2025 (Director Squires).

9. Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., No. 20-613 (D. Del. Feb. 11, 2025).

10. The New York Times Co. v. Microsoft Corp. and OpenAI, No. 23-cv-11195 (S.D.N.Y.). Motion to dismiss ruling March 2025; discovery orders November 2025 and January 2026.

11. GEMA v. OpenAI, Case No. 42 O 14139/24 (Landgericht München I, Nov. 11, 2025).

12. Getty Images (US) Inc. v. Stability AI Ltd., [2025] EWHC 2863 (Ch) (Nov. 4, 2025).

13. Li v. Liu, Beijing Internet Court, November 27, 2023. AI-generated image copyright.

14. Beijing Internet Court, September 16, 2025. Stricter evidentiary standards for AI-generated work copyright claims.

15. Regulation (EU) 2024/1689 of the European Parliament and of the Council (EU AI Act), entered into force August 1, 2024.

16. Finland National AI Act Supervision Laws, effective January 1, 2026.

17. Italy Law No. 132/2025, entered into force October 10, 2025.

18. In re Clearview AI Inc. Consumer Privacy Litigation, No. 21-cv-0135 (N.D. Ill.), settlement approved March 20, 2025.

19. American Bar Association Formal Ethics Opinion on Generative AI, July 2024.

20. Damien Charlotin, AI Hallucination Cases Database (2024-2026).

21. UK Data (Use and Access) Act 2025, Royal Assent June 19, 2025.

22. UK Government Report on Copyright and Artificial Intelligence, March 18, 2026.

23. India AI Governance Guidelines, Ministry of Electronics and Information Technology, November 2025.

24. Australia National AI Plan, December 2025.

25. Mohamad v. Palestinian Authority, 566 U.S. 449 (2012). Definition of individual as natural person.
About the Author: Global Law Lists, International Legal Network & Client Referral Platform

This article was researched and written by the editorial team at Global Law Lists.org® — the world’s premier international legal network connecting verified lawyers and law firms with clients across 240+ jurisdictions.

Published March 24, 2026