
Privacy Under Siege: How Wartime Surveillance, AI, and Data Harvesting Are Rewriting Privacy Law Globally

By Global Law Lists | Updated March 24, 2026

Introduction: When War Becomes a Privacy Laboratory



There is a pattern in the history of surveillance that repeats with uncomfortable regularity. Technologies developed for wartime intelligence gathering do not stay on the battlefield. They migrate into domestic law enforcement, commercial applications, and the everyday architecture of modern life. Wiretapping, signals intelligence, satellite imagery, and biometric databases all followed this trajectory, moving from military necessity to civilian ubiquity in timescales that compressed with each successive technology.

Artificial intelligence has accelerated this pattern to an unprecedented degree. The conflicts of 2023 through 2026, particularly in Ukraine, Gaza, and across multiple theaters of geopolitical tension, have become proving grounds for AI-powered surveillance systems that are simultaneously reshaping privacy law worldwide. Facial recognition technology deployed at military checkpoints is manufactured by the same companies selling to police departments in democratic nations. Large language models trained on intercepted communications are being adapted for commercial intelligence gathering. Biometric databases assembled under conditions of military occupation are creating precedents that threaten civilian privacy protections globally.

The legal frameworks designed to protect privacy were not built for this moment. The European Convention on Human Rights was drafted in 1950. The Fourth Amendment to the United States Constitution was ratified in 1791. Even the General Data Protection Regulation, which became applicable in 2018, was primarily designed to address commercial data processing, not the wholesale surveillance capabilities that AI has made possible. These legal instruments are being stretched to their breaking points as governments invoke national security to justify surveillance practices that would be plainly unlawful in peacetime contexts, and as the technologies refined in those contexts flow into civilian use.

For the legal profession, these developments carry a particular urgency. Attorney-client privilege, the foundation on which the adversarial legal system rests, is under direct threat from surveillance practices that make no distinction between privileged communications and ordinary intelligence targets. Digital rights organizations are fighting on multiple fronts simultaneously, challenging military surveillance, commercial data harvesting, and government access to encrypted communications. And the post-conflict landscape, the legal terrain that emerges after hostilities end, is increasingly shaped by the surveillance infrastructure that was built during the conflict itself.

This article examines how armed conflicts are accelerating the deployment of AI surveillance tools, how national security justifications are eroding privacy protections, and what the emerging global response looks like. It draws on court rulings from the European Court of Human Rights, regulatory actions in the European Union, litigation in the United States, and the advocacy work of digital rights organizations to map the current state of privacy law in a world where the boundaries between wartime and peacetime surveillance are dissolving.

Part I: Armed Conflicts as Catalysts for Surveillance Technology



The Ukraine Conflict: AI on the Modern Battlefield



Ukraine has been described as the world's first real-time laboratory for the deployment and regulation of AI in war. Since Russia's full-scale invasion in February 2022, both sides have employed increasingly sophisticated AI systems for intelligence analysis, targeting, drone operations, and information warfare. The Ukrainian government has actively encouraged the development and deployment of AI tools through mechanisms like the Brave1 platform, which by early 2025 had evaluated over 500 proposals and approved funding for more than 70 projects, including AI-driven surveillance systems, cyber defense technologies, and semi-autonomous drones.

AI's primary role in the conflict has been as a data analysis tool, processing the enormous volume of information generated by sensors, satellites, social media, intercepted communications, and frontline observations. Systems trained to geolocate Russian military assets using open-source data, including social media content posted by soldiers, have proven effective at identifying troop positions, weapon systems, and unit movements. Facial recognition tools have been deployed to identify captured or deceased combatants, raising questions about the treatment of biometric data under international humanitarian law.

The speed at which these technologies have been developed and deployed has outpaced any regulatory framework. Ukraine's Ministry of Digital Transformation, led by Vice Prime Minister Mykhailo Fedorov, has embraced AI adoption with a startup mentality, prioritizing rapid deployment over regulatory caution. While this approach has produced tactical advantages, it has also created a body of precedent for AI-assisted warfare that will influence military procurement and doctrine worldwide for decades.

The implications for privacy extend well beyond the theater of conflict. NATO allies have studied Ukraine's AI deployment closely, and the lessons learned are being incorporated into military planning and procurement decisions across the alliance. Technologies that prove effective in Ukraine will be purchased by democratic governments for domestic security applications, often with fewer safeguards than those that apply to traditional surveillance tools.

Gaza: AI-Powered Targeting and Mass Biometric Surveillance



The Israeli military's operations in Gaza since October 2023 have involved the most extensively documented use of AI in targeting and surveillance in any armed conflict to date. Multiple investigative reports have revealed the deployment of AI decision support systems that identify potential targets based on patterns of behavior, communications, and associations.

The system known as Lavender, according to published investigations, was designed to identify individuals suspected of affiliation with Hamas or Palestinian Islamic Jihad. Built on supervised machine learning, it assigned each person in the population a numerical score indicating the probability of militant affiliation. Investigative reporting has indicated that the system was used to generate target lists with minimal human review, particularly during the early phases of the military operation.

Alongside targeting systems, the Israeli military has deployed large-scale facial recognition programs at checkpoints and throughout the territory. Reports indicate that biometric data has been collected from Palestinian civilians without their knowledge or consent, creating databases that exist outside any legal framework governing data protection. These systems, manufactured by companies like Corsight AI, whose technology was developed in part by individuals with backgrounds in Israeli military infrastructure projects, have been deployed to conduct what human rights organizations describe as mass surveillance of an occupied civilian population.

The Israeli military has also been developing a ChatGPT-like large language model trained on millions of Arabic-language conversations obtained through surveillance of Palestinians. This system, developed under the auspices of Unit 8200, Israel's signals intelligence unit, is designed to rapidly process large quantities of intercepted communications and answer queries about specific individuals. The creation of a military LLM trained on intercepted civilian communications represents a new category of AI surveillance tool with no clear precedent in international humanitarian law.

The privacy implications of these developments extend far beyond the immediate conflict. Cellebrite, the Israeli company whose phone extraction tools have been used to harvest data from Palestinians' devices, has sold its technology to law enforcement agencies in the United States and dozens of other countries. The surveillance capabilities refined in conflict are being marketed to civilian law enforcement agencies worldwide.

The International Legal Vacuum



The deployment of AI surveillance tools in conflict zones has exposed a significant gap in international humanitarian law. The International Committee of the Red Cross published an analysis in June 2025 addressing the use of facial recognition for targeting purposes under international law. The analysis found that IHL is broadly neutral toward the use of new technologies, meaning that it neither prohibits nor specifically authorizes AI-powered surveillance tools. The right to privacy under international human rights law does not, according to this analysis, preclude the use of biometrics in hostilities.

This legal vacuum has created a permissive environment in which military forces can deploy AI surveillance tools with little legal constraint. The United Nations has debated autonomous weapons under the Convention on Certain Conventional Weapons for nearly a decade without producing binding rules. In 2024, 166 nations called for urgent discussions on the topic, and UN Secretary-General António Guterres has pushed for a treaty on autonomous weapons by 2026. However, progress toward such a treaty has been slow, with major military powers reluctant to accept binding constraints on technologies they view as strategically important.

The ICRC has issued increasingly urgent warnings about the risks of AI in armed conflict. In its March 2025 report, the organization cautioned that without meaningful limits, the rise of autonomous weapons risks crossing a moral and legal threshold that humanity may not be able to reverse. The report recommended that states adopt national and international laws mandating human oversight at each stage of lethal decision-making, and that data protection principles drawn from frameworks like the GDPR and India's Digital Personal Data Protection Act, including necessity, proportionality, and data minimization, be extended to military AI in wartime.

Part II: National Security Versus Privacy in Democratic States



Section 702 of FISA: The Permanent Surveillance Debate



The United States' foreign intelligence surveillance apparatus has been a persistent source of tension between national security and privacy for more than two decades. Section 702 of the Foreign Intelligence Surveillance Act, enacted in 2008, permits the National Security Agency to acquire the communications of foreign persons located outside the United States without obtaining individualized court orders. The practical effect of this authority is the collection of enormous quantities of Americans' communications, including phone calls, text messages, and emails, when those Americans communicate with foreign targets or when their communications are swept up in bulk collection programs.

In April 2024, Congress reauthorized Section 702 through the Reforming Intelligence and Securing America Act (RISAA), but with the shortest sunset period ever included in a reauthorization: just two years, expiring on April 20, 2026. This compressed timeline reflected the depth of congressional disagreement over the program's scope. A proposed amendment to require warrants for queries of Section 702 data using Americans' identifying information failed in the House of Representatives by a tied vote of 212 to 212, the closest the warrant requirement has ever come to passage.

RISAA included some new privacy safeguards, including expanded training requirements for FBI personnel conducting queries and enhanced oversight of searches involving political, media, or religious figures. However, privacy advocates argued that the legislation preserved the surveillance status quo and in some respects expanded it. The Electronic Privacy Information Center, the Brennan Center for Justice, and FreedomWorks jointly analyzed the legislation and concluded that amendments added during the legislative process significantly expanded Section 702's reach.

One provision has drawn particular concern. RISAA broadened the definition of electronic communications service provider, the category of entities that can be compelled to assist with Section 702 surveillance. Privacy advocates argue that this expanded definition could encompass a wide range of businesses and individuals beyond traditional telecommunications companies, potentially requiring landlords, cleaning services, and data centers to assist with government surveillance.

As of early 2026, the reauthorization debate is again underway. The House Judiciary Committee held a hearing on FISA oversight in December 2025, and the Senate Judiciary Committee followed in January 2026. A coalition of more than 130 organizations has urged congressional leadership not to reauthorize Section 702 without closing the data broker loophole, which allows the government to purchase Americans' sensitive personal data from commercial data brokers without a warrant. A separate coalition of 90 organizations has called on Democratic leadership to oppose any clean extension of Section 702 without meaningful reforms.

A federal district court ruling in February 2025 added constitutional weight to the reform effort, holding that the Fourth Amendment requires the government to obtain a warrant before searching Section 702 data using U.S.-person identifiers, unless a specific established exception to the warrant requirement applies. While the ruling applies only in one district, it represents the first federal court to squarely hold that warrantless backdoor searches of Section 702 data violate the Fourth Amendment.

The European Court of Human Rights and Bulk Surveillance



The European Court of Human Rights has been the most active international tribunal in developing jurisprudence on the intersection of mass surveillance and privacy rights. Article 8 of the European Convention on Human Rights protects the right to respect for private and family life, home, and correspondence, subject to limitations that are prescribed by law, pursue a legitimate aim, and are necessary in a democratic society.

The Court's landmark decision in Big Brother Watch v. United Kingdom, decided by the Grand Chamber, established the framework that continues to govern European bulk surveillance law. The Court found violations of Articles 8 and 10 with respect to the United Kingdom's bulk interception regime, concluding that the system lacked adequate safeguards to protect privacy and freedom of expression. However, the Court did not hold that bulk surveillance is inherently incompatible with the Convention. Instead, it established a set of minimum safeguards that must be present at every stage of the intelligence process, from initial authorization through collection, analysis, and dissemination.

In its ruling on Poland's surveillance laws in Pietrzak and Bychawska-Siniarska and Others v. Poland, the Court found that national legislation requiring telecommunications providers to retain communications data in a general and indiscriminate manner was insufficient to ensure that the interference with privacy was limited to what was necessary in a democratic society. The Court also found that secret surveillance provisions in Poland's Anti-Terrorism Act failed to satisfy Article 8 requirements because neither the imposition of surveillance nor its application during the initial three-month period was subject to review by an independent body.

A joint factsheet published in April 2025 by the European Union Agency for Fundamental Rights and the ECtHR documented the growing body of case law from both the ECtHR and the Court of Justice of the European Union addressing mass surveillance. The factsheet noted that both courts are being asked with increasing frequency to rule on the risks that bulk surveillance poses to fundamental rights, including the large-scale interception of communications data and requirements that carriers retain and store user data for government access.

The ECtHR's approach has been characterized as calibrated rather than absolutist. The Court has allowed member states a broader margin of appreciation in national security matters compared to other contexts, accepting that the safeguarding of national security against terrorism is a legitimate aim under Article 8(2). But it has insisted on what it calls end-to-end safeguards, requiring independent oversight at every stage of intelligence operations, from authorization through data retention and destruction.

A 2025 article in the Human Rights Law Review examined a previously under-studied dimension of the Court's work: how it handles national security secrecy in its own proceedings. The analysis reviewed 131 published case communications and the Court's procedural rules, including the newly introduced Rule 44F governing the treatment of highly sensitive documents. The study found that before Rule 44F, the Court had limited procedural tools to assess whether national security secrecy was genuinely necessary or was being invoked to shield governmental abuse from judicial scrutiny.

The EU-US Data Privacy Framework



The transatlantic dimension of surveillance and privacy was tested in 2025 when the General Court of the European Union dismissed a challenge to the EU-US Data Privacy Framework, confirming its validity based on the facts and law at the time of the European Commission's adequacy determination. This decision preserved the legal basis for transatlantic commercial data transfers but left unresolved the fundamental tension between US surveillance practices and European privacy standards.

The framework's stability depends in part on the renewal of Section 702. If Section 702 expires without reauthorization or is reauthorized in a form that weakens existing privacy safeguards, the adequacy determination could face renewed legal challenge before the CJEU, potentially triggering a third invalidation of transatlantic data transfer mechanisms following the Schrems I and Schrems II decisions.

Part III: AI Facial Recognition and the Erosion of Anonymity



The EU AI Act's Biometric Restrictions



The European Union's AI Act represents the most comprehensive regulatory effort to constrain AI-powered facial recognition and biometric surveillance. The Act's provisions on prohibited AI practices, which became enforceable in February 2025, include a ban on the use of AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions for specific serious offenses. The Act also prohibits the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, a practice that directly targets the business model of companies like Clearview AI.

These prohibitions represent a significant departure from the regulatory approaches taken in other jurisdictions. The United States has no comparable federal restriction on facial recognition technology, and enforcement depends primarily on state-level biometric privacy statutes, which exist in only a handful of jurisdictions. Illinois' Biometric Information Privacy Act remains the most powerful tool available to private litigants, as demonstrated by the Clearview AI settlement and the earlier Facebook photo-tagging settlement of $650 million.

The EU's approach recognizes something that other regulatory frameworks have been slow to acknowledge: facial recognition technology fundamentally changes the relationship between individuals and public space. The ability to identify any person in any public area in real time effectively eliminates the practical anonymity that has historically characterized movement through public life. The AI Act's ban on real-time biometric identification in public spaces is designed to preserve this anonymity as a default condition of civic life, allowing exceptions only under strict conditions and judicial oversight.

Clearview AI and the Limits of Biometric Privacy Litigation



The Clearview AI litigation illustrates both the potential and the limitations of using privacy law to constrain facial recognition technology. The $51.75 million settlement approved in March 2025 was unprecedented in its scope, covering a nationwide class of Americans whose facial images were scraped from the internet without consent. But the settlement's equity-based structure, which ties class members' recovery to Clearview's future financial performance, raised questions about whether it adequately compensates individuals whose biometric data was collected without their knowledge.

The company's continued operations underscore the challenge. Clearview has not been ordered to delete its database or cease operations. The permanent injunction against NSO Group in the WhatsApp case, by contrast, specifically prohibited future targeting of the platform's users. The difference reflects the fact that Clearview's activities, while invasive of privacy, do not involve the kind of direct hacking that formed the basis of the WhatsApp litigation.

The dismissal of Vermont's lawsuit against Clearview in December 2025 highlighted the jurisdictional challenges that state regulators face. The court found that Clearview had no substantial business presence in Vermont, illustrating how companies that operate primarily through the internet can avoid the reach of state enforcement actions. This jurisdictional gap makes federal legislation, or international cooperation, essential for effective regulation of facial recognition technology.

Facial Recognition in Conflict Zones: The Missing Legal Framework



The deployment of facial recognition in armed conflicts operates in a space where the legal frameworks governing both military conduct and civilian privacy protection are inadequate. International humanitarian law requires combatants to distinguish between civilians and military targets, but the law does not specifically address the use of AI-powered biometric systems to make those distinctions. The result is a legal vacuum in which military forces can deploy facial recognition systems without clear legal constraints on how biometric data is collected, stored, shared, or used after hostilities end.

The ICRC's 2025 analysis acknowledged this gap but stopped short of calling for a specific prohibition on military facial recognition. Instead, the analysis recommended that existing IHL principles of distinction, proportionality, and precaution be interpreted to require meaningful human oversight of AI-powered identification systems, and that biometric data collected during conflict be subject to protections analogous to those governing prisoners of war under the Geneva Conventions.

This recommendation has not yet been adopted by any state or incorporated into any binding international agreement. In the meantime, military forces continue to deploy facial recognition systems with few legal constraints, creating biometric databases of civilian populations that could persist long after the conflicts that generated them have ended.

Part IV: The Chilling Effect on Attorney-Client Communications



Surveillance and the Erosion of Privilege



Attorney-client privilege is the legal profession's oldest and most fundamental protection. It exists not primarily for the benefit of lawyers but for the benefit of clients, ensuring that individuals can communicate freely with their legal counsel without fear that those communications will be used against them. The privilege is considered so essential to the functioning of the adversarial legal system that it has been recognized in some form in virtually every common law and civil law jurisdiction.

Government surveillance programs, however, operate according to a different logic. Intelligence agencies collect communications in bulk based on technical selectors such as email addresses, phone numbers, or keywords. These collection methods do not distinguish between privileged attorney-client communications and ordinary communications. Under the Foreign Intelligence Surveillance Act in the United States, a specialized court operating in secret can order covert surveillance on targets that may include attorneys, law firms, or their clients. The minimization procedures that govern Section 702 collection do not prohibit the government from acquiring privileged communications; they merely prevent those communications from being introduced directly as evidence in court proceedings.

The Brennan Center for Justice has documented how this gap between collection and use creates a structural threat to attorney-client privilege. As the National Association of Criminal Defense Lawyers has argued, when every reasonable modern method of communication is apparently subject to routine mass search and seizure by the government, the right to consult with counsel effectively disappears. The chilling effect is not hypothetical. Criminal defense attorneys representing clients in national security cases have reported that their clients are unwilling to discuss case strategy over electronic communications, forcing meetings to take place in person and creating logistical burdens that disadvantage defendants who are incarcerated or located far from their counsel.

In January 2026, the Foreign Intelligence Surveillance Court denied an FBI request to conduct electronic surveillance pursuant to Title I of FISA, determining that the government had failed to establish probable cause. While the classified nature of FISC proceedings makes it impossible to know whether attorney-client communications were at issue in that case, the denial itself is notable. The FISC approves the vast majority of surveillance requests it receives, and denials are rare enough to be newsworthy.

AI Tools and the New Privilege Questions



The intersection of AI tools and attorney-client privilege has generated a new category of legal questions that courts are only beginning to address. In USA v. Heppner, decided in late 2025 by Judge Jed Rakoff of the Southern District of New York, the court held that documents generated by a defendant using a commercial AI platform and later shared with legal counsel were not protected by attorney-client privilege. Judge Rakoff's reasoning focused on confidentiality, the cornerstone of the privilege. By inputting sensitive information into a consumer AI platform operated by a third party, the defendant voluntarily disclosed that information outside the attorney-client relationship. The AI company's terms of service negated any reasonable expectation of confidentiality.

This ruling has significant implications for legal practice. Many lawyers and their clients use AI tools to draft documents, analyze legal issues, and organize case materials. If communications with AI platforms are not protected by privilege, then every interaction with a commercial AI tool potentially waives the privilege with respect to the information disclosed. The practical effect is to create a new category of privilege risk that did not exist before the widespread adoption of AI tools in legal practice.

In February 2026, two additional federal courts addressed AI and privilege with results that appeared contradictory on the surface. One denied privilege protection for AI-generated materials; the other upheld work product protection in a factually similar context. Legal commentators noted that neither decision announced a new rule of privilege law. Instead, both applied existing principles to novel factual settings, reaching different results based on the specific facts and the degree to which the attorney, rather than the client, directed the AI-assisted work.

The emerging framework suggests that privilege protection for AI-assisted legal work depends on several factors: whether the AI tool is used under conditions that maintain confidentiality (enterprise deployments with contractual confidentiality protections versus consumer platforms with broad usage terms), whether the AI-assisted work is directed by the attorney as part of legal representation, and whether the information processed through the AI tool would otherwise be privileged if communicated directly between attorney and client.

Part V: The NSO Group and the Weaponization of Commercial Surveillance



The WhatsApp Verdict



The litigation between Meta (WhatsApp) and NSO Group, the Israeli manufacturer of the Pegasus spyware, produced the first judicial holding of liability against a commercial spyware company in United States history. In December 2024, a federal court found NSO Group liable for hacking 1,400 WhatsApp users' devices through its Pegasus software, violating the Computer Fraud and Abuse Act, California's data fraud statute, and WhatsApp's terms of service.

In May 2025, a jury awarded $167.3 million in punitive damages and $444,719 in compensatory damages. Court documents revealed that the targeted individuals spanned 51 countries, with 456 targets in Mexico, 100 in India, 82 in Bahrain, 69 in Morocco, and 58 in Pakistan. During the proceedings, NSO's counsel publicly identified Mexico, Saudi Arabia, and Uzbekistan as government clients linked to the 2019 spyware campaign, marking the first public confirmation of NSO's customer base.

In October 2025, the presiding judge reduced the punitive damages to $4 million but issued a permanent injunction barring NSO from ever targeting WhatsApp users again. NSO filed an appeal in November 2025, arguing that the injunction was catastrophic for its business and contrary to the public interest because it disrupted law enforcement, intelligence, and counterterrorism operations conducted by NSO's government clients.

The Broader Spyware Ecosystem



The NSO litigation exists within a larger context of commercial spyware deployment that has eroded privacy protections worldwide. Pegasus and similar tools have been used by governments to target journalists, human rights defenders, opposition politicians, and lawyers. The former Polish justice minister was arrested in January 2025 over allegations of misuse of Pegasus spyware against political opponents. Apple had filed its own lawsuit against NSO but dropped the case in 2024, citing concerns that discovery could reveal sensitive information about its own security measures that might benefit NSO and similar companies.

The commercial spyware market operates in a regulatory gray zone. NSO Group was placed on the US Commerce Department's Entity List in November 2021, restricting its access to American technology. However, with the transition to a new administration in January 2025, NSO invested heavily in lobbying to reverse this designation, hiring lobbyists with connections to the incoming administration and spending over $1.8 million on political campaigns during the 2024 election cycle.

The legal significance of the WhatsApp verdict extends beyond the specific parties. It established that commercial spyware companies can be held liable in US courts for the actions of their government clients, a principle that could apply to other vendors in the growing surveillance technology market. However, as legal scholars have noted, the verdict's precedential value may be limited by the unique circumstances of the case, including Meta's extraordinary resources as a plaintiff, which enabled the kind of sustained, multi-year litigation that few targets of spyware could afford independently.

Part VI: Data Harvesting and the Surveillance Capitalism Connection



The Advertising Surveillance Machine



The connection between commercial data harvesting and government surveillance has become one of the central themes of contemporary privacy law. Digital rights organizations, led by the Electronic Frontier Foundation, have documented how the advertising technology ecosystem, which tracks individuals across websites, apps, and physical locations to serve targeted advertisements, has been co-opted by government agencies seeking to access personal data without the legal process required for traditional surveillance.

The mechanism is straightforward. Data brokers aggregate personal information from hundreds of sources, including app usage data, location data from mobile phones, purchase histories, and social media activity, and sell it to advertisers. Government agencies have discovered that they can purchase the same data without obtaining warrants, effectively circumventing constitutional protections against unreasonable search and seizure. The data broker loophole in surveillance law means that the government can buy what it cannot legally seize, accessing detailed records of individuals' movements, communications, and associations through commercial transactions rather than judicial process.

In its 2025 year-in-review analysis, the EFF described the year as the period when states chose surveillance over safety. Half of US states now mandate age verification for accessing certain online content, a requirement that the EFF argues functions as a new surveillance mechanism, forcing users to identify themselves to access constitutionally protected speech. Nine states saw their age verification laws take effect in 2025 alone, creating new databases of user identities that could be subject to government access.

The explosion of online privacy litigation reflects growing concern about commercial data practices. In 2024, nearly 4,000 privacy-related cases were filed in the United States, up from just over 200 in 2023. This litigation trend continued in 2025, with tracking claims filed in 315 courts across 45 states against 3,512 unique defendants. Many of these cases involve the use of pixel tags, cookies, and other tracking technologies that capture user data without meaningful consent, often under privacy frameworks that were designed for an earlier era of technology.

State Privacy Laws: A Patchwork Under Pressure



The United States continues to lack comprehensive federal privacy legislation, relying instead on a growing patchwork of state laws. Between 2020 and 2024, twenty states enacted comprehensive data privacy statutes. Many observers expected this trend to accelerate in 2025, but surprisingly, no new comprehensive state privacy laws were enacted during the year, despite proposals being introduced in at least thirteen states.

Several factors may explain this pause. An executive order issued in late 2025 was designed to establish a single federal regulatory framework for AI and to preempt state-level restrictions. Privacy advocates argue that this order has had a chilling effect on state legislatures, creating uncertainty about whether new state privacy laws would survive federal preemption challenges. The order reflects a broader tension between the desire for regulatory uniformity and the state-level experimentation that has historically driven privacy law in the United States.

One significant state-level development did occur in 2026. California's Delete Act took effect, allowing residents to compel hundreds of data brokers to delete their personal information through a single mechanism rather than submitting individual requests to each broker. The law has been described as a potential model for broader reform, though its effectiveness will depend on enforcement and compliance.

Mexico and the Biometric Data Grab



International developments illustrate the global scope of the data harvesting challenge. In July 2025, the Mexican government passed laws giving both civil and military law enforcement access to large quantities of personal data and requiring individuals to surrender biometric information regardless of any suspicion of criminal activity. These laws create government databases of biometric identifiers, including facial images and fingerprints, for the entire population, with no meaningful limitations on how the data can be used or shared.

This development is particularly significant because Mexico is a major trading partner of both the United States and the European Union, raising questions about the compatibility of its new biometric collection regime with international data transfer frameworks. If Mexican authorities share biometric data with US law enforcement agencies under existing mutual legal assistance treaties, the data enters the American law enforcement system without the privacy protections that would apply to domestically collected biometric information.

Part VII: Digital Rights Organizations and the Fight for Privacy



The Electronic Frontier Foundation



The Electronic Frontier Foundation has been at the forefront of digital privacy litigation and advocacy since its founding in 1990. In 2025 and early 2026, the organization has focused on several fronts: challenging the expansion of Section 702 surveillance authority, opposing age verification mandates that it views as surveillance mechanisms, and pushing back against the European Commission's Digital Omnibus proposal, which the EFF argues would substantially weaken the GDPR's privacy protections.

The EFF's work illustrates the interconnected nature of modern privacy threats. The organization has documented how the advertising surveillance ecosystem enables government surveillance, how wartime surveillance technologies migrate into civilian law enforcement, and how ostensibly protective measures like age verification create new surveillance infrastructure. This holistic approach to privacy advocacy reflects the reality that privacy threats no longer come from a single source but from the convergence of commercial, governmental, and military surveillance capabilities.

Access Now and Privacy International



Access Now and Privacy International have focused their advocacy on the global dimensions of surveillance, with particular attention to the impact on vulnerable populations. These organizations provide legal aid, technical support, and policy advocacy in countries where governments use surveillance technology against civil society, journalists, and political opponents.

Privacy International has been particularly active in challenging facial recognition technology, publishing research in 2025 on the legal void surrounding the technology and calling for comprehensive regulation that addresses both government and commercial use. The organization's work has highlighted how the absence of regulation in one jurisdiction enables surveillance practices that affect individuals in other jurisdictions, a dynamic that is particularly acute in the context of commercial spyware and cross-border data flows.

In a significant policy development, the United States quietly withdrew from the Freedom Online Coalition in late 2025. This coalition of democratic nations had served as a platform for coordinating responses to internet shutdowns, online censorship, and digital surveillance by authoritarian governments. The US withdrawal signaled a retreat from leadership on global digital rights at a time when authoritarian regimes are promoting more restrictive models of internet governance under the banner of cyber sovereignty.

The EPIC Challenge to Data Brokers



The Electronic Privacy Information Center has made the regulation of data brokers a central focus of its advocacy, arguing that the data broker industry represents one of the most significant threats to privacy in the modern economy. EPIC has supported legislative efforts to close the data broker loophole in surveillance law, which allows government agencies to purchase personal data that they would otherwise need a warrant to obtain.

EPIC's campaign to reform or sunset Section 702 of FISA reflects a broader strategy of connecting surveillance reform to data broker regulation. The organization argues that as long as the government can purchase personal data from commercial sources, statutory restrictions on government surveillance will be incomplete, because the same information that Section 702 was designed to collect through intelligence operations can often be obtained through commercial transactions.

Part VIII: The Post-Conflict Privacy Landscape



What Happens to Surveillance Infrastructure After the Fighting Stops



One of the most important and least discussed aspects of wartime surveillance is what happens to the surveillance infrastructure, the databases, the biometric records, the monitoring systems, after hostilities end. Historical precedent suggests that surveillance capabilities developed during conflicts are rarely dismantled. Instead, they are absorbed into permanent security structures, repurposed for law enforcement or intelligence gathering, or sold to other governments or commercial entities.

The facial recognition databases assembled during the Gaza conflict illustrate this challenge. Biometric data collected from Palestinian civilians at military checkpoints has been stored in systems that operate outside any data protection framework. If and when the conflict ends, questions will arise about the retention, deletion, or continued use of this data. International humanitarian law provides no clear framework for the treatment of biometric data collected during armed conflict, and the ICRC's recommendations on the subject, while thoughtful, have no binding legal force.

Ukraine presents a different but related challenge. The country's rapid integration of AI-powered surveillance systems has created an extensive digital infrastructure for military intelligence that will need to be adapted to peacetime governance. The surveillance capabilities developed during the conflict, including facial recognition, open-source intelligence analysis, and drone-based monitoring, could be repurposed for domestic law enforcement or intelligence gathering after the conflict ends. The legal and institutional frameworks that will govern this transition are still being developed.

The Precedent Problem



Every deployment of AI surveillance in a conflict zone creates a precedent that can be invoked by other governments in other contexts. The use of facial recognition at military checkpoints becomes a justification for its use at border crossings. The use of AI targeting systems in armed conflict normalizes algorithmic decision-making in law enforcement. The development of military language models trained on intercepted communications provides a template for domestic intelligence agencies seeking to process communications data at scale.

This precedent dynamic is accelerated by the commercial surveillance industry. Companies that develop surveillance technologies for military clients market those same technologies, often in modified form, to civilian law enforcement agencies and commercial security firms. The result is a continuous flow of surveillance capability from military to civilian contexts, driven by commercial incentives rather than policy deliberation.

Recommendations for the Post-Conflict Framework



Legal scholars and digital rights organizations have proposed several measures to address the post-conflict surveillance challenge. These include mandatory data retention limits for biometric data collected during armed conflicts, requiring that such data be deleted within a specified period after hostilities end. They also include independent oversight mechanisms for the repurposing of wartime surveillance infrastructure, ensuring that systems developed for military intelligence are not simply transferred to domestic law enforcement without public debate and legal authorization.

Extending data protection principles from frameworks like the GDPR, including purpose limitation, data minimization, and storage limitation, to military AI systems during and after conflicts has been proposed by the ICRC and supported by several European governments. However, incorporating these principles into binding international law would require a treaty negotiation process that major military powers have shown little appetite to pursue.

Part IX: The Chilling Effect on Civil Liberties and Democratic Participation



Surveillance and Self-Censorship



The relationship between surveillance and self-censorship has been documented extensively in social science research. When individuals believe they are being monitored, they alter their behavior, their speech, their associations, and their political activities in ways that reduce the diversity of viewpoints and the vigor of democratic participation. This chilling effect operates regardless of whether surveillance is actually occurring; the perception of surveillance is sufficient to alter behavior.

AI-powered surveillance amplifies this chilling effect because it makes surveillance invisible and pervasive. Unlike a security camera mounted on a wall, facial recognition software embedded in existing infrastructure can identify individuals without their knowledge. Unlike a wiretap, which targets a specific phone line, bulk collection programs sweep up all communications passing through a network. The omnipresence of potential surveillance changes the calculus of civic participation, particularly for individuals belonging to communities that have historically been subject to government monitoring, including racial minorities, religious minorities, political dissidents, and immigrants.

The legal response to this chilling effect has been uneven. The ECtHR has recognized that surveillance can have a chilling effect on the exercise of rights protected by Articles 8 and 10 of the Convention, including freedom of expression and the right to privacy. The Court has held that this chilling effect is a relevant consideration in assessing whether surveillance measures are necessary in a democratic society. However, the Court has not established a general rule requiring governments to demonstrate that their surveillance programs do not produce chilling effects, leaving the assessment to be conducted on a case-by-case basis.

In the United States, the Supreme Court's standing doctrine has historically made it difficult for individuals to challenge surveillance programs, because plaintiffs must demonstrate that they have been personally subject to surveillance in order to bring suit, and the classified nature of surveillance programs makes such a showing nearly impossible. The result is a body of law that acknowledges the theoretical harm of surveillance but provides limited practical remedies for individuals who experience that harm.

The Intersection of AI and Democratic Institutions



The deployment of AI surveillance tools by democratic governments raises a fundamental question about the compatibility of mass surveillance with democratic governance. Democracy depends on the existence of a private sphere in which individuals can form opinions, associate with others, and organize political activity without government monitoring. When that private sphere is eroded by pervasive surveillance, the conditions necessary for democratic participation are undermined.

Digital rights organizations have argued that this erosion is already underway. The EFF's documentation of the advertising surveillance machine, Privacy International's research on facial recognition, and the Brennan Center's analysis of Section 702 all point to the same conclusion: the combination of commercial data harvesting, government surveillance, and AI-powered analysis has created a surveillance infrastructure of unprecedented scope, operating largely without democratic oversight or accountability.

The legal profession has a particular stake in this debate. Lawyers serve as gatekeepers of the legal system, advising clients on their rights and representing them in proceedings against the government. When attorney-client communications are subject to surveillance, the adversarial system is compromised. When lawyers self-censor because they fear monitoring, the quality of legal representation declines. When clients withhold information from their attorneys because they do not trust the confidentiality of the communication, the entire system of justice is weakened.

Part X: Looking Forward, the Privacy Law Landscape in 2026 and Beyond



The EU AI Act's Full Application



August 2, 2026, is the date when the EU AI Act becomes generally applicable, including the high-risk AI system requirements that will affect surveillance technologies, biometric systems, and law enforcement AI tools. Organizations deploying AI systems in the European Union will need to have completed conformity assessments, implemented risk management systems, and established the transparency mechanisms required by the Act. The penalty provisions, including fines of up to 35 million euros or 7 percent of global annual turnover for prohibited practices, provide substantial enforcement leverage.

The Act's prohibitions on real-time biometric identification in public spaces, indiscriminate facial image scraping, and emotion recognition in workplaces and schools will set the global standard for AI surveillance regulation. Companies that develop surveillance technologies for the European market will need to design their systems to comply with these restrictions, and the extraterritorial reach of the Act means that non-EU companies serving European customers will also be affected.

Section 702 and the April 2026 Deadline



The expiration of Section 702 on April 20, 2026, creates a legislative forcing function that will determine the direction of US surveillance law for years to come. The outcome could range from a clean reauthorization that preserves or expands existing authorities, to a reform bill that closes the data broker loophole and requires warrants for US-person queries, to a lapse that temporarily suspends the government's Section 702 collection authority.

The reform coalition's strength, reflected in the near-passage of the warrant amendment in 2024 and the growing number of organizational signatories to reform letters in 2025 and 2026, suggests that a clean reauthorization is unlikely. But the intelligence community's arguments about the operational importance of Section 702, supported by declassified examples of its use in counterterrorism and counterintelligence, make a complete sunset equally improbable. The most likely outcome is a compromise bill that includes some additional safeguards but preserves the core collection authority, with another short sunset period that ensures the debate continues.

International Treaty Negotiations on Autonomous Weapons



The push for an international treaty on autonomous weapons by 2026 remains a priority for the UN Secretary General and a growing number of member states. However, the major military powers, including the United States, China, Russia, and the United Kingdom, have shown varying degrees of reluctance to accept binding constraints on AI-powered military systems. The failure to produce a treaty would leave the current legal vacuum in place, allowing military forces to continue deploying AI surveillance and targeting systems without clear international legal constraints.

Even without a treaty, the growing body of state practice, ICRC guidance, and academic commentary is creating what international lawyers call customary international law, a set of norms that arise from the consistent practice of states accompanied by a sense of legal obligation. Whether this emerging custom will be sufficient to constrain the deployment of AI surveillance in future conflicts depends on whether major military powers adopt the safeguards recommended by the ICRC and incorporate them into their military doctrine and rules of engagement.

The Post-Conflict Data Question



As conflicts in Ukraine, Gaza, and other theaters eventually reach some form of resolution, the question of what happens to the surveillance infrastructure and biometric databases built during those conflicts will become urgent. The international community has no established framework for addressing this question, and the development of such a framework will require engagement from international humanitarian law experts, data protection authorities, military legal advisors, and civil society organizations.

The legal profession will play a central role in this process, advising governments on the development of post-conflict data governance frameworks, representing individuals whose biometric data was collected without consent during armed conflict, and advocating for the application of data protection principles to military AI systems. The cases and regulatory developments discussed in this article provide the foundation for this work, but much of the legal framework remains to be built.

Conclusion: The Urgency of the Present Moment



Privacy law is being rewritten in real time, driven by the convergence of armed conflict, artificial intelligence, and commercial data harvesting. The developments documented in this article are not incremental adjustments to an established framework. They represent a fundamental transformation in the relationship between individuals, governments, and technology, one that is occurring faster than legal institutions can respond.

The conflicts in Ukraine and Gaza have demonstrated that AI-powered surveillance is no longer a future possibility but a present reality. Facial recognition technology is deployed at military checkpoints. AI systems generate target lists from intercepted communications. Large language models are being trained on surveillance data to process intelligence at a scale that human analysts cannot match. These capabilities do not disappear when the fighting stops. They migrate into civilian law enforcement, commercial security, and the permanent architecture of state power.

At the same time, the legal frameworks designed to protect privacy are under sustained pressure. Section 702 of FISA enables warrantless collection of Americans' communications. The data broker industry provides a commercial pathway around constitutional protections. The EU AI Act's ambitious prohibitions on biometric surveillance face the challenge of enforcement across 27 member states with varying levels of institutional capacity. And the ECtHR's end-to-end safeguards framework, while principled, has proven difficult to translate into consistent national practice.

For the legal profession, these developments demand attention and action. Attorney-client privilege, the foundation of the adversarial system, is threatened by surveillance practices that do not distinguish between privileged communications and ordinary intelligence targets. AI tools used in legal practice create new privilege risks that courts are only beginning to address. And the clients who most need effective legal representation, those targeted by government surveillance, dissidents, journalists, human rights defenders, are precisely those whose ability to communicate confidentially with counsel is most at risk.

The digital rights organizations fighting on these fronts, the EFF, Access Now, Privacy International, EPIC, and the Brennan Center, among others, are doing essential work with limited resources. Their litigation, advocacy, and research provide the raw material from which privacy law is being constructed. But the scale of the challenge requires broader engagement from the legal profession, from law firms, bar associations, law schools, and individual practitioners who recognize that the privacy framework established in the coming years will determine the conditions of democratic life for generations to come.

The moment demands both urgency and precision. Urgency, because the surveillance infrastructure being built today will be exceptionally difficult to dismantle once it is in place. Precision, because the legal frameworks that govern AI, surveillance, and data protection must be crafted with sufficient care to protect fundamental rights without foreclosing legitimate security needs. Getting this balance right is one of the defining legal challenges of this generation. The stakes could not be higher.

Citations and References



1. International Committee of the Red Cross, The Use of Facial Recognition for Targeting Under International Law, International Review of the Red Cross, June 2025.

2. Big Brother Watch and Others v. United Kingdom, Grand Chamber, European Court of Human Rights, Application Nos. 58170/13, 62322/14, and 24960/15.

3. Pietrzak and Bychawska-Siniarska and Others v. Poland, European Court of Human Rights, finding violations of Article 8 regarding bulk data retention and surveillance.

4. European Union Agency for Fundamental Rights and ECtHR, Joint Factsheet on Mass Surveillance, April 2025.

5. Reforming Intelligence and Securing America Act (RISAA), Pub. L. No. 118-49 (April 20, 2024), reauthorizing Section 702 of FISA through April 20, 2026.

6. Brennan Center for Justice, Section 702 of the Foreign Intelligence Surveillance Act (FISA): 2026 Resource Page.

7. Electronic Privacy Information Center, FISA Section 702: Reform or Sunset Campaign.

8. Regulation (EU) 2024/1689 (EU AI Act), Article 5 (Prohibited Practices), entered into application February 2, 2025.

9. In re Clearview AI Inc. Consumer Privacy Litigation, No. 21-cv-0135 (N.D. Ill.), settlement approved March 20, 2025.

10. WhatsApp Inc. v. NSO Group Technologies Ltd., N.D. Cal. Liability ruling December 2024; jury damages verdict May 2025; permanent injunction October 2025.

11. USA v. Heppner, S.D.N.Y. 2025 (Judge Rakoff), ruling on AI-generated documents and attorney-client privilege.

12. Electronic Frontier Foundation, The Year States Chose Surveillance Over Safety: 2025 in Review, December 2025.

13. Brennan Center for Justice, Government Surveillance Undermines Attorney-Client Privilege.

14. Privacy International, Toward Regulation: Addressing the Legal Void in Facial Recognition Technology, 2025.

15. Israel Military AI Targeting Systems (Lavender), investigative reporting 2024-2025.

16. Brave1 Platform, Ukrainian Ministry of Digital Transformation, operational data as of early 2025.

17. EU-US Data Privacy Framework, General Court of the European Union, challenge dismissed September 2025.

18. Illinois Biometric Information Privacy Act, 740 ILCS 14.

19. UK Data (Use and Access) Act 2025, Royal Assent June 19, 2025.

20. India AI Governance Guidelines, Ministry of Electronics and Information Technology, November 2025.

21. California Delete Act, effective 2026.

22. Mexico biometric data collection laws, enacted July 2025.

23. Foreign Intelligence Surveillance Court, denial of Title I surveillance application, January 2026.

24. ICRC Report on Autonomous Weapons and AI in Armed Conflict, March 2025.

25. Human Rights Law Review, National Security Secrecy in ECtHR Proceedings, 2025.

About the Author: Global Law Lists, International Legal Network & Client Referral Platform

This article was researched and written by the editorial team at Global Law Lists.org® — the world’s premier international legal network connecting verified lawyers and law firms with clients across 240+ jurisdictions.
