Should We Legislate AI Watermarks?
Preliminary arguments on the AI Labeling Act
Bottom Line Up Front
Congress should reinforce existing voluntary, public-private coordination on AI watermarks. This work is already underway under the direction of the National Institute of Standards and Technology (NIST), the Department of Justice, and the White House’s “America’s AI Action Plan,” released earlier this year.
Background
The AI Labeling Act is a response to legitimate public concern about the prevalence and rapid diffusion of realistic AI-generated audio-visual content. Coordination by both private and public actors is necessary for an effective response.
The first major AI safety law passed by Congress was the TAKE IT DOWN Act, signed by President Trump in May 2025.
The law imposes penalties on those who knowingly publish non-consensual intimate imagery (NCII, or “revenge porn”), with enforcement by the Department of Justice and the Federal Trade Commission.
Congress is now considering various AI “watermark” legislative efforts. One of these, the bipartisan “AI Labeling Act,” would do the following (according to a one-page summary of the bill):
“Require both visible and machine-readable disclosures identifying AI-generated content, and support user-friendly tools to enable people to detect AI-generated content.”
“Require AI developers and all major social media platforms to collaborate to inform users about the authenticity of shared content.”
“Establish a working group to create technical standards so users and social media platforms can identify AI-generated content and support content provenance.”
The AI Labeling Act does not cover AI-generated text. Rather, it focuses on digital audio, video, and image outputs — with exceptions.
To be covered, content must be created or substantially modified by AI and be “realistic enough such that a reasonable person would not necessarily assume the content was created or substantially modified by a generative artificial intelligence system.”
The legislation exempts content intended for internal R&D.
The act obligates AI-generated content providers to “bind or embed within the covered AI-generated content a machine-readable disclosure” of provenance metadata.
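For illustration, here is a minimal Python sketch of what such a machine-readable disclosure could look like in practice: a JSON provenance record embedded in a PNG text chunk using the Pillow library. The field names and the bare, unsigned text chunk are assumptions for illustration only; production provenance schemes rely on standardized, cryptographically signed manifests rather than plain metadata.

```python
# Illustrative only: embed and read back a machine-readable provenance
# disclosure as a PNG text chunk. Field names are hypothetical, and a real
# deployment would use a signed, standardized manifest rather than plain JSON.
import json
from PIL import Image, PngImagePlugin

def embed_disclosure(src_path: str, dst_path: str, generator: str) -> None:
    """Attach a provenance record to a PNG without altering its pixels."""
    record = {
        "ai_generated": True,     # the machine-readable disclosure itself
        "generator": generator,   # hypothetical field: which system produced the content
    }
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("provenance", json.dumps(record))
    img.save(dst_path, pnginfo=meta)

def read_disclosure(path: str) -> dict | None:
    """Recover the embedded disclosure, if present."""
    chunks = Image.open(path).text  # PNG text chunks exposed by Pillow
    return json.loads(chunks["provenance"]) if "provenance" in chunks else None
```

An obvious limitation, and one reason the bill also calls for technical standards work, is that metadata of this kind is commonly stripped by re-encoding, screenshots, and platform upload pipelines.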
China has enacted the first comprehensive AI labeling laws.
“On September 1, 2025, [the law] came into full effect, mandating that every piece of AI-generated content (AIGC) be explicitly and implicitly ‘watermarked.’ This move is far more than a simple labeling rule. It represents the world’s first comprehensive, nationwide implementation of a technical and legal framework for AI content traceability.”
Analysis
Requiring generative AI to produce visible “watermarks” imposes unnecessary burdens on firms, invites adversarial disinformation, and slows cultural-cognitive adaptation to emerging AI practices.
Mandatory visible watermarks impose unnecessary costs on firms.
Frontier labs are already providing and experimenting with both visible and machine-readable watermarks. Google DeepMind’s Gemini (via its SynthID watermark), OpenAI’s Sora, and various industry content-provenance initiatives are leading examples.
Private groups like the Frontier Model Forum are working with AI labs to address AI safety risks more generally, including chemical, biological, radiological, and nuclear (CBRN) and other alignment risks.
NIST and other agencies are already well-suited to coordinate with firms and other organizations to develop new guidelines.
Working in coordination with leading AI firms, NIST can deliver more targeted guidance than members of Congress and their staff, and it is relatively insulated from electoral pressures.
NIST already provides guidance in this area and has been directed to expand its efforts in “America’s AI Action Plan” (July 2025).
“None of these [watermark] techniques offer comprehensive solutions on their own; the value of any given technique is use-case and context-specific and relies on effective implementation and oversight.” (NIST, “Reducing Risks Posed by Synthetic Content,” 2024)
“America’s AI Action Plan” already suggests the following responses to “malicious deepfakes” — under “Combat Synthetic Media in the Legal System”:
“Led by NIST at DOC, consider developing NIST’s Guardians of Forensic Evidence deepfake evaluation program into a formal guideline and a companion voluntary forensic benchmark.”
“Led by the Department of Justice (DOJ), issue guidance to agencies that engage in adjudications to explore adopting a deepfake standard similar to the proposed Federal Rules of Evidence Rule 901(c) under consideration by the Advisory Committee on Evidence Rules.”
“Led by DOJ’s Office of Legal Policy, file formal comments on any proposed deepfake-related additions to the Federal Rules of Evidence.”
By rejecting mandatory watermarks, the US reinforces its relative capacity for fast technological development, social resilience, and democratic adaptation.
China’s new comprehensive AI labeling law potentially weakens its public information ecosystem by demanding full-spectrum traceability in the service of accountability to the party.
Visible watermarks are easy to imitate at a superficial level. US adversaries would have ample opportunity to pollute US and allied information spaces with counterfeit watermarks, casting doubt on real events; even easily debunked forgeries could have this effect.
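To make the forgery point concrete, the following sketch stamps a counterfeit “AI-generated” label onto an authentic photograph in a few lines of Python using Pillow. The label text, placement, and styling are arbitrary assumptions; the point is only that a visible mark carries no cryptographic weight and is trivial to counterfeit.

```python
# Illustrative only: overlay a counterfeit "AI-generated" label on a real photo.
from PIL import Image, ImageDraw

def stamp_fake_label(src_path: str, dst_path: str, label: str = "AI-generated") -> None:
    """Composite a semi-transparent corner label, mimicking a generator's mark."""
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    width, height = img.size
    # Lower-right corner placement, a common spot for genuine visible marks.
    draw.text((width - 150, height - 30), label, fill=(255, 255, 255, 190))
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)
```

The reverse operation, cropping or painting over a genuine mark, is comparably easy, which is one reason NIST treats any single technique as insufficient on its own.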
This fits a modified “bootleggers and Baptists” dynamic: safety advocates push to regulate first-order effects, while adversaries stand ready to weaponize the resulting standards.
Watermark and counter-watermark strategies shift rapidly amid intense political-economic pressures. NIST says they are not a “silver bullet.”
The AI Labeling Act acknowledges this “liar’s dividend” (the growing ease of dismissing authentic content as fake) as a challenge but punts on the issue, deferring to “implementing strategies recommended by the [Federal Trade] Commission.”
Mandating visible watermarks for generative-AI content may delay and politicize our cultural and cognitive adaptation to new incentives and information, both now and in the wake of artificial general intelligence.
People do not generally evaluate claims in a detached, “objective” sense — a “view from nowhere.” Rather, they typically do so by way of heuristics that take into account authority, trust, obligation, and social organization.
This is a “bottom-up,” emergent process that cannot be legislated into being. Legislation is unlikely to adapt as quickly as the technologies Congress seeks to regulate.
Cognitive scientist Hugo Mercier writes in his book, Not Born Yesterday: The Science of Who We Trust and What We Believe (Princeton 2020):
“We aren’t gullible: by default we veer on the side of being resistant to new ideas. In the absence of the right cues, we reject messages that don’t fit with our preconceived views or preexisting plans. To persuade us otherwise takes long-established, carefully maintained trust, clearly demonstrated expertise, and sound arguments.”


