Anthropic became the first major consumer AI platform to require government-issued photo identification from some users, quietly updating its help center on April 14, 2026 to describe a selective identity verification system that asks certain users to submit a passport or driver's license alongside a live selfie.

The policy isn't universal yet, and may never be. Anthropic requires a physical, undamaged passport, driver's license, or national identity card; photocopies, mobile IDs, and student credentials don't count. A live selfie may also be required. (Decrypt)

Neither OpenAI's ChatGPT nor Google's Gemini currently requires government ID for standard consumer use, making Claude the first of the three major AI platforms to implement this level of verification.

What Triggers the Verification Prompt

Anthropic's spokespeople have clarified that the checks are primarily triggered when the system detects potentially fraudulent or abusive behavior. The ID filter focuses on four main categories: repeat usage-policy offenders who bypass the rules, access attempts from restricted regions such as mainland China, Russia, or North Korea, general terms-of-service violations, and users who are under 18. (Android Headlines)

This is not a universal sign-up requirement. It is a targeted flag system for accounts that trigger abuse detection.
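The targeted logic described above can be sketched as a simple flag check. This is a hypothetical illustration only: the class names, signal fields, and thresholds are assumptions for clarity, not Anthropic's actual detection system.

```python
from dataclasses import dataclass

# Restricted regions named in the reporting: mainland China, Russia, North Korea.
RESTRICTED_REGIONS = {"CN", "RU", "KP"}

@dataclass
class AccountSignals:
    policy_violations: int   # count of repeated usage-policy bypasses
    region_code: str         # ISO country code of the access attempt
    tos_violation: bool      # general terms-of-service violation detected
    declared_age: int        # user's stated age

def requires_id_verification(signals: AccountSignals) -> bool:
    """Return True only for accounts flagged by abuse detection --
    verification is targeted, not a universal sign-up requirement."""
    return (
        signals.policy_violations >= 3          # illustrative threshold
        or signals.region_code in RESTRICTED_REGIONS
        or signals.tos_violation
        or signals.declared_age < 18
    )

# A typical account trips none of the flags and sees no ID prompt:
print(requires_id_verification(AccountSignals(0, "US", False, 30)))  # False
```

The point of the sketch is the shape of the system: verification is a conditional branch on abuse signals, not a gate on the sign-up path.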

The Privacy and Competitive Backlash

The reaction online was swift and largely negative. The most widely circulated criticism stated bluntly: "Claude now requires government ID verification (via Persona) before subscription. ChatGPT doesn't. Gemini doesn't. Anthropic just handed their competitors a gift." The backlash carries a particular irony: in early 2026, Anthropic saw a significant surge in new users - free sign-ups reportedly jumped around 60% in January and February - partly because the company had declined to participate in certain U.S. government defense AI contracts that OpenAI accepted. Many users chose Claude specifically because they viewed Anthropic as the more privacy-conscious alternative. (PBX Science)

The Data Handling Architecture

The system relies on a third-party provider, Persona, to process verification data. Anthropic states that it does not store ID images directly. Persona holds the documents and handles processing under contract. Data is encrypted during transfer and storage, and collection is limited to identity confirmation - not model training or advertising. (Help Net Security)
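The delegated-verification pattern described above can be sketched in a few lines: the platform forwards the documents to the third-party verifier and persists only the outcome, never the raw images. Every name here is hypothetical - a stub standing in for the contracted provider, not Persona's or Anthropic's actual API.

```python
import secrets

class ThirdPartyVerifier:
    """Stub for a contracted identity provider (hypothetical API)."""

    def verify(self, id_image: bytes, selfie: bytes) -> dict:
        # The provider processes the documents under contract and returns
        # only a pass/fail decision plus an opaque reference token.
        return {"verified": True, "reference": secrets.token_hex(16)}

def handle_verification(user_id: str, id_image: bytes, selfie: bytes,
                        verifier: ThirdPartyVerifier) -> dict:
    result = verifier.verify(id_image, selfie)
    # Persist only the outcome and an opaque reference - never the raw
    # images - mirroring the "no direct ID storage" claim above.
    return {
        "user": user_id,
        "verified": result["verified"],
        "reference": result["reference"],
    }
```

The design choice this illustrates is data minimization: because the platform's record contains no document bytes, a breach of the platform's own store cannot leak ID images.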

What This Means for Your Business

For enterprises evaluating AI platforms based on data governance and compliance requirements, Anthropic's ID verification rollout is a double-edged signal. On one hand, it demonstrates that Anthropic is building serious abuse-prevention infrastructure - which matters for regulated industries. On the other hand, it introduces user friction that competitors are not currently imposing. If your employees use Claude for individual productivity work, this is unlikely to affect them directly given the targeted nature of the trigger. If you're evaluating Claude for consumer-facing deployments, it's worth factoring into your user experience planning.
