Anthropic launched Claude Opus 4.7 on April 16, 2026, releasing its most capable commercially available model to date while deliberately keeping an even more powerful successor - Claude Mythos Preview - restricted to a small group of enterprise partners for safety testing.

Claude Opus 4.7 is a notable improvement over Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Users report being able to hand off their hardest coding work to Opus 4.7 with confidence. The model also brings substantially better vision, processing images at higher resolution - a meaningful upgrade for teams working with charts, diagrams, and technical documentation.

A Tighter Race Than the Headlines Suggest

The benchmark numbers are impressive, but the competitive context is equally important. Opus 4.7 exceeds its most direct rivals - OpenAI's GPT-5.4, released in early March 2026, and Google's Gemini 3.1 Pro - on key benchmarks including agentic coding, scaled tool use, agentic computer use, and financial analysis. On directly comparable benchmarks, however, its lead over GPT-5.4 is narrow, and competitors still hold the edge in areas like agentic search and multilingual tasks.

For teams building with AI, that tightness matters. The era of one model dominating every category appears to be over.

What's New for Developers

Claude Code gets a new /ultrareview command, a new xhigh effort level for Opus 4.7 that sits between high and max, and an Auto mode now available to Max subscribers. These additions are aimed squarely at engineering teams running long, multi-step workflows - the kind of autonomous coding work that previously required close human supervision.

Pricing holds steady at $5 per million input tokens and $25 per million output tokens. One caveat worth flagging: an updated tokenizer can increase input token counts by up to 35 percent, so teams managing tight cost budgets should test before scaling.
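To see how that tokenizer caveat plays out in practice, here is a minimal back-of-the-envelope estimator. Only the $5/$25 per-million rates and the up-to-35-percent inflation figure come from the article; the monthly token volumes are hypothetical numbers chosen purely for illustration.

```python
# Rates from the article: $5 per million input tokens, $25 per million output.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens, output_tokens, tokenizer_inflation=0.0):
    """Estimate monthly spend in dollars.

    tokenizer_inflation models the updated tokenizer's effect on input
    token counts (up to 0.35, i.e. a 35% increase, per the article).
    """
    effective_input = input_tokens * (1 + tokenizer_inflation)
    return effective_input * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical workload: 2B input tokens and 500M output tokens per month.
baseline = monthly_cost(2_000_000_000, 500_000_000)             # old counts
worst_case = monthly_cost(2_000_000_000, 500_000_000, 0.35)     # +35% input
print(f"baseline: ${baseline:,.2f}, worst case: ${worst_case:,.2f}")
```

Note that only the input side inflates: for workloads that are output-heavy, the effective price increase is proportionally smaller, which is why measuring your own input/output ratio before scaling matters.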

The Mythos Situation

Anthropic's decision to hold back Mythos Preview reflects a deliberate safety strategy. Opus 4.7 is the first model where Anthropic is testing new automated safeguards around cybersecurity use cases - what the company calls its Cyber Verification Program. Security professionals who wish to use Opus 4.7 for legitimate purposes like penetration testing and red-teaming can apply to join.

The signal here is clear: Anthropic believes its most capable model presents real dual-use risks, and it is working through those concerns methodically before any broad release.

What This Means for Your Business

For executives evaluating AI tools for complex, document-heavy, or multi-step workflows, Opus 4.7 is worth a hard look. The improvements in sustained reasoning and instruction-following are the kind that reduce the need for human review cycles - which is where the real cost savings tend to show up. If you're using Claude for coding workflows specifically, the /ultrareview feature alone could materially improve output quality for production-grade work.
