
Every pundit who predicted peak AI in 2025 was wrong.
Stanford University's Institute for Human-Centered AI released its 2026 AI Index today - 423 pages of independent data covering capabilities, investment, adoption, workforce impact, and public sentiment. The picture it draws is striking in both directions: performance numbers that exceed predictions, and trust numbers that should alarm everyone building these systems.
The Capability Numbers
On Humanity's Last Exam - a benchmark of questions written by subject-matter experts to represent the hardest problems in their fields - the top-scoring model answered 8.8% of questions correctly in 2025. That figure now stands at 38.3%, and the best models as of April 2026 have crossed 50%. On SWE-bench Verified, which measures software engineering performance, scores jumped from 60% to near 100% of the human baseline in a single year. AI adoption has reached 88% in the tech industry. Four in five university students now use generative AI.
Generative AI reached 53% adoption among the global population in just three years - faster than the PC and faster than the internet. The estimated value of generative AI tools to US consumers hit $172 billion annually by early 2026, with the median value per user tripling between 2025 and 2026.
The China Question
The report confirms what was increasingly visible in model rankings: the US-China gap on AI performance has effectively closed. As of March 2026, the top models from all leading labs are clustered together with razor-thin differences. The report describes leading models as "nearly indistinguishable from one another." China leads the world in AI publications, citation counts, total patent output, and industrial robot installations. South Korea has emerged as the leader in innovation density, filing more patents per capita than any other country.
One consequence of the policy environment: US restrictions on H-1B visas, including a new $100,000 employer fee per hire, have caused a sharp drop in AI researchers coming to America. The talent pipeline the US built over decades is cracking.
The Trust and Transparency Problem
Here the report turns uncomfortable. Only 10% of Americans say they are more excited than concerned about AI in daily life. The US reported the lowest trust in its government to regulate AI among all surveyed countries, at 31%. Employment for software developers aged 22 to 25 has fallen nearly 20% since 2022. A third of organizations expect AI to shrink their workforce.
Meanwhile, transparency has declined. Over 90% of notable AI models are now created by private companies. Google, Anthropic, and OpenAI have all stopped disclosing dataset sizes and training durations for their latest models. Of the 95 most notable models launched last year, 80 were released without their training code. AI industry representatives have tripled their share of witnesses at congressional hearings since 2017, while neutral academics have largely disappeared from those same hearings.
The capabilities are real. The investment is real. The adoption is real. The transparency is gone.
