Thinking Machines Lab, the AI startup founded by former OpenAI chief technology officer Mira Murati, has secured a multi-year partnership with Nvidia that includes a significant investment and access to at least one gigawatt of the chipmaker's next-generation processors, Reuters reported March 10.

The deal gives Thinking Machines a path to deploy Nvidia's upcoming Vera Rubin systems starting in early 2027, placing the young startup directly into the escalating competition for large-scale AI training infrastructure. Financial terms of the agreement were not disclosed, though the computing power commitment alone represents enough capacity to power roughly 750,000 U.S. homes.
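
The homes comparison is a simple unit conversion. A minimal sketch, assuming an average U.S. household consumption of about 10,500 kWh per year (an assumed figure, not from the article), lands in the same ballpark as the reported estimate:

```python
# Sanity check for the "roughly 750,000 U.S. homes" comparison.
# Assumption (not from the article): an average U.S. household uses
# about 10,500 kWh of electricity per year.
HOURS_PER_YEAR = 8760
avg_household_kwh_per_year = 10_500          # assumed average consumption
avg_household_kw = avg_household_kwh_per_year / HOURS_PER_YEAR  # ~1.2 kW

gigawatt_kw = 1_000_000                       # 1 GW expressed in kW
homes_powered = gigawatt_kw / avg_household_kw
print(round(homes_powered))                   # ~834,000: same order as 750,000
```

The exact figure depends on the household-consumption assumption; a slightly higher average (around 11,700 kWh per year) would yield the article's 750,000.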

One Gigawatt Commitment Positions Startup Among Infrastructure Elite

The one gigawatt compute allocation signals extraordinary confidence in Murati's vision and represents one of the largest infrastructure commitments Nvidia has made to a startup founded less than two years ago. Industry executives use gigawatt-scale measurements when discussing the most ambitious AI training facilities—the kind required to train frontier models that compete with OpenAI's GPT series, Anthropic's Claude, and Google's Gemini.

Under the agreement, Thinking Machines will deploy Nvidia's Vera Rubin systems—the chipmaker's next-generation AI accelerators designed for training large language models at unprecedented scale and efficiency. The computing power will primarily be used to train the startup's artificial intelligence models, giving Thinking Machines access to cutting-edge hardware before most competitors can secure allocations.

The partnership reflects Nvidia's growing role across the AI ecosystem. The company has evolved far beyond chip supplier into a capital and infrastructure orchestrator that sits at the center of decisions about which AI labs gain access to scarce computing resources. Nvidia recently committed $30 billion to OpenAI and invested $10 billion in Anthropic, supplying the GPUs that power their training clusters and effectively positioning itself as kingmaker in determining which companies can compete at the frontier of AI capabilities.

Fundraising Continues Amid Talent Departures

Sources familiar with fundraising discussions told Reuters that Thinking Machines has explored another funding round that could push its valuation into the tens of billions of dollars. The startup has not disclosed its current valuation, though earlier reports from Crunchbase suggested the company raised a $2 billion seed round at a $10 billion valuation in 2025—which would represent the largest U.S. seed round of all time.

The continued fundraising activity comes despite significant talent departures that would typically undermine investor confidence. Co-founder and former chief technology officer Barret Zoph and co-founder Luke Metz both returned to OpenAI after joining Murati's venture early in its formation. Zoph's departure is particularly notable given his prominence in AI research: he co-authored influential work on neural architecture search and transformer optimization during his tenure at Google Brain before joining OpenAI.

Even with those departures, Murati's project remains one of the most closely watched new AI labs in Silicon Valley. Her track record at OpenAI, where she oversaw development of GPT-4, DALL-E, and ChatGPT before stepping down in late 2025, gives investors confidence that Thinking Machines can execute on an ambitious technical vision despite the organizational turbulence common among early-stage AI startups racing to assemble world-class research teams.

Nvidia's Strategic Calculus in AI Infrastructure Investments

The Thinking Machines partnership illustrates Nvidia's strategy of using compute allocations and direct investments to secure relationships with emerging AI labs that could become major customers or strategic partners. By committing chips and capital early, Nvidia gains visibility into technical roadmaps, influences infrastructure decisions, and ensures its hardware remains the standard for frontier AI development.

The approach creates powerful network effects: more AI labs training on Nvidia chips means more tools, libraries, and optimization techniques built specifically for Nvidia architectures, which in turn makes it harder for competitors like AMD, Intel, or custom chip startups to gain traction even if they offer superior price-performance on paper.

For Thinking Machines, the Nvidia partnership solves the most critical bottleneck facing any new AI lab: access to sufficient compute to train competitive models. Securing one gigawatt of next-generation chips eliminates years of uncertainty about whether the startup can obtain hardware allocations that larger, more established competitors can lock up through long-term contracts with cloud providers and chip manufacturers.

The multi-year structure also provides planning stability that enables Thinking Machines to design training runs, hire research talent, and commit to product roadmaps with confidence that compute capacity will be available when needed. This contrasts sharply with the spot-market approach many smaller AI companies must use, where they compete for leftover capacity at unpredictable prices and availability.

Silicon Valley's Infrastructure Arms Race Intensifies

The Thinking Machines announcement arrives as competition for AI training infrastructure reaches unprecedented intensity. OpenAI's $110 billion February 2026 funding round included massive commitments from Amazon, Nvidia, and SoftBank specifically to secure compute capacity. Anthropic's $30 billion Series G emphasized enterprise revenue growth but also included provisions for expanded training infrastructure. Nscale's $2 billion Series C last week valued the UK data center startup at $14.6 billion based primarily on its ability to deliver GPU clusters at scale.

The pattern is clear: in 2026, AI value increasingly flows to companies that control physical infrastructure—chips, data centers, power capacity, and cooling systems—rather than just algorithmic innovation. Thinking Machines' partnership with Nvidia positions the startup to compete in an environment where model quality depends as much on training scale as on research breakthroughs.

Whether Murati's team can capitalize on this infrastructure advantage remains an open question given the talent departures and intense competition from better-funded rivals. But the Nvidia partnership ensures Thinking Machines will have the computational firepower to find out.