
Welcome to today's edition of AI Business Weekly. OpenAI discussed raising $100 billion at a $750 billion valuation just three months after hitting $500 billion, Google launched Gemini 3 Flash as its default model to challenge ChatGPT with speed and cost advantages, SandboxAQ raised $300 million for quantum AI applications in drug discovery, Colorado defied Trump's executive order by advancing its AI regulations, and Hamming.ai secured $3.8 million to automate voice agent testing. These stories share one thread: the AI industry scales infrastructure at unprecedented speed while governance mechanisms—whether corporate testing platforms, state regulations, or quantum-classical hybrid approaches—struggle to keep pace with deployment realities. Let's dive in.
OpenAI Eyes $100B Raise at $750B Valuation—Up 50% Since October
OpenAI is in preliminary talks to raise as much as $100 billion at a $750 billion valuation, a 50% jump in just three months as AI infrastructure costs soar to unprecedented levels. Read more →
Google Makes Gemini 3 Flash Default Model Globally, Challenging ChatGPT
Google launched Gemini 3 Flash as the default model worldwide, offering Pro-grade reasoning at lower cost and beating GPT-5.2 on key benchmarks as competition with OpenAI intensifies. Read more →
SandboxAQ Raises $300M at $5.3B for Quantum AI in Drug Discovery
Eric Schmidt-backed SandboxAQ secured over $300 million from Marc Benioff and others to develop Large Quantitative Models combining quantum computing with AI for drug discovery and cybersecurity. Read more →
Colorado Defies Trump Order, Advances AI Regulations Toward June 2026
Colorado is moving ahead with AI regulations targeting high-risk systems despite Trump administration orders seeking state law preemption, setting up a federal-state legal showdown. Read more →
Y Combinator-Backed Hamming.ai Raises $3.8M to Test AI Voice Agents
Hamming.ai secured $3.8 million to automate testing for AI voice agents as companies rapidly deploy conversational AI without adequate quality assurance infrastructure. Read more →
The Future of Shopping? AI + Actual Humans.
AI has changed how consumers shop by speeding up research. But one thing hasn’t changed: shoppers still trust people more than AI.
Levanta’s new Affiliate 3.0 Consumer Report reveals a major shift in how shoppers blend AI tools with human influence. Consumers use AI to explore options, but when it comes time to buy, they still turn to creators, communities, and real experiences to validate their decisions.
The data shows:
Only 10% of shoppers buy through AI-recommended links
87% discover products through creators, blogs, or communities they trust
Human sources like reviews and creators rank higher in trust than AI recommendations
The most effective brands are combining AI discovery with authentic human influence to drive measurable conversions.
Affiliate marketing isn’t being replaced by AI; it’s being amplified by it.
📢 The Signal Behind the Noise
OpenAI's $750 billion valuation and $1.4 trillion infrastructure commitments reveal AI's transformation from software category to capital-intensive infrastructure rivaling telecommunications buildouts. Google's response—making Gemini 3 Flash its default model globally—shows that competitive dynamics increasingly favor distribution and efficiency over pure capability. SandboxAQ's $300 million raise validates that quantum-inspired approaches create defensible niches beyond foundation model races. Meanwhile, Colorado's defiance of federal preemption and Hamming.ai's testing platform represent two faces of AI governance: top-down regulation attempting to constrain harmful uses versus bottom-up tooling helping companies deploy responsibly. The disconnect is stark—OpenAI discusses $100 billion raises while a voice agent testing startup raises $3.8 million, a 26,000x difference reflecting where capital flows versus where quality control happens. Organizations that recognize this gap and invest in governance infrastructure proportional to deployment scale will separate themselves from those optimizing purely for capability and speed. The question isn't whether to scale AI—capital ensures that happens—but whether governance mechanisms mature fast enough to prevent systemic failures.
