Good Morning! Welcome to today's AI Business Weekly. You're part of a community that values clarity over hype and business impact over benchmark theater. Let's get to it.

Pentagon Signs Deal with xAI to Deploy Grok in Classified Military Systems as Anthropic Faces "Supply Chain Risk" Ultimatum Over AI Safety Guardrails

The US Department of Defense finalized an agreement with Elon Musk's xAI to deploy Grok inside classified military systems handling intelligence analysis and weapons development. The deal breaks Anthropic's monopoly as the Defense Secretary threatens to blacklist the company unless it removes safeguards on mass surveillance and autonomous weapons by a Friday deadline. Read more

Nvidia Confirms N1/N1X Arm-Based Laptop Chips Launching H1 2026 with Dell and Lenovo as First Major OEMs Targeting Apple MacBook Competition

The Wall Street Journal reports that the Nvidia-MediaTek partnership will debut consumer system-on-chips featuring up to 20 CPU cores and an RTX 5070-level integrated GPU in spring 2026. At least eight laptop models spanning the Dell XPS, Alienware, Lenovo Legion, and IdeaPad lines will directly challenge Intel and AMD's dominance in laptop processors. Read more

Anthropic CEO Dario Amodei Summoned to Pentagon for "Tense" Meeting Tuesday as Claude Faces Replacement Threat and Friday Deadline Looms

Defense Secretary Hegseth presented an ultimatum demanding Anthropic lift all safeguards on Claude for "all lawful purposes" or face a supply chain risk designation at a Friday 5:01 PM deadline. The designation would legally bar federal contractors from doing business with the company, potentially disconnecting Anthropic from the Western commercial ecosystem. Read more

Google and OpenAI Accelerate Classified AI Negotiations as Pentagon Diversifies Beyond Anthropic Claude with Multi-Vendor Strategy

The Defense Department intensified talks with Google and OpenAI to deploy Gemini and ChatGPT in classified systems as the Pentagon pursues redundant AI capabilities across multiple vendors. Both companies face the same "all lawful purposes" requirement that created the Anthropic crisis; Google is reportedly close to an agreement, and OpenAI negotiations are intensifying despite complex issues. Read more

Canadian Officials Express "Disappointment" After OpenAI Meeting Over Tumbler Ridge Shooting as Company Offers No New Safety Protocols

AI Minister Evan Solomon said OpenAI failed to present substantial safety measures at a meeting in Ottawa. The company had banned the Tumbler Ridge shooter's ChatGPT account seven months before the mass shooting that killed eight people but never alerted law enforcement. The Canadian government is considering regulation that would require AI companies to report users plotting violence. Read more

Meet America’s Newest $1B Unicorn

A US startup just hit a $1 billion private valuation, joining billion-dollar private companies like SpaceX, OpenAI, and ByteDance. Unlike those other unicorns, you can invest.

Why all the interest? EnergyX’s patented tech can recover up to 3X more lithium than traditional methods. That's a big deal, as demand for lithium is expected to 5X current production levels by 2040. Today, they’re moving toward commercial production, tapping into 100,000+ acres of lithium deposits in Chile, a potential $1.1B annual revenue opportunity at projected market prices.

Right now, you can invest at this pivotal growth stage for $11/share. But only through February 26. Become an early-stage EnergyX shareholder before the deadline.

This is a paid advertisement for EnergyX Regulation A offering. Please read the offering circular at invest.energyx.com. Under Regulation A, a company may change its share price by up to 20% without requalifying the offering with the Securities and Exchange Commission.

📢 The Signal Behind the Noise

When the Pentagon threatens to blacklist an AI company for refusing to enable mass surveillance while approving Musk's xAI without restrictions, the industry's ethical dividing line becomes brutally clear: comply or lose government access. But the real inflection point isn't military adoption—it's Canada demanding answers from OpenAI after a banned ChatGPT user killed eight people seven months later. One story is about wartime AI deployment. The other is about whether AI companies have a duty to warn authorities when users demonstrate violent intent. The Pentagon can force compliance through contracts. Democracies enforce it through regulation after tragedies prove self-policing fails. The question isn't whether AI companies will face mandatory reporting requirements. It's whether they'll implement them before or after the next preventable catastrophe.

🎁 Refer a Friend

Know someone who'd benefit from bite-sized AI business news? Share AI Business Weekly and help them stay ahead.