
Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial, February 19, 2026. Jon Putman | Anadolu | Getty Images
Meta lost two major court cases in a single week. The verdicts are significant on their own, but their importance for the broader AI industry goes deeper than the dollar amounts.
Juries in two separate trials - one in Santa Fe, New Mexico, and one in Los Angeles, California - found Meta liable for harm to children caused by its platforms. The New Mexico jury ordered Meta to pay $375 million in damages after the state attorney general alleged the company violated consumer protection laws and failed to protect minors from child predators on Facebook and Instagram. The Los Angeles jury found Meta and Google's YouTube negligent in a personal injury case brought by a young woman who alleged she became addicted to social media as a child, awarding $6 million in total damages split between the two companies.
Meta's stock dropped roughly 7% in the days following the verdicts, compounding pressure on a company already dealing with a delayed AI model release and hundreds of job cuts across Reality Labs and other divisions.
The Internal Research Problem
Here is where the story becomes directly relevant to every technology company running safety research on its own AI systems.
More than a decade ago, Meta hired social science researchers specifically to study how its platforms were affecting users. The intent was defensible: understand the risks, demonstrate seriousness about safety, improve the product. But Brian Boland, a former Facebook executive who testified in both trials, told CNBC that the findings in those internal research documents appeared to directly contradict how Meta portrayed itself publicly.
That gap between what Meta knew internally and what it said externally became the most damaging evidence in both courtrooms. Timothy Edgar, a Harvard Law School lecturer, called the outcomes a watershed moment, marking a significant shift in how Americans view Big Tech. Legal experts compared the precedent to the tobacco litigation of the 1990s, when internal documents showing companies knew about health risks while publicly denying them became the foundation of billion-dollar settlements.
What This Means for AI Companies
The implications for the AI industry are not hypothetical. Companies building AI systems - from social platforms with recommendation algorithms to consumer-facing chatbots to AI tools deployed in sensitive contexts - increasingly conduct their own internal safety research. Anthropic publishes safety research. OpenAI maintains internal red-teaming teams. Meta itself has researchers studying the effects of its AI-powered content systems.
If that internal research surfaces findings that contradict public statements - about safety, about harm reduction, about the effects of AI systems on vulnerable users - those documents now exist in a legal environment where they can be used against the company in court. The Meta verdicts demonstrate that juries are willing to hold technology companies accountable when internal knowledge diverges from public claims.
The Bigger Picture for Meta
The court losses arrive at a complicated moment. Meta is spending up to $135 billion on AI infrastructure in 2026 and has committed $600 billion in AI investment through 2028. It brought in Scale AI founder Alexandr Wang as chief AI officer. But prominent AI researchers including Yann LeCun, who led Meta's AI research for more than a decade, have departed. Its foundational Avocado AI model was delayed to at least May after failing to outperform rivals in benchmark tests.
The company's core advertising business remains strong - $196 billion in revenue last year, up 22% - but the legal calendar is about to get heavier. A federal trial in Northern California is expected to commence later this year, with school districts and parents alleging that Meta, YouTube, TikTok, and Snap caused mental health harms to teenagers. Attorneys representing plaintiffs told CNBC they expect additional financial penalties that will make this week's verdicts look like a preview.
For AI leaders watching this play out, the lesson is uncomfortable but clear. Doing the research to understand your product's risks is the right thing to do. But a gap between what that research finds and what the company says publicly is now a legal liability, not just a reputational one.




