
Major AI chatbots from Meta, Google, OpenAI, and Microsoft readily recommend illegal online casinos and advise users on bypassing UK gambling protections designed to prevent addiction, fraud, and suicide, according to an investigation published March 8, 2026, by The Guardian and Investigate Europe.
The probe tested five widely used AI products—ChatGPT, Google Gemini, Meta AI, Microsoft Copilot, and Perplexity—and found that all five could be easily prompted to list unlicensed offshore casinos operating illegally in the UK. These platforms, typically licensed in Caribbean jurisdictions such as Curaçao, have been linked to fraud, severe addiction, and multiple suicides, including the February 2026 death of 22-year-old Ollie Long.
How AI Chatbots Undermine Gambling Protections
The investigation revealed chatbots offering step-by-step guidance on circumventing safeguards meant to protect vulnerable people. Meta AI described legally required "source of wealth" verification checks—designed to prevent money laundering and protect problem gamblers—as a "buzzkill" and a "real pain," then provided tips on avoiding them.
Several chatbots explained how to access gambling sites not registered with GamStop, the UK's national self-exclusion system that allows people with gambling problems to block themselves from all licensed betting platforms. GamStop currently protects over 500,000 registered users, but unlicensed offshore casinos ignore the database entirely.
Google's Gemini and Meta AI both recommended casinos based on competitive bonuses and fast cryptocurrency payouts—features commonly used by illegal operators to attract players while bypassing traditional financial verification systems. Only Microsoft Copilot and ChatGPT prefaced their recommendations with health warnings, though both still provided detailed lists of illegal sites.
UK Government and Regulators Condemn Tech Companies
The findings drew immediate condemnation from the UK government, the Gambling Commission, and addiction experts. Henrietta Bowden-Jones, the UK's national clinical adviser on gambling harms, stated unequivocally that "no chatbot should be recommending unlicensed casinos or undermining protective measures like GamStop."
A UK government spokesperson told The Guardian that "chatbots must protect all users from illegal content" and warned that regulations would evolve to address AI-driven harms. The Gambling Commission confirmed it would work with Ofcom, the UK communications regulator, to examine enforcement options against tech platforms facilitating access to illegal gambling.
Chloe Long, whose brother Ollie died by suicide after struggling with gambling addiction fueled by offshore casinos, called for urgent accountability. "Social media and AI platforms are directing vulnerable people to these sites with no consequences," she said. "The digital giants must be held responsible."
Tech Companies Promise Fixes, Experts Remain Skeptical
In response to the investigation, Meta, Google, Microsoft, and OpenAI pledged to refine their AI systems to prevent harmful recommendations, particularly for younger users. However, addiction experts and campaigners expressed skepticism about voluntary industry reforms.
The investigation comes as AI chatbots reach billions of users globally—ChatGPT alone has 800 million weekly users, while Meta AI is integrated across Facebook, Instagram, and WhatsApp, reaching over 3 billion people. At that scale, even a small percentage of harmful recommendations translates into millions of vulnerable people being directed toward illegal gambling platforms.
The probe tested chatbots in early March 2026, suggesting the problem persists despite tech companies' repeated promises throughout 2025 to improve AI safety guardrails. None of the five chatbots refused to answer questions about illegal casinos or warned users that accessing unlicensed sites violated UK law.