
Google had said the 'What People Suggest' feature aimed to provide users with information from people with similar lived experiences. Photograph: MementoJpeg/Getty Images
Google has removed an experimental AI Overviews feature that surfaced crowdsourced medical advice from Reddit, Quora, and other social media forums, The Guardian reported March 16. The removal came after physicians and public health experts raised safety concerns that the search engine was promoting unverified health recommendations alongside, or above, authoritative medical sources.
The feature, which launched in limited testing last month, used Google's Gemini AI to synthesize answers to health queries by analyzing discussions across online communities and presenting the aggregated advice as AI-generated summaries. Google defended the approach as reflecting "real user experiences" with treatments and symptoms, but medical professionals warned that the system amplified dangerous misinformation by giving amateur advice prominence equal to or greater than peer-reviewed research and clinical guidelines.
AI System Elevated Dangerous Amateur Medical Recommendations
The removed feature extracted health advice from Reddit threads, Facebook groups, and forum discussions where users shared personal experiences of treating conditions, often recommending unproven remedies or dangerous dosing regimens, or discouraging evidence-based medical care. Google's AI synthesized these discussions into confident-sounding summaries that appeared at the top of search results, while authoritative sources such as the Mayo Clinic, the CDC, and medical journals were relegated to traditional blue links below the AI-generated content.
Physicians documented cases in which the feature recommended unsafe practices, including using essential oils to treat serious infections, suggested incorrect medication dosages based on anecdotal forum posts, and advised against vaccines, citing debunked claims from anti-vaccine communities. Because the system weighted recommendations by engagement and discussion volume, popular but potentially harmful advice outranked medically accurate information with less social media traction.
The feature also surfaced mental health advice from unqualified forum users, including recommendations to discontinue psychiatric medications without medical supervision and suggestions that therapy could be replaced by lifestyle changes. Such guidance could prove dangerous for people with serious mental health conditions who rely on search results to make medical decisions.
Medical Community Raises Patient Safety Concerns
The American Medical Association and multiple physician groups sent formal complaints to Google, arguing the feature violated basic medical ethics by presenting amateur health advice without disclaimers distinguishing it from professional medical guidance. Doctors emphasized that health information is fundamentally different from restaurant recommendations or product reviews, where crowdsourced opinions carry far less safety risk.
Medical professionals also criticized Google for prioritizing engagement metrics and user-generated content over accuracy in health contexts, where misinformation causes measurable harm. Research shows that a large share of patients make health decisions based on internet searches, meaning errors in top search results directly influence treatment choices, medication adherence, and care-seeking behavior, sometimes with life-or-death consequences.
Public health experts pointed out that the feature could accelerate the spread of health misinformation during disease outbreaks or public health emergencies, when access to accurate information is most critical. During COVID-19, social media platforms struggled to moderate health misinformation; Google's AI feature would have systematically elevated the same unreliable content to top search positions with apparent algorithmic endorsement.
Broader Questions About AI Search Quality Standards
Google's decision to remove the feature reflects a growing tension between AI systems' ability to synthesize information from any available source and the responsibility to maintain quality standards for high-stakes queries. While AI Overviews work reasonably well for factual questions with clear answers, health queries often lack a single correct answer and require a nuanced understanding of individual medical contexts, contraindications, and risk-benefit tradeoffs that crowdsourced advice cannot provide.
The incident also demonstrates the challenges tech companies face in evaluating AI feature safety before wide deployment. Google's testing apparently did not adequately assess whether surfacing amateur medical advice created unacceptable risks, suggesting its internal review processes may lack sufficient medical expertise or may prioritize engagement and feature completeness over the accuracy of health information.
The removal comes as Google faces increasing scrutiny over AI Overviews quality across all topic areas. Users have documented the feature providing incorrect information, nonsensical answers, and recommendations based on satire or fictional content misinterpreted as fact. Health queries simply raised the stakes by adding safety concerns on top of the accuracy problems seen in less critical domains.
Implications for AI-Mediated Information Access
Google's retreat on medical AI Overviews suggests tech companies may need to establish category-specific quality standards rather than applying uniform AI approaches across all information types. Health, legal, and financial queries arguably require different handling than entertainment, shopping, or general knowledge questions where lower accuracy thresholds and experimental features pose less harm.
The decision also raises the question of whether AI systems should synthesize information from unverified sources at all, particularly in domains where expertise matters and amateur advice carries real risk. While crowdsourcing works for some applications, health information may be a category where algorithmic amplification of non-expert opinion causes more harm than good, regardless of how sophisticated the AI becomes at synthesizing discussions.