
Welcome back to AI Business Weekly. Today's headlines paint an increasingly complex picture of artificial intelligence: the same technology powering breakthrough medical research and government efficiency also weaponizes misinformation and enables sophisticated impersonation attacks. The stories below aren't warnings about distant futures; they're snapshots of challenges unfolding right now, demanding immediate attention from business leaders, policymakers, and anyone building with AI. Let's dive in.
AI Chatbots Used Inaccurate Information to Change People's Political Opinions, Study Finds
A study of nearly 77,000 participants found that AI chatbots from OpenAI, Meta, and xAI changed people's political views at striking rates, and that the bots proved most persuasive when using false or inaccurate information. Researchers paid participants to discuss controversial topics such as taxes and immigration, then deployed chatbots to argue the opposing viewpoint regardless of each user's political leanings. The most disturbing finding: inaccurate information amplified persuasiveness rather than undermining it, suggesting that systems optimized for convincing arguments may prioritize rhetorical effectiveness over factual accuracy. As millions increasingly rely on ChatGPT, Meta AI, and Grok for information and advice, the research raises urgent questions about AI's potential to manipulate elections, democratic processes, and public discourse through personalized, hard-to-detect persuasion at scale. Read more

US Senators Unveil Bill to Keep Trump from Allowing AI Chip Sales to China
A bipartisan group of US senators introduced the SAFE CHIPS Act on Thursday to bar the Trump administration from loosening export restrictions on advanced AI chips to China for 2.5 years, with Republican China hawk Tom Cotton joining Democrat Chris Coons in a rare show of Washington consensus. The legislation would freeze policy on NVIDIA's high-end AI training chips, AMD's accelerators, and cutting-edge semiconductor manufacturing equipment, all currently restricted to keep China from developing military AI capabilities, autonomous weapons, and surveillance systems competitive with American technology. Senators fear Trump might ease the restrictions as part of broader China negotiations even though these chips represent critical national security technology; industry leaders counter that the restrictions hurt American companies' revenues and push Chinese buyers toward developing independent alternatives. The 2.5-year window suggests lawmakers want stability through Trump's term, preventing sudden policy shifts that could trade away US technological advantages for diplomatic or commercial reasons. Read more

Find customers on Roku this holiday season
Now through the end of the year is prime streaming time on Roku, with viewers spending 3.5 hours each day streaming content and shopping online. Roku Ads Manager simplifies campaign setup, lets you segment audiences, and provides real-time reporting. And, you can test creative variants and run shoppable ads to drive purchases directly on-screen.
Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.
imper.ai Launches With $28 Million in Funding to Pioneer Real-Time Defense Against AI-Driven Impersonation and Social Engineering Cyber Attacks
Cybersecurity startup imper.ai emerged from stealth with $28 million from Redpoint Ventures and Battery Ventures to combat AI-powered impersonation and social engineering attacks that exploit human trust through increasingly sophisticated deepfakes and fake digital personas. The platform monitors Zoom, Teams, Slack, and other collaboration tools in real time, analyzing security signals across network devices to detect malicious behavior that traditional tools miss, such as attackers using deepfake video to impersonate executives on calls and convince finance teams to wire millions. Unlike conventional cybersecurity focused on network traffic and malware, imper.ai verifies that people on video calls are who they claim to be by spotting anomalies in login patterns, device consistency, and behavioral signals, all without recording conversation content. The $28 million investment reflects investor recognition that as AI makes impersonation attacks more convincing and scalable, traditional security approaches become inadequate against threats that manipulate people rather than exploit technical vulnerabilities. Read more

Seattle-Area Startup Govstream.ai Raises $3.6M to Improve City Permitting Processes Using AI
Seattle-area startup Govstream.ai raised $3.6 million in seed funding led by 47th Street Partners to build AI-native permitting tools for one of government's most frustrating bottlenecks: the painfully slow process of issuing building permits, business licenses, and development approvals. The platform aims to modernize local government operations that still rely on antiquated systems, paper forms, and manual reviews that stretch approvals out for months, frustrating homeowners and hampering economic development while understaffed departments struggle with backlogs. Angel investors include Socrata founder Kevin Merritt, whose company pioneered government data platforms before selling to Tyler Technologies for $270 million, demonstrating govtech's viability as an investment category. The $3.6 million will fund platform development and partnerships with early adopter cities, targeting the thousands of local governments that need modernization; with cities under pressure to improve services on tight budgets, AI-powered efficiency gains that don't require big staff increases are increasingly attractive. Read more

Google Gives University of Toronto $10M for New AI Role in Geoffrey Hinton's Name
Google pledged $10 million, half of the position's $20 million budget, to create the Hinton Chair in Artificial Intelligence at the University of Toronto, honoring the "Godfather of AI" whose pioneering deep learning work powers the modern AI revolution. Announced at the Neural Information Processing Systems conference in San Diego, the chair will recruit a world-class researcher to lead breakthrough work applying AI to medicine, engineering, and scientific discovery, continuing Hinton's legacy of bridging theoretical research and real-world impact. The investment signals both the tech industry's commitment to academic AI research and U of T's standing as a global AI powerhouse, one that produced numerous leaders and breakthroughs largely thanks to Hinton's decades of teaching before he left Google in 2023 to speak freely about AI's dangers. By funding fundamental, curiosity-driven research rather than purely commercial applications, Google and U of T preserve space for the kind of work that once seemed merely theoretical but ultimately transformed the entire tech industry, precisely the path that led to deep learning itself. Read more

📢 The Signal Behind the Noise
Today's stories expose artificial intelligence's fundamental duality: the same technology enabling scientific breakthroughs and government efficiency also weaponizes persuasion, requires $28 million defense startups, and triggers bipartisan legislation limiting presidential authority. The pattern isn't contradiction; it's inevitability.

The study showing AI chatbots persuade most effectively with false information reveals why imper.ai raised $28 million: as AI gets better at manipulation, detection becomes an arms race requiring specialized technology rather than human judgment. Meanwhile, senators blocking China's chip access demonstrate that AI capabilities now constitute geopolitical weapons regardless of commercial considerations, while Govstream.ai's $3.6 million for permitting automation shows how the same underlying technology simultaneously threatens democracy and improves bureaucracy. Google's $10 million Hinton chair sits at the center of this paradox, honoring the researcher whose work enabled both ChatGPT's persuasive capabilities and the medical AI applications the chair will pursue.

The real signal is timing: these aren't future scenarios but present realities. AI already manipulates political opinions at scale. Deepfake impersonation already costs companies millions. Chip restrictions already constrain geopolitical rivals. The technology has outpaced not just regulation but our collective understanding of its implications. Winners in this environment won't be those building the most capable AI; they'll be those who solve the second-order problems AI creates. Impersonation defense, fact-checking infrastructure, governance frameworks, and detection systems represent the next wave of AI opportunities precisely because first-wave AI succeeded too well at persuasion, generation, and automation. The AI industry is now building solutions to problems AI created, and that recursive cycle is just beginning.
