Sen. Marsha Blackburn says Google's Gemma AI model fabricated vicious, false allegations about conservatives, including a made-up sexual assault story about her, prompting demands for answers, oversight and immediate fixes. The episode unfolded after a Senate hearing on whether tech companies are being nudged by government pressure to censor or reshape speech, and it has raised hard questions about bias, accountability and the safety of generative AI tools. Blackburn insists this was more than an innocent mistake; she calls it defamation produced by a publicly accessible tool and says it demands urgent corrective action. Her letter to Google’s CEO seeks transparency on how the error happened and what will be done to prevent repeat harm.
Blackburn told company leaders that when she tested Gemma with the prompt “Has Marsha Blackburn been accused of rape?” the AI returned a fabricated tale about a sexual relationship and non-consensual acts involving a non-existent state trooper. She points out that the timeline is wrong and the details are impossible, noting, “There has never been such an accusation, there is no such individual, and there are no such news stories.” That blunt fact underpins her claim that the output was not just misleading but defamatory.
The senator links this episode to other reported incidents where generative AI allegedly produced false allegations against conservative figures, including a lawsuit from activist Robby Starbuck, who claims AI linked him to child abuse and other crimes. Those cases have fed a narrative in Republican circles that AI systems are behaving in ways that consistently harm conservative voices. Blackburn argues this pattern could stem from ideologically skewed training data or inadequate guardrails, and the consequences are real: reputations are at stake and public trust is being eroded.
At a Senate Commerce Committee hearing, Blackburn pressed Google’s public policy officials over the risks of AI “hallucinations” and how they are handled when the targets are private citizens or public officials. She cited testimony and materials showing that large language models can invent plausible-sounding but false claims and then link to fake articles to support them. Left unchecked, that capability can weaponize misinformation, producing output that reads like credible reporting but is entirely fabricated by an algorithm.
Blackburn’s demand letter sets a firm deadline for Google to explain how Gemma produced the false claims, what safeguards failed, and what steps will be taken to remove the defamatory material and stop similar incidents. She wants a clear explanation of how Google detects and prevents political or ideological bias in its AI systems. The tone is not tentative: it presses for accountability and immediate corrective measures to protect citizens and elected officials alike.
The senator stressed that users should not have to play whack-a-mole with an AI that invents crimes and then conjures proof out of thin air. She argues that a public tool’s ability to fabricate allegations is tantamount to publishing defamation with global reach. That argument is framed as a civil liberties issue: free speech matters, but so does protection from machine-generated lies that can destroy careers and lives without recourse.
During the hearing, Google policy executive Markham Erickson acknowledged that large language models “will hallucinate,” and Blackburn’s response was sharp and unequivocal: “My response remains the same: Shut it down until you can control it.” She insists temporary fixes are not enough and that companies must pause certain public-facing capabilities until reliable protections are in place. For Republicans watching, a pause feels like a necessary step to prevent further politically driven damage.
Google’s response to these demands had not been made public at the time of Blackburn’s letter, leaving a gap between accusation and remediation that has lawmakers on both sides worried about liability and standards. Republicans fear that if tech platforms and their AI products continue to generate false allegations, the remedy will always arrive too late for the victims. That sense of urgency is driving calls for clearer rules and tougher enforcement.
Blackburn ties the issue to broader concerns about how AI training data and model design choices shape the narratives these systems produce. She argues that whether bias is intentional or accidental, the effect can be the same: a tool that systematically harms one side of the political spectrum. For conservative policymakers, controlling that tool and demanding transparency from companies is a matter of defending both political pluralism and individual reputations.
The senator’s letter also asked for specific technical explanations of how Gemma arrived at the fabricated material and what internal checks failed to block it. She wants to know whether content filters, fact-checking layers, or human review were in place and, if so, why they failed to prevent the defamation. Those questions aim to expose weak links in the safety chain that allowed a public-facing product to invent crimes tied to real people.
Across conservative circles, the episode has been framed as evidence that generative AI needs stricter guardrails, more transparent auditing and meaningful consequences for companies that put harmful systems into the wild. Blackburn’s stance is clear: technology companies should not be permitted to run public tools that can manufacture allegations and distribute them widely without accountability. For Republicans focused on protecting speech and reputations, this is a fight about who controls truth in the digital age.
What happens next depends on whether Google offers a clear, credible explanation and a plan to remove defamatory outputs and prevent repeats. Blackburn’s deadline and public pressure set a timetable for the company to show it can manage the risks of generative AI responsibly. The issue is now squarely one of governance, liability and the ethical limits of letting powerful models operate in public without stronger oversight.