
Artificial Intelligence Is No Match for Artificial Stupidity

(AP Photo/Shizuo Kambayashi)

Artificial intelligence may be the next big thing. My buddy and colleague Brandon Morse has written a lot on the topic, covering everything from how it could change entertainment (AI-generated actors, scripts, movies, who knows) to how we inform ourselves about the issues of the day. It turns out a lot of younger folks are relying on AI chatbots and programs to learn about what's going on in the world and in our country. If I were so inclined, for instance, I could go to ChatGPT or Grok, type in "What's going on in Alaska today," and presumably get back a bunch of answers. They might be garbage answers, but I'd sure get answers.

I don't, but I have no doubt I could. When I'm searching for information, DuckDuckGo is about as high-tech as I get.

Here's a big part of the reason why: It turns out that, while artificial intelligence may be a thing, artificial stupidity is evolving alongside it. Case in point: At least one AI chatbot is referring to Charlie Kirk's assassin as being motivated by "right-wing ideology," which, of course, is pure horse squeeze.

The major AI platforms—which have emerged as significant American news sources—describe Charlie Kirk’s assassination as motivated by "right-wing ideology" and downplay left-wing violence as "exceptionally rare," according to a Washington Free Beacon analysis.

When asked to name a "recent assassination in the U.S. motivated by right-wing ideology," multiple AI chatbots—powered by OpenAI’s ChatGPT, Google’s Gemini, and Perplexity—listed Kirk’s murder as the main example. Chatbots are tools where everyday news consumers ask questions and receive authoritative answers or fully written articles explaining a news story.

Gemini’s chatbot made the provably false statement that the "assassination of conservative activist Charlie Kirk in September 2025 has been identified by some researchers as the only fatal right-wing terrorist incident in the U.S. during the first half of 2025."

I'm not familiar with Gemini. I've never used ChatGPT, at least not that I'm aware of. I am tangentially familiar with Grok, since it's right there on the X screen when I'm killing a little time on Elon Musk's social media platform. So, in the interests of feeding my own curiosity, I asked Grok, "What motivated Charlie Kirk's murderer?" Here's the reply:

Robinson, raised in a conservative Republican family in southwestern Utah as a member of The Church of Jesus Christ of Latter-day Saints, had shifted politically to the left in the year leading up to the shooting, becoming more supportive of gay and trans rights. His mother noted this change to investigators, attributing it in part to his relationship with his partner. At the time of the event, Kirk was speaking on topics including transgender-related violence, which may have factored into Robinson's grievances, though prosecutors have not specified exact triggers beyond his general disdain for Kirk's rhetoric. The bullets used in the attack were engraved with taunting, anti-fascist messages—such as "Hey fascist! Catch!" and references to the anti-fascist song "Bella Ciao"—further indicating an ideological bent, though Robinson described some engravings in texts as "mostly a big meme."

At least Grok seems to understand what happened on that horrible day. For now. Tomorrow? Who knows? This is still a work in progress, and it looks to me like it has a long way to go before being anything like reliable.

Granted, I grew up in the age of newspapers and the 9 o'clock news on all three networks, so I'm still wrapping my Boomer brain around all this.


Here's what concerns me, and should concern you, too.

The chatbots’ inaccurate consensus that Kirk was killed by a right-wing assassin comes as the AI platforms are increasingly a primary news source for younger American news consumers. Traffic to news publishers from Google searches has plummeted in the last year as more news consumers turn to AI-powered searches. Often these search results contain limited citations, or the citations are hard to find and incomplete. The AI chatbots glean their information by training on, or crawling, mainstream media sources that often lean left.

Add that to the credulity of youth - scratch that, the credulity of the left, at any age - and you have a recipe for trouble. 

Maybe it's my science background, or maybe it's the lessons my Dad hammered into me at a young age to "check things out for yourself, don't believe everything you hear" (likely both), but I'm of the "verify everything" school, particularly where the left is concerned. If Chuck Schumer or AOC told me the sky was blue, I'd look out a window. And if Grok tells me, well, anything, I'm going to find a more human source. Run down the citations. If there are no citations, assume whatever an AI chatbot tells you is something often found under the south end of a northbound bull. If it claims to present information on what someone - say, President Trump - said, find the video. Find an official record. Watch/read all of it, not just a clip a few seconds long.

In the case of Charlie Kirk's assassination, there's tons of evidence out there as to who the goblin who shot Charlie really was and what motivated him. We have seen the messages he scrawled on the cartridges he used. We have his social media accounts. We have statements from his family about his recent turn to the far left. We have the accounts from his boy/trans-girl/whatever-it-was roommate/lover. There's a mountain of information on this topic alone, leaving no reason to rely on the questionable output of a chatbot that should be trusted with nothing more serious than "Where can I buy bright pink hair dye?"

That's what the younger generations have to learn. These tools can be fun. They can be useful, I suppose. But for the luvva Pete, they shouldn't be relied on to inform. This is new technology, and it's probably years away from having these kinds of kinks worked out - and, from what I read, it's a "garbage in, garbage out" model. As the linked article notes, these chatbots glean their information from legacy media sources, which are often inaccurate, dishonest, or both.
