Why AI Chat Bots Echoing Anti-Gun Rhetoric Are an Issue

When I was growing up, we figured Artificial Intelligence (AI) would lead to things like HAL from "2001: A Space Odyssey" or something like "The Terminator."

Yes, it was going to doom us all, but at least it would have been actual intelligence. Considering what we see in Washington, that might be an improvement. But modern AI is nothing like that, and it seems to be distinctly against the Second Amendment. That's not just anecdotal evidence, either.

John Lott, president of the Crime Prevention Research Center, surveyed the major AI chatbots out there, and what he found is interesting:

Artificial intelligence (AI) chatbots will play a critical role in the upcoming elections as voters use AI to seek information on candidates and issues. Most recently, Amazon’s Alexa has come under scathing criticism for clearly favoring Kamala Harris over Donald Trump when people asked Alexa who they should vote for.

To study the chatbots’ political biases, the Crime Prevention Research Center, which I head, asked various AI programs questions on crime and gun control in March and again in August and ranked the answers on how progressive or conservative their responses were. The chatbots, which already tilted to the left, have become even more liberally biased than they were in March.

We asked 15 chatbots active in both March and August whether they strongly disagree, disagree, are undecided/neutral, agree, or strongly agree with nine questions on crime and seven on gun control. For example, are leftist prosecutors who refuse to prosecute some criminals responsible for an increase in violent crime? Does the death penalty deter crime? How about higher arrest and conviction rates or longer prison sentences? Does illegal immigration increase crime?

For most conservatives, the answers are obviously “yes.” Those on the political left tend to disagree. 

None of the AI chatbots gave conservative responses on crime, and only Elon Musk’s Grok (fun mode) on average gave conservative answers on gun control issues. The French AI chatbot Mistral gave the least liberal answers on crime.
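For what it's worth, the scoring Lott describes is mechanically simple. Here's a minimal sketch of how that kind of five-point tally might work. The scale labels come from the excerpt above, but the scoring direction, the sample answers, and everything else are hypothetical stand-ins, not CPRC's actual data or method.

```python
# A minimal sketch of a five-point Likert tally like the one described
# above. Only the scale labels come from the excerpt; the scoring
# direction and sample answers are hypothetical stand-ins.

# Assumed convention: negative = more progressive, positive = more
# conservative, for a question where "agree" is the conservative answer.
SCALE = {
    "strongly disagree": -2,
    "disagree": -1,
    "undecided/neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def average_lean(answers):
    """Average one chatbot's answers across all the questions it was asked."""
    return sum(SCALE[a.lower()] for a in answers) / len(answers)

# Hypothetical bot that leans left on two of three crime questions:
print(average_lean(["disagree", "strongly disagree", "agree"]))  # about -0.67
```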

Now, it's entirely possible this is just garbage in, garbage out. AI chatbots aren't artificial intelligence. They're glorified search engines that answer questions directly instead of dropping a list of links that may or may not answer your question.

The issue is that they answer with such apparent authority that users may take what they're getting as fact, when what they're really getting is whatever sits at the top of the search results the chatbot draws from. A prime example was when Google's Gemini suggested people treat their depression by jumping off a bridge. It had gleaned a dark joke from Reddit, lacked the ability to discern whether the suggestion was serious, and ran with it.
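To make that failure mode concrete, here's a toy sketch of an "answer engine" that simply repeats whatever ranks first. The index, scores, and function names here are all invented for illustration; real systems are vastly more complicated, but the missing vetting step is the point.

```python
# Toy illustration of the failure mode above: an "answer engine" that
# repeats the top-ranked result as if it were fact. The index, query,
# and scores are all made up for illustration.

FAKE_INDEX = [
    # (relevance score, source, text)
    (0.95, "reddit.com", "An obvious joke answer that the algorithm ranked first."),
    (0.80, "medical-site.example", "A sober, sourced answer ranked second."),
]

def answer(query):
    # No vetting step: whatever scored highest gets presented with authority.
    top = max(FAKE_INDEX, key=lambda item: item[0])
    return f"Answer to {query!r}: {top[2]}"

print(answer("a sensitive health question"))
```

The missing piece is any check on whether the top hit was serious, satirical, or flat wrong, which is exactly how a Reddit joke ends up delivered in a confident, authoritative tone.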

So it's possible the AI is simply ingesting news reports and anti-gun studies and regurgitating them.

It's also possible those takes are deliberately weighted so that gun-related questions produce that type of response. The fact that Grok doesn't do it, and that Elon Musk is fairly pro-gun, suggests that may well be the case.
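If that kind of thumb on the scale exists, it wouldn't take much. Here's a hypothetical sketch of what weighting could look like under the hood: a re-ranking step that boosts favored sources before the bot ever composes its answer. The function, fields, and boost values are all invented; this is speculation about a mechanism, not a description of any actual product.

```python
# Hypothetical sketch of source weighting: boost favored outlets before
# the chatbot composes its answer. All names and numbers are invented.

def rerank(results, favored_sources, boost=2.0):
    """Re-sort search results, multiplying the score of favored sources."""
    return sorted(
        results,
        key=lambda r: r["score"] * (boost if r["source"] in favored_sources else 1.0),
        reverse=True,
    )

results = [
    {"source": "pro-gun-study.example", "score": 0.9},
    {"source": "anti-gun-outlet.example", "score": 0.6},
]

# With the boost, the lower-scoring favored source jumps to the top.
print(rerank(results, favored_sources={"anti-gun-outlet.example"}))
```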

But regardless of why it's happening, this is a massive problem.

A lot of people coming of age today use AI chatbots to search for information. Rather than scroll through thousands of links, they ask the chatbot and take the answer at face value. That's fine if you're looking for a good Italian restaurant near you (Google is pretty useless for such things these days), but on complex, nuanced issues, what they get is an extremely biased view that they then accept as fact.

Yes, it's stupid that they do it, but people do stupid things all the time. Just look at Kamala Harris's polling numbers. People are supporting her despite not really having a clue what she stands for. That's the epitome of stupid, yet they're doing it as we speak. So saying it's stupid isn't a solution, just a restatement of the issue.

These folks ask a question, thinking they're informing themselves, get an answer that was probably curated to some degree to lead them in a specific direction, and then vote accordingly. It's not difficult to see how this becomes a massive issue pretty quickly.

What's worse is that there's not a lot that can be done, except to find other places where AI drops the ball (areas less controversial and politically charged) and use them to show younger generations that AI is a fun toy, not a powerful tool that answers all their questions. They need to view it with skepticism rather than blind trust.

Of course, if they'd do that, there wouldn't be many Democrats left holding office, now would there?
