
Chatbot Disinfo Inflames the Los Angeles Protests

Zoë Schiffer: Oh, wow.

Leah Feiger: Yes, exactly — someone who already has Trump’s ear. This spread widely. So people went to Grok on X and asked, “Grok, what is this?” And what did Grok tell them? No, no — these aren’t actually images of the protests in Los Angeles, Grok said. It said they were from Afghanistan.

Zoë Schiffer: Oh, Grok, no.

Leah Feiger: It was like, “There’s no credible support for this. It’s misattributed.” It’s really bad. Really, really bad. Then in another case, another person shared these photos with ChatGPT, and ChatGPT was also like, “Yep, this is Afghanistan. This isn’t accurate.” Whoops, whoops. Not great.

Zoë Schiffer: I mean, this comes after many of these platforms systematically dismantled their fact-checking programs — don’t even get me started — and decided to purposely allow more content. Then you add chatbots to the mix, and for all of their uses — and I do think they’re genuinely useful — they are also very confident. When they hallucinate, when they mess up, they do it in a really convincing way. You will not catch me defending Google Search here. Absolute garbage, nightmare. But when it goes astray, it’s a lot more obvious that it’s going astray than Grok being completely convinced that you’re looking at photos of Afghanistan when you’re not.

Leah Feiger: It’s really worrying. I mean, it’s not just hallucinating — it’s hallucinating with the swagger of a drunk guy at a party.

Zoë Schiffer: Nightmare. Nightmare. Yes.

Leah Feiger: It’s like, “No, no, no. I’m sure. I’ve never been more certain of anything in my life.”

Zoë Schiffer: Absolutely. I mean, OK, why are chatbots giving these incorrect answers so confidently? Why don’t we see them just say, “OK, I don’t know, so maybe you should check somewhere else. Here are some reliable places to look for answers and information.”

Leah Feiger: Because they don’t. They don’t admit that they don’t know, and that is really wild to me. There’s actually been a lot of research on this. A recent study of AI search tools from the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.” Really, really, really wild, especially when you consider all of the articles during the election that were like, “Oh no, sorry, I’m ChatGPT, I can’t weigh in on politics.” And you’re like, well, you’re weighing in a lot right now.

Zoë Schiffer: OK, I think we should pause on that very scary note, and we’ll be right back.

[break]

Zoë Schiffer: Welcome back to Uncanny Valley. Today I’m joined by Leah Feiger, senior politics editor at WIRED. OK, so in addition to people trying to verify information and footage, there have been a lot of reports about misleading AI-generated videos. One TikTok account started uploading videos of supposed National Guard soldiers who claimed to have been deployed to the protests in Los Angeles, and you can see them saying false and inflammatory things, like that the protesters are “chucking balloons full of oil,” with one of the videos nearing a million views. So, I don’t know, it feels like people need to become more adept at identifying this kind of fake footage, but that’s really difficult in an environment that’s essentially context-free, like a post on X or a video on TikTok.
