Chatbots Can’t Be Trusted. Here’s Why

Chatbots are our new BFFs – or maybe not?
A new analysis from AI startup Vectara offers insight into the reliability of chatbots by measuring how factually accurate their output actually is. The results aren't encouraging.
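
Vectara's approach, roughly, is to have each model summarize a set of reference documents and then score every summary for factual consistency against its source; the share of summaries flagged inconsistent becomes the hallucination rate. The Python sketch below illustrates the idea. Vectara has open-sourced a consistency scorer (`vectara/hallucination_evaluation_model` on Hugging Face), but the loading details, the 0.5 cut-off, and the `summarize` helper here are assumptions for illustration, not Vectara's exact pipeline.

```python
# Minimal sketch of hallucination-rate measurement, in the spirit of
# Vectara's leaderboard: summarize documents, then score each summary
# for factual consistency against its source document.
from sentence_transformers import CrossEncoder  # pip install sentence-transformers

# Vectara's open-source consistency scorer. Assumed loading path: newer
# revisions of this model may require trust_remote_code -- see its model card.
scorer = CrossEncoder("vectara/hallucination_evaluation_model")

def hallucination_rate(pairs, threshold=0.5):
    """pairs: list of (source_document, model_summary) tuples.
    Returns the fraction of summaries scored as factually inconsistent.
    The 0.5 threshold is an assumption, not Vectara's published cut-off."""
    scores = scorer.predict(pairs)  # ~1.0 = consistent, ~0.0 = contradicted
    flagged = sum(1 for s in scores if s < threshold)
    return flagged / len(pairs)

# Hypothetical usage: `summarize` stands in for any chatbot API call.
# docs = load_reference_documents()
# pairs = [(doc, summarize(doc)) for doc in docs]
# print(f"hallucination rate: {hallucination_rate(pairs):.1%}")
```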

Chatbots make things up – the only question is to what extent. In this respect, Google's PaLM produced astonishing results, though not the kind of astonishing you want: according to the researchers, it made up information in 27% of its answers.

Furthermore, PaLM 2 powers part of Google's Search Generative Experience, the feature that highlights snippets of information in response to common search queries – and it, too, has proven highly unreliable.

To see just how strange things can get, ask Google whether there's an African country beginning with the letter "K."

How did this happen?

Well, there's really no mystery. Google's AI pulled its answer from ChatGPT output, which in turn drew on a joke Reddit thread built entirely around the punchline reply: "Kenya suck on deez nuts lmaooo."

Other systems are faring noticeably better. Anthropic's Claude 2, for example, produces nonsense 8% of the time, while the figures for Meta's Llama and OpenAI's GPT-4 are 5% and 3%, respectively.

While Google is reportedly still refining its AI search features, some users have reported that the generated answers are shrinking, and in some cases disappearing, from many searches.

The research's results arrive just as Elon Musk has launched Grok, his new AI chatbot, which he describes as more "based and sarcastic". The accuracy of its output has yet to be gauged.

Previously, GN Crypto shed light on "Sex, AI, and Chatbot: Fresh Controversy."