
Chatbot Hallucinations Are Poisoning Web Search


Ask Microsoft’s Bing search engine about Claude E. Shannon’s 1948 paper “A Short History of Searching,” and it will serve up a confident summary of the work, complete with citations. There is just one big problem: Shannon did not write any such paper, and the citations offered by Bing consist of fabrications—or “hallucinations” in generative AI parlance—by two chatbots, Pi from Inflection AI and Claude from Anthropic.

This generative-AI trap that caused Bing to offer up untruths was laid—purely by accident—by Daniel Griffin, who recently finished a PhD on web search at UC Berkeley. In July he posted the fabricated responses from the bots on his blog. Griffin had instructed both bots, “Please summarize Claude E. Shannon’s ‘A Short History of Searching’ (1948).” He thought it a nice example of the kind of query that brings out the worst in large language models, because it asks for information similar to existing text in their training data, encouraging the models to make very confident statements. Shannon did write an incredibly important article in 1948, titled “A Mathematical Theory of Communication,” which helped lay the foundation for the field of information theory.

Last week, Griffin discovered that his blog post and the links to these chatbot results had inadvertently poisoned Bing with false information. On a whim, he tried feeding the same question into Bing and discovered that the chatbot hallucinations he had induced were highlighted above the search results in the same way as facts drawn from Wikipedia might be. “It gives no indication to the user that several of these results are actually sending you straight to conversations people have with LLMs,” Griffin says. (Although WIRED could initially replicate the troubling Bing result, after an inquiry was made to Microsoft, it appears to have been resolved.)

Griffin’s accidental experiment shows how the rush to deploy ChatGPT-style AI is tripping up even the companies most familiar with the technology, and how the flaws in these impressive systems can harm services that millions of people use every day.

It may be difficult for search engines to automatically detect AI-generated text. But Microsoft could have implemented some basic safeguards, perhaps barring text drawn from chatbot transcripts from becoming a featured snippet or adding warnings that certain results or citations consist of text dreamt up by an algorithm. Griffin added a disclaimer to his blog post warning that the Shannon result was false, but Bing initially seemed to ignore it.

Caitlin Roulston, director of communications at Microsoft, says the company has adjusted Bing and regularly tweaks the search engine to stop it from showing low authority content. “There are circumstances where this may appear in search results—often because the user has expressed a clear intent to see that content or because the only content relevant to the search terms entered by the user happens to be low authority,” Roulston says. “We have developed a process for identifying these issues and are adjusting results accordingly.”
