Google’s AI Overviews Said to Suffer From AI Hallucination, Advises Using Glue on Pizza


Google’s brand-new AI-powered search tool, AI Overviews, is facing blowback for providing inaccurate and somewhat bizarre answers to users’ queries. In a recently reported incident, a user turned to Google for help with cheese that wouldn’t stick to their pizza. While they must have been expecting a practical solution to their culinary troubles, Google’s AI Overviews feature presented a rather unhinged one. As per recently surfaced posts on X, this was not an isolated incident, with the AI tool suggesting bizarre answers to other users as well.

Cheese, Pizza and AI Hallucination

The issue came to light when a user reportedly searched Google for “cheese not sticking to pizza”. Addressing the culinary problem, the search engine’s AI Overviews feature suggested a couple of ways to make the cheese stick, such as mixing the sauce and letting the pizza cool down. However, one of the solutions turned out to be truly bizarre. As per the screenshot shared, it suggested that the user “add ⅛ cup of non-toxic glue to the sauce to give it more tackiness”.

Upon further investigation, the source was reportedly traced to an 11-year-old Reddit comment, which appeared to be a joke rather than expert culinary advice. However, Google’s AI Overviews feature, which still carries a “Generative AI is experimental” tag at the bottom, presented it as a serious suggestion in response to the original query.

Yet another inaccurate response from AI Overviews came to light a few days ago when a user reportedly asked Google, “How many rocks should I eat?” Citing UC Berkeley geologists, the tool responded that “eating at least one rock per day is recommended because rocks contain minerals and vitamins that are important for digestive health”.

The Issue Behind the False Responses

Issues like this have been surfacing regularly in recent years, especially since the artificial intelligence (AI) boom kicked off, drawing attention to a problem known as AI hallucination, in which a model confidently generates false or nonsensical information. While companies caution that AI chatbots can make mistakes, instances of these tools twisting facts and providing factually inaccurate and even bizarre responses have been increasing.

Google isn’t the only company whose AI tools have produced inaccurate responses, however. OpenAI’s ChatGPT, Microsoft’s Copilot, and Perplexity’s AI chatbot have all reportedly suffered from AI hallucinations.

In more than one instance, the source has been traced to a Reddit post or comment made years ago. The companies behind these AI tools are aware of the problem too, with Alphabet CEO Sundar Pichai telling The Verge, “these are the kinds of things for us to keep getting better at”.

Talking about AI hallucinations during an event at IIIT Delhi in June 2023, OpenAI CEO and co-founder Sam Altman said, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy and we are trying to minimise the problem. [At present,] I trust the answers that come out of ChatGPT the least out of anyone else on this Earth.”

