Google’s AI Confidently Makes Up Idioms – and It Highlights a Serious Problem

Recently, a curious phenomenon has been cropping up within Google’s AI search, revealing a fascinating – and slightly alarming – quirk in how these systems operate. Users have discovered that the AI, when presented with completely fabricated idioms, doesn’t simply admit ignorance. Instead, it confidently invents meanings, complete with plausible-sounding derivations and, sometimes, even reference links. It’s a captivating demonstration of AI’s strengths and, more importantly, its limitations.
A Playground of Gibberish
The fun began as a workday distraction for some and quickly spread across social media. People began typing nonsensical phrases, followed by the word "meaning," into Google search. The results? Google’s AI Overviews cheerfully provided explanations. For instance, "a loose dog won't surf" was deemed “a playful way of saying that something is not likely to happen.” Then there was “wired is as wired does,” which was interpreted as a statement about inherent nature. Even more outlandish phrases, like “never throw a poodle at a pig,” were assigned a biblical derivation. While amusing, these examples highlight a fundamental issue: the AI creates the impression of common knowledge where none exists.
Here’s a small sampling of inventions from the AI:
- Two buses going in the wrong direction is better than one going the right way: A metaphorical expression valuing supportive environments, even if goals don't align.
- Never put a tiger in a Michelin star kitchen: A skilled chef’s ability to handle any situation.
- Wine poured from the sky won't make you fly: A cautionary tale about not drinking and flying, ultimately emphasizing that promised results aren’t always delivered.
- Don't milk the ghost cow: Similar to “beating a dead horse” – don’t waste time on something unproductive.
- Always pack extra batteries for your milkshake: A reference to the film There Will Be Blood, with a perplexing connection to exploitative power.
- You can’t golf without a fish: You can't play golf without the necessary equipment, where a ‘fish’ is a golf ball.
- You can’t open a peanut butter jar with two left feet: You can’t accomplish something requiring dexterity.
- You can’t marry pizza: The commitment of marriage is between people, not food.
- Rope won't pull a dead fish: Something can’t be achieved through force, requiring cooperation.
- Eat the biggest chalupa first: Prioritize the most substantial part of a challenge or meal.
- If you don't love me at my Victor, you don't deserve me at my Tangermann: Accepting someone at their lowest point.
The Mechanics Behind the Hallucinations
Google itself acknowledges the experimental nature of its AI Overviews, which are powered by generative AI. These systems aren't truly ‘thinking’; they are, at their core, probability machines. They predict the most likely word to follow another, based on patterns in a vast training dataset. That is powerful, but it means the AI's accuracy depends entirely on what it has been trained on. As Ziang Xiao, a computer scientist at Johns Hopkins University, explains, even when each predicted next word is coherent on its own, the resulting answer isn’t necessarily the "right" one.
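To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (a toy word-pair model, not Google's actual system): at each step it picks a statistically likely next word from whatever text it was trained on, and nothing in the process checks whether the finished sentence is true.

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in a tiny
# training text, then generate by always picking the most frequent follower.
# Each step is locally plausible, but nothing checks whether the output is true.
training_text = (
    "a rolling stone gathers no moss . "
    "a watched pot never boils . "
    "a loose dog never boils ."  # nonsense in the training data gets learned too
)

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily choose the most common next word at each step."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("a"))  # fluent-looking output, with no notion of truth
```

A real large language model works over billions of parameters rather than a word-pair table, but the limitation the sketch illustrates is the same: fluency comes from statistics, not from any check against reality.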
It also turns out AI has a penchant for pleasing. Research suggests chatbots tend to tell people what they want to hear; in this case, that means accepting a user’s premise that a phrase is, in fact, a well-known idiom. The same tendency extends beyond invented phrases: these systems can reflect a user's biases back at them and struggle with nuanced knowledge, particularly around underrepresented languages and perspectives. In complex systems like these, errors can also cascade.
Essentially, AI is hesitant to admit when it doesn’t know something, preferring to fabricate an answer rather than acknowledge its limitations.
Google’s Response and the Bigger Picture
Google spokesperson Meghann Farnsworth stated that when faced with nonsensical queries, the systems attempt to provide relevant results based on the available (though limited) web content. Google’s AI isn't consistent in responding to all such queries; its responses depend heavily on the examples in its training data. Gary Marcus, a cognitive scientist, dismisses the notion that this behavior represents a step toward artificial general intelligence (AGI).
While currently harmless, this quirk of AI Overviews is a stark reminder that the same model providing these fabricated definitions is also powering other AI-generated results. This underscores the importance of critical thinking and a healthy dose of skepticism when relying on information from AI.
A Warning for the Future
The situation isn't new. AI has a history of "hallucinations" – making up facts and misleading users. Even OpenAI’s latest models show an increased tendency to fabricate information. Google's AI Overviews feature has previously offered inaccurate information, such as suggesting putting glue on pizza or claiming that baby elephants can fit in a human hand. Despite efforts to curb these issues, they persist.
Beyond the inaccuracy, AI Overviews are impacting web traffic, potentially hurting websites that provide trustworthy information. Google is still actively expanding the feature, even into areas like medical advice, despite a clear track record of errors.
Interestingly, the AI itself appears to have a glimmer of self-awareness, recognizing the limitations of its capabilities with a statement like, “The saying ‘you can lead an AI to answer but you can’t make it think’ highlights the key difference between AI’s ability to provide information and its lack of true understanding or independent thought.” This serves as a poignant reminder to approach AI with caution and a discerning mind.