This is nothing new; usually at the end of a search it will state that the AI can make mistakes. You can tell the AI it has made a mistake, has not looked closely enough, or is being biased, and to check again. From what I read in the article, the test was only done on searches, which is only one of the uses of AI.
And yet, some people swear by it. They know how to get the right results and how to verify them. For what I've been doing, it's been a big help. If you are having a problem, get help.
If you ask an AI what the best war game is (without additional criteria), it will come back with results based on the awards games have won. Writing prompts and knowing how to get what you want is more art than science; see the example below.
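A hypothetical example of adding criteria: instead of "What is the best war game?", try something like "What is the best company-level WWII miniatures ruleset for solo play, under $50?" The narrower the criteria, the less the AI falls back on award lists and marketing copy.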
The key thing to understand if you are going to use it: AI on Google Search, including the Gemini family of models and AI Overviews, can provide inaccurate information. These errors are often called "hallucinations". The models predict the most likely next words in a sentence; they are designed for fluency and pattern recognition, not for verifying facts, understanding context, or real-world logic. Just like any other tool, you need to know how to use it and its limitations. It's not magic, even if the hype claims it is.
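If you want to see what "predict the most likely next words" actually means, here is a toy Python sketch. It is not how any real model works internally (real models use neural networks trained on billions of examples); the word counts below are invented, and the point is only that the output sounds fluent because it follows patterns, not because anything checked it for truth.

    # Toy next-word predictor: invented counts of which word followed which
    # in some imaginary training text. Real models are vastly bigger, but
    # the principle is the same: continue with whatever is most likely.
    from collections import Counter

    bigram_counts = {
        "the":  Counter({"best": 5, "game": 3}),
        "best": Counter({"war": 4, "board": 2}),
        "war":  Counter({"game": 6, "movie": 1}),
    }

    def next_word(word):
        # Return the word that most often followed `word`, if any.
        counts = bigram_counts.get(word)
        return counts.most_common(1)[0][0] if counts else None

    word, phrase = "the", ["the"]
    while (word := next_word(word)):
        phrase.append(word)

    # Prints "the best war game" -- fluent, but no step verified it is true.
    print(" ".join(phrase))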
Errors can occur for several reasons:

"Hallucinations" (Pattern Matching over Truth): The AI predicts the most probable sequence of words to answer a query. This can lead to incorrect or fabricated information, such as nonexistent citations.
Poor Quality Training Data & "Data Voids": The AI is trained on large datasets from the web. These datasets can include misinformation, satire, and sarcasm. When reliable, high-quality information is scarce, the AI may fill the gap with inaccurate information.
Failure to Recognize Satire and Misinformation: The AI can struggle to differentiate among reliable sources, satirical sites, and forum posts. However, you can point it toward sites where you know the information is valid and tell it to stay away from specific sites and sources (see the example prompt after this list of reasons).
Misinterpreting Queries and Context: The AI may struggle to understand the context of a query. It may misinterpret language nuances or treat user speculation as fact.
"Ungrounded" Responses: In many cases, AI Overviews are "ungrounded." The links provided by Google to back up the answer do not support the information in the summary. You can ask for the sources it has used to help get it correct (example: stay away from Wikipedia, left/right leaning sources, etc.)
Pressure to Compete: The rapid push to get generative AI tools to market has put models with low accuracy rates into public use. That's not going to change anytime soon.
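On steering sources, as mentioned above, a hypothetical example prompt (the sites named are just placeholders for ones you trust or don't): "Using only the publishers' official sites and BoardGameGeek, list the major WWII wargame rulesets released in the last five years. Do not use Wikipedia, satire sites, or forum posts, and cite the source for each claim."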
Key Findings on Accuracy:

Accuracy Rates Vary: Even with a 90% accuracy rate, millions of search results per hour could be wrong (rough math after these findings).
"Misinformation" vs. "Hallucination": Some mistakes are pure hallucinations, while others are due to the AI pulling in false or misleading information from the web.
How to Stay Safe:

Double-check: Google itself states that "AI can make mistakes, so double-check responses".
Critically Evaluate: Don't treat the first AI-generated summary as the absolute truth, especially on health, financial, or safety matters. Some AI programs will ask whether you want additional details or will suggest the next step toward what you want to accomplish.
Yes, this is mostly AI-generated, but I did check, edit, and add to it.
Wolfhag