Google has limited which search queries are answered by its artificial intelligence (AI) systems after they produced a string of errors, including claiming that Barack Obama was a Muslim.

The company said it was adding new restrictions to its “AI overview” feature, which provides AI-generated summaries of answers to search queries.

The restrictions include limiting the number of health-related queries answered by AI, restricting the types of websites the system draws on, and curbing queries where the AI answers were deemed unhelpful.

Google unveiled the AI overview feature earlier this month and introduced it widely in the US last week before a planned worldwide launch later this year. The feature reads and interprets information from other websites to provide short summaries to people’s questions.

However, last week users found that it would give incorrect answers, often by drawing on joke websites or internet forum posts, or by misinterpreting reliable sources.

These errors included recommending that people eat “one small rock a day”, based on an article in the satirical news website The Onion, and suggesting glue as a pizza ingredient, based on a Reddit comment. It also said Mr Obama was the first Muslim US president after misinterpreting an academic textbook.

However, Google said that several screenshots widely shared on social media last week, such as ones appearing to encourage pregnant women to smoke or to say it was acceptable to leave dogs in hot cars, were fake.

Google said it had created “additional triggering refinements” for health queries, meaning AI overviews are less likely to appear for medical questions. It said it had built better mechanisms to detect joke websites, and had restricted queries “where AI overviews were not proving to be as helpful”.

However, it is persisting with the feature and insists that users find it useful.


Google has suffered a series of embarrassments related to its AI services in recent months. It stopped its Gemini chatbot from generating images of people after users found it would draw historically inaccurate images of black Nazi soldiers. 

Gemini’s predecessor, the Bard chatbot, was found to offer political opinions, such as claiming Brexit was a bad idea.
