A recent AI feature called “AI Overview,” rolled out by Google, is serving inaccurate and potentially dangerous summaries in response to user searches, and Google doesn’t appear to have a real solution to the problem.

At the time of writing, Google has disabled some queries for its “AI Overview” feature after widespread reports that the system was producing erroneous and potentially harmful results.

Reports began spreading across social media and news communities about a user query asking the search engine how to keep cheese on pizza, to which the AI system reportedly responded by suggesting the user add glue. In another apparent misfire, the AI system reportedly told users that at least two dogs had owned hotels and pointed to a nonexistent dog statue as proof.

While many of the supposedly inaccurate results seem laughable or harmless, the main concern appears to be that the consumer-facing model generating “AI Overview” content presents inaccurate and accurate results with the same apparent confidence.

So far, according to Google representative Megan Farnsworth, who spoke to The Verge via email, the company has been limited to removing queries that produce inaccurate results as they appear. Essentially, it seems Google is playing a metaphorical game of whack-a-mole with its AI problem.

To make things even more confusing, Google seems to blame the problems on the humans creating the queries.

Farnsworth said:

“Most of the examples we saw were unusual queries, and we also saw examples that were falsified or could not be reproduced.”

It’s not yet clear how users are supposed to avoid making “unusual queries,” and, as is often the case with large language models, Google’s AI system tends to produce different responses to the same question when asked multiple times.

Cointelegraph reached out to Google for further clarification but did not receive an immediate response.

While it appears that Google’s AI system still needs further development to iron out these problems, Elon Musk, founder of rival AI company xAI, believes such machines will surpass human capabilities before the end of 2025.

As Cointelegraph recently reported, Musk told attendees at the VivaTech 2024 conference in Paris that he believes xAI could catch up with OpenAI and Google’s DeepMind by the end of 2024.

Related: Political correctness in AI systems is the biggest concern: Elon Musk
