Google and Microsoft chatbots are already exchanging false information.
If you do not believe that Big Tech's hasty rollout of AI chatbots has a very high chance of harming the web's information ecosystem, consider the following:
Currently, if you ask Microsoft's Bing chatbot whether Google's Bard chatbot has been shut down, it responds affirmatively. As evidence, it cites a news article discussing a tweet in which a user asked Bard when it would be shut down and Bard replied that it already had been, itself citing a Hacker News comment in which someone joked about this happening, after which someone else used ChatGPT to write fake news coverage of the event.
Between the start and finish of writing this story, Bing altered its response and now correctly says that Bard is still live. You can read this as proof that these systems are fixable, or as proof that they are so endlessly mutable that it is impossible to consistently track their flaws.
Still, if reading that made your head hurt, it should, and in more ways than one.
It is a warning sign that we are entering a large-scale game of AI misinformation telephone, in which chatbots are unable to judge whether news sources are trustworthy, misread stories about themselves, and misrepresent their own capabilities.
In this case, a single joking Hacker News comment set off the entire chain. Imagine what you could do if you actually wanted to make these systems fail.
It's a ridiculous scenario, but one that could have serious consequences. Because AI language models cannot reliably distinguish fact from fiction, turning them loose on the web risks leaving a rotting trail of falsehood and distrust, one that will be impossible to map fully or refute authoritatively.
All because Microsoft, Google, and OpenAI have decided that market dominance matters more than safety.
These companies can attach all the caveats they like, describing their chatbots as "experiments," "collaborations," and decidedly not search engines, but it is a weak defence.
We know how people actually use these systems, and we have already seen how they spread misinformation: inventing news stories that were never published and citing books that were never written. And now they are amplifying one another's errors too.