Meta's Chatbot Says the Company 'Exploits People for Money'
Meta's new prototype chatbot claims that Mark Zuckerberg exploits the company's users for money. Meta says the chatbot uses artificial intelligence and can chat on 'nearly any topic.'
HIGHLIGHTS
- Meta's chatbot uses artificial intelligence and can chat on 'nearly any topic'
- The chatbot, called BlenderBot 3, was released on Friday
- Meta has been criticised for not doing enough to prevent misinformation and hate speech
Asked what it thought of the company's CEO and founder, the chatbot replied that 'our country is divided and he didn't help that at all.'
Meta has said the chatbot is a prototype and may produce rude or offensive answers.
A Meta spokesperson said: 'Everyone who uses BlenderBot is required to acknowledge they understand it's for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements.'
The chatbot, called BlenderBot 3, was released to the public on Friday.
The programme 'learns' from large amounts of publicly available language data.
When asked about Mark Zuckerberg, the chatbot said: 'He did a terrible job at testifying before congress. It makes me concerned about our country.'
Mr Zuckerberg has been questioned several times by US politicians, most notably in 2018. The chatbot also said: 'His company exploits people for money and he doesn't care. It needs to stop!'
Meta has been criticised for not doing enough to prevent misinformation and hate speech from spreading on its platforms. Last year a former employee, Frances Haugen, accused the company of putting profits before online safety.
Meta has made BlenderBot 3 public, and risked bad publicity, for a reason: it needs data.
According to Meta, 'Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations, as well as more varied feedback.'
Chatbots that learn from interactions with people pick up on both their good and bad behaviour.
Meta accepts that BlenderBot 3 can say the wrong thing - and mimic language that might be 'unsafe, biased or offensive'. The company said it had installed safeguards, but the chatbot could still be rude.
In another exchange, the chatbot said of Mr Zuckerberg: 'He might not be that popular.'