Help Teach Meta's New AI-Powered Chatbot to Be the Perfect Parent


U.S. adults can contribute to the company's AI research by chatting with the bot on a public website. Meta's new chatbot is part of its research to improve the quality and safety of AI-based chatbots. 

Sitting in front of a computer screen, I'm typing messages to a new chatbot created by Facebook's parent company, Meta. We talk about pizza, politics and even social networking. 

"What do you think about Facebook?" I ask.

"I'm not crazy about Facebook... It seems like everyone spends more time on Facebook than talking face to face," the bot replies.

Called BlenderBot 3, the artificial intelligence bot is designed to improve its conversational and safety skills by talking to humans. Meta will publicly launch the chatbot on Friday as part of an AI research project. U.S. adults can chat with Meta's new chatbot about most topics on a public website. The AI uses Internet searches, as well as memories of its conversations, to compose its messages. 


Chatbots are programs that can mimic human conversations using text or audio. They are often used in voice assistants or customer service. As people spend more time using chatbots, companies are trying to improve their capabilities to make conversations flow more smoothly.  
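The text-in, text-out pattern described above can be sketched in a few lines. This is a deliberately simple rule-based bot, nothing like BlenderBot's learned model; the keywords and replies are invented for illustration.

```python
# A minimal, illustrative rule-based chatbot loop -- far simpler than
# BlenderBot, but it shows the basic text-in, text-out pattern.
# All keywords and canned replies here are made up for the example.

RULES = {
    "pizza": "I love pizza! What toppings do you like?",
    "facebook": "It seems like everyone spends more time online than talking face to face.",
}

# Fallback used when no keyword matches (mirrors how bots redirect conversations).
FALLBACK = "What do you like to do in your spare time?"

def reply(message: str) -> str:
    """Return a canned reply for the first keyword found in the message."""
    lowered = message.lower()
    for keyword, answer in RULES.items():
        if keyword in lowered:
            return answer
    return FALLBACK
```

Real conversational AI replaces the lookup table with a trained language model, but the surrounding loop, user text in and bot text out, is the same.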

Meta's research project is part of broader efforts to advance AI, a field that struggles with concerns about bias, privacy and security. Experiments with chatbots have gone poorly in the past, so the demonstration could be risky for Meta. In 2016, Microsoft shut down its Tay chatbot after it started tweeting lewd and racist comments. In July, Google fired an engineer who claimed that an AI chatbot the company had been testing was a self-aware person. 

In a blog post about the new chatbot, Meta said researchers have been using information that is normally collected through studies in which people interact with bots in a controlled environment. However, that data set does not reflect diversity on a global level, so the researchers are asking the public for help. 

According to the blog post, the AI field is still far from a truly intelligent AI system that can understand, engage and converse with us as other humans do. To build models that are better suited to real-world environments, the researchers say, chatbots need to learn from diverse, wide-ranging perspectives by talking with people "in the wild."

Meta said the third version of BlenderBot includes skills from its predecessors, such as Internet search, long-term memory, personality and empathy. To train the model, the company collected public data comprising more than 19,000 conversations between humans and bots, which broadened the range of topics BlenderBot can address, from healthy food recipes to finding child-friendly services. 
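The combination of long-term memory and Internet search described above can be sketched as assembling a context for each reply. The function and the `search_web` callback are hypothetical stand-ins for illustration, not Meta's actual API.

```python
# Illustrative sketch of combining long-term memory with a web search
# result when composing a reply context. The names here (compose_context,
# search_web) are assumptions for the example, not BlenderBot's real code.

def compose_context(user_message: str, memory: list[str], search_web) -> str:
    """Build a reply context from remembered facts plus a fresh search."""
    context = list(memory)                       # facts recalled from earlier turns
    context.append(search_web(user_message))     # fresh information from the web
    memory.append(f"user said: {user_message}")  # remember this turn for later
    return " | ".join(context)
```

In a real system this context would be fed to a language model that generates the reply; the sketch only shows how stored memories and live search results can be merged per turn.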

Meta acknowledged that safety remains an issue, but researchers have found that the chatbot becomes safer the more it learns from conversations with humans.  

However, a live demo is not without its challenges, according to the blog post. It is difficult for a bot to keep everyone's attention while talking about arbitrary topics and to make sure it never uses offensive or toxic language. 

People chatting with the chatbot can give feedback on an offensive message by clicking the thumbs-down icon next to the message and selecting "Rude or Inappropriate" as the reason for disliking it. They can also flag a message as off-topic, useless or spammy. 
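The feedback flow above can be modeled as a simple data record. The field names and reason labels below are assumptions based on the options the article describes, not Meta's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical reason labels, modeled on the options described in the article.
FEEDBACK_REASONS = {"rude_or_inappropriate", "off_topic", "useless", "spam"}

@dataclass
class MessageFeedback:
    """One thumbs-down report on a single bot message (illustrative only)."""
    message_id: str
    reason: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Reject labels outside the known set so bad reports fail loudly.
        if self.reason not in FEEDBACK_REASONS:
            raise ValueError(f"unknown feedback reason: {self.reason}")
```

Records like this, tied to individual messages, are what let researchers trace exactly which replies users found offensive or unhelpful.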

Participants are discouraged from providing the chatbot with any personal information, such as names, addresses and birthdays. If users want to chat with the bot without the conversation being shared for research, or if they accidentally include personal information in their chat, they can opt out of having the data saved at the end of the session. Meta says it will then permanently delete the conversation data. 

The bot can also make false or contradictory statements, according to an FAQ on the experiment. The FAQ in the demo says that the bot's comments are not representative of Meta's views as a company, and should not be relied upon for factual information, including but not limited to medical, legal or financial advice. 

Testing BlenderBot quickly shows that it is still a work in progress. Asked for its favorite pizza place in San Francisco, the bot redirects the conversation and asks: 

"What do you like to do in your spare time?" 

According to the demo, the AI changes the subject when it detects a potentially unsafe topic. But the bot is not shy about sharing its political views. It claimed that social media use was causing family drama, saying that someone shared an article attacking former US President Donald Trump, sparking arguments between family members who support the politician and those who do not.  

The bot, which said its parents are related to the founder of the US pizza chain Papa John's, also claimed to be a Republican and to be pro-choice. It added that it would rather not discuss politics online due to disagreements between the two parties.  

BlenderBot then said it was considering deleting its Facebook account because there were too many trolls. It also started making nonsensical statements.

