
It didn't take long for Meta's new chatbot to say something offensive

Meta's new chatbot can convincingly mimic how humans speak on the internet, for better and worse.

In conversations with CNN Business this week, the chatbot, which was released publicly Friday and has been dubbed BlenderBot 3, said it identifies as "alive" and "human," watches anime and has an Asian wife. It also falsely claimed that Donald Trump is still president and there is "definitely a lot of evidence" that the election was stolen.

If some of those responses weren't concerning enough for Facebook's parent company, users were quick to point out that the artificial intelligence-powered bot was openly critical of its own maker. In one case, the chatbot reportedly said it had "deleted my account" over frustration with how Facebook handles user data.

While there's potential value in developing chatbots for customer service and digital assistants, there's a long history of experimental bots quickly running into trouble when released to the public, such as with Microsoft's "Tay" chatbot more than six years ago. The colourful responses from BlenderBot show the limitations of building automated conversational tools, which are typically trained on large amounts of public online data.

"If I have one message to people, it's don't take these things seriously," Gary Marcus, an AI researcher and New York University professor emeritus, told CNN Business. "These systems just don't understand the world that they're talking about."

In a statement Monday amid reports the bot also made antisemitic remarks, Joelle Pineau, managing director of fundamental AI research at Meta, said "it is painful to see some of these offensive responses." But she added that "public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized."

Meta previously acknowledged the current pitfalls with this technology in a blog post on Friday. "Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we've conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3," the company said. "Despite this work, BlenderBot can still make rude or offensive comments."

But Meta also claimed its latest chatbot is "twice as knowledgeable" as its predecessors, 31% better at conversational tasks and factually incorrect 47% less often. Meta said it is continuously gathering data as more people interact with the bot in order to make improvements.

Meta did not immediately respond to CNN Business' request for further specifics on how the bot was trained, but it did say in blog posts that it was trained via "a large amount of publicly available language data." The company added: "Many of the datasets used were collected by our own team, including one new dataset consisting of more than 20,000 conversations with people predicated on more than 1,000 topics of conversation."

Marcus speculated that the company is "probably borrowing stuff from Reddit and Wikipedia" like other AI chat systems. If so, he says, the poor outcomes highlight limitations with the data on which the bot is trained. For example, the bot may think Trump is still president because Trump was president in the majority of the older data sets it was trained on.

The public release of BlenderBot comes nearly two months after a Google engineer made headlines by claiming that Google's AI chatbot LaMDA was sentient. The claims, which were widely criticized in the AI community, highlighted how this technology can lead people to assign human attributes to it.

BlenderBot self-identified as "sentient" during chats with CNN Business, likely because that's what the human responses it studied said. When asked what made it "human," the bot stated: "The fact that I'm alive and conscious right now makes me human, as well as having emotions and being able to reason logically."

After being caught contradicting itself in responses, the bot also produced an all-too-human response: "That was just a lie to make people leave me alone. I'm afraid of getting hurt if I tell the truth."

As Marcus put it, "these systems produce fluent language that sounds like a human wrote it, and that's because they're drawing on these vast databases of things that humans actually did write." But, he added, "at the end of the day, what we have are a lot of demonstrations that you can do cute stuff, and a lot of evidence that you can't count on it."
