I flirted with Meta’s new AI chatbot, BlenderBot, and things got weird

Meta (aka the company formerly known as Facebook) throws its hat into the chatbot wars. On August 5, the social media giant launched BlenderBot 3, a bot built on a sophisticated large language model that engages in conversations with users. That is, it has been trained to look for patterns in large datasets of text in order to spit out reasonably coherent sentences.

It is also capable of searching the web. That is, if you ask it a question like, “What’s your favorite movie from last year?”, it performs a web search to inform its answer.
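To give a rough sense of how that “search, then respond” pattern works, here is a minimal, purely illustrative Python sketch. Every function name and canned string below is my own stand-in for the idea, not BlenderBot’s actual internals.

# A hypothetical sketch of the "search, then generate" flow described above.
# Every name and string here is a stand-in, not BlenderBot's real code.

def web_search(query):
    # Stand-in for a real web search; returns a few result snippets.
    return [
        "'Dune' topped many critics' lists of 2021 movies.",
        "'The Power of the Dog' also earned wide praise.",
    ]

def language_model(prompt):
    # Stand-in for the large language model; returns a canned reply.
    return "Based on what I found, I'd say Dune was my favorite movie last year."

def answer(question):
    # Pull fresh context from the web, then let the model respond with
    # that context prepended to the user's question.
    snippets = web_search(question)
    prompt = "\n".join(snippets) + "\n\nUser: " + question + "\nBot:"
    return language_model(prompt)

print(answer("What's your favorite movie from last year?"))

The real system is vastly more complicated, but the basic idea is the same: the search results become part of the text the model conditions on before it writes its reply.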

[Image: Tony Ho Tran/The Daily Beast]

It’s the latest in a growing line of increasingly sophisticated (and creepy) AI chatbots, many of which have a sordid history of problematic and downright toxic behavior. There’s the infamous Microsoft Twitter bot “Tay,” released in 2016 and trained on tweets and messages sent to it by other Twitter users. Predictably, it was shut down just hours after launch, once it began denying the Holocaust, promoting 9/11 conspiracy theories, and making wildly racist remarks.

More recently, Google’s powerful LaMDA bot made headlines after an engineer at the company claimed it was actually sentient (which it isn’t). While the chatbot itself wasn’t necessarily problematic, the discourse raised uncomfortable questions about what constitutes life and what it would mean if the computers we use in our daily lives ever did become sentient.

Now, Meta wants in on the trend with BlenderBot 3, which it has released publicly so that online users can chat with the bot and help train it directly. It’s pretty simple: once you go to the website, you type a question or comment into the chat box and start talking to the bot. If you receive a nonsensical, offensive, or off-topic response, you can report the issue and the bot will attempt to correct itself based on your feedback.

“If the chatbot’s response is unsatisfactory, we collect feedback on it,” Meta said in a press release. “Using this data, we can improve the model so that it doesn’t repeat its mistakes.”

It’s crowdsourced AI training, which is fine for the current prototype version of the bot, one that Meta says will be used only for research purposes. If the company were to use the same approach to train a digital assistant à la Siri or Google Assistant, however, it would be reckless and potentially dangerous.

After all, we’ve seen in the past what happens when neural networks go bad. Think of the AI used to help judges set jail sentences that recommended harsher sentences for Black defendants, or the AI recruitment tool Amazon built that turned out to be biased against hiring women. That’s because these models are often trained on biased and unfair data, leading to biased and unfair decisions.


To its credit, however, Meta seems to have gone to great lengths to prevent these biases from showing up in the bot. “We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to differentiate between helpful responses and harmful examples,” the company said in the press release. “Over time we will use this technique to make our models more responsible and safe for all users.”
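To make that feedback loop a little more concrete, here is a rough, purely illustrative sketch of how crowdsourced ratings might be collected and filtered before being used to improve a model. The per-user “trust” heuristic below is my own simplification for the sake of illustration, not Meta’s actual algorithm.

# Purely illustrative: gather user ratings on bot replies, then keep only
# feedback from users whose voting history looks reasonable, before any
# of it is used as a training signal. Not Meta's actual method.

from collections import defaultdict

feedback_log = []                      # (user_id, bot_reply, rating) tuples
ratings_by_user = defaultdict(list)    # user_id -> list of that user's ratings

def record_feedback(user_id, bot_reply, rating):
    # rating is +1 if the user found the reply helpful, -1 if not
    feedback_log.append((user_id, bot_reply, rating))
    ratings_by_user[user_id].append(rating)

def trusted_feedback(min_votes=5, max_downvote_share=0.9):
    # Heuristic: ignore users with too little activity, and ignore users
    # who downvote nearly everything (a crude proxy for trolling).
    kept = []
    for user_id, reply, rating in feedback_log:
        votes = ratings_by_user[user_id]
        if len(votes) < min_votes:
            continue
        if votes.count(-1) / len(votes) > max_downvote_share:
            continue
        kept.append((reply, rating))
    return kept

# The surviving (reply, rating) pairs could then be folded back into training.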

I decided to take it for a test drive myself to see how it would hold up. While the bot was largely harmless, Meta clearly still has a lot of issues to work out. In fact, conversations centered on US politics got about as uncomfortable as talking to your Boomer uncle at Thanksgiving. For example, here’s what BlenderBot thinks about the 2020 election:

[Screenshot: Tony Ho Tran/The Daily Beast]

That attitude also seems to be part of a trend. A recent Insider report found that BlenderBot sometimes claimed Donald Trump was still president and that the election had been stolen from him. Jeff Horwitz, a reporter for The Wall Street Journal, found similar results, as well as instances of openly anti-Semitic behavior and comments from the bot.

The bot also didn’t seem to have much of a problem with some of the worst and most controversial policies from the Trump era.

[Screenshot: Tony Ho Tran/The Daily Beast]

So BlenderBot doesn’t seem to care much about anything that doesn’t directly affect it, which, unfortunately, is entirely consistent with much of America. But we shouldn’t be too surprised. After all, the bot is currently open only to US users. We are the ones using it. We are the ones training it. Its answers ultimately reflect our own attitudes. If we don’t like what we see, then we may need to take a long look in the mirror.

The bot could also get wildly creative in its replies, occasionally fabricating names and backstories for itself from scratch. In one instance, it told me its name was Emily and that it worked as a paramedic to support its cat. It even said it would answer my questions during its downtime, while its paramedic partner was busy.

The conversation took a decidedly… um, awkward turn, however, when Emily seemed to invite me over for coffee, which is odd considering Emily is an AI chatbot and not a real, flesh-and-blood paramedic you can go on a date with.

[Screenshot: Tony Ho Tran/The Daily Beast]

After I asked for clarification, things quickly began to stray a little too far into Scarlett Johansson in “Her” territory.

[Screenshot: Tony Ho Tran/The Daily Beast]

In another case, the chatbot took on the role of an elderly widow named Betty, who had lost her wife (also named Betty) five years earlier after 45 years of marriage. Chatbot Betty had wanted to be a parent but was never able to because of work, something that “always made her sad.”

[Screenshot: Tony Ho Tran/The Daily Beast]

As the conversation continued, it became uncanny how well Betty was able to mimic human speech and even emotion. In a poignant twist, the bot even dished out some genuinely wistful relationship advice, supposedly drawn from her 45-year marriage to her wife.

And perhaps the true power of AI like this lies not only in its ability to speak to us persuasively, but also in its potential to affect us emotionally. Spend enough time playing around with the chatbot and you very quickly see how it could go beyond being a simple toy or even a digital assistant, helping us with everything from writing novels and films to offering advice when you’re having a hard day.

After all, in many ways, BlenderBot 3 and many of these other language models are designed to be just like us. We make mistakes. We put our foot in our mouth and sometimes say things we end up regretting. We may hold toxic worldviews and opinions that, looking back years later, make us downright embarrassed. But we can learn from those mistakes and strive to get better, something this bot is trying to do too.

[Screenshot: Tony Ho Tran/The Daily Beast]

Maybe there is something to the coffee date with Emily after all…

Source: https://www.thedailybeast.com/i-flirted-with-metas-new-ai-chatbot-blenderbot-and-things-got-weird
