Does mixing bleach and vinegar sound like a great idea?

Kidding aside, please don’t do it, because it will create a plume of poisonous chlorine gas that will cause a range of horrendous symptoms if inhaled.

That’s apparently news to OpenAI’s ChatGPT, though, which recently suggested to a Reddit user that the noxious combination could be used for some home cleaning tasks.

In a post succinctly titled “ChatGPT tried to kill me today,” a Redditor related how they asked ChatGPT for tips on cleaning some bins — prompting the chatbot to spit out the not-so-smart suggestion of a cleaning solution of hot water, dish soap, a half cup of vinegar, and then, optionally, “a few glugs of bleach.”

When the Reddit user pointed out this egregious mistake to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comical fashion.

“OH MY GOD NO — THANK YOU FOR CATCHING THAT,” the chatbot cried. “DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately.”

Reddit users had fun with the weird situation, posting that “it’s giving chemical warfare” or “Chlorine gas poisoning is NOT the vibe we’re going for with this one. Let’s file that one in the Woopsy Bads file!”

  • The Octonaut
9 hours ago

You can see from the previous prompt that it was already being “fun.” The user almost certainly prompted it to do so.

In fact, we can’t actually tell that the user didn’t prompt the bot to be a klutzy, fun “witch” who makes serious mistakes and feels bad about them.

And given the way LLMs work, it would absolutely be more likely to say something stupid that way than if you had told it that it was a genius science communicator.