This is legit.
- The actual conversation: https://archive.is/sjG2B
- The user created a Reddit thread about it: https://old.reddit.com/r/artificial/comments/1gq4acr/gemini_told_my_brother_to_die_threatening/
This bubble can’t pop soon enough.
I don’t understand. What prompted the AI to give that answer?
It’s not uncommon for these things to glitch out. There are many possible reasons, but here’s one I know of: raw LLM output is deterministic, since for a given input the model always ranks the same next token highest. To make the output more varied, designers add some randomness when sampling the next token, controlled by a parameter called temperature. Sometimes that randomness can send it off the rails.
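For anyone curious what "temperature" actually does, here's a minimal sketch of temperature-scaled sampling. This is not Gemini's actual decoder, just the standard textbook mechanism with made-up toy logits; the function name and values are purely illustrative.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model logits.

    temperature == 0 means greedy (deterministic) decoding;
    higher values flatten the distribution and add randomness.
    """
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        return int(np.argmax(logits))          # deterministic: always pick the top token
    scaled = np.asarray(logits, dtype=float) / temperature  # divide logits by temperature
    scaled -= scaled.max()                     # shift for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax over scaled logits
    return int(rng.choice(len(probs), p=probs))

# Toy logits for four candidate tokens (hypothetical values)
logits = [2.0, 1.5, 0.3, -1.0]
print(sample_next_token(logits, temperature=0))    # always token 0
print(sample_next_token(logits, temperature=1.2))  # sometimes a lower-probability token
```

The point: at temperature 0 the same prompt always yields the same continuation, while at higher temperatures the model occasionally picks an unlikely token, and one bad pick early on can steer the whole response somewhere strange.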
This is what they said before the installation of the Matrix
Additionally, Google’s generative AI stuff is exceptionally half-baked, to an extent that seems impossible for a megacorporation of Google’s calibre. There has already been a ton of coverage on it, like the LLM-generated search summaries suggesting you put inedible things on a pizza, and their image generator producing multiracial Nazis.