I’ve fucked around a bit with ChatGPT and while, yeah, it frequently says wrong or weird stuff, it’s usually fairly subtle shit, the kind of thing I actually had to look up to verify was wrong.
Now I’m seeing Google telling people to put glue on pizza. That’s a bit bigger than getting the name of George Washington’s wife wrong or having Jay Leno’s birthday off by 3 years. Some of these answers are so cartoonish in their wrongness that I half suspect some engineer at Google is fucking with it to prove a point.
Is it just that Google’s AI sucks? I’ve seen other people say that AI models are now pulling info from other AIs’ output, and that it’s making the responses increasingly bizarre and wrong, so… idk.
Sorta relatedly, I remember that back before AI, the most common reddit bots would just repost a top comment from the same askreddit thread or a similar one. It’s a weird feeling to realize everyone seems to have more or less forgotten this was the main way to do account farming for a while. The focus on having AI generate or modify the response makes no sense to me. The ‘content’ literally doesn’t matter, so why do the more computationally expensive thing? People had plenty of methods to reword comments and avoid bans too: checking the thread for the same text and picking a different comment, keeping a table of interchangeable common words, etc. The AI methods seem literally worse than plain copy-paste with fuzzy matching (or a Markov chain, etc.). Something like the sketch below.
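For anyone who never saw one of these bots up close, the old approach was roughly this. A minimal toy sketch from memory, using only the Python stdlib; the function names, the tiny synonym table, and the 0.85 similarity threshold are all made up for illustration:

```python
# Toy sketch of the old-school comment-farming loop described above:
# copy a stolen top comment, fuzzy-check the target thread for
# near-duplicates, and swap a few common words. No model in the loop.
import difflib
import random

# Hypothetical table of interchangeable common words (illustrative only).
SYNONYMS = {
    "really": ["honestly", "genuinely"],
    "think": ["feel", "reckon"],
    "great": ["awesome", "fantastic"],
}

def is_near_duplicate(candidate: str, existing: list[str], threshold: float = 0.85) -> bool:
    """True if candidate is too similar to any comment already in the thread."""
    return any(
        difflib.SequenceMatcher(None, candidate.lower(), c.lower()).ratio() >= threshold
        for c in existing
    )

def reword(comment: str) -> str:
    """Swap words found in the synonym table (ignores capitalization, for simplicity)."""
    return " ".join(
        random.choice(SYNONYMS[w.lower()]) if w.lower() in SYNONYMS else w
        for w in comment.split()
    )

def pick_comment(stolen_top_comments: list[str], thread_comments: list[str]) -> str | None:
    """Return the first stolen comment that won't trip a duplicate check, reworded."""
    for candidate in stolen_top_comments:
        if not is_near_duplicate(candidate, thread_comments):
            return reword(candidate)
    return None  # nothing safe to post; skip this thread

if __name__ == "__main__":
    stolen = ["I really think this is great advice.", "Came here to say this."]
    thread = ["I honestly feel this is awesome advice."]
    print(pick_comment(stolen, thread))
```

The whole thing is a string comparison and a dictionary lookup, which is kind of my point: it’s nearly free compared to running inference on every comment, and for karma farming the output only has to pass as plausible, not be good.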