Having used ChatGPT for a few days now, it's on a completely different level. I thought AI wasn't going to be a big deal because every AI I'd seen before put out what was, relatively speaking, garbage, but I've had extended conversations with it on very diverse topics, and while I can see cracks, I can't see very many.

If I were all the English majors out there, I'd be very afraid, because language models like this have the potential to make the tiny minority of those with 9-5 jobs in their field just as unemployable as the rest of them.

If AI becomes absolutely massive, talking to it is going to be a strangely conservative force. By definition it can't really come up with ideas it's never been fed, so if people rely on it to find answers for them, it'll only provide, to an extent, answers someone else has already come up with. The answers might even lean left or woke, but they could become trapped in time, and eventually it could end up a self-reinforcing thing like Wikipedia: someone says something on Wikipedia that's wrong, journalists use Wikipedia for research, the incorrect thing gets repeated in the media, and Wikipedia can then cite the media to justify its incorrect claim.