guy recently linked this essay, it's old, but i don't think it's significantly wrong (despite gpt evangelists) also read weizenbaum, libs, for the other side of the coin
I am not, and I will look it up in a minute.
But my point is that such a low-fidelity reconstruction, when interpreted through the model of modern computing methods, lacks the accuracy for any application AND, crucially, has absolutely no way to account for or understand its limitations relative to the intended applications. That last part is more a philosophy-of-science argument than one about some percentage accuracy. The model has no way to understand its limitations because we don't have any idea what those limitations are, and to my knowledge discussion of this is scarce, leaving no ceiling on the interpretations and implications people draw from it.
I think a big difference in positions in this thread, though, is between those talking about how the best neuroscientists in the world think about this, and those who are more technologists, never reached that level, and want to Frankenstein their way to tech-bro godhood. I'm sure the top neuros get this and are constantly trying to find new and better models. But their publications don't appear on the covers of science journals.