the-podcast guy recently linked this essay. It’s old, but I don’t think it’s significantly wrong (despite GPT evangelists). Also read Weizenbaum, libs, for the other side of the coin.

  • Frank [he/him]@hexbear.net · 6 months ago

    I’ve heard people say that the Chinese Room is nonsense because it isn’t actually possible, even for thought-experiment purposes, to create a complete set of rules for verbal communication. There’s always a lot of ambiguity that needs to be weighed and addressed, so the guy in the room would have to be making decisions about interpretation and intent. He’d have to have a theory of mind.
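
    To make that concrete, here is a toy sketch (hypothetical Python; the phrases and the rule table are invented for illustration, not taken from the essay) of the room as a pure symbol-matching rulebook, and of where ambiguity breaks it:

    ```python
    # Hypothetical sketch: the "room" as a pure pattern -> reply lookup table.
    # No entry carries any notion of meaning, context, or intent.
    RULES = {
        "ni hao": "ni hao",                 # greeting -> greeting
        "ni chi le ma": "chi le, xie xie",  # "have you eaten?" -> "yes, thanks"
    }

    def room(symbols: str) -> str:
        """Follow the rulebook mechanically; there is nothing else to consult."""
        try:
            return RULES[symbols]
        except KeyError:
            # Any novel or ambiguous utterance has no rule. A human operator
            # would have to guess at interpretation and intent, which is
            # exactly the theory of mind the rulebook was supposed to replace.
            return "???"

    print(room("ni hao"))     # looks like understanding
    print(room("ni hao ma"))  # one word of novelty and the illusion collapses
    ```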

    • Tomorrow_Farewell [any, they/them]@hexbear.net · 6 months ago

      The Chinese Room argument, that nothing people would commonly call a ‘computer’ can have understanding, is rooted either in endless goalpost-moving about what it means to ‘understand’ something (in which case it is obviously silly), or in the assumption that only things with nervous systems can have qualia and that understanding belongs to qualia (in which case that conclusion can be reached without the Chinese Room argument in the first place).

      In any case, the Chinese Room is not really relevant to the question of whether considering brains to be computers is somehow erroneous.

      • Frank [he/him]@hexbear.net · 6 months ago

        > In any case, the Chinese Room is not really relevant to the question of whether considering brains to be computers is somehow erroneous.

        My understanding was that the point of the Chinese Room was that a deterministic system with a perfect set of rules could produce the illusion of consciousness without ever understanding what it was doing? Is that not analogous to our discussion?
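
        As a rough illustration (hypothetical Python, not from the essay): the rulebook fully determines every reply, so any executor of the rules, a person in a room or a chip in a rack, produces an identical transcript, and nothing resembling understanding ever has to enter the picture.

        ```python
        # Hypothetical sketch: the rulebook is just data; whatever executes it
        # produces the same output, because the rules fully determine the behaviour.
        RULEBOOK = [
            ("ni hao", "ni hao"),
            ("ni chi le ma", "chi le, xie xie"),
        ]

        def person_in_room(utterance: str) -> str:
            # Searle's operator: matches symbols by shape, understands nothing.
            for pattern, reply in RULEBOOK:
                if utterance == pattern:
                    return reply
            return "???"

        def machine_in_rack(utterance: str) -> str:
            # A different substrate applying the same rules.
            return dict(RULEBOOK).get(utterance, "???")

        for u in ("ni hao", "ni chi le ma", "zai jian"):
            # Deterministic and identical for both executors; neither one
            # "understands" anything, yet the transcript can look fluent.
            assert person_in_room(u) == machine_in_rack(u)
        ```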