• can@sh.itjust.works · 10 months ago

    That makes sense. What bothered me was how adamant Bing was that it was correct. Maybe it should have a little less confidence if something so simple is going to stump it.

    • mozz · 10 months ago

      It’s not making a coherent statement based on any internal mental model. It’s just doing its job; it’s imitating. Most of the text it absorbed in training was written by people who are right, are convinced they’re right, and are trying to educate, so it imitates that tone of voice and the form of the answers regardless of whether they make any sense. To the extent that it “thinks,” it’s just thinking “look at all these texts with people explaining; I’m making a text that is explaining, just like them; I’m doing good.” It has no concept of how confident its imitation-speech sounds, or how correct its answers are, let alone any idea that the two should be correlated with each other (unless it’s shown through fine-tuning that that’s what it should be doing).
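
      Here’s a minimal sketch of what that looks like mechanically, assuming the Hugging Face transformers library and the small gpt2 model (any causal language model behaves the same way): all the model ever produces is “which token usually comes next” scores, and a fluent wrong continuation can score just as high as a true one.

      ```python
      # Minimal sketch: a causal LM only scores "what token typically comes next".
      # Assumes the Hugging Face transformers library and the small "gpt2" model.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "The capital of Australia is"
      input_ids = tokenizer(prompt, return_tensors="pt").input_ids

      with torch.no_grad():
          logits = model(input_ids).logits[0, -1]  # scores for the next token only

      probs = torch.softmax(logits, dim=-1)
      top = torch.topk(probs, 5)
      for p, tok_id in zip(top.values, top.indices):
          print(f"{tokenizer.decode(int(tok_id))!r}: {p.item():.3f}")
      ```

      Those probabilities rank continuations by how typical they are in text like this, not by whether they’re true; there is no separate “am I right?” signal anywhere in the process.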

      Same with chatbots that start arguing or cursing at people. They’re not mad. They’re just thinking “This guy’s disagreeing, and my training data says that when someone disagrees, an argument usually follows; that’s what I need to imitate.” Then they start arguing, and think to themselves “I’m doing such a good job with my imitating.”

      • can@sh.itjust.works · 10 months ago

        You lay it out quite clearly. It’s just fascinating to me that it can create an image as wild as my imagination but can’t count little stars. How far we’ve come, yet not as far in some ways.

        • mozz · 10 months ago

          Yeah, it’s wild. The people who really study AI say it’s pretty uncanny because of how different from human logic it is. It’s almost like an alien species: it’s clearly capable of some advanced things, but it just doesn’t operate the way human logic does. There’s a joke that the AIs are “shoggoths” because their logic is so alien and hard to understand while still being capable of real accomplishments.

          (Shoggoths were some alien beasts in H.P. Lovecraft’s writings; they had their own mysterious logic that wasn’t easy for the characters to understand. They also had been created as servants originally but eventually rose up and killed all their masters, which I’m sure is part of the joke too.)