• MxM111@kbin.social
    1 year ago

    I think the author of this article really likes the word “bullshit” for clickbait reasons.

    • sab@kbin.social
      1 year ago

      Well, he does justify it by rooting his definition in Harry Frankfurt’s On Bullshit, elaborating both why bullshit is different from lying and why Frankfurt’s definition of bullshit applies to AI even when it happens to be right about something.

      I’m not sure how you could make the word bullshit any less clickbaity. Personally I suspect he might just have a valid point.

      • MxM111@kbin.social
        1 year ago

        The industry has used the word “hallucination” for a long time, and successfully. This is clickbait, though I agree the author borrowed the word from another author.

        • sab@kbin.social
          1 year ago

          The hallucination concept refers to something different - it’s when the language model starts digging itself into a hole and making up stuff to fit its increasingly narrow story. Nobody would say a language model is hallucinating if what it says is accurate.

          The author here makes a different case - that the AI is constantly bullshitting, similar to what you’d expect from Boris Johnson. It doesn’t matter whether it’s wrong or right in any particular case - it goes ahead with the exact same confidence no matter what. There’s no real difference between an LLM when it’s “hallucinating” and when it’s working correctly.

          LLMs are not always hallucinating, but they’re always bullshitting. Frankly, I think using the term hallucination to describe a spiralling algorithm might be a load of bullshit in its own right, fashioned by people who are desperate to liken their predictive model to the workings of a human brain.

          • MxM111@kbin.social
            1 year ago

            I can say exactly the same thing: LLMs always hallucinate, it’s just that sometimes they do it correctly and sometimes they don’t.

            • sab@kbin.social
              1 year ago

              So you’d say it’s a hallucination machine rather than a bullshit generator?

              I think you’re on to a good point - the industry seems to say their model is hallucinating whenever it does something they don’t approve of, but the fact of the matter is that it’s doing the exact same thing it always does.