• sab@kbin.social

    The hallucination concept refers to something different - it’s when the language model starts digging itself into a hole, making things up to fit its increasingly narrow story. Nobody would say a language model is hallucinating if what it says is accurate.

    The author here makes a different case - that the AI is constantly bullshitting, similar to what you’d expect from Boris Johnson. It doesn’t matter whether it’s wrong or right in any particular case - it goes ahead with the exact same confidence no matter what. There’s no real difference between an LLM when it’s “hallucinating” and when it’s working correctly.

    LLMs are not always hallucinating, but they’re always bullshitting. Frankly, I think using the term hallucination to describe a spiralling algorithm might be a load of bullshit in its own right, fashioned by people who are desperate to liken their predictive model to the workings of a human brain.

    • MxM111@kbin.social

      I could say exactly the same thing: LLMs always hallucinate, it’s just that sometimes they happen to be right and sometimes not.

      • sab@kbin.social

        So you’d say it’s a hallucination machine rather than a bullshit generator?

        I think you’re on to a good point - the industry seems to say their model is hallucinating whenever it does something they don’t approve of, but the fact of the matter is that it’s doing exactly the same thing it always does.