The industry has used the word “hallucination” for a long time, and successfully. The title is clickbait, but I agree that the author borrowed the word from another author.
The hallucination concept refers to something different - it’s when the language model starts digging itself into a hole and making things up to fit its increasingly narrow story. Nobody would say a language model is hallucinating if what it says is accurate.
The author here makes a different case - that the AI is constantly bullshitting, similar to what you’d expect from Boris Johnson. It doesn’t matter whether it’s wrong or right in any particular case - it goes ahead with the exact same confidence no matter what. There’s no real difference between an LLM when it’s “hallucinating” and when it’s working correctly.
LLMs are not always hallucinating, but they’re always bullshitting. Frankly, I think using the term hallucination to describe a spiralling algorithm might be a load of bullshit in its own right, fashioned by people who are desperate to liken their predictive model to the workings of a human brain.
I could say exactly the same thing in reverse: LLMs always hallucinate, it’s just that sometimes they hallucinate correctly and sometimes they don’t.
So you’d say it’s a hallucination machine rather than a bullshit generator?
I think you’re on to a good point - the industry seems to say their model is hallucinating whenever it does something they don’t approve of, but the fact of the matter is that it’s doing exactly the same thing it always does.