The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
As I mentioned in another comment, I simply skipped the word “document”.
I’m baffled at the idea of it being “fine if the AI makes shit up”.
It’s fine as long as you know that it’ll make shit up, and you aren’t giving its claims an ounce of faith. Still useful to know what a text is about.
The fact that it’ll make shit up is a problem on another level: those systems are being marketed as if they were able to provide you reliable answers, when that is clearly not the case. It will tell you to put glue on your pizza, and yet you’re expected to “trust” it… yeah nah.