The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
No, it’s just rambling. My bad.
I focused too much on using AI to summarise and ended up not talking about it summarising documents, even though the text is about the latter.
And… well, the latter is such a dumb idea that I don’t feel like telling people “the text is right, don’t do that”; it’s obvious.
You’d think so, but guess what precise use case LLMs are being pushed hard for.
No need to guess - I’ve seen it, and you’re right about what they’re being pushed hard for: they’re being sold as intelligent and able to understand language, when neither is true. The expectation that they should be able to output accurate summaries is a consequence of both.