I feel like a broken record, but…
Seriously, the current large “language” models - or should I say, large syntax models? - are a technological dead end. They might find a lot of applications, but they certainly will not evolve into the “superhuman capabilities” of the tech bros’ wet dreams.
In the best case, all that self-instruction will do is play whack-a-mole with hallucinations. In the worst, the models will degenerate.
You’ll need a different architecture to get meaningfully past that. Probably one that doesn’t treat semantics as an afterthought, but as its own layer - a big, central one.