In the context of LLMs, I think that means giving them access to their own outputs in some way.
That’s what AutoGPT does (along with the many similar tools now): they break the task into smaller subtasks and feed the results back in, building up a final result, and that works a lot better than a single one-shot prompt with everything at once. The biggest advantage, and the main reason these were developed, is to keep the LLM on course without deviating.
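That decompose-and-feed-back loop can be sketched in a few lines. This is a minimal illustration, not AutoGPT's actual implementation; `call_llm` is a hypothetical stand-in for a real model API call, and the subtask list is hard-coded rather than generated by the model.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real agent would send this prompt to an LLM API.
    return f"result for: {prompt.splitlines()[-1]}"

def run_agent(task: str, subtasks: list[str]) -> str:
    """Run each subtask with all previous results in context,
    so the model stays anchored to the overall goal."""
    context = f"Overall task: {task}"
    results: list[str] = []
    for sub in subtasks:
        # Feed accumulated results back in with each new subtask.
        prompt = "\n".join([context, *results, sub])
        results.append(call_llm(prompt))
    return "\n".join(results)

output = run_agent("write a report",
                   ["outline it", "draft each section", "edit the draft"])
print(output)
```

A real agent would also have the model propose the subtasks itself and decide when the task is finished, but the core idea is the same: each step sees the prior outputs instead of one giant input.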
Thanks, I didn’t know that. I guess I need to broaden my reading.
The field changes so much so fast. For a video source covering the latest developments, I’d recommend the YouTube channel “AI Explained”.
Thanks, I’ll check it out.