So normally, Copilot will generate three suggested prompts to continue the conversation, e.g. “Tell me more about blank”, “Are blank healthy?” and “Where can I find blank nearby?”
Here, the autosuggested responses were drafted from the perspective of Copilot itself, not from that of the user, as replies to my message. It’s as if Copilot were suggesting a response apologising to itself.
Buh?
I really hope this is evidence of LLM CJD (the model equivalent of mad cow disease, from being trained on its own output).
It’s like an ant that can be catty.