• Lvxferre
    7 months ago

    You could, but even then you need to put some thought into how to prompt and how to review/edit the output.

    I’ve noticed from usage that LLMs are extremely prone to repeating verbatim words and expressions from the prompt. So if you ask something like “explain why civilisation is bad from the point of view of a cool-headed logician”, you’re likely outing yourself already.

    A lot of the time the output will have “good enough” synonyms that you could replace with more accurate words… and then you’re outing yourself again. Or consider how you fix the output so it sounds like a person instead of a chatbot: we all have writing quirks, and you might end up leaking them into the revision.

    And more importantly, you need to be aware that this is an issue in the first place, and that you can be tracked based on how and what you write.