There is huge excitement about ChatGPT and other large generative language models that produce fluent, human-like texts in English and other human languages. But these models have one major drawback: their texts can be factually incorrect (hallucination) and can leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

  • snake_case@feddit.uk · 3 points · 1 year ago

    If you’re looking for a factual response from ChatGPT, you’re using it wrong. It’s designed to produce text that looks correct; it’s not a replacement for Google or for proper research. For more on this, watch the LegalEagle video on the ChatGPT court case: https://youtu.be/oqSYljRYDEM

    • Dr Cog · 1 point · 1 year ago

      It’s decent for parsing text, provided you’re careful about the prompt.

      I am exploring its use in scoring speech-based cognitive assessments, and so far it has been pretty accurate.
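
      A minimal sketch of that kind of prompt-driven parsing, assuming the official OpenAI Python client (openai>=1.0); the model name, prompt wording, and verbal-fluency example are illustrative choices, not details from the comment:

          # Hypothetical example: pull structured items out of a speech
          # transcript with a tightly constrained prompt and temperature 0.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          def extract_animal_names(transcript: str) -> list[str]:
              """Return the animal names mentioned in a verbal-fluency transcript."""
              response = client.chat.completions.create(
                  model="gpt-4o-mini",  # illustrative model choice
                  temperature=0,        # keep runs as repeatable as possible
                  messages=[
                      {"role": "system",
                       "content": "List every animal name mentioned in the text. "
                                  "Reply with a comma-separated list and nothing else."},
                      {"role": "user", "content": transcript},
                  ],
              )
              reply = response.choices[0].message.content or ""
              return [name.strip() for name in reply.split(",") if name.strip()]

          print(extract_animal_names("um, dog... a cat, then horse and, uh, zebra"))

      Even with a constrained prompt like this, the output still needs spot-checking against the transcript, for exactly the hallucination and omission reasons discussed in the article above.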