• catloaf@lemm.ee
    2 months ago

    Maybe we shouldn’t be treating text generators as sources of truth.

    Isn’t there some liability for someone who provides inaccurate voting information? Perhaps that could be used to influence Google et al. to stop providing AI summaries on their results pages.

    • roofuskit@lemmy.world
      2 months ago

Yes, there is. Even Xitter quickly changed its LLM to point to a government website whenever voting questions are asked, so it has no liability.

  • burgersc12
    2 months ago

“AI models provide inaccurate information” — that’s really all it is. The rest is true as well, but the biggest thing is that these models don’t know the “right” answer; they just give you an answer, no matter how wrong it is.

  • Imgonnatrythis@sh.itjust.works
    2 months ago

Oh, this trend is getting old. Why are we acting like AI is normally right about everything and getting so excited when we find a loophole where it screws up? This is lowest-common-denominator news. I bet you could have an AI generate these articles about things AI is wrong about, that’s how formulaic this has become. It’s wrong about all kinds of stuff. You know that phrase about not believing everything you read on the internet? AI believes most of what it reads on the internet. I could go on, but my glue pizza is ready to come out of the oven…