• dual_sport_dork 🐧🗡️@lemmy.world · 7 hours ago

    Guess what!

    When accuracy matters, the labor cost of babysitting the LLM’s output is the same as doing the work yourself. That’s before you even consider having to unfuck it after it paints itself into a corner and ruins its own model.

  • schizo@forum.uncomfortable.business · 8 hours ago

    “Even in the best case, the models had a 35% error rate,” said Stanford’s Shah

    So, when the AI makes a critical error and you die, who do you sue for malpractice?

    The doctor for not catching the error? The hospital for selecting the AI that made a mistake? The AI company that made the buggy slop?

    (Kidding, I know the real answer is that you’re already dead and your family will get a coupon good for $3.00 off a sandwich at the hospital cafeteria.)

    • supersquirrel@sopuli.xyz · 7 hours ago

      So, when the AI makes a critical error and you die, who do you sue for malpractice?

      Well, see, that is the technology: it's a legal lockpick that lets mass murderers escape the consequences of knowingly condemning tens of thousands of innocent people to death for a pathetic hoarding of wealth.

    • Flying Squid@lemmy.world · 8 hours ago

      “AIs are people” will probably be the next conservative rallying cry. That will shield them from all legal repercussions aside from wrist-slaps, just as it does for corporations.