• wizardbeard@lemmy.dbzer0.com · 24 hours ago

    Because billions is an absurd understatement, and computers have constrained problem spaces far less complex than even the most controlled life of a lab rat.

    And who the hell argues that animals don’t have free will? They don’t have full sapience, but they absolutely have will.

    • Womble@lemmy.world · 22 hours ago

      So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side, just mechanics and random chance on the other?

      I just don’t find it a particularly useful concept.

      • CheeseNoodle@lemmy.world · 9 hours ago

        I’d say it ends when you can’t predict with 100% accuracy, 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there’s an additional random number generator I don’t have access to.
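        A minimal sketch of that claim (my illustration, assuming the Hugging Face transformers library, with "gpt2" and the prompt string used purely as stand-ins): with greedy decoding there is no sampling RNG involved at all, so running the exact same input twice gives identical output.

        ```python
        # Greedy decoding: no sampler, so the same prompt always yields
        # the same continuation ("gpt2" is just an illustrative model).
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tokenizer("The capital of France is", return_tensors="pt")

        runs = []
        for _ in range(2):
            # do_sample=False disables sampling entirely (pure argmax decoding)
            ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)
            runs.append(tokenizer.decode(ids[0]))

        assert runs[0] == runs[1]  # identical input, identical output
        ```

        (On a GPU there can be tiny numeric non-determinism, but the point stands: there is no hidden coin-flip unless you turn sampling on.)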

        • Womble@lemmy.world · 10 hours ago

          If viruses have free will, when they are machines made out of RNA that just inject code into other cells to make copies of themselves, then the concept is meaningless (and also applies to computer programs far simpler than LLMs).