• Botzo@lemmy.world · 1 day ago

    How about: there’s no difference between actual free will and an infinite universe of infinite variables affecting your programming, resulting in a belief that you have free will. Heck, a couple million variables is more than plenty to confuddle these primate brains.

    • toynbee@lemmy.world · 2 hours ago

      As a kid learning about programming, I told my mom that I thought the brain was just a series of if/then statements.

      I didn’t know about switch statements then.
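
      The joke above can be sketched in a few lines. This is purely illustrative (the function names and stimuli are made up for the example): the same toy "reaction" logic written as a chain of if/then statements, and again as Python's `match` statement (its switch-like construct, Python 3.10+).

      ```python
      # A "brain" as a chain of if/then statements.
      def react_if(stimulus: str) -> str:
          if stimulus == "food":
              return "approach"
          elif stimulus == "predator":
              return "flee"
          else:
              return "ignore"

      # The same logic as a match (switch-like) statement.
      def react_match(stimulus: str) -> str:
          match stimulus:
              case "food":
                  return "approach"
              case "predator":
                  return "flee"
              case _:
                  return "ignore"
      ```

      Both functions are equivalent; the switch form just groups the branches on one dispatch value.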

    • Womble@lemmy.world · 1 day ago (edited)

      Ok, but then you run into: why do billions of variables create free will in a human but not in a computer? Do they create free will in a pig? A slug? A bacterium?

      • wizardbeard@lemmy.dbzer0.com · 24 hours ago

        Because billions is an absurd understatement, and computers have constrained problem spaces far less complex than even the most controlled life of a lab rat.

        And who the hell argues that animals don’t have free will? They don’t have full sapience, but they absolutely have will.

        • Womble@lemmy.world · 22 hours ago

          So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side of the line, just mechanics and random chance on that side?

          I just don’t find it a particularly useful concept.

          • CheeseNoodle@lemmy.world · 9 hours ago

            I’d say it ends when you can’t predict with 100% accuracy, 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there’s an additional random number generator I don’t have access to.
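
            The point about the hidden random number generator can be made concrete with a toy sketch (this is not a real LLM, just a stand-in: `toy_sampler` and its token list are invented for illustration). When the seed of the "hidden" randomness is fixed along with the input, the output is identical on every run; the apparent variation between ChatGPT runs comes entirely from that seed changing.

            ```python
            import random

            def toy_sampler(prompt: str, seed: int) -> str:
                # Stand-in for an LLM's sampling step: a seeded PRNG
                # picks "tokens". With the seed made explicit, the whole
                # process is a deterministic function of its inputs.
                rng = random.Random(seed)
                tokens = ["yes", "no", "maybe"]
                return " ".join(rng.choice(tokens) for _ in range(5))

            # Same prompt AND same seed -> identical output, every time.
            a = toy_sampler("same prompt", seed=42)
            b = toy_sampler("same prompt", seed=42)
            assert a == b
            ```

            Re-running with a different seed is what "the same prompt twice into ChatGPT" actually does under the hood, which is why the outputs can differ.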

            • Womble@lemmy.world · 10 hours ago

              If viruses have free will, when they are machines made of RNA that just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).