• Dragon Rider (drag)@lemmy.nz · 4 hours ago

    Does it have more than a worm with only 300 neurons in its brain, or are you one of those crazy religious people who thinks meat is the only thing in the universe that can think because it’s magic or something?

    • Bezier@suppo.fi · 3 hours ago

      Neither. Why are those the only two options? My answer is that I have spent a little bit of time looking into how these things actually work. It’s surface level only, but it should be enough. Are you one of those crazy people who thinks ChatGPT is sentient?

      I’m not saying that a “real” AI cannot be built ever, but I for sure am saying that these image generators and chatbots are not it. AI tools are just functions that have no thought. If they start building products with some kind of continuous brain simulations, I’ll seriously rethink my stance.

      • Dragon Rider (drag)@lemmy.nz · 3 hours ago

        Those are the only two options because you chose to argue with drag’s point about generative AI being smarter than a worm. You took this bait willingly. You devoted yourself to trying to prove a worm is smarter than ChatGPT. Nobody asked you to do it, you just decided this was what you were going to do today. It’s weird, why would you do that?

        • Apollo42@lemmy.world · 3 hours ago

          “It’s weird, why would you do that?”

          • some guy who talks about himself in the 3rd person
        • Bezier@suppo.fi · 3 hours ago

          I have no clue what you’re trying to prove, but I think I’m done with this conversation.

    • VeganCheesecake@lemmy.blahaj.zone · 3 hours ago

      Neither the worm, nor current LLMs, are sapient.

      Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM’s ‘thought process’ doesn’t really happen while it isn’t being used, and only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?

      • Dragon Rider (drag)@lemmy.nz · 3 hours ago

        The sweet release of death.

        Or, you know, we could devote serious resources to studying the nature of consciousness instead of just pretending like we already have all the answers, and we could use this knowledge to figure out how to treat AI ethically.

        Utilitarians believe ethics means increasing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?

        • VeganCheesecake@lemmy.blahaj.zone · 3 hours ago

          End commercial usage of LLMs? Honestly, I’m fine with that, why not. Don’t have to agree on the reason.

          I am not saying understanding the nature of consciousness better wouldn’t be great, but there’s so much research that deserves much more funding, and that isn’t really an LLM problem but a systemic one. And I just haven’t seen any convincing evidence that current models are conscious.

          I feel like the last part is something the AI from the paperclip thought experiment would do.