• Retiring@lemmy.ml

    I feel this is all just a scam to drive up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the problem of limited training data for new models (Habsburg-AI), the energy constraints, etc.

    It’s all uncritical belief that „AI“ will just become smart eventually. This technology is built on hype, and nothing more. There are limitations, and they have been reached.

    • aStonedSanta@lemm.ee

      And these current LLMs aren’t just gonna find sentience for themselves. Sure, they’ll pass a Turing test, but they aren’t alive lol

      • knokelmaat@beehaw.org

        I think the issue is not whether it’s sentient, it’s how much agency you give it to control stuff.

        Even before the AI craze this was an issue. Imagine creating an automatic turret that kills living beings on sight: you would have to make sure to add a kill switch, or you yourself wouldn’t be able to turn it off without getting shot.
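
        A minimal sketch of that idea (my own hypothetical illustration, not anything from the thread): the stop signal lives on a channel outside the system’s own decision loop, so shutting it down never depends on the system’s cooperation.

        ```python
        # Hypothetical kill-switch pattern: the Event is flipped by an
        # operator, outside the autonomous loop's own decision logic.
        import threading
        import time

        kill_switch = threading.Event()  # out-of-band control channel

        def autonomous_loop():
            while not kill_switch.is_set():  # checked before every action
                print("acting autonomously...")
                time.sleep(0.1)
            print("kill switch engaged, standing down.")

        worker = threading.Thread(target=autonomous_loop)
        worker.start()
        time.sleep(0.5)
        kill_switch.set()  # the operator, not the system, flips this
        worker.join()
        ```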

        The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.

        An atomic bomb doesn’t pass a Turing test, but it’s a fucking scary thing nonetheless.

    • Lvxferre

      Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It’s just… perfect! Model degeneration is a lot like what happened to the Habsburg family’s genetic pool.

      When it comes to hallucinations in general, I’ve got another analogy: someone trying to drive nails with a screwdriver, failing, and calling it a hallucination. In other words, I don’t think the models are misbehaving; they’re behaving exactly as expected, and any “improvement” in this regard is basically a band-aid added by humans to a procedure that doesn’t yield a lot of useful output to begin with.

      And that reinforces the point from your last paragraph: those people genuinely believe that, if you feed enough data into an L“L”M, it’ll “magically” become smart. It won’t, just like 70 kg of bees won’t “magically” think as well as a human being would. The underlying process is “dumb”.
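
      As a toy illustration of that degeneration (my own sketch, with a deliberately dumb “model”: fit a Gaussian to data, then train each generation only on samples from the previous one), the spread of the learned distribution collapses once no fresh real data comes in:

      ```python
      # Hypothetical "Habsburg AI" toy: each generation trains only on the
      # previous generation's synthetic output, and diversity collapses.
      import numpy as np

      rng = np.random.default_rng(42)
      data = rng.normal(loc=0.0, scale=1.0, size=20)  # small "real" dataset

      for generation in range(1, 201):
          mu, sigma = data.mean(), data.std()    # "train": fit a Gaussian
          data = rng.normal(mu, sigma, size=20)  # next gen sees only synthetic data
          if generation % 25 == 0:
              print(f"generation {generation:3d}: sigma = {sigma:.6f}")
      # sigma performs a downward-biased random walk toward zero: the model's
      # world narrows, much like a gene pool that gets no outside input.
      ```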

      • Retiring@lemmy.ml

        I am glad you liked it. Can’t take credit for this one though; I first heard it from Ed Zitron on his podcast „Better Offline“. Highly recommend.

    • averyminya@beehaw.org

      Energy restrictions could plausibly be eased with analog computing methods. Otherwise I agree completely, and what’s the point of spending energy on useless tools? There are so many great things AI is and can be used for, but of course, like anything exploitable, whatever is “for the people” ends up as some amalgamation for extracting our dollars.

      The funny part to me is that, like the “beautiful” AI cabins mentioned above that are clearly fake, there’s this weird dichotomy: people either don’t care or are too ignorant to notice the poor details, yet at the same time so many generative AI tools are specifically being used to remove imperfections during the editing process. And that in itself is too bad. I’m definitely guilty of aiming for “the perfect composition”, but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist; background subjects are going to exist.

      The current state of marketed AI is selling the promise of perfection, something that has been sold for years already. It’s just that now it’s far easier to pump out scam material with these tools, it gets easier with each advancement in these sorts of technologies, and the damage now extends beyond the scam’s victims to the environment as well.

      It really sucks being an optimist sometimes.

    • darkphotonstudio@beehaw.org

      It could all be just hype, but I don’t entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, though it will likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.