Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better: The billionaire philanthropist, in an interview with the German newspaper Handelsblatt, shared his thoughts on artificial general intelligence, climate change, and the scope of AI in the future.

  • OldWoodFrame@lemm.ee (+7) · 1 year ago

    Yeah, and I think he may be talking about scaling to true AGI. It’s very possible LLMs just don’t become AGI; you need some extra juice we haven’t come up with yet, in addition to computational power no one can afford yet.

    • astronaut_sloth (+15/−3) · 1 year ago

      Except that scaling alone won’t lead to AGI. It may generate better, more convincing text, but the core algorithm is the same. That “special juice” is almost certainly going to come from algorithmic development rather than just throwing more compute at the problem.

      • 0ops@lemm.ee (+1) · 1 year ago

        See my reply to the person you replied to. I think you’re right that there will need to be more algorithmic development (like some awareness of its own confidence, so that the network can say “IDK” instead of hallucinating its best guess). Fundamentally, though, LLMs don’t have the same dimensions of awareness that a person does, and I think that’s the main bottleneck to human-like understanding.
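One way to read “awareness of its own confidence” is selective prediction: answer only when the model’s probability for its top choice clears a threshold, and abstain otherwise. A minimal sketch of that idea over raw class logits (the function name, labels, and threshold here are invented for illustration, not from any real system):

```python
import math

def predict_or_abstain(logits, labels, threshold=0.7):
    """Softmax the logits; answer with the top label only if its
    probability clears the threshold, otherwise abstain with "IDK"."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else "IDK"

# A peaked distribution answers; a nearly flat one abstains.
print(predict_or_abstain([5.0, 1.0, 0.5], ["cat", "dog", "bird"]))  # cat
print(predict_or_abstain([1.1, 1.0, 0.9], ["cat", "dog", "bird"]))  # IDK
```

Real LLMs complicate this (token-level probabilities are often miscalibrated), but the abstention gate itself is this simple.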

    • 0ops@lemm.ee (+6/−2) · 1 year ago

      My hypothesis is that that “extra juice” is going to be some kind of body: more senses than text input, and more ways to manipulate itself and the environment than text output. Basically, right now LLMs can kind of understand things in terms of text descriptions, but they will never be able to understand them the way a human can until they have all of the senses (and arguably physical capabilities) that a human does. Thought experiment: presumably you “understand” your dog - can you describe your dog without sensory details, directly or indirectly? Behavior had to be observed somehow. Time is a sense too.

      EDIT: Before someone says it: as for feelings, I’m not really sure - I’m not a biology guy. But my guess is we sense our own hormones as well.

      • LinuxSBC@lemm.ee (+2) · 1 year ago

        First, they do have senses. For example, many LLMs can “see” images. Second, they’re actually pretty good at describing things. What they’re really bad at is analysis and logic, which is not related to senses at all.

        • 0ops@lemm.ee (+1) · 1 year ago

          I’m not so convinced that logic is completely unrelated to the senses. How did you learn to count, add, and subtract mentally? You used your fingers. I don’t know about you, but even though I don’t count on my fingers anymore, I still tend to “visualize” math operations. Would I be capable of that if I were born blind? Maybe I’d figure out how to do the same thing in a different dimension of awareness, but I have no doubt that being able to conceptualize visually helps my own logic. As for more complicated math, I can’t do that mentally either; I need a calculator and/or scratch paper. Maybe analogues to those could be implemented into the model? Maybe someone should just train a model on Khan Academy videos and it’ll pick this stuff up emergently? I’m not saying that the ability to visualize is the only roadblock, though - I’m sure that improvements could be made to the models themselves - but I bet that it’ll be key to human-like reasoning.