I don’t really think so. I am not really sure why, but my gut feeling is that being good at impersonating a human being in text conversation doesn’t mean you’re closer to creating a real AI.

  • Zerush@lemmy.ml · 2 years ago

    I don’t think it’s so far away, given the great current advances in quantum computing. In just a few years, our current computers will look like basic pocket calculators. It’s not about making more powerful computers of the current kind, but about multiplying their processing power by more than 10,000 in one jump.

    • Ferk@lemmy.ml · 2 years ago

      Personally, I think this has very little to do with computing power and much more to do with sensory experience and replicating how the human brain interacts with the environment. It’s not about being able to do calculations very fast, but about what those calculations do and how they are conditioned: which stimuli cause them to evolve, in which way, and by how much.

      The real problem is that to think like a human you need to see like a human, touch like a human, and have the instincts, needs, and limitations of a human. As babies we learn about things by touching, sucking, observing, and experimenting, driven by instincts such as wanting food, wanting companionship, wanting approval from our family… all the things that ultimately motivate us. A human-like AI would make mistakes just like we do, because that’s how our brain works. It might be little more than a toddler and still be a human-like AI.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

        I completely agree with this. Brains basically evolved to create an efficient simulation of the physical environment that we exist in. Any AI that would have human style consciousness needs to have embodiment either in a physical or a virtual environment. If we evolve AI agents in a physical environment and they learn to interact with it meaningfully, then we could teach them language and communicate with them in a meaningful way because we’ll have a shared context.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

      I actually think that analog computers are more relevant here than quantum computers. The brain is an analog computer, and you could replicate what our neural connections do on a different substrate. This is currently an active area of research.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

          Yeah, replicating the entire brain is a herculean effort. However, it’s also important to keep in mind that the brain evolved for robustness and has a lot of redundancy in it. It turns out that we’d only need to implement a neural network roughly 10% the size of the brain to get human-level intelligence, as this case illustrates. That seems like a much more tractable problem. It might be even smaller in practice, since a lot of the brain is devoted to regulating the body, and we’d only care about the parts responsible for thought and reasoning.

          I think the biggest roadblock is in figuring out the algorithm behind our conscious process. If we can identify that from the brain structure, then we could attempt to implement it on a different substrate. On Intelligence is a good book discussing this idea.

      • Zerush@lemmy.ml · 2 years ago

        It isn’t. The brain doesn’t work in an analog way: it processes many pieces of data at the same time, and in an analog manner it can’t create consciousness, which is a quantum process. Because of this, a digital or analog computer can never have real intelligence with its own consciousness, but a quantum computer possibly could, in the not-so-distant future. https://www.psychologytoday.com/us/blog/biocentrism/202108/quantum-effects-in-the-brain https://www.nature.com/articles/440611a

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

          Neurons are very much analog, as they do not send discrete signals to each other. While the brain exploits quantum effects, there is no indication that these are fundamental to the function of consciousness.

          • Zerush@lemmy.ml · 2 years ago

            You can create an artificial intelligence by digital or analog means, capable of solving problems through neural networks, but you cannot create a self-aware intelligence that way. Consciousness is only possible with quantum technologies. What we currently call artificial intelligence, sophisticated as it may be, has real intelligence no higher than that of a grasshopper: it can learn data-processing and acquisition pathways, but that is only a small part of how our brain works. It’s like someone with a photographic memory who can recite whole chapters of books in answer to a question, but without really understanding them.

            The Turing test has many deficiencies that prevent a clear distinction between human and machine, since it dates back to the 1950s and did not foresee how computers would evolve. For this reason, several other conversational tests are used today, with possible responses that are not so easily definable, like the Markus test, the Lovelace test 2.0, or MIST (Minimum Intelligence Signal Test). Still a long way to get a Daneel Olivaw.

            I asked Andi if he would pass the Turing test.

            (I love this “search engine”)

            • Ferk@lemmy.ml · 2 years ago

              No modern AI has reliably passed the Turing test without blatant cheats (like having the AI pose as a foreign child who can’t be expected to understand or express itself fluently, instead of an adult). Just because the test dates back to the 1950s doesn’t make it any less valid, imho.

              I was interested by the other tests you shared, thanks for that! However, in my opinion:

              The Markus test is just a Turing test with a video feed. I don’t think this necessarily makes the test better: it adds more requirements for the AI, but it’s unclear whether those are actually necessary requirements for consciousness.

              The Lovelace test 2.0 is also not very different from a Turing test in which the tester is the developer and the questions/answers are restricted to a specific domain, where creativity is what’s tested. I don’t think this improves much over the original test either, since in the Turing test you already have the freedom to ask questions that require innovative answers. Given the more restricted scope of this test and how far modern procedural generation and neural nets have come, it’s likely easier to pass the Lovelace test than the Turing test. And at the same time, it’s also easier for a real human to fail it if they can’t be creative enough. I don’t think this test is really testing the same thing.

              The MIST is another particular case of a more restricted Turing test. It’s essentially a standardized and “simplified” Turing test where the tester is always the same and asks the same questions out of a set of ~80k. The only advantage is that it’s easier to measure and more consistent, since you don’t depend on how good the tester is at choosing their questions or judging the answers. But it’s also easier to cheat, since it would be trivial to make a program specifically designed to answer that set of questions correctly.
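To illustrate how trivial that kind of cheat would be, here is a minimal sketch; the questions, answers, and `MEMORIZED` table are invented placeholders, not the real MIST question set:

```python
# Hypothetical sketch: "passing" a fixed-question test by pure lookup.
# The questions/answers below are invented, not the real MIST set.
MEMORIZED = {
    "Is water wet?": "yes",
    "Can a cat fly?": "no",
    # ... the remaining ~80,000 fixed question/answer pairs ...
}

def answer(question: str) -> str:
    # No understanding involved: just retrieve the memorized response,
    # falling back to a guess for anything outside the fixed set.
    return MEMORIZED.get(question, "yes")

print(answer("Is water wet?"))  # yes
```

The point is that a fixed, known question set measures memorization, not intelligence.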

              • Zerush@lemmy.ml · 2 years ago

                The difficulty with these tests is that even we ourselves are still not clear about what consciousness, the ego, is and how it really works, so this type of test applied to an AI will always give a subjective result. It’s quite possible that even some people wouldn’t pass the Turing test.

            • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

              First, we don’t have a firm definition of what consciousness is or how to measure it. However, self-awareness is simply an act of the system modelling itself as part of its internal simulation of the world. It’s quite clear that this has nothing to do with quantum technologies. In fact, Turing completeness means that any computation done by a quantum system can be expressed by a classical computation system.
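As a toy illustration of that last point, a single qubit can be simulated classically with just two amplitudes (this is only a sketch; the cost of simulating larger quantum systems this way grows exponentially with the number of qubits):

```python
import math

# Classical simulation of one qubit: just a pair of amplitudes.
state = [1.0, 0.0]  # the |0> state

def hadamard(s):
    # Apply a Hadamard gate, putting the qubit into superposition.
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

state = hadamard(state)
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~ [0.5, 0.5]: equal chance of measuring 0 or 1
```

An ordinary classical program reproduces the quantum behaviour exactly; nothing about the computation itself is out of reach of a classical machine, only the efficiency.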

              The reason the systems we currently build aren’t conscious in a human sense is that they work purely on statistics without having any model of the world. The system simply compares one set of numbers to another set of numbers and says yeah, they look similar enough. It doesn’t have the context for what these numbers actually represent.
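A rough sketch of that kind of pure number comparison; the “embedding” vectors here are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    # Compare two vectors purely numerically, with no notion of
    # what the numbers represent.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two made-up "embedding" vectors, say for the words "cat" and "dog".
cat = [0.8, 0.1, 0.3]
dog = [0.7, 0.2, 0.4]
print(cosine_similarity(cat, dog))  # ~0.98: "similar enough"
```

The score says the two vectors point in nearly the same direction; nothing in the computation knows what a cat or a dog is.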

              What we need to do to make systems that think like us is to evolve them in an environment that mimics how our physical world works. Once these systems build up an internal representation of their environment we can start building a common language to talk about it.

              • vitaminka@lemmy.ml · 2 years ago

                The reason the systems we currently build aren’t conscious in a human sense is that they work purely on statistics without having any model of the world. The system simply compares one set of numbers to another set of numbers and says yeah, they look similar enough. It doesn’t have the context for what these numbers actually represent.

                heh, in your perception, how do human brains work? 😅

                • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

                  In my perception, human brains are neural networks that evolved to create a simulation of the physical environment that the organism inhabits. Human brains are a result of tuning over billions of years of natural selection.

                  Operating on an internal representation of the world is inherently cheaper than parsing out the data from the senses. This approach also allows the brain to create simulations of events that happened in the past or may happen in the future, allowing for learning and planning. There’s a lot more that can be said about this, but I think these are the key features that make complex brains valuable from a natural selection perspective. I generally agree with the ideas outlined in this book.

                  • vitaminka@lemmy.ml · 2 years ago

                    i see 🤔

                    i’m still not totally convinced that there’s a fundamental division/difference between the criteria that constitute a brain/consciousness (many of which you mentioned) and just artifacts of learning algorithms at a scale we can’t yet model or execute on a computer