I don’t really think so. I am not really sure why, but my gut feeling is that being good at impersonating a human being in text conversation doesn’t mean you’re closer to creating a real AI.

  • electrodynamica · 2 years ago

    Agreed. Especially when you consider that even fictional general AIs centuries more advanced than anything we are capable of making, such as Data or Isaac, aren’t clearly shown to actually have what we call a soul, even if they’re really good at emulating people.

    • Ferk@lemmy.ml · 2 years ago

      If “what we call a soul” means consciousness, then I doubt there’s a way to prove that anything other than your own self actually has one. Not even what we call “other people”.

      You being aware of your own consciousness doesn’t mean every other human necessarily is in the same way, right? …and since we lack a way to prove consciousness, we can’t assume other people are any more conscious than an AI could be.

      • sexy_peach@feddit.de (OP) · 2 years ago

        I agree with this; it’s impossible to prove that other people are the same as you. Still, I have this feeling about AI atm. Maybe I just haven’t encountered one ^^

        • Subversivo@lemmy.ml · 2 years ago

          The problem is that we probe consciousness through language, and we have created machines capable of damn good language. We tend to think of language as a byproduct of consciousness: if I feel like I’m a being separate from the world, I can use language to order that world. Since AI focuses a lot on NLP, we have machines capable of using language and describing the world like conscious beings, but we have no way to tell the difference between an emulated consciousness and a real one.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

        The difference is that we know we are all built on the same architecture. Since consciousness is the byproduct of how our physical brain functions, knowing that I’m conscious is a reasonable basis to assume that others are conscious as well.

        The problem with AIs is that they’re not built on the same principles, and so we can’t know whether it’s genuine consciousness or mimicry. We can’t ever definitively know that something that acts consciously and claims to be conscious has an internal qualia of experience.

        That said, I would argue that from an ethical standpoint we should err on the side of caution. If an AI claims to be conscious and acts as if it has self-determination, then it should be given the benefit of the doubt and treated as a sentient entity. Given how we currently treat animals, I don’t have much hope for this, though.

        • Ferk@lemmy.ml · 2 years ago

          Do we know for sure that our architecture is the same? How do you prove that we are really the same? For all I know I could be plugged into a simulation :P

          If there was a way to test consciousness then we would be able to prove that we are at least interacting with other conscious beings… but since we can’t test that, it could theoretically be possible that we (I? you?) are alone, interacting with a big non-sentient and interconnected AI, designed to make us “feel” like we are part of a community.

          I know it’s trippy to think that but… well… from a philosophical point of view, isn’t that true?

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

            I find solipsism isn’t really a useful framework to use, and we have to go on the assumption that the world we perceive from our senses is real. We can never prove it, but acting on this basis is the only logical approach available to us.

            We know that our architecture is the same because we’ve studied the brain for a long time now. We understand how natural selection, genetics, and evolution work. This gives us a very strong basis to argue that our brains do indeed have the same basic function.

            • Ferk@lemmy.ml · 2 years ago

              Oh, but I agree that assuming our reality is solipsistic isn’t useful for practical purposes. I’m just highlighting the fact that we do not know. We don’t have enough data precisely because there are many things related to consciousness that we cannot test.

              Personally I think that if it looks like a duck, quacks like a duck and acts like a duck, then it probably is a duck (and that’s what the studies you are referencing generally need to assume). Which is why, in my opinion, the Turing test is a valid approach (as are other tests with the same philosophy).

              Disregarding Turing-like tests while at the same time assuming that only humans are capable of having “a soul” is imho harder to defend, because it requires additional assumptions. I think it’s easier to assume either that duck-likes are ducks or that we are in a simulation. Personally I’m skeptical of both, and I just side with the duck test because it’s the more pragmatic approach.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

                I agree that we should always give systems that act as if they’re conscious and self-aware the benefit of the doubt. That’s the only ethical approach in my opinion.

                As you note, we still lack the understanding of how consciousness arises and until we develop such understanding we can only guess whether a system is conscious or not based on its external behavior.

      • electrodynamica · 2 years ago

        There’s a great distance between what we can prove and what we can know. Proving is a very high bar. I know God exists. I’m saying that I don’t know if those fictional characters are conscious; they don’t really seem to be to me.

    • sexy_peach@feddit.de (OP) · 2 years ago

      Do you think creating a “real AI” is possible with computers like the ones we have now (but more powerful)?

      I feel like technically it could be possible, but even if it is, it’s still far away.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

        I think it’s definitely possible since current computers are Turing complete. Any computation that the brain does can be expressed using a machine. As a thought experiment, we can consider creating a physics simulation detailed enough to allow virtualizing a human brain.
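
        As a crude sketch of that thought experiment (my own toy example with made-up parameters, nowhere near biological detail), here is a single simplified “leaky integrate-and-fire” neuron simulated digitally; in principle the same numerical approach extends to arbitrarily detailed models, just at enormous computational cost.

        ```python
        # Toy digital simulation of one leaky integrate-and-fire neuron:
        # Euler integration of dv/dt = (-(v - v_rest) + input_current) / tau.
        # All parameter values are arbitrary, chosen only for illustration.

        def simulate_lif(input_current=1.5, dt=0.001, duration=0.1,
                         tau=0.02, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
            """Return the membrane potential trace and spike times of one model neuron."""
            v = v_rest
            trace, spikes = [], []
            for step in range(int(duration / dt)):
                v += dt * (-(v - v_rest) + input_current) / tau
                if v >= v_threshold:        # the neuron fires...
                    spikes.append(step * dt)
                    v = v_reset             # ...and resets
                trace.append(v)
            return trace, spikes

        trace, spikes = simulate_lif()
        print(f"{len(spikes)} spikes in 100 ms of simulated time")
        ```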

      • electrodynamica · 2 years ago

        I don’t. I think at the very least analog or quantum computing is necessary. We could create very useful intelligences though, even general intelligences.

      • Zerush@lemmy.ml · 2 years ago

        I don’t think it’s so far away, due to the great current advances in quantum computing. In just a few years, our current computers will look like basic pocket calculators. It’s not about more powerful computers like the current ones, but about multiplying their processing power by more than 10,000 in one jump.

        • Ferk@lemmy.ml · 2 years ago

          Personally, I think this has very little to do with computing power and more to do with sensory experience and replicating how the human brain interacts with the environment. It’s not about being able to do calculations very fast, but about what those calculations do and how they are conditioned, which stimuli cause them to evolve, in which way and by how much.

          The real problem is that to think like a human you need to see like a human, touch like a human, have the instincts of a human, the needs of a human and the limitations of a human. As babies we learn about things by touching, sucking, observing, experimenting, moved by instincts such as wanting food, wanting companionship, wanting approval from our family… all the things that ultimately motivate us. A human-like AI would make mistakes just like we do, because that’s how our brain works. It might be little more than a toddler and still be a human-like AI.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

            I completely agree with this. Brains basically evolved to create an efficient simulation of the physical environment that we exist in. Any AI that would have human style consciousness needs to have embodiment either in a physical or a virtual environment. If we evolve AI agents in a physical environment and they learn to interact with it meaningfully, then we could teach them language and communicate with them in a meaningful way because we’ll have a shared context.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

          I actually think that analog computers would be more relevant here than quantum computers. The brain is an analog computer, and you could replicate what our neural connections do using a different substrate. This is an active area of research currently.

            • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

              Yeah, replicating the entire brain is a herculean effort. However, it’s also important to keep in mind that the brain evolved for robustness, and has a lot of redundancy in it. It turns out that we’d only need to implement a neural network that’s roughly 10% of the brain to get a human-level intelligence, as this case illustrates. That seems like a much more tractable problem. It might be even smaller in practice, since a lot of the brain is devoted to body regulation and we’d only care about the parts responsible for thought and reasoning.

              I think the biggest roadblock is in figuring out the algorithm behind our conscious process. If we can identify that from the brain structure, then we could attempt to implement it on a different substrate. On Intelligence is a good book discussing this idea.

          • Zerush@lemmy.ml · 2 years ago

            It isn’t. The brain doesn’t work in an analog way; it processes many pieces of data at the same time, and in an analog manner it can’t create consciousness, which is a quantum process. Because of this, a digital or analog computer can never have real intelligence with its own consciousness, but it may be possible in a quantum computer in the not-so-distant future. https://www.psychologytoday.com/us/blog/biocentrism/202108/quantum-effects-in-the-brain https://www.nature.com/articles/440611a

            • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

              Neurons are very much analog as they do not send discrete signals to each other. While the brain exploits quantum effects there is no indication that these are fundamental to the function of consciousness.

              • Zerush@lemmy.ml · 2 years ago

                You can create an artificial intelligence by digital or analog means, capable of solving problems through neural networks, but you cannot create a self-aware intelligence that way; consciousness is only possible with quantum technologies. What we currently think of as artificial intelligence, sophisticated as it may be, has real intelligence no higher than that of a grasshopper. It can learn data processing and acquisition pathways, but this is only a small part of how our brain works. It’s like someone with a photographic memory who can recite whole chapters of books in answer to a question, but without really understanding them.

                The Turing test currently has many deficiencies that keep it from clearly distinguishing human from machine, since it dates back to the 1950s and did not foresee how computers would evolve. For this reason, several other conversational methods are used today, with possible responses that are not clearly definable, like the Markus test, the Lovelace test 2.0 or MIST (Minimum Intelligence Signal Test). Still a long way to a Daneel Olivaw.

                I asked Andi if he would pass the Turing test.

                (I love this “search engine”)

                • Ferk@lemmy.ml · 2 years ago

                  No modern AI has been able to reliably pass the Turing test without blatant cheats (like posing as a foreign kid unable to understand and express themselves fluently, instead of an adult). Just because it dates back to the 1950s doesn’t make it any less valid, imho.

                  I was interested in the other tests you shared, thanks for that! However, in my opinion:

                  The Markus test is just a Turing test with a video feed. I don’t think this necessarily makes the test better; it adds more requirements for the AI, but it’s unclear if those are actually necessary requirements for consciousness.

                  The Lovelace test 2.0 is also not very different from a Turing test where the tester is the developer and the questions/answers are on a specific domain, where its creativity is what’s tested. I don’t think this improves much over the original test either, since already in the Turing test you have the freedom to ask questions that might require innovative answers. Given the more restricted scope of this test and how modern procedural generation and neural nets have developed, it’s likely easier to pass the Lovelace test than the Turing test. And at the same time, it’s also easier for a real human to fail it if they can’t be creative enough. I don’t think this test is really testing the same thing.

                  The MIST is another particular case of a more restricted Turing test. It’s essentially a standardized and “simplified” Turing test where the tester is always the same and asks the same questions out of a set of ~80k. The only advantage is that it’s easier to measure and more consistent, since you don’t depend on how good the tester is at choosing their questions or judging the answers, but it’s also easier to cheat, since it would be trivial to make a program specifically designed to answer that set of questions correctly (see the sketch below).
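
                  As a purely hypothetical sketch of such a cheat (the questions and answers here are made up, and a real question set would be far larger): with a fixed, known set of yes/no questions, a plain lookup table “passes” without any intelligence involved.

                  ```python
                  # Hypothetical illustration: gaming a fixed, known question set with pure retrieval.
                  # There is no understanding or world model here, just a lookup table.

                  canned_answers = {
                      "Is Mount Everest taller than a house?": "yes",
                      "Can a fish ride a bicycle?": "no",
                      # ...the rest of the known question set would be filled in the same way
                  }

                  def fake_respondent(question: str) -> str:
                      # Answer from the table; guess when the question is unknown.
                      return canned_answers.get(question, "yes")

                  print(fake_respondent("Can a fish ride a bicycle?"))  # -> "no"
                  ```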

                  • Zerush@lemmy.ml · 2 years ago

                    The difficulty with these tests is that even we ourselves are still not clear about what consciousness, the ego, really is and how it works, so this type of test applied to an AI will always give a subjective result; it’s even quite possible that some people would not pass the Turing test.

                • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 years ago

                  First, we don’t have a firm definition for what consciousness is or how to measure it. However, self-awareness is simply an act of the system modelling itself as part of its internal simulation of the world. It’s quite clear that this has nothing to do with quantum technologies. In fact, Turing completeness means that any computation done by a quantum system can be expressed by a classical computation system.

                  The reason the systems we currently built aren’t conscious in a human sense is that they’re working purely on statistics without having any model of the world. The system simply compares one set of numbers to another set of numbers and says yeah they look similar enough. It doesn’t have the context for what these numbers actually represent.
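
                  As a rough sketch of what I mean by “comparing one set of numbers to another” (not any particular system; the vectors here are invented), this is essentially the kind of similarity check involved, with no notion of what the numbers stand for.

                  ```python
                  # Minimal example: cosine similarity between two made-up embedding vectors.
                  # The program only sees numbers; it has no model of what "cat" or "dog" are.
                  import math

                  def cosine_similarity(a, b):
                      dot = sum(x * y for x, y in zip(a, b))
                      norm_a = math.sqrt(sum(x * x for x in a))
                      norm_b = math.sqrt(sum(x * x for x in b))
                      return dot / (norm_a * norm_b)

                  cat = [0.8, 0.1, 0.3]   # hypothetical embedding of "cat"
                  dog = [0.7, 0.2, 0.4]   # hypothetical embedding of "dog"
                  print(cosine_similarity(cat, dog))  # ~0.98: "they look similar enough"
                  ```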

                  What we need to do to make systems that think like us is to evolve them in an environment that mimics how our physical world works. Once these systems build up an internal representation of their environment we can start building a common language to talk about it.

                    • vitaminka@lemmy.ml · 2 years ago

                    > The reason the systems we currently built aren’t conscious in a human sense is that they’re working purely on statistics without having any model of the world. The system simply compares one set of numbers to another set of numbers and says yeah they look similar enough. It doesn’t have the context for what these numbers actually represent.

                    heh, in your perception, how do human brains work? 😅