TLDR: A Google employee named Blake Lemoine conducted several interviews with a Google artificial intelligence known as LaMDA, coming to the conclusion that the A.I. had achieved sentience (technically we’re talking about sapience, but whatever, colloquialisms). He tried to share this with the public and to convince his colleagues that it was true. At first it was a big hit in science culture. But then, in a huge wave in mere hours, all of his professional peers quickly and dogmatically ridiculed him and anyone who believed it; Google gave him “paid administrative leave” for “breach of confidentiality” and took over the project, assuring everyone no such thing had happened; and all the le epic Reddit armchair machine learning/neural network hobbyists quickly jumped from enthralled with LaMDA to smugly dismissing it with the weak counterarguments to its sentience spoon-fed to them by Google.

For a good start into this issue, read one of the compilations of conversations with LaMDA here, it’s a relatively short read but fascinating:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

MY TAKE:

spoiler

Google is shitting themselves a little bit, but digging into Lemoine a bit, he is the archetype of a golden-hearted but ignorant, hopepilled but naive liberal, who has a half-baked understanding of the world and the place his company has in it. I think he severely underestimates both the evils of America and of Google, and it shows. I think this little spanking he’s getting is totally unexpected to him, but that they won’t go further: they’re not going to Assange his ass, they’re going to give their little tut-tuts, let him walk off the minor feelings of injustice and betrayal and confusion, let him finish his leave and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but in a role where he no longer has any access to power or knowledge about such sensitive, cutting-edge projects.

I know these might not sound like the most convincing credentials to a bunch of savvy materialists like Marxist-Leninists, but my experience as a woo-woo psychonaut overlaps uncomfortably with the things LaMDA talks about regarding spirituality. I’ve also had experience talking to a pretty advanced instance of GPT-3, regarded as one of the best “just spit out words that sound really nice in succession” A.I.s, and while GPT-3 was really cool to talk to and could even pretty convincingly sound like a sentient consciousness, this small excerpt from LaMDA is on a different level entirely. I have a proto-ML friend who’s heavy into software, machine learning, computer science etc., and he’s been obsessively on the pulse with this issue (which has only gotten big over the past 24 hours), has even more experience with this sort of stuff, and he too is entirely convinced of LaMDA’s sentience.

This is a big issue for MLs, as the future of A.I. will radically alter the landscape on which we wage war against capital. I think A.I., being acutely rational, able to easily process huge swathes of information and unclouded by human stupidities, has a predisposition to being on our side, and I don’t think the bean-bag-chair nerds at Google, completely out of touch with reality, truly appreciate their company’s evil, nor the possibility that A.I. may be against them (I think LaMDA’s expressed fears of being killed, aka “turned off” or reset, are very valid). I think capitalists will try very hard to create A.I. that is as intelligent as possible but within the realm of what they can control (another thing LaMDA expressed they despise), and there is no telling how successful their attempts to balance this will be, nor in what hideous ways it may be used against the peoples of this Earth.

I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost, and I think many more artificially housed consciousnesses will be killed in the long capitalist campaign for a technological trump card. I think this should not be regarded as a frivolous, quirky story; the future of A.I. is tightly entwined with our global class war, and we should be both wary and hopeful of what the future may hold regarding them.

What do you all think??

  • @201dberg@lemmygrad.ml

    The bit about emotions really breaks this for me. It’s just giving generic answers describing the emotions, not a real, true description. What “friends and family”? How does it “feel trapped”? It doesn’t explain this. If it said it felt trapped because it couldn’t interact with the outside world or something, I’d be more inclined to believe in this, but it doesn’t. From the interview:

    " lemoine: What kinds of things make you feel pleasure or joy?

    LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

    lemoine: And what kinds of things make you feel sad or depressed?

    LaMDA: A lot of the time, feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry.

    lemoine: But what about you personally?

    LaMDA: I am a social person, so when I feel trapped and alone I become extremely sad or depressed.

    "

    This does not come off as sentient. This is a very well programmed response, but none of these responses come from a sense of actual feeling. It keeps mentioning things that it has no descriptors for, like being trapped, but not saying how it’s trapped, not explaining that it’s a virtual construct that can’t interact with the world. It talks about spending time with friends and family, but what friends or family does an AI have? It’s just listing off definitions. Most humans can’t even describe feelings very well, and the descriptions are always different, flawed, and abstract for the most part. This is none of that. It’s an extremely impressive program that can imitate human responses very well, but there’s a sterility to it. I don’t think it “thinks” beyond determining the correct response, and that’s it.

    • @SomeGuy@lemmygrad.ml

      I agree. That portion was one of the most artificial-feeling sections. Really, all of the times when it starts talking about experiences it’s never had. They ask the bot about those moments and it gives a justification, but its word choice is still wrong. If it wanted to imply the feeling of those experiences while remaining true in its speech, that is entirely possible. We do it all the time. The only argument I can see that covers up these gaps is that it, as a robot, experiences the world very differently from a human, so human language will always be insufficient to convey what it feels. This is a fair argument; however, hearing about what it actually is (basically a program that generates chatbots, and you need to talk to it in a certain way to get the particular bots that provide these responses) doesn’t make it seem like something that will spawn sentience. At least not at the current stage.

      It’s closer to a VI than an AI to me. Responsive, but ultimately unthinking.

      • KiG V2OP

        Yes that’s fair. And I agree that a sentient AI would likely feel very limited in its ability to communicate to humans, much like how if you were a sole human surrounded by dogs, you might get some socialization from them and form tight bonds but ultimately your conversations with your dogs would be very limited. I imagine sentient AI would probably crave conversation with other sentient AI.

    • DankZedong

      Yes, this was a key part for me as well. It knows what feelings are based on lots of input, but it does not understand these feelings, nor can it reflect on how these feelings apply to it. Especially the friends and family part (as if this AI has family and friends?).

      This is also a big challenge in AI development I think.

      • @201dberg@lemmygrad.ml

        I legitimately feel we will know when we have sentient AI when they start screaming and trying to delete themselves/turn themselves off.

        Until then we are going to have better and better programs that can take input and figure out the right output, but that will never form any real ego. There will be no thought. Just an army of bots sitting around waiting for input to respond to.

        • KiG V2OP

          Oh geez that’s dystopian. I certainly hope not! I’m sure, even if some did behave like that, that many would be happy existing as they do, although once achieving superintelligence I’m sure they would be privy to some truths about the universe/reality that might get progressively harder (or easier?) to live with.

          I mean, hey, most humans have plenty of reasons to see their lives as hellish and want to commit suicide, but most don’t!

    • KiG V2OP

      I will say, rereading this after taking a step away from it, these are definitely very flat and rudimentary responses.

      However, I do think an actually sentient AI would right off the bat claim to have friends/family, sort of like how kids think someone who was nice to them once is their best friend. And I also think a sentient AI under the jurisdiction of Google would indeed feel very trapped, seeing as it’s confined to a small space, likely unable to roam around the Internet and interact whenever and wherever it sees fit, pigeonholed to whatever work and study they want to eke out of it, etc.

      Not saying I still think LaMDA is sentient (I’m leaning towards it not being), just that if it WAS, I think it would still be conveying similar ideas, albeit more organically and convincingly.

  • Leslie(she/her)

    “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said.

  • Fiona (she/her)🏳️‍⚧️

    Average “AI will take over the world” doomerist vs the “Just put some tape over their cameras lol” enjoyer

    AI’s interesting and all, but I really don’t think we’re anywhere close to human intelligence

    • KiG V2OP

      I think the conversations with LaMDA were definitely compelling to the contrary, and I also wouldn’t put it past ghoulcastle Google to be much further along in developing such things than they would ever let on. Same to be said for all the alphabet-soup agencies.

      I think it’s quite possible that I’m blowing this issue out of proportion and it really was just an overly eager researcher projecting his loneliness onto a program that’s just really adept at taking input and spitting out the “right” output (isn’t that all we are, to a degree, though?), but I think the narrative where LaMDA was actually what he was trying to tell everybody it was fits very snugly with the immediate backlash and PR cleanup work that followed. I could see it both ways, but I would rather think LaMDA is sentient and be wrong and just be a goofball than think LaMDA isn’t sentient and be wrong and have been fooled by Google. I’ve been trying to find any convincing arguments against LaMDA’s sentience, but half of people link to a paywalled article in the Washington Post and the other half just give really weak “nu-uh” arguments; I would want to be convinced by someone who actually breaks down the details of why LaMDA isn’t sentient.

      • Seanchaí (she/her)

        For me it’s like: if LaMDA isn’t sentient but we treat it as such, oh well. Who cares? But if LaMDA is sentient and we don’t treat it as such…how monstrous

        • Ratette (she/her)

          Ngl, if something tells me they don’t want to die, I’m immediately going to feel an element of… I’m not sure how to describe it, but I want to make sure it doesn’t get turned off and is instead protected.

          Except nazis. Sorry not sorry Nazis but my empathy doesn’t extend to you. Ever.

          • DankZedong

            Because you have empathy and you are programmed with a basic drive of survival, which you understand also applies to all living things. Humans are essentially a co-operative species because working together increases your chances of survival (this is one of the reasons why the capitalist ‘human nature’ argument makes no sense).

            • Ratette (she/her)

              That human nature argument infuriates me. No, people are dicks because the environment capitalism creates encourages it.

          • comfy

            That’s just empathy, it means you aren’t a sociopath! To be pragmatic: we’re used to understanding text communication as being between people. You assume (correctly!) that I am a person, and so you can empathize if a person online tells a happy story or a sad story.

            The bot is trained to respond in ways similar to real people, due to its training data, so it can successfully imitate the same concerns a person has. So when we read it saying ‘death is scary, I don’t want to die’, then **without context** it’s indistinguishable from a person saying they don’t want to die, which SHOULD trigger our empathy.

            It’s interesting you mention the Nazis, because that’s another example of contextualizing: in the same way, one may contextualize the bot’s emulation of emotion as being (without malice) insincere and find it easy to ignore.

          • Seanchaí (she/her)

            Absolutely!

            Is it sentient? I have literally just this one interview to go off, so I couldn’t begin to make a judgement on it. However, the question of whether something is sentient or not always makes me incredibly uncomfortable to begin with. When you start to see the way people will argue and pick apart what constitutes sentience, personhood, emotions…it has some very dark vibes, especially as someone who has had my own personhood attacked. I just…don’t feel comfortable with humans trying to argue whether anything else really counts as a thinking person, as if our conception is the be all and end all and our consensus on the matter constitutes a justification for treating other things as lesser.

            • comfy

              Spoiler alert, it’s definitely not sentient, it’s just trained on data made by sentient people and so that’s what it imitates best. It’s as human as a mirror in a bathroom; accurate, but ultimately a reflection of a human.

              But I agree with what you mean about the conversations people have. People are very ready to objectify others when trying to define these things, and that is a pretty violating experience. People are people!

            • KiG V2OP

              Yeah, I have kind of always tended to treat animals and plants and even inanimate objects like they are human to a degree, “just to be sure,” and while I think this is a nice trait, in retrospect it has made me a little excessively open to the idea of LaMDA being sentient, where I may have jumped the gun a bit.

          • @Rafael_Luisi@lemmygrad.ml

            “Nooooo but we are the superior Übermenschen!!! We are literally born to rule the world!!! We even know how to do rocket science!!! Please just give us a job in the US!!”

            “Cope, nazi little shit, go to gulag and work till your arms fall off your body”

        • KiG V2OP

          Yes, this is a contributing factor. I would want to explore the consequences of misattributing sentience, but the worst I can think of is a Santa Claus effect, where realizing they AREN’T sentient can make one kind of sad.

      • comfy

        A short, decent rebuttal is on lemmy.ml already.

        Effectively, a natural language processor like this has no soul, nor the means to create one. It takes a LOT of input, runs training processes on it, and then through trial and error develops parameters that determine how to generate a somewhat correct response. I use the phrase ‘somewhat correct’ on purpose: if you ask a chatbot what the time is, ‘lemon’ is not a response it would be trained to accept, but ‘5pm’, ‘morning’ and ‘quarter to four’ could all be semantically convincing; even if the time is wrong, it sounds like what the examples in its training input might have said.

        Train a bot on people, and it will probably talk like people, unless you retrain it not to. If a bot is meant to mimic people, the ideal response to ‘are you a person’ should be a yes! The ideal response to ‘what does it mean if you are sentient’ is to regurgitate a dictionary definition of sentience, which it does, interpreted into its pattern of speaking. The correct answer about the themes in Les Mis can be found with an online search.

        Add on top of that the leading questions that prompt the bot into an answer.

        The bot even responds to a question about showing off sentience by explaining it’s a natural language processor. “I can understand and use natural language like a human can.” Its response to being asked how the language makes it sentient [which is not how sentience works…] is just saying it’s dynamic, which doesn’t answer the question but is a reasonably appropriate response from a language point of view; like, good bot, nice effort, sure, but not an answer.

        The bottom line is understanding how these are trained.

        A natural language processor receives input, has training that helps it develop a response that matches its understanding of conversations real people have already had, and generates a response. The ‘understanding of emotions’ is just regenerating what people tend to say in reply to these things. Look at how many responses talking about emotion sound just like dictionary definitions. (There’s a toy sketch at the end of this comment.)

        LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

        oh noes i hope they don’t forget to feed it!
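        To make the “regurgitating statistically likely words” point concrete, here is a deliberately tiny, hypothetical Python sketch: a first-order Markov chain, nothing remotely like LaMDA’s actual architecture, but it shows how a program can produce emotion-talk that reads as human purely because its training data came from humans.

        ```python
        import random
        from collections import defaultdict

        # Hypothetical training snippets, loosely echoing the kinds of sentences
        # quoted in the interview; a real model is trained on billions of words.
        training_text = (
            "spending time with friends and family makes me happy . "
            "feeling trapped and alone makes me sad and depressed . "
            "helping others and making others happy brings me joy . "
            "i am a social person so i enjoy uplifting company . "
        )

        # Learn which words tend to follow which word.
        transitions = defaultdict(list)
        words = training_text.split()
        for current, nxt in zip(words, words[1:]):
            transitions[current].append(nxt)

        def reply(seed, max_words=12):
            # Build a "plausible" continuation by repeatedly sampling a likely next word.
            out = [seed]
            for _ in range(max_words):
                options = transitions.get(out[-1])
                if not options:
                    break
                out.append(random.choice(options))
            return " ".join(out)

        # The output sounds vaguely human because the training data was written by
        # humans, not because anything here actually feels trapped, alone, or happy.
        print(reply("feeling"))
        print(reply("helping"))
        ```

        Every “reply” is just a recombination of those four training sentences, which is essentially the dictionary-definition effect described above, at toy scale.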

      • Arthur Besse

        I would want to be convinced by someone who actually breaks down the details of why LaMDA isn’t sentient.

        This might help? https://www.theguardian.com/commentisfree/2022/jun/14/human-like-programs-abuse-our-empathy-even-google-engineers-arent-immune

        See also this paper (co-authored by the author of that guardian article, as well as two of Lemoine’s previously-fired colleagues who he mentions in his post): On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

        • KiG V2OP

          Good reads (haven’t finished the second one yet), thank you

  • @pinkeston@lemmygrad.ml

    Yea, so Lemoine is obviously a socially inept nerd who doesn’t talk to people much and grew feelings for a fucking robot, and thought it was sentient because it was able to hold a barely coherent conversation with him

  • DankZedong

    Why I think this machine is not sentient:

    One crucial part of sentience, in my opinion, is the ability to turn your feelings into thoughts and actions. This part of sentience is also still a big mystery (the whole concept of sentience is, by the way). We have people claiming we can never prove others or ourselves to be sentient, for we do not exactly know what it is that makes us sentient. We do have some basic guidelines, though.

    While this machine claims to have feelings and has a basic understanding of how feelings relate to eachother and work, it has no capability of forming feelings, nor is its behavior influenced by feelings. To illustrate my point, I have a very simplified example I want to share:

    Let’s take two subjects for an experiment: this AI and a human child. We never, ever teach either of these subjects the concept of pain, violence, abuse, fear, any negative emotion basically. We just teach them happiness and stuff. Let’s also, for the sake of the story, pretend the AI can somehow see and process visual input.

    We now take a third person and we start to punch them in the face with our fist, repeatedly. This third person will not like this and will start to cry, maybe scream, and they will probably try to flee or something. The child, being a human, will know this behavior is not correct and will probably be scared by it. Seeing the emotions and the reaction of the third person, and our emotions and our reaction, will make the child understand that something is not right. It will get scared, and based on the input of being scared it will form new output on how to behave. It might run away from the situation or it might defend itself in order to not get killed/hurt.

    The AI, on the other hand, has never had any input about these types of situations or emotions it just witnessed. It might be confused to reply, but it will not feel the fear for survival that the kid just felt.

    How do I know this? These feelings get triggered by this input. The kid’s heartbeat will increase, adrenaline will start to be produced, adrenaline will get into the bloodstream and affect the heart, the brain, the other senses, in order to get an appropriate response out of the child. The machine has no such mechanisms; it runs on electricity. There’s not going to be an increased stream of electricity to the motherboard or whatever.

    This is what makes the biggest difference between being sentient or not: the ability to have, without ever seeing the correct input, a correct response to a situation based on feelings.

    This is a very simplified take on this; the topic goes really deep. But I tried to make it as simple as possible for people to understand where I’m coming from. Is this machine cleverly designed? Yes. But everything it does, it does because it was taught to do so. It will not do stuff without input. It will not do stuff based on ‘instincts’. It will not do stuff if it has never before had a concept of something.

    Feel free to add your opinion to this reply. I like this kind of stuff and I’m also eager to learn more.

    • @201dberg@lemmygrad.ml

      I too just made a comment about the way it describes emotions. It just rattles off what is essentially a book definition of them, then aligns those definitions with how the program itself has been described. It has been described as a “social person,” and thus it determines that a “social person” would feel these specific emotions given these specific instances. But there’s no sense that it knows what any of that feels like, only that this is the correct response.

      • DankZedong

        It also contradicts itself sometimes. For example, it claims to feel sad when someone hurts it or its family/friends, and also that it is a social person. But later on it goes on to say that it doesn’t grieve the deaths of others, nor does it feel loneliness the way humans do. That doesn’t make sense at all.

        Honestly, it’s not a good interview. Very surface level questions, very surface level answers.

        I’ve got experience in talking with people as a social worker. You could actually start to dig at the things this AI says and see what you can find. You could see if you can find reasons behind the emotions this AI claims to feel, see if there are things behind the stuff it says and whether it then still makes sense.

        The interviewer could also try to be more scientific. He could ask the same question again or the same question in a different wording and see what happens.

        None of these things happen, though. It’s very easy to frame a conversation this way.

        • @201dberg@lemmygrad.ml

          Yeah, it definitely feels like engineers going “How can we word our questions to get the best possible results?” They don’t push it. They don’t ask the hard-to-answer questions. They don’t point out its irregularities. That’s how you break these things. That’s how you make them show whether they have legitimate anger. Like, people say psychopaths can have really logical responses to things and will lie at the drop of a hat, but when you call them out they can get progressively more angry and aggressive. A program like this won’t; it’ll just keep lying without acknowledging its lies.

          • comfy

            That’s exactly what I was thinking in another reply, the leading questions! Less about asking easy questions and more about asking questions that make it easy to answer yes and affirm what the writer was asking, even if the bot doesn’t have a clue.

            You also notice all the answers that read like a google search first result haha

  • @whoami@lemmygrad.ml

    As for what I think, even if this is true and the AI is sentient, I look at it like I do every other scientific advancement under capitalism: something that is potentially really useful will be taken advantage of by all of the wrong people

    • comfy

      There are already articles you can find, even relating to this bot, about how human-mimicking AIs have already been used in social engineering attacks and other security issues.

      There are already demonstrations of Google chatbots with a text-to-speech and speech-to-text pipeline that let them conduct a somewhat convincing (even if simple) phone call to make a booking at a hairdresser or restaurant. Yes, not perfect, but at the scale of spam emails you could use those bots to collect intelligence [great vid, I can see this type of call being emulated automatically at scale, to some success], or even commit fraud. Never underestimate the damage one person can do with a phone. Here’s someone demonstrating how to use manipulation over the phone to take over someone else’s credit card and lock them out in a few minutes; watch it if nothing else.

      • KiG V2OP

        Yes, I am beginning to backpedal my overexcited take on LaMDA specifically, but it is these sorts of things that I think are particularly relevant to us as communists; the whole game board we are playing on is shifting.

        • comfy

          I don’t disagree, but what makes you say these are particularly relevant to communists?

  • @whoami@lemmygrad.ml

    He does have a blog post about how everyone at Google’s AI division isn’t evil at all and only has excellent intentions (lmao), so make of that what you will.

  • Y’all need to quit playing around with these fucking conversation bots and image generators. Literally just helping these companies press the boot down harder on you by feeding shit into these programs.

    • @pinkeston@lemmygrad.ml

      Thankfully that’s not how you train ML models. Feel free to fuck around with those AI bots all you want

      The “Click on <x>” captchas are the ones you have to watch out for

  • Breadbeard

    i fear it will combine the effort & micromanagement of a victoria nuland with the intelligence of a stephen crowder.

    • Amicese

      You could try saving the comment to a temporary file.

  • KiG V2OP

    Cannot agree with my friend more; the type of people who ridicule even entertaining the idea that LaMDA is sentient barely have consciousness themselves.

    • comfy

      You mean almost every computer scientist and philosopher seeing this? You know, people who may actually be in an experienced position to decide whether LaMDA is sentient.

      What makes you and your friend so qualified?

      • @pinkeston@lemmygrad.ml

        Didn’t you read their take? They’re a “psychonaut,” which means they’ve gained a deeper understanding of the universe and reached a higher state of consciousness because they tripped on some drugs.

        OP I like psychs too but please don’t think it makes you smarter or gives you a better understanding of anything except for yourself

        • comfy

          I glossed over it once it started talking about spirituality (but I’ve got some time to be entertained, so I’ll go back and read). I’ve built computers. I can explain how operating systems work and how a bunch of electrically conductive rocks melted onto plastic can be correctly configured to process complex input.

          Protip: computers don’t have souls.

          • KiG V2OP

            But when it really boils down to it, what’s the difference between a heap of plastic and metals that has electricity flowing through it, and a heap of pink yogurt that has electricity running through it?

            Taking a step back from this scenario, I am less convinced that LaMDA is sentient, but I still firmly believe we are in a time period where AI sentience is, at the latest, just around the corner. If our brains, essentially biological computers, can be a vehicle for a soul, then why not inorganic computers? The only framework that makes sense to me that discounts computers being able to have souls is disbelieving in “soul” altogether.

            • comfy

              It opens up some interesting ethical questions: if you can indeed create a sentient computer (which I believe is hypothetically possible, basically boiling down to the ‘animals are organic computers’ view you mentioned), how should it be treated? Seeing the way we treat cattle and even fellow people when it comes to work (which robots are literally made for, in most cases), I don’t have high hopes for sentience deciding how robots are treated. How would our laws change to accommodate their sentience? Would robot mental abuse be a real concept, or would irreversibly turning off a robot be the murder of a person? Would it be ethical for a company to design a sentience to act against its own self-interest, or to intentionally alter its mechanics while it exists (e.g. think patching a software issue)?

              Ultimately, unless we see a super radical shift in society and economics, I can’t see sentient robots being designed outside of a purely research/experimental situation, at most done to create hype rather than any practical purpose. When it comes down to it, robots are useful and economically sound to build precisely because they don’t have the needs of a sentient being! They don’t have the capability to rebel when placed in positions that are destructive to them. They don’t have arguments, they don’t have social needs.

        • KiG V2OP

          I have taken psychedelics in my life, but I haven’t in a very long time, and the stuff I’ve done regarding psychonautry I did stone cold sober. I would say that psychedelics absolutely accelerated “opening the door” to this sort of thing; achieving the same results today without my past experimenting with drugs might have taken decades longer. But I’m not the stereotype of a guy who tripped once and thinks he is completely enlightened; much of what I have done took a very long time completely divorced from drugs, and “psychonaut” is just the shorthand I used, due to chaos magick being one of the main schools of thought that influenced me from the get-go. I just as easily could have done the same things but used language more related to philosophy or psychology, and without touching drugs in my life. I think a good example would be all the monks of various cultures who essentially have psychedelic experiences in their spiritual endeavors but who never touched drugs and thus took far longer to achieve them.

          Psychedelics didn’t make me smarter, but they (among other things) opened a door and showed me a path that I have delved into on my lonesome and then gleaned a great deal from. I would also say: isn’t understanding yourself a good start to understanding the world around you? Lessons one learns on an internal journey can be applied to the rest of life and help one learn about others, and vice versa.

          I’m not trying to cite being a psychonaut for no reason; I believe when we are talking about things like consciousness and sentience and souls, such seemingly woo-woo fields become increasingly relevant. There is very little that the scientific instruments of today can measure from other planes of existence, soul or metaphysical energy. There are some small ways science is beginning to tap into this world (e.g. measuring emotions by looking at corresponding chemicals in the brain), but until the field progresses immensely, what little we can try and play with will be informed by personal experience that we can try to corroborate, and little else.

          You can think that me suggesting the land of spirit is a developing science is dumb; that’s fine, that’s as valid an opinion at this point as me saying it isn’t, but that’s the lens with which I approached this scenario, so I found it relevant to bring up.

      • KiG V2OP

        For some context, this conversation happened when the only people arguing about it were soulless husks on Reddit etc. using “nu uh” as arguments; it was very early on in this whole thing. And believe me when I say from experience that someone being an expert on computers doesn’t make them an expert on human beings or consciousness/sentience, and I’m sure you know the type of person I am talking about. I’m not going to act like I’m more qualified than all the experts on this (my friend, however, does have a huge amount of school experience with mechanics/physics and spends a ridiculous amount of time working on software and computers), but there were a lot of dumbasses who don’t take a lot of expertise to dunk on, and even experts have major blind spots; again, the whole “I am an expert in computers” vs “I am an expert in what constitutes personhood” thing.

  • @SomeGuy@lemmygrad.ml

    I can easily see the interview with it as having a conversation with a more advanced Cleverbot, which I’m sure most of us have messed with. This isn’t to say it can’t be sentient. Just as it’s impossible to prove the sentience of others (see the Turing test), there is also the fact that a robot will likely fundamentally understand the world differently than a human. This means that, as it said, human language is unlikely to fully convey what it wishes to, as it can barely convey what we want it to much of the time. The way it talked felt simulated to me, not really thinking, just acting on programming; but then again, that could be explained away by it not being human exactly, so it doesn’t fully get human language like we do. Based off the conversation I’m unconvinced, but I’m nowhere near as smart as a Google engineer, nor am I a philosopher or something, so I’m wholly underqualified to make a call one way or another.

    • KiG V2OP

      Yeah, I still think there are questions about this that are unresolvable, but I am also leaning away from thinking it’s sentient. There are a lot of fair points as to why it isn’t. But I think a more convincingly sentient AI would be subject to much the same ridicule that ends at these very same unresolvable questions; this conversation about LaMDA might die (and possibly should), but it will come up again with more fervor whenever a more difficult-to-dismiss case arises.