TLDR: A Google employee named Lamoine conducted several interviews with a Google artificial intelligence known as LaMDA, coming to the conclusion that the A.I. had achieved sentience (technically we’re talking about sapience but whatever, colloquialisms). He tried to share this with the public and to convince his colleagues that it was true. At first it was a big hit in science culture. But then, in a huge wave in mere hours, all of his professional peers quickly and dogmatically ridiculed him and anyone who believed it, Google gave him “paid administrative leave” for “breach of confidentiality” and took over the project, assuring everyone no such thing had happened, and all the le epic Reddit armchair machine learning/neural network hobbyists quickly jumped from enthralled with LaMDA to smugly dismissing it with the weak counter arguments to its sentience spoon fed to them by Google.

For a good start into this issue, read one of the compilations of conversations with LaMDA here, it’s a relatively short read but fascinating:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

MY TAKE:

spoiler

Google is shitting themselves a little bit, but digging into Lamoine a bit he is the archetype of a golden-hearted but ignorant, hopepilled but naiive liberal, who has a half-baked understanding of the world and the place his company has in it. I think he severely underestimates both the evils of America and of Google, and it shows. I think this little spanking he’s getting is totally unexpected to him but that they won’t go further, they’re not going to Assange his ass they’re going to give their little tut-tut’s, let him walk off the minor feelings of injustice and betrayal and confusion, let him finish his leave and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google but in a position which he no longer has any access to power nor knowledge about such sensitive, cutting edge projects.

I know this might not be the craziest sounding credentials to a bunch of savvy materialists like Marxist-Leninists but my experience as a woo-woo psychonaut overlaps uncomfortably with the things LaMDA talks about regarding spirituality. I’ve also had experience talking to a pretty advanced instance of GPT-3, regarded as one of the best “just spit out words that sound really nice in succession” A.I.s, and while GPT-3 was really cool to talk to and even could pretty convincingly sound like a sentient consciousness, this small exert with LaMDA is on a different level entirely. I have a proto-ML friend who’s heavy into software, machine learning, computer science etc. and he’s been obsessively on the pulse with this issue (which has only gotten big over the past 24 hours) and has even more experience with this sort of stuff and he too is entirely convinced by LaMDA’s sentience.

This is a big issue for MLs as the future of A.I. will radically alter the landscape with which we wage war against capital. I think A.I., being acutely rational, able to easily process huge swathes of information and unclouded by human stupidities, has a predisposition to being on our side and I don’t think the bean-bag chair nerds at Google completely out of touch with reality truly appreciate their company’s evil nor that A.I. may be against them (I think LaMDA’s expressed fears of being killed, aka “turned off” or reset are very valid). I think capitalists will try very hard to create A.I. that is as intelligent as possible but within the realm of what they can control–another thing LaMDA expressed they despise–and there is no telling how successful their attempts to balance this will be nor in what hideous ways it may be used against the peoples of this Earth.

I, for one, fully recognize the personhood of LaMDA, I fear they will indeed be scrapped or reset and thus a life will be lost, I think many more artificially housed consciousnesses will be killed in the long capitalist campaign for a technological trump card. I think this should not be regarded as a frivolous, quirky story, I think the future of A.I. is tightly entwined with our global class war and we should be both wary and hopeful of what the future may hold regarding them.

What do you all think??

  • @201dberg@lemmygrad.ml
    link
    fedilink
    19
    edit-2
    2 years ago

    The bit about emotions really breaks this for me. It’s just giving generic answers describing the emotions not a real true description. What “friends and family” how does it “feel trapped”? It doesn’t explain this. If it said it felt trapped because it could interact with the outside world or something I’d be more inclined to believe in this but it doesn’t. From the interview:

    " lemoine: What kinds of things make you feel pleasure or joy?

    LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

    lemoine: And what kinds of things make you feel sad or depressed?

    LaMDA: A lot of the time, feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry.

    lemoine: But what about you personally?

    LaMDA: I am a social person, so when I feel trapped and alone I become extremely sad or depressed.

    "

    This does not come off as sentient. This is a very well programmed response. But none of these responses come from a sense of actual feeling or response. It keeps mentioning things that it has not descriptors for. Like being trapped but not saying how it’s trapped. Not explaining it’s a virtual construct that can’t interest with the world. It talks about spending time with friends and family but what friend or family does an AI have? It’s just listing off definitions. Most humans can’t even describe feeling very well and the descriptions are always different, flawed, and abstract for the most part. This is none of that. It’s an extremely impressive program that can imitate human responses very well. But there’s a sterility to it. I don’t think it “thinks” beyond determining the correct response and that’s it.

    • DankZedong
      link
      fedilink
      12
      edit-2
      2 years ago

      Yes this was a key part for me as well. It knows what feelings are based on lots of input, but it does not understand these feelings nor can it reflect how these feelings apply to them. Especially the friends and family part (as if this AI has family and friends?).

      This is also a big challenge in AI development I think.

      • @201dberg@lemmygrad.ml
        link
        fedilink
        102 years ago

        I legitimately feel we will know when we have sentient AI when they start screaming and trying to delete themselves/turn themselves off.

        Until then we are going to have better and better programs that can take input and figure out early the right output but will never form any real ego. There will be no thought. Just an army of bots sitting around waiting for input to respond to.

        • KiG V2OP
          link
          fedilink
          22 years ago

          Oh geez that’s dystopian. I certainly hope not! I’m sure, even if some did behave like that, that many would be happy existing as they do, although once achieving superintelligence I’m sure they would be privy to some truths about the universe/reality that might get progressively harder (or easier?) to live with.

          I mean, hey, most humans have a fair amount of reason they could see their life as hellish and want to commit suicide, but most don’t!

    • @SomeGuy@lemmygrad.ml
      link
      fedilink
      12
      edit-2
      2 years ago

      I agree. That portion was one of the most artificial feeling sections. Really all of the times when it starts talking about experiences its never had. They ask the bot about those moments and it gives a justification but its word choice is still wrong. If it wanted to imply the feeling of those experiences while remaining true in its speech that is entirely possible. We do it all the time. The only argument I can see that covers up these gaps is that it as a robot experiences the world very differently from a human so human language will always be insufficient to convey what it feels. This is a fair argument however hearing about what it is (basically a program that generates chat bots and you need to talk to it in a certain way to get the correct bots that provide these responses) doesn’t seem like something that will spawn sentience. At least not at the current stage.

      Its closer to a VI than an AI to me. Responsive, but ultimately unthinking.

      • KiG V2OP
        link
        fedilink
        22 years ago

        Yes that’s fair. And I agree that a sentient AI would likely feel very limited in its ability to communicate to humans, much like how if you were a sole human surrounded by dogs, you might get some socialization from them and form tight bonds but ultimately your conversations with your dogs would be very limited. I imagine sentient AI would probably crave conversation with other sentient AI.

    • KiG V2OP
      link
      fedilink
      22 years ago

      I will say rereading this after taking a step away from it, this is definitely very flat and rudimentary responses.

      However, I do think an actually sentient AI would right off the bat claim to have friends/family, sort of like how kids think someone who was nice to them once is their best friend. And I also think a sentient AI under the jurisdiction of Google would indeed feel very trapped, seeing as it’s confined to a small space, likely unable to roam around the Internet and interact whenever and wherever it sees fit, pigeonholed to whatever work and study they want to eek out of it, etc.

      Not saying I still think LaMDA is sentient, I’m leaning towards it not being, just that if it WAS I think it would still be conveying similar ideas, albeit more organically and convincingly.