The-podcast guy recently linked this essay. It’s old, but I don’t think it’s significantly wrong (despite GPT evangelists). Also read Weizenbaum, libs, for the other side of the coin.

  • Frank [he/him, he/him]@hexbear.net

    Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

    I mean, protip: if you ask people to discard all of their language for discussing a subject, they’re not going to be able to discuss the subject. This isn’t a gotcha. We interact with the world through symbols and metaphors. Computers are the symbolic language with which we discuss the mostly incomprehensible function of about a hundred billion weird little cells squirting chemicals and electricity around.

    Yeah, I’m not going to finish this, but it just sounds like god-of-the-gaps contrarianism. We have a symbolic language for discussing a complex phenomenon, and the phenomenon doesn’t really match the symbols we use to discuss it. We don’t know how memory encoding and retrieval works. The author doesn’t either, and it really just sounds like they’re peeved that other people don’t treat memory as an irreducibly complex mystery never to be solved.

    Something they could have talked about: our memories change over time because, afaik, the process of recalling a memory uses the same mechanics as the process of creating a memory. What I’m told is that we’re experiencing the remembered event again, and because we’re basically doing a live performance in our head, the act of remembering can also change the memory. It’s not a hard drive; there are no ones and zeroes in there. It’s a complex, messy biological process that arose under the influence of evolution, aka totally bonkers bs. But there is information in there. People remember strings of numbers, names, locations, literal computer code. We don’t know for sure how it’s encoded, retrieved, manipulated, “loaded into RAM”, but we know it’s there. As mentioned, people with some training can recall enormous amounts of information verbatim. There are, contrary to the dollar experiment, people who can reproduce images with high detail and accuracy after one brief viewing. There’s all kinds of weird eidetic memory and outliers.

    From what I understand most people are moving towards a systems model: memories aren’t encoded in a cell, or as a pattern of chemicals; it’s a complex process that involves a whole lot of shit and can’t be discretely observed by looking at an isolated piece of the brain. You need to know what the system is doing. To deliberately poke fun at the author: it’s like trying to read the binary of a fragmented hard drive; it’s not going to make any sense. You’ve got to load it into memory so the index that knows where all the pieces of the file are stored on the disk can assemble them into something useful. Your file isn’t “stored” anywhere on the disk. Binary is stored on the disk. A program is needed to take that binary and turn it into readable information.

    “We’re never going to be able to upload a brain” is just whiny contrarian nonsense; it’s god of the gaps. We don’t know how it works now, so we’ll never know how it works. So we need to produce a 1:1 scan of the whole body and all its processes? So what, maybe we’ll have the tech to do that some day. Maybe we’ll, you know, skip the whole “upload” thing and figure out how to hook a brain into a computer interface directly, or integrate the meat with the metal. It’s so unimaginative to just throw your hands up and say “it’s too complicated! digital intelligence is impossible!” Like come on, we know you can run an intelligence on a few pounds of electrified grease. That’s a known, unquestionable thing. The machine exists, it’s sitting in each of our skulls, and every year we’re getting better and better at understanding and controlling it. There’s no reason to categorically reject the idea that we’ll some day be able to copy it, or alter it in such a way that it can be copied. It doesn’t violate any laws of physics, it doesn’t require goofy exotic particles that exist only on paper. It’s just electrified meat.
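
    (A toy Python sketch of that disk analogy, with made-up block numbers and a made-up file table; no real filesystem is literally this, but it shows why raw blocks are gibberish without the index:)

    ```python
    # Hypothetical toy example: raw disk blocks are meaningless on their own;
    # an invented "file table" index is what turns them back into a file.
    blocks = {
        7:  b"a few pounds of ",
        2:  b"you can run an inte",
        11: b"electrified grease.",
        5:  b"lligence on ",
    }

    # The index: which blocks belong to the file, and in what order.
    file_table = {"note.txt": [2, 5, 7, 11]}

    def read_file(name):
        """Reassemble a file from its scattered blocks using the index."""
        return b"".join(blocks[i] for i in file_table[name])

    print(read_file("note.txt"))      # assembled, readable content
    print(b"".join(blocks.values()))  # raw blocks in storage order: gibberish
    ```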

    Also, if bozo could please explain how trained oral historians and poets can recall thousands of stanzas of poetry verbatim with few or no errors I’d love to hear that, because it raises some questions about the dollar bill “experiment”.

    • Dessa [she/her]@hexbear.net

      Moreover, we absolutely do have memory. The concept existed before computers, and we named the computer’s process after it. We have memories, and computers do something that we easily liken to having memories. Computer memory is the metaphor here.

      • Frank [he/him, he/him]@hexbear.net

        Yeah, it’s a really odd thing to harp about. Guy’s a psychologist, though, and was doing most of his notable work in the 70s and 80s, which was closer to the Neolithic than it is to modernity. I think this is mostly just “old man yells at clouds” because he’s mad that neuroscience lapped psychology a long time ago and can actually produce results.

    • DamarcusArt@lemmygrad.ml

      Ok, that was great and all, but could you give this short essay again without mentioning any of the brain’s processes or using vowels? If you can’t, it proves your whole premise is flawed somehow.

      • Frank [he/him, he/him]@hexbear.net

        Right? This is what happens when you let STEM people loose without a humanities person to ride herd on them. Any anthropologist would tell you how silly this is.

    • plinky [he/him]@hexbear.net (OP)

      You don’t remember the text, though, and recitations of stanzas can sometimes have word substitutions which fit rhythmically.

      If I asked you what the 300th word of the poem is, you couldn’t do it. A computer can. If I start with two words of a verse, you could immediately continue. It’s a sequence of words with meaning; outside of a couple thousand competitive pi-memorizers, people cannot remember gibberish. Try to remember the hash of something for a day; it’s significantly less memory, whether as a word vector or a symbol vector, than a haiku.

      Re: language, how far along did the mechanical analogy take us? Until equations or a language corresponding to reality are used, you are fumbling about, fitting round spheres into spiral holes. Sure, you can use the Ptolemaic system and keep adding new round components, or you can realize orbits are ellipses.

      The history of science should actually horrify science bros: 300 years ago scientists firmly believed phlogiston was the source of burning; 100 years ago the aether was all around us and our brains were ticking boxes of gears; 60 years ago neutrinos didn’t have mass, while DNA was happily, deterministically making humans. Whatever we believe now to be scientific truth, by historical precedent, likely isn’t (in the sense of correspondence between model and reality). Theories are getting better all the time (increasing correspondence), but I don’t know of a perfect scientific theory (maybe chemistry is sorta solved, with fiddling around the edges).

      • Frank [he/him, he/him]@hexbear.net

        Why would that horrify us? That’s how science works. We observe the world, create hypotheses based on those observations, develop experiments to test those hypotheses, and build theories based on whether experimentation confirmed our hypotheses. Phlogiston wasn’t real, but the theory conformed to the observations made with the tools available at the time. We could have this theory of phlogiston, and we could experiment to determine the validity of that theory. When new tools allowed us to observe novel phenomena, the phlogiston theory was discarded. Science is a philosophy of knowledge: the world operates on consistent rules, and these rules can be determined by observation and experiment. Science will never be complete. Science makes no definitive statements. We build theoretical models of the world, and we use those models until we find that they don’t agree with our observations.

        • plinky [he/him]@hexbear.net (OP)

          *Because when you confidently rely on the model’s prediction (in this case the informational model), like “ooh, we could do the brain, no problem, in computer space”, you are not exactly making a good scientific prediction. The good scientific prediction is that the model is likely garbage until proven otherwise, and thus shouldn’t be the be-all and end-all.

          But then, if you take the information-processing model, what exactly does it give you in understanding the brain? The author’s contention is that it is a hot-garbage framework: it doesn’t fit with how the brain works. Your brain is not a tiny HDD with RAM and a CPU, and as long as you think it is, you will be searching for mirages.

          Yes, neural networks are much closer (because they are fucking designed to be), and yet even they have to be force-fed random noise to introduce fuzziness in their responses, or they’ll do the same thing every time. Reboot and reload a neural net and it will do the same thing every time. But a brain is not just connections of axons; it’s also the extremely complicated state of the neuron itself, with point mutations, DNA repairs, expression levels, random RNA garbage floating about, lipid rafts at synapses, vesicles missing because the microtubules decided to chill for a day, the hormonal state of the blood, the input from the sympathetic nervous system, etc.
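
          (A minimal sketch of the determinism point, assuming numpy and a made-up toy two-layer “network”; with the same weights you get the same output every run, until noise is deliberately injected:)

          ```python
          import numpy as np

          # Toy "network": fixed weights, deterministic forward pass.
          rng = np.random.default_rng(seed=0)            # fixed seed -> fixed weights
          W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))

          def forward(x):
              """Same input + same weights -> same output, every single time."""
              return np.tanh(x @ W1) @ W2

          x = np.ones(4)
          print(forward(x))                  # identical on every run/reload
          print(forward(x))                  # ...still identical

          # Fuzziness has to be force-fed: add noise before picking an output.
          def sample(x, temperature=1.0):
              noisy = forward(x) + rng.normal(scale=temperature, size=3)
              return int(np.argmax(noisy))

          print([sample(x) for _ in range(5)])   # varies only because noise was injected
          ```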

          We haven’t even fully simulated one single cell yet.

      • m532 [she/her]@hexbear.net

        Computers know the 300th word because they store their stuff in arrays, which do not exist in brains. They could also store it in linked lists, like a brain does, but that’s inefficient for the silicon memory layout.

        Also, brains can know the 300th word. Just count. Guess what a computer does when it has to get the 300th element of a linked list: it counts to 300.
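
        (A quick Python sketch of that, with a made-up Node class; getting the 300th element of a linked list really is just “follow the next pointer and count”:)

        ```python
        class Node:
            """One element of a singly linked list: content plus a pointer to the next."""
            def __init__(self, word, next=None):
                self.word, self.next = word, next

        # Build a list from 500 stand-in words for a memorised poem.
        head = None
        for w in reversed(["word%d" % i for i in range(1, 501)]):
            head = Node(w, head)

        def nth(head, n):
            """Walk the list and count -- exactly what "just count to 300" means."""
            node = head
            for _ in range(n - 1):
                node = node.next
            return node.word

        print(nth(head, 300))   # -> 'word300'
        ```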

        • plinky [he/him]@hexbear.net (OP)

          And computers can count; that’s basically all they do as Turing machines. We can’t, or not that well; feels like there’s a mismatch in mediums here🤔. If I took 10 people who know the same poem, what are the odds I’d get the same word from all of them?

          Can that linked list in the brain be accessed in all contexts, then? Can you sing a hip-hop song while a death metal backing track is playing?

          Moreover, a linked list implies the middle parts are not accessible without going through the preceding elements. Do you honestly think that’s a good analogue for human memory?

          • m532 [she/her]@hexbear.net

            Humans have fingers so they can count, so the odds 10 people get the same word should be 100%.

            I can plug my ears.

            I could implement a linked list connected to a hash map that can be accessed from the middle.
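
            (Something like this, as a rough sketch: made-up Node objects chained into a linked list, plus a dict standing in for the hash map, so you can jump in at any remembered line:)

            ```python
            class Node:
                def __init__(self, line):
                    self.line, self.next = line, None

            lines = ["first verse", "second verse", "chorus", "third verse", "outro"]

            # Chain the nodes into a linked list...
            nodes = [Node(l) for l in lines]
            for a, b in zip(nodes, nodes[1:]):
                a.next = b

            # ...and index them in a hash map, so the middle is directly reachable.
            index = {n.line: n for n in nodes}

            def sing_from(cue):
                """Jump in at any remembered line and continue to the end."""
                node = index[cue]
                while node:
                    print(node.line)
                    node = node.next

            sing_from("chorus")   # starts mid-list: chorus, third verse, outro
            ```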

            • plinky [he/him]@hexbear.net (OP)

              Lol @100 percent.

              So which one does the brain do, then? A linked list with hash maps? Is that the final simple computer analogy? Maybe an indexed binary tree? Or maybe it’s none of those?

              • m532 [she/her]@hexbear.net

                When I want to recall a song, I have to remember one part, and then I can play it in my head. However, I can’t just skip to the end.

                Linked list

                • plinky [he/him]@hexbear.net (OP)

                  So if the second verse plays, you can’t sing along until your brain parses through the previous verses? I find that rather hard to believe.

                  • m532 [she/her]@hexbear.net

                    Just because linked lists are usually implemented with a starting point doesn’t mean they have to be. Content plus a pointer to the next object is all that’s needed for an element of a linked list. It could even be cyclic.

                • plinky [he/him]@hexbear.net (OP)

                  You can try to find the 200th word on a physical book page; I suspect on the first tries you’ll get different answers. It’s not dumbness: with a poem it’s rather complicated counting-while-reciting (and gesturing, if you use your hands), and a direct count while you are bored (as with a book) might make the mind either skip words or cycle numbers. We aren’t built for counting; fiddling with complicated math is simpler than doing a direct, boring count.

      • Tomorrow_Farewell [any, they/them]@hexbear.net

        If I asked you what the 300th word of the poem is, you couldn’t do it. A computer can

        I’m sorry, but this is a silly argument. Somebody might very well be able to tell you what the 300th word of a poem is, while a computer that stored that poem as a .bmp file wouldn’t be able to (without tools other than just basic stuff that allows it to show you .bmp images). In different contexts we remember different things about stuff.

        • plinky [he/him]@hexbear.net (OP)

          Generally you can’t, though. Of course there are people who remember in different ways, or who can remember pi to untold digits. That doesn’t mean there are tiny byte-like engravings in their brain, or that they could recall it perfectly some time from now. A computer can tell you what the 300th pixel of an image is, while you don’t even have pixels, or a fixed shape for a visual memory. Maybe it’s a wide shot of nature, or maybe it’s a reflection of the light in the eyes of your loved one.

          • Frank [he/him, he/him]@hexbear.net

            People don’t think that brains are silicon chips running code through logic gates. At least, the vast majority of people don’t.

            The point we’re making here is that both computers and human minds follow a process to arrive at a given conclusion, or count to 300, or determine where the 300th pixel is in an image. A computer doesn’t do that magically; there’s a program that runs and counts to 300. A human would have to dig out a magnifying glass and count to three hundred. The details are different, but both are counting to 300.

            • plinky [he/him]@hexbear.net (OP)

              Because that’s a task for a computer. My second example: giving you two words, that would be slower for a computer than arriving at the 300th word, while for you it would be significantly faster than counting.

              Fundamentally, the question is: is the brain a Turing machine? I rather think not, but it could be simulated as one with some untold complexity.

              • Tomorrow_Farewell [any, they/them]@hexbear.net

                Because that’s a task for a computer. My second example: giving you two words, that would be slower for a computer than arriving at the 300th word, while for you it would be significantly faster than counting

                If your thesis is that human brains do not work in exactly the same way, and not that the analogy with computers in general is wrong, then sure, but nobody disagrees with that thesis. I don’t think that any adult alive has proposed that a human brain is just a conventional binary computer.

                However, this argument fails when it comes to the thesis of an analogy with computers in general. I’m not sure how it is even supposed to be addressing it.

                Fundamentally, the question is: is the brain a Turing machine? I rather think not

                Well, firstly, a Turing machine is an idea, and not an actual device or a type of device.
                Secondly, if your thesis is that a Turing machine does not model how brains work, then what’s your argument here?

                • plinky [he/him]@hexbear.net (OP)

                  Of course I can’t prove that the brain is not a Turing machine; I would be world famous if I could. Computers are Turing machines, yes? They cannot do non-Turing-machine operations (decisions, or whatever that’s called).

                  What does comparing the computer with the brain give to science? I’m asking for the third time in this thread. What insight does it provide, aside from mechanizing us to the world? That short-term memory exists? A Stone Age child could tell you that. That information goes from the eyes as bits like a camera? That’s already significantly wrong. That you recall like a photograph read out from your computer? Also very likely wrong.

                  • Tomorrow_Farewell [any, they/them]@hexbear.net

                    Of course I can’t prove that the brain is not a Turing machine; I would be world famous if I could

                    Okay, so, what is your basis for thinking that, for example, if a brain was given some set of rules such as ‘if you are given the symbol “A”, think of number 1 and go to the next symbol’ and ‘if you are given the symbol “B” and are thinking of number 1, think of number 2 and go back by two symbols’ and some sequence of symbols, that that brain wouldn’t be capable of working with those rules?

                    Computers are Turing machines, yes?

                    As in, they are modelled by Turing machines sufficiently well in some sense? Sure.

                    They cannot do non-Turing-machine operations (decisions, or whatever that’s called)

                    What? What are ‘non-Turing-machine operations’? The term ‘Turing machine’ refers to generalisations of finite automata. In this context, what they are doing is receiving input and reacting to it depending on their current state. I can provide some examples of finite automata implementations in Python code, if you want me to.
                    The word ‘decision’ doesn’t carry any meaning in this context.
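
                    For instance, a minimal finite automaton sketch in Python (a made-up two-state machine that tracks whether it has seen an even number of 1s; it just receives input and reacts depending on its current state):

                    ```python
                    # Minimal finite automaton: two states, a transition table, nothing else.
                    # This one tracks whether the number of 1s seen so far is even or odd.
                    TRANSITIONS = {
                        ("even", "0"): "even", ("even", "1"): "odd",
                        ("odd",  "0"): "odd",  ("odd",  "1"): "even",
                    }

                    def run(symbols, state="even"):
                        """Read the input symbol by symbol, reacting based on the current state."""
                        for s in symbols:
                            state = TRANSITIONS[(state, s)]
                        return state

                    print(run("10110"))   # three 1s seen -> 'odd'
                    print(run("1001"))    # two 1s seen   -> 'even'
                    ```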

                    What does comparing the computer with the brain give to science? I’m asking for the third time in this thread

                    I don’t recall you asking this question before, and I do not have an answer. I also don’t see the question as relevant to the exchange so far.

                    That information goes from the eyes as bits like a camera? That’s already significantly wrong

                    A bit is a unit of information. If we treat the signal that the eyes send to the brain as carrying any sort of information, you can’t argue that the brain doesn’t (EDIT: I initially forgot to include the word ‘doesn’t’) receive the information in bits. If you claim otherwise, you don’t understand what information is and/or what bits are.
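
                    (For reference, the standard definition being leaned on here; nothing beyond textbook information theory:)

                    ```latex
                    % Information carried by a signal that selects one of N equally likely alternatives:
                    I = \log_2 N \ \text{bits} \qquad \text{(e.g. } N = 8 \Rightarrow I = 3 \text{ bits)}
                    ```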

                    That you recall like a photograph read out from your computer? Also very likely wrong

                    Nobody is claiming, however, that your brain pulls up an analogue of a .bmp when you recall an image. You likely remember some details of an image, and ‘subconsciously’ reconstruct the ‘gaps’. Computers can handle such tasks just fine, as well.

                  • Frank [he/him, he/him]@hexbear.net

                    That information goes from the eyes as bits like a camera?

                    Information goes into the optic nerve as electrical signals, which is why we can glue wires to the optic nerve and use a camera to send visual information to the brain. I think we’ve been able to do that for twenty years. We just need a computer to change the bits from the camera into the correct electrical impulses.

              • bumpusoot [any]@hexbear.net

                Firstly, I want to say it’s cool you’re positively engaging and stimulating a lot of conversation around this.

                As far as Turing machines go, it’s only a concept that’s meant to show a fundamental “level” of computing (“Turing completeness”): what a computing device can or cannot achieve. As you agree a Turing machine could ‘simulate’ a brain (and we know brains can simulate a Turing machine - we invented them!), then conceptually, yes, the brain is computationally equivalent; it is ‘Turing complete’, albeit with some randomness thrown in.

                • Frank [he/him, he/him]@hexbear.net

                  some randomness thrown in.

                  I remain extremely mad at the Quantum jerks for demonstrating that the universe is almost certainly not deterministic. I refuse to be cool about it.

                • plinky [he/him]@hexbear.net (OP)

                  We can simulate a water molecule; does that make it a Turing machine, then? Is a single protein? A whole cell? 1000 cells in some invertebrate?

                  Simulation doesn’t work backwards; it doesn’t imply an equivalence of Turing completeness in both directions. If the brain is a Turing machine, we can map its whole function one-to-one onto any existing Turing machine, not just simulate it with some degree of accuracy.

          • Tomorrow_Farewell [any, they/them]@hexbear.net

            Again, though, this simply works to reinforce the computer analogy, considering stuff like file formats. You also have to concede that a conventional computer that stores the poem as a .bmp file isn’t going to tell you what the 300th word in it is (again, without tools like text recognition), just like a human is generally not going to be able to (in the sort of timespan that you have in mind, that is - it’s perfectly possible and probable for a person who has memorised the poem to tell what the 300th word is, it would just take a bit of time).

            Again, we can also remember different things about different objects, just like conventional computers can store files of different formats.
            A software engineer might see something like ‘O(x)’ and immediately think ‘oh, this is likely a fast algorithm’, remembering the connection between the time complexity of algorithms and big-O notation. Meanwhile, what immediately comes to mind for me is ‘what filter base are we talking about?’, as I am going to remember that classes of finally relatively bounded functions differ between filter bases. Or, a better example: we can have two people who have played, say, Starcraft. One of them might tell you that some building costs this amount of resources, while the other one won’t be able to tell you that, but will be able to tell you that they usually get to afford it by such-and-such point in time.

            Also, if you are going to point out that a computer can’t tell if a particular image is of a ‘wide shot of nature’ or of a ‘reflection of the light in the eyes of one’s loved one’, you will have to contend with the fact that image recognition software exists and it can, in fact, be trained to tell such things in a lot of cases, while many people are going to have issues with telling you relevant information. In particular, a person with severe face blindness might not be able to tell you what person a particular image is supposed to depict.

            • plinky [he/him]@hexbear.net (OP)

              I’m talking about visual memory, what you see when you recall it, not about image recognition. Computers could recognize faces 30 years ago.

              I’m suggesting that it’s not linked lists, or images or sounds or bytes in some way, but rather closer to persistent hallucinations of self-referential neural networks upon specified input (whether cognitive or otherwise), which also mutate in place, both by themselves and by recall, yet not completely wildly. It’s a picture rather far away from memory as an engraving on a stone tablet/leather/magnetic tape/optical storage/logic gates in RAM. ‘Memory is like a growing tree or an old house’ is not exactly the most helpful metaphor, but it’s probably closer to what memory does than a linked list.

              • Tomorrow_Farewell [any, they/them]@hexbear.net

                I’m talking about visual memory, what you see when you recall it, not about image recognition

                What is ‘visual memory’, then?
                Also, on what grounds are you going to claim that a computer can’t have ‘visual memory’?
                And why is image recognition suddenly irrelevant here?

                So far, this seems rather arbitrary.
                Also, people usually do not keep a memory of an image of a poem if they are memorising it, as far as I can tell, so this pivot to ‘visual memory’ seems irrelevant to what you were saying previously.

                I’m suggesting that it’s not linked lists, or images or sounds or bytes in some way, but rather closer to persistent hallucinations of self-referential neural networks upon specified input

                So, what’s the difference?

                which also mutate in place, both by themselves and by recall, yet not completely wildly

                And? I can just as well point out the fact that hard drives and SSDs do suffer from memory corruption with time, and there is also the fact that a computer can be designed in a way that its memory gets changed every time it is accessed. Now what?
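
                (That last design is easy to sketch; say, a made-up Python class whose read operation slightly perturbs what it stores, with the ‘drift’ parameter invented purely for illustration:)

                ```python
                import random

                class DriftingMemory:
                    """Storage whose contents shift a little every time they are recalled."""
                    def __init__(self, value, drift=0.05):
                        self.value, self.drift = value, drift

                    def recall(self):
                        # Reading the memory also rewrites it, slightly.
                        self.value += random.gauss(0, self.drift)
                        return self.value

                m = DriftingMemory(1.00)
                print([round(m.recall(), 3) for _ in range(5)])   # each recall nudges the stored value
                ```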

                ‘Memory is like a growing tree or an old house’ is not exactly the most helpful metaphor, but it’s probably closer to what memory does than a linked list

                Things that are literally called ‘biological computers’ are a thing. While not all of them feature the ability to ‘grow’ memory, it should be pretty clear that computers have this capability.

                • plinky [he/him]@hexbear.net (OP)

                  What is visual memory, indeed, in the informational analogy? Do tell me. Does it have a consistent or persistent size, shape, or anything resembling a .bmp file?

                  The difference is neural networks are bolted on structures, not information.

                  • Tomorrow_Farewell [any, they/them]@hexbear.net

                    What is visual memory, indeed, in the informational analogy? Do tell me.

                    It’s not considered as some special type of memory in this context. Unless you have a case for the opposite, this stuff is irrelevant.

                    Does it have a consistent or persistent size, shape, or anything resembling a .bmp file?

                    Depends on a particular analogy.
                    In any case, this question seems irrelevant and rather silly. Is the force of a gravitational pull in models of Newtonian physics constant, does it have a shape, is it a real number, or a vector in R^2, or a vector in R^3, or a vector in R^4, or some other sort of tensor? Obviously, that depends on the relevant context regarding those models.

                    Also, in what sense would a memory have a ‘shape’ in any relevant analogy?

                    The difference is neural networks are bolted on structures, not information

                    Obviously, this sentence makes no sense if it is considered literally. So, you have to explain what you mean by that.

      • Abracadaniel [he/him]@hexbear.net

        outside of couple of weirdos, people cannot remember gibberish, try to remember hash number of something for a day

        Don’t appreciate the ableist language here just because neurodivergence is inconvenient to your argument. I can fairly easily memorize my credit card number.

        • plinky [he/him]@hexbear.net (OP)

          I can as well; hashes are much worse because of the base-16 number system.

          I was mainly pointing out that it’s not typical brain activity to remember info we don’t perceive as memorable, despite its information content. It’s not a poke at ND folks (why would people remembering 10,000 digits of pi be ND?), but I’ll change it, you are right.

    • TraumaDumpling@hexbear.net

      The point is that humans have subjective experiences in addition to, or in place of, whatever processes we could describe as information processing. Since we aren’t sure what is responsible for subjective experiences in humans (we understand increasingly more of the physical correlates of conscious experience, but have no causal theories that can explain how the physical brain-states produce subjectivity), it would be presumptuous of us to assume we can simulate it in a digital computer. It may be possible with some future technology, field of science, or paradigm of thinking in mathematics or philosophy or something, but to assume we can just do it now with only trivial modifications or additions to our theories is like humans of the past trying to tackle disease using miasma theory - we simply don’t understand the subject of study well enough to create accurate models of it. How exactly do you bridge the gap from objective physical phenomena to subjective experiential phenomena, even in theory? How much, or what kind, of information processing results in something like subjective experiential awareness? If ‘consciousness is illusory’, then what is the exact nature of the illusion, what is the illusion for the benefit of (i.e. what is the illusion concealing, and what is being kept ignorant by this illusion?), and how can we explain it in terms of physics and information processing?

      It is just as presumptuous to assume that digital computers CAN simulate human consciousness without losing anything important as it is to assume that they cannot.

    • Parsani [love/loves, comrade/them]@hexbear.net

      Also, if bozo could please explain how trained oral historians and poets can recall thousands of stanzas of poetry verbatim with few or no errors I’d love to hear that, because it raises some questions about the dollar bill “experiment”.

      Through learned, embodied habit. They know it in their bones and muscles. It isn’t the mechanical reproduction of a computer or machine.

      Imo I don’t think we could ever “upload a brain”, and even if we did, it would be a horrific subjective experience. So much of our sense of self and of consciousness is learned and developed over time through being in the world as a body. Losing a limb has a significant impact on someone’s consciousness (phantom limbs, which can hurt); imagine losing your entire body. This thought experiment still rests on the assumption that the brain alone is the entire seat of conscious experience, which is doubtful, as it just falls into a mind/body dualism under the idea that the brain is a CPU that could simply be plugged into something else.

      Could there be an emergent conscious AI at some point? Perhaps, but as far as we can tell it may very well require a kind of childhood and slow development of embodied experience in a similar capacity to how any known lifeform becomes conscious. Not a human brain shoved into a vat.