I’ve been reading about recent research on how the human brain processes and stores memories, and it’s fascinating! It seems that our brains compress and store memories in a simplified, low-resolution format rather than as detailed, high-resolution recordings. When we recall these memories, we reconstruct them based on these compressed representations. This process has several advantages, such as efficiency, flexibility, and prioritization of important information.

Given this understanding of human cognition, I can’t help but wonder why AI isn’t being trained in a similar way. Instead of processing and storing vast amounts of data in high detail, why not develop AI systems that can compress and decompress input like the human brain? This could potentially lead to more efficient learning and memory management in AI, similar to how our brains handle information.

Are there any ongoing efforts in the AI community to explore this approach? What are the challenges and benefits of training AI to mimic this aspect of human memory? I’d love to hear your thoughts!

  • iii · 27 days ago

    AI does work like that.

    With (variational) auto-encoders, it’s very explicit.

    With shallow convolutional neural networks, it’s fun to visualize the trained kernel weights, as they often return an abstract, to me dreamlike, representation of the thing being trained for. Although derived through a different method, search for “eigenfaces” as an example of what I mean.
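    For a rough feel of what eigenfaces are, here is a minimal sketch: PCA via SVD on toy data, where random arrays stand in for real face images. Each “face” gets compressed down to just 10 numbers and then reconstructed from them:

```python
import numpy as np

# Toy "faces": 100 flattened 8x8 grayscale images (random stand-ins here)
rng = np.random.default_rng(0)
faces = rng.random((100, 64))

# Center the data and take the top principal components via SVD
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)

eigenfaces = vt[:10]                 # top 10 eigenfaces, each reshapeable to 8x8
weights = centered @ eigenfaces.T    # each face compressed to just 10 numbers

# Reconstruct an approximation from the compressed weights
reconstructed = weights @ eigenfaces + mean_face
print(reconstructed.shape)           # (100, 64)
```

    With real images, reshaping each row of `eigenfaces` back to 8x8 and plotting it gives those ghostly averaged-face pictures.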

    In the recently hyped model architecture, attention and transformers, the encoded state can be thought of as a compressed version of its input. But human interpretation of those values is challenging.
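    A minimal single-head attention sketch shows the shape of that encoded state. The dimensions and weights here are toy values, untrained, just to illustrate that each row of the output is a weighted mix of the whole input sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.random((6, 16))   # 6 input tokens, 16-dim embeddings

# Single-head scaled dot-product attention with random (untrained) weights
wq, wk, wv = (rng.normal(scale=0.1, size=(16, 16)) for _ in range(3))
q, k, v = tokens @ wq, tokens @ wk, tokens @ wv

scores = q @ k.T / np.sqrt(16)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # each row sums to 1

encoded = attn @ v             # "encoded state": each row blends all tokens
print(encoded.shape)           # (6, 16)
```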

  • remotelove@lemmy.ca · 27 days ago

    That’s kinda how neural networks actually function. They don’t store massive amounts of data; instead, similar to us, they tweak and adjust complex pathways of neurons that kinda just convert an input into a response.

    When you ask an LLM a question you are actually getting a list of words based on probabilities, not anything the LLM had to “think about” before responding. During its training, different patterns fed to the AI tweak and balance how and when specific neurons should fire. One way to think about it is that “memories” or data is stored in how the paths are formed, not actually in the core of the neuron itself.
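    That last step can be sketched in a few lines. The vocabulary and scores here are made up, just to show that the “response” is a draw from a probability distribution rather than a retrieved fact:

```python
import numpy as np

# Hypothetical final-layer scores (logits) for a tiny 5-word vocabulary
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.1, 1.5])

# Softmax turns raw scores into a probability distribution
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The "response" is just a sample from that distribution
rng = np.random.default_rng(0)
next_word = vocab[rng.choice(len(vocab), p=probs)]
print(next_word)
```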

    There are several hundred configurations of artificial neural networks that can mimic different functions of our brains, including memory.

      • iii · 27 days ago

        Not necessarily; sometimes dimensionality reduction (the more common term for what is basically compression) is the explicit goal.

        Can be used for outlier detection, similarity search, etc.

        During training, you find a projection of the input, for example an image, to a smaller space, and then back to the original image. This is referred to as encoding and decoding. The error function would be a measure of how similar the in- and output images are.
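        A minimal sketch of that training loop, using a linear autoencoder on toy data (real ones use nonlinear layers, but the encode/decode/error structure is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((200, 32))                 # 200 toy "images", 32 pixels each

# Encoder projects to a smaller space, decoder projects back
enc = rng.normal(scale=0.1, size=(32, 8))
dec = rng.normal(scale=0.1, size=(8, 32))

def mse(a, b):
    # Error function: how similar are the in- and output images?
    return ((a - b) ** 2).mean()

initial_error = mse(x @ enc @ dec, x)

lr = 0.05
for _ in range(500):
    z = x @ enc                           # encode: 32 -> 8 numbers per image
    x_hat = z @ dec                       # decode: back to 32
    err = x_hat - x
    # Gradient descent on the mean-squared reconstruction error
    enc -= lr * x.T @ (err @ dec.T) / len(x)
    dec -= lr * z.T @ err / len(x)

final_error = mse(x @ enc @ dec, x)
print(initial_error, final_error)         # the error shrinks with training
```

        After training, `z` is the compressed representation: 8 numbers per image instead of 32, from which an approximation of the input can be rebuilt.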