I’ve been reading about recent research on how the human brain processes and stores memories, and it’s fascinating! It seems that our brains compress and store memories in a simplified, low-resolution format rather than as detailed, high-resolution recordings. When we recall these memories, we reconstruct them based on these compressed representations. This process has several advantages, such as efficiency, flexibility, and prioritization of important information.

Given this understanding of human cognition, I can’t help but wonder why AI isn’t being trained in a similar way. Instead of processing and storing vast amounts of data in high detail, why not develop AI systems that can compress and decompress input like the human brain? This could potentially lead to more efficient learning and memory management in AI, similar to how our brains handle information.

Are there any ongoing efforts in the AI community to explore this approach? What are the challenges and benefits of training AI to mimic this aspect of human memory? I’d love to hear your thoughts!

  • iii · 26 days ago

    AI does work like that.

    With (variational) auto-encoders, it’s very explicit.
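
    Here’s a minimal sketch of one in PyTorch (the dimensions, names, and layer sizes are my own arbitrary illustrative choices, not any particular published model): the encoder squeezes the input into a handful of latent numbers, and “recall” is the decoder reconstructing from that compressed code.

    ```python
    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, input_dim=784, latent_dim=16):
            super().__init__()
            # Encoder: compress the input down to a small latent code.
            self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
            self.to_mu = nn.Linear(128, latent_dim)      # mean of the latent Gaussian
            self.to_logvar = nn.Linear(128, latent_dim)  # log-variance of the latent Gaussian
            # Decoder: reconstruct the input from the latent code.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim), nn.Sigmoid(),
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            # Reparameterization trick: sample a latent code differentiably.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z), mu, logvar

    # "Recall" is reconstruction: a 784-dim input squeezed through 16 numbers.
    x = torch.rand(1, 784)
    recon, mu, logvar = VAE()(x)
    ```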

    With shallow convolutional neural networks, it’s fun to visualize the trained kernel weights, as they often yield abstract, to me dreamlike, representations of the thing being trained for. Although derived through a different method, search for “eigenfaces” for an example of what I mean.
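
    As a sketch of what I mean by looking at kernels (assuming torchvision and matplotlib; the pretrained ResNet-18 here is just a convenient stand-in for “a trained CNN”):

    ```python
    # Plot the first-layer convolution kernels of a pretrained network.
    import matplotlib.pyplot as plt
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    kernels = model.conv1.weight.detach()  # shape: (64, 3, 7, 7)
    # Normalize to [0, 1] so the filters are displayable as RGB images.
    kernels = (kernels - kernels.min()) / (kernels.max() - kernels.min())

    fig, axes = plt.subplots(8, 8, figsize=(8, 8))
    for ax, k in zip(axes.flat, kernels):
        ax.imshow(k.permute(1, 2, 0))  # (C, H, W) -> (H, W, C) for imshow
        ax.axis("off")
    plt.show()
    ```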

    In the recently hyped model architectures, attention and transformers, the encoded state can be thought of as a compressed version of its input. But human interpretation of those values is challenging.
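
    You can peek at that encoded state directly. A sketch, assuming the Hugging Face transformers library (the model and sentence are arbitrary choices):

    ```python
    # Peek at a transformer's encoded state: each token becomes a dense
    # vector that downstream layers decode for their task.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModel.from_pretrained("distilbert-base-uncased")

    inputs = tok("The brain stores compressed memories.", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # A lossy, learned re-encoding of the sentence: hard for a human to read,
    # but everything the model "knows" about the input is in here.
    print(out.last_hidden_state.shape)  # e.g. torch.Size([1, 8, 768])
    ```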