Why is this not a thing?

  • Jumpinship@lemmy.world (OP) · 1 year ago

    Copied from a deleted Reddit account; it's quite brilliant:

    ELI5: the whole thing kind of works like a virus.

    So, the client doesn't know where your file is. It calls a bootstrap server for a list of some peers connected to the IPFS network, then asks around that list, progressively spreading like a virus transmitting between hosts.
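
    A toy sketch of that "ask around" step (the real IPFS lookup uses a Kademlia-style DHT with XOR-distance routing; every name here is made up for illustration):

    ```python
    # Toy model of peer discovery, not the real IPFS DHT (which is
    # Kademlia-based). `bootstrap_peers` and `ask` are made-up names.
    def find_provider(bootstrap_peers, file_hash, ask):
        """Spread outward from a few known peers until one of them
        (or someone they know) has the file."""
        to_visit = list(bootstrap_peers)
        seen = set()
        while to_visit:
            peer = to_visit.pop()
            if peer in seen:
                continue
            seen.add(peer)
            # One round trip: "do you have it? who else do you know?"
            has_file, more_peers = ask(peer, file_hash)
            if has_file:
                return peer
            to_visit.extend(more_peers)  # the "virus" spreads to new hosts
        return None
    ```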

    Once the "virus" finds the file, it brings it back to the client, stores a copy, and closes the connection. And since the client is now IN the network (it knows where some other peers are), it doesn't need the bootstrap server any more to retrieve further files. Other clients can now fetch the file from the original client too, so every copy makes finding those files faster, like auto-scaling.
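
    Continuing the toy model, the cache-and-serve part looks roughly like this: the fetching node keeps a copy, so from then on it can answer other peers' requests itself (again, illustrative names only, not the real API):

    ```python
    # Continues the sketch above: fetched blocks are cached locally,
    # so this node becomes another provider of the file.
    class ToyNode:
        def __init__(self, known_peers):
            self.known_peers = known_peers  # addresses learned so far
            self.store = {}                 # local cache, keyed by file hash

        def get(self, file_hash, ask, fetch):
            if file_hash in self.store:     # already cached, no network needed
                return self.store[file_hash]
            provider = find_provider(self.known_peers, file_hash, ask)
            data = fetch(provider, file_hash)
            self.store[file_hash] = data    # keep a copy...
            return data                     # ...so others can fetch it from us
    ```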

    IPFS built an algorithm around that concept to make file cleanup, lookup, deduplication, and integrity checking possible. It also uses a hash-based ID system to store data, so if the hashes match, you could get a piece from file A or file B (whichever is closer) to complete the set of blocks needed to build file C.
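
    That hash-matching bit is content addressing: split files into blocks, key each block by the hash of its bytes, and identical chunks in different files collapse to the same block. A minimal illustration of the idea (real IPFS chunks files into a Merkle DAG; this flat version just shows the core):

    ```python
    import hashlib

    # Blocks are keyed by the hash of their contents, so identical
    # chunks in different files resolve to the same block and only
    # need to be fetched once. Chunk size here is tiny for demo purposes.
    def chunk_hashes(data, size=4):
        blocks = [data[i:i + size] for i in range(0, len(data), size)]
        return [hashlib.sha256(b).hexdigest() for b in blocks]

    file_a = b"AAAABBBBCCCC"
    file_c = b"XXXXBBBBCCCC"  # shares two chunks with file_a

    a, c = chunk_hashes(file_a), chunk_hashes(file_c)
    shared = set(a) & set(c)
    print(f"file C reuses {len(shared)} of {len(c)} blocks")  # -> 2 of 3
    ```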

    It's a pretty clever system; if you're curious, it's totally worth reading into the details.

    Probably got a lot wrong, but that’s how I understand it.
    
    • InverseParallax@lemmy.world · 1 year ago

      It’s ludicrously slow.

      We're moving that way, and we'll get there. We need more nodes everywhere and, honestly, better distribution flows so it's closer to a multicast system, but that's probably the end goal.

  • eleitl@lemmy.ml · 1 year ago

    It's still too slow. Run your own node to get a feel for it.