• sugar_in_your_tea@sh.itjust.works · 10 months ago

    do you plan on using the ActivityPub protocol?

    Maybe later, but federation isn’t an initial goal.

    I want a completely distributed system like BitTorrent or IPFS, so all data is stored on user devices instead of centralized servers (though there might be some servers to help with availability). I want moderation to be distributed as well, but I’m trying to figure out an approach that promotes diversity instead of falling into the hands of whatever group comes first (e.g. with a voting model) or fracturing into lots of smaller groups (e.g. a web of trust).

    I feel moderation needs to be good from the start, so I’m holding off on integrating with other services until I figure that out.

    A unified platform structure will eventually lead to a unified ideology

    Perhaps. Communities help, but the real issue is the quality (or at least diversity) of moderation (i.e. the instance admins, until FT mods are chosen). Reddit worked well because it had pretty good moderation where it counted.

    • Lvxferre · 10 months ago

      Distributed system? Madman! You’re going a step further! Mad respect for that, seriously. Now I want to see your project succeed.

      Regarding moderation, did you see this text? I feel like it’s worth a try; I don’t expect it to devolve into web-of-trust-style “feuds”, as there’ll always be people acting as links between multiple groups, and it also prevents the “first come, first served” issue you mentioned.

      • sugar_in_your_tea@sh.itjust.works · 10 months ago

        Great article! I was thinking along these lines, so I’m glad to see a formalized version of it.

        What if participants could automatically block the malicious peer, if they discover that the peer has been blocked by someone the participant trusts?

        That’s essentially what I’m after. Here’s the basic mechanism I’ve been considering (sketched in code after the list):

        1. Users report posts, which builds trust with other users who reported the same post
        2. Users vote on posts, which builds trust with other users who voted the same way
        3. A post is removed for a given user if enough people they trust (per #1) reported it
        4. Ranking of posts, as well as suggestions for new communities, is based largely on #2
        5. Users can periodically review a moderation log (like Steam’s recommendation queue) to refine their moderation experience (e.g. agree or disagree with reports), and they can disable moderation entirely
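        To make that concrete, here’s a minimal Python sketch of steps 1–5. Everything in it is a hypothetical assumption for illustration (the TrustLedger name, the trust weights, the removal threshold), not a settled design:

        ```python
        # Hypothetical sketch of the trust-based moderation above.
        # All names and constants are illustrative assumptions.
        from collections import defaultdict

        REPORT_TRUST = 1.0       # trust gained per shared report (step 1)
        VOTE_TRUST = 0.1         # trust gained per matching vote (step 2)
        REMOVAL_THRESHOLD = 3.0  # trusted report weight that hides a post (step 3)

        class TrustLedger:
            def __init__(self):
                self.trust = defaultdict(lambda: defaultdict(float))  # trust[a][b]
                self.reports = defaultdict(set)   # post_id -> users who reported it
                self.votes = defaultdict(dict)    # post_id -> {user: +1 or -1}

            def report(self, user, post_id):
                # Step 1: a shared report strengthens mutual trust.
                for other in self.reports[post_id]:
                    if other != user:
                        self.trust[user][other] += REPORT_TRUST
                        self.trust[other][user] += REPORT_TRUST
                self.reports[post_id].add(user)

            def vote(self, user, post_id, direction):
                # Step 2: voting the same way builds (weaker) trust.
                for other, d in self.votes[post_id].items():
                    if other != user and d == direction:
                        self.trust[user][other] += VOTE_TRUST
                        self.trust[other][user] += VOTE_TRUST
                self.votes[post_id][user] = direction

            def is_removed_for(self, user, post_id):
                # Step 3: hide the post for this user if the people they
                # trust have reported it with enough combined weight.
                weight = sum(self.trust[user][r] for r in self.reports[post_id])
                return weight >= REMOVAL_THRESHOLD

            def rank_for(self, user, post_id):
                # Step 4: score a post by trust-weighted votes.
                return sum(self.trust[user][v] * d
                           for v, d in self.votes[post_id].items())

            def review(self, user, reporter, agree):
                # Step 5: reviewing the moderation log adjusts trust directly;
                # disabling moderation just means skipping is_removed_for.
                self.trust[user][reporter] += REPORT_TRUST if agree else -REPORT_TRUST
        ```

        The per-user removal in step 3 is what would keep this subjective rather than global: two users with different trust networks can see different moderation outcomes, which is also what makes the opt-out in step 5 cheap.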

        And since content needs to be stored on people’s machines, users would be less likely to host posts they disagree with, so hopefully very unpopular posts (e.g. CSAM) disappear.

        So I’m glad this is formalized; I can probably learn quite a bit from it.