There are various reasons Lemmy succeeded as a Reddit alternative where others failed. One of the underappreciated ones is probably that the devs were communists. I know that sounds a little strange.
Great article! I was thinking along these lines, so I’m glad to see a formalized version of it.
What if participants could automatically block the malicious peer, if they discover that the peer has been blocked by someone the participant trusts?
That’s essentially what I’m after. Here’s the basic mechanism I’ve been considering (rough sketch below):
1. Users report posts, which builds trust with the other users who reported the same post
2. Users vote on posts, which builds trust with the other users who voted the same way
3. A post is removed for a given user once enough trusted people from #1 have reported it
4. Ranking of posts is based largely on #2, as are suggestions for new communities
5. Users can periodically review a moderation log (like Steam’s recommendation queue) to refine their moderation experience (e.g. agree or disagree with reports), and they can disable moderation entirely
And since content needs to be stored on people’s machines, users would be less likely to host posts they disagree with, so hopefully very unpopular posts (e.g. CSAM) disappear entirely.
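Here’s a rough sketch of how I imagine the bookkeeping on a single user’s machine, just to make the steps above concrete. Everything in it, the `TrustModel` name, the additive scores, the fixed removal threshold, is made up for illustration; a real system would also need score decay and some Sybil resistance:

```python
from collections import defaultdict


class TrustModel:
    """One user's view of peer trust, built from agreement on reports
    and votes. Names and thresholds are hypothetical illustrations."""

    def __init__(self, removal_threshold: float = 3.0):
        self.removal_threshold = removal_threshold
        # trust[peer] grows each time this user and `peer` act alike
        self.trust: defaultdict[str, float] = defaultdict(float)

    def record_agreement(self, peer: str, weight: float = 1.0) -> None:
        """Reporting the same post, or voting the same way, as `peer`
        makes their future reports and votes count more for us (#1, #2)."""
        self.trust[peer] += weight

    def review(self, peer: str, agree: bool) -> None:
        """Refine trust from the periodic moderation-log review (#5):
        agreeing with a peer's report raises trust, disagreeing lowers it."""
        self.trust[peer] += 1.0 if agree else -1.0

    def should_remove(self, reporters: set[str]) -> bool:
        """A post disappears *for this user* once the summed trust of
        its reporters crosses the threshold (#3)."""
        return sum(self.trust[p] for p in reporters) >= self.removal_threshold

    def rank_score(self, upvoters: set[str], downvoters: set[str]) -> float:
        """Rank posts by trust-weighted votes (#4)."""
        up = sum(self.trust[v] for v in upvoters)
        down = sum(self.trust[v] for v in downvoters)
        return up - down


me = TrustModel()
me.record_agreement("alice", weight=2.0)   # we co-reported two posts
me.record_agreement("bob")                 # we voted the same way once
print(me.should_remove({"alice", "bob"}))  # True: 2.0 + 1.0 >= 3.0
```

The point is just that removal and ranking become trust-weighted counting, so two users who disagree about the same reporters end up with different feeds from the same underlying data.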
So I’m glad this is formalized; I can probably learn quite a bit from it.