If such a constraint were set, the community would not be able to pay for the amount of value that is currently provided by volunteer work. You would have to pay a prohibitively large amount of money, or severely diminish the overall quality by cutting costs or by not providing a living wage (not “minimum wage”) to the moderators.
The entire concept of “moderation” doesn’t make a lot of sense in a federated environment. Anyone can start an instance, and post anything they want to that instance. Moderators can block local content from being posted to the general public, and they can block local users from viewing remote content.
But they can’t block the general public from accessing remote content.
Moderation is no longer the gatekeeping of content. It is now about limiting consumption.
Think of moderation in this context as a subscribable ad-block list, except it can block more than spam.
Your ad blocker already ships with subscription lists by default, and you can easily change them.
I don’t doubt that mod abuse was an endemic problem on Reddit, and none of the following applies to big subs, communities, or instances.
But mods are a necessary part of healthy communities. They not only police a community’s rules but also run it, and sometimes they end up being the ones handling sensitive or controversial information.
I doubt it commonly happens in huge communities, but when people feel they belong to a community, they start to feel safe sharing things they probably wouldn’t share elsewhere online (if at all). The day may arrive when some of these people find themselves in a mental health crisis, on the verge of suicide, and, being otherwise alone, turn to these communities as a last resort, or outright post a suicide note. In most cases it is up to the mods to deal with that situation. Other things, such as doxxing, or naive people who put themselves in danger or expose too much without realising it, also need to be discussed privately.
Mods do this many times because there is simply nobody else to do it. They are the only ones there, the only ones able to do something, even if only a little, and if that was already a thing on Reddit, it will be even more of a thing here.
The same goes for controversial decisions, such as who is to become a mod and whether somebody deserves a ban.
At the end of the day, you need people passionate about the community who will do that work for free and for real (I can tell from personal experience that moderating is a thankless hobby), not people managing tons of things for a wage or some other kind of benefit.
TL;DR: Sometimes mods have to deal with sensitive stuff that shouldn’t be in the public eye at all.
Moderators do not exist outside of the public. They are not special individuals with special motives. If there is bad content out there, it will either be retracted by the source, removed by the instance owners as a public and high-friction action, or suppressed by the public consensus of many people voting for it to be moderated out of view. And if it is being persistently posted by one user, that user should be ostracized off the instance, either by public intervention from the instance admin (account deletion) or algorithmically by mass voting from peers.
Beyond that, and it’s already a good compromise, the user will not be protected from themselves, because that opens the door to infinite abuse by the moderator class.
It’s just a fact of life that when you send information out there, there is no getting it back. The system to get it back is the system that enabled Reddit’s despotism and is now being leveraged through the APIs. Nobody should be more powerful than an owner, except a group of users.
Leaving the thankless task of moderation to volunteers working for free entitles them to secrecy and unaccountability.
Moderation should be a crowdsourced subscription service that pays at least minimum wage.
Users should be able to choose directly which moderation mask they wish to wear, and they should pay for it. If not in real money (because of fees and regulations), then at least in some form of actually valuable, redeemable token to compensate moderators for their service.
The user should always have the final word on who and what they don’t want to see, and which moderators they subscribe to.
Moderation must be transparent; modmail must be public.
Moderators should not feel entitled to secrecy or abuse of their position (as would be the case for free work).
There would not be systemic power to wield by the moderators as a class, against the user class or the owner class.
It should be possible for one moderator to override or cancel the actions of another. Then it comes down to the order of operations in your list of moderator subscriptions.
Lemmy communities should publish a default set of moderators, from which the user can pick and choose. Moderators should be able to publish multiple “moderation masks,” and you pick the one you like most. For example: “everything minus spam,” “everything minus spam and sex,” “everything minus spam and religion,” “everything minus spam, racism, kletophobia and Jeff Bezos,” and so on.
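A published mask could be as simple as a named set of excluded topic tags. A minimal sketch, assuming hypothetical tag names and mask labels (nothing here is an existing Lemmy feature):

```python
# Hypothetical sketch: a "moderation mask" as a named set of excluded
# topic tags a moderator publishes. Mask names and tags are illustrative.

MASKS = {
    "everything minus spam": {"spam"},
    "everything minus spam and sex": {"spam", "sex"},
    "everything minus spam and religion": {"spam", "religion"},
}

def passes_mask(post_tags, mask_name, masks=MASKS):
    """A post is shown only if none of its tags are excluded by the mask."""
    return not (set(post_tags) & masks[mask_name])

print(passes_mask(["memes"], "everything minus spam"))             # True
print(passes_mask(["spam", "ads"], "everything minus spam"))       # False
print(passes_mask(["religion"], "everything minus spam and sex"))  # True
```

Switching masks is then just changing which name your client looks up; the underlying posts never move.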
Correct me if I’m wrong, but I thought Lemmy had mod logs by default.
Defederation is the moderation mask you speak of. Beehaw, for example, has the tankie instance blocked.
I also don’t find the distinction between owner and moderator useful. In the case of decentralized Lemmy instances with fewer than 1k users, a single person may act as both moderator and owner.
Defederation is an incredibly extreme measure, and it should be easily bypassed by the user’s client. It should even be a meaningless action.
By “moderation mask” I mean a kind of moderation action log. Your client reads it and applies it as an overlay that hides things from your view.
If a user’s comment is marked as deleted by a moderator, and you’re subscribed to that moderator’s mask, then your client treats it as deleted.
You can go into the mask and see whether you agree with the moderator’s decision. If you don’t, subscribe to other moderators.
The moderators become actors on your behalf whom you can override, if you don’t mind wading through the crap yourself.
You could be subscribed to hundreds of moderators, maybe every user on the fediverse. Your client might only act on a moderation action if, say, at least 10 independent users took the same moderation action and established a consensus.
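That consensus rule is easy to sketch client-side. Assuming each subscribed mask is just the set of post IDs its moderator marked as deleted (the names `visible_posts` and `QUORUM` are hypothetical, not a real client API):

```python
from collections import defaultdict

QUORUM = 10  # minimum independent moderators agreeing before the client acts

def visible_posts(posts, subscribed_masks, quorum=QUORUM):
    """Apply subscribed moderation masks as a local overlay.

    posts: list of post ids to display.
    subscribed_masks: one set of "deleted" post ids per subscribed moderator.
    A post is hidden only when at least `quorum` masks agree; nothing is
    ever removed from the underlying instances.
    """
    delete_votes = defaultdict(int)
    for mask in subscribed_masks:
        for post_id in mask:
            delete_votes[post_id] += 1
    return [p for p in posts if delete_votes[p] < quorum]

# Example: 12 moderators flag post "spam-1", but only 3 flag "edgy-2".
masks = [{"spam-1"} for _ in range(12)] + [{"edgy-2"} for _ in range(3)]
print(visible_posts(["spam-1", "edgy-2", "ok-3"], masks))
# -> ['edgy-2', 'ok-3']
```

Overriding a moderator is then just dropping their set from `subscribed_masks` (or lowering the quorum) and re-rendering.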
I agree except someone has to host the content and they should get to decide what’s not allowed.
I don’t think owners should have a say in the content of the discussion. That is something for users and moderators to decide. I have had enough of the “take my ball and go home” situations. Make the database an encrypted blob that cannot be inspected directly.
Alternatively, there is no reason for instances to exist at all. All computer users have orders of magnitude more compute power than is needed to each run their own single-user instance, on a five-year-old phone or even in a dishwasher.
Text is small. Being a hard drive owner would not grant leverage over the freedom of expression of others.
Lemmy is not distributed enough, there is still too much power concentrated in the hands of the owner class and the moderator class.
Users should be their own owners. Moderators will still be needed but they will only operate with the consent of each user to mask off the content they don’t want to see.
Tbh it sounds like you really don’t like Lemmy and would rather use something else instead
You want me to host a community on my phone but only some stranger is allowed to curate the content on it? I don’t really understand this system you’re describing.
Any Lemmyverse user, including you, would be allowed to emit an opinion regarding the moderation actions performed on it.
In the same way votes are currently emitted, moderation actions should be “emitted,” and every user can decide to follow them, override them, or apply any kind of rule to interpret them.
For instance, a rule could be: if 100 users have emitted “delete,” then enact this action in my locally filtered view.
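The emit side could look like vote records. A minimal sketch of that 100-user rule, assuming emitted actions arrive as `(user_id, action, target_id)` tuples (the field names and `locally_deleted` helper are assumptions, not an existing protocol):

```python
from collections import Counter

DELETE_THRESHOLD = 100  # enact "delete" locally once 100 distinct users agree

def locally_deleted(emitted_actions, threshold=DELETE_THRESHOLD):
    """emitted_actions: iterable of (user_id, action, target_id) tuples.

    Returns the set of targets hidden in this client's filtered view.
    Each user counts at most once per target, so one account cannot
    reach the threshold by emitting the same action repeatedly.
    """
    voters = Counter()
    seen = set()
    for user_id, action, target_id in emitted_actions:
        if action == "delete" and (user_id, target_id) not in seen:
            seen.add((user_id, target_id))
            voters[target_id] += 1
    return {t for t, n in voters.items() if n >= threshold}

# 100 distinct users emit "delete" on comment "c1"; only 5 do so on "c2".
actions = [(f"user{i}", "delete", "c1") for i in range(100)]
actions += [(f"user{i}", "delete", "c2") for i in range(5)]
print(locally_deleted(actions))
# -> {'c1'}
```

Because the rule runs in each client, two users reading the same thread can apply different thresholds, or different rules entirely, over the same stream of emitted actions.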