Reddit implemented this, and it was abused heavily to push troll posts and disinformation up the algorithm: by blocking everyone who disagreed with them, bad actors ensured that after a couple of attempts the naysayers could no longer see or respond to their posts.

Somebody tested it and was able to get their test misinformation posts heavily upvoted after just a few days:

https://www.reddit.com/r/TheoryOfReddit/comments/sdcsx3/testing_reddits_new_block_feature_and_its_effects/

This has happened to me multiple times. I called somebody out for saying something wrong or bigoted or whatever, they responded and then blocked me, so I could no longer reply to their response. And presumably they kept saying shit I was no longer able to see, because I was blocked.

It’s a short-sighted way of implementing blocking, since it allows for heavy abuse by bad actors.
Yeah, there was plenty of discussion on Reddit back in the day about the drawbacks and pitfalls of the blocking system. Surprised to see people calling for its implementation here.

Is there a long-sighted way to implement blocking?

Not when it takes one minute to create a new account.

@Blaze@feddit.org, genuinely interested in your opinion on this considering the new information.

Do you really believe that someone could get a misinformation post heavily upvoted here? The main difference with Reddit is that if someone did something similar here, they would at the very least get called out on !fediverselore@lemmy.ca or !yepowertrippinbastards@lemmy.dbzer0.com, and mods and admins would be called on to act. Reddit has no such mechanisms.
Moderation does not matter if the post is made on a comm or instance which favors it, cough .ml cough.
I disagree with you to some extent.

Bots and brigading are not the issue here. Neither was a factor in the post I linked, and neither is a necessary part of the abuse process under discussion.

Yepowertrippinbastards works at a small scale, but it is not inherently scalable: as the fediverse grows, naming and shaming bad actors individually will become less and less practical. It also doesn’t help when the abuse technique (a pre-emptive blocklist) can be set up by any brand-new account.

The very nature of the abuse being described means that anybody who would report it on YPTB or similar comms can only do so once before being blocked themselves and losing sight of future posts of that sort.

We should try to keep in mind that the fediverse and Lemmy will likely grow to much larger scales. Any systems and safety measures we implement should take that into account. The block mechanism you suggest is extremely ripe for abuse at large scale, and relying on mods / admins to combat it will place an unnecessary extra load on them, if it is even possible.
> The block mechanism you suggest is extremely ripe for abuse at large scale, and relying on mods / admins to combat it will place an unnecessary extra load on them, if it is even possible.

Interestingly enough, I feel like it is the current system that requires mods/admins to keep watch at all times, since harassment can happen at any moment and users can’t really protect themselves.

There is a scenario which is exactly the opposite of the one you presented:

- a user gets harassed and blocks the harasser;
- the harasser can still comment on every comment and post of that user, requiring mods and admins to jump in to stop the abuse.

With the Bluesky system, users themselves can prevent that.
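For reference, Bluesky exposes this kind of author-side control as `app.bsky.feed.threadgate` records attached to a post. As a rough illustration only (the field layout below is my reading of the lexicon, and the URI is made up), a user restricting who can reply to their own post would publish something like:

```python
# Sketch of a Bluesky threadgate record, which lets a post's author
# restrict who may reply to it. Treat the exact field layout as an
# assumption based on the app.bsky.feed.threadgate lexicon.
from datetime import datetime, timezone

post_uri = "at://did:plc:example/app.bsky.feed.post/3k2yihcrl6d27"  # made-up post URI

threadgate_record = {
    "$type": "app.bsky.feed.threadgate",
    "post": post_uri,
    # Only accounts the author follows, or accounts mentioned in the
    # post, may reply; everyone else sees replies disabled.
    "allow": [
        {"$type": "app.bsky.feed.threadgate#followingRule"},
        {"$type": "app.bsky.feed.threadgate#mentionRule"},
    ],
    "createdAt": datetime.now(timezone.utc).isoformat(),
}
```

The point being: the author, not a mod, decides the reply policy on their own thread.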
> We should try to keep in mind that the fediverse and Lemmy will likely grow to much larger scales.

Bluesky just passed 21 million users.

> Bots and brigading are not the issue here. Neither was a factor in the post I linked,

I had another look at the post.

> I first prepared the account by blocking all the moderators and 4 or 5 users who usually call out misinformation posts.

Would that be enough here? It depends on the topic of the thread, of course (there is no link in the post, so I can’t really see what they were talking about), but I’m pretty sure there would be more than 4 or 5 people calling out misinformation.

> The very nature of the abuse being described means that anybody who would report it on YPTB or similar comms can only do so once before being blocked themselves and losing sight of future posts of that sort.

Can’t we use the same argument other people make about Lemmy being a public forum, and thus the posts being public to everyone except the blocked accounts?
In the scenario you suggested, a user who has blocked a harasser should no longer be aware of continued harassment from them, so while the mods may still have to step in, there is no particular urgency. Also, a determined harasser will just create alt accounts no matter what the admins do, regardless of the blocking model used.

> Bluesky just passed 21 million users.

Bluesky isn’t really comparable, since it has a user-to-user interaction model, whereas Reddit / Lemmy have a community-based interaction model. In a sense, every Bluesky user is an admin of their own community.

> there would be more than 4 or 5 people calling out misinformation.

Agreed. However, good faith users by nature tend to stick to their accounts instead of moving around (excepting the current churn because Lemmy is new). Regardless of how many people would call out disinformation, it’s ultimately not too difficult to block them all. It can even be automated, since downvotes are public, meaning you could block not just the vocal users fighting disinformation but anybody who so much as disagrees with you. An echo chamber could literally be created that’s invisible to everyone but server admins.
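To show how little that automation would take, here is a minimal sketch against a hypothetical Lemmy-style HTTP API. The base URL, endpoint paths, parameters, and the readable vote list are all assumptions for illustration, not the actual Lemmy API:

```python
# Illustrative sketch only: auto-block everyone who downvotes your posts.
# The API base, endpoints, and response shapes below are hypothetical
# stand-ins for a Lemmy-style server where vote lists are readable.
import requests

API = "https://example-instance.tld/api/v3"  # hypothetical instance
AUTH = {"Authorization": "Bearer <token>"}   # hypothetical auth token

def downvoters_of(post_id: int) -> list[int]:
    """Return the user ids of everyone who downvoted a post
    (assumes the server exposes per-post vote lists)."""
    resp = requests.get(f"{API}/post/votes",
                        params={"post_id": post_id}, headers=AUTH)
    resp.raise_for_status()
    return [v["user_id"] for v in resp.json()["votes"] if v["score"] < 0]

def block_user(user_id: int) -> None:
    """Block a user so that, under the Reddit-style model being
    criticised, they can no longer see or reply to our posts."""
    requests.post(f"{API}/user/block",
                  json={"person_id": user_id, "block": True}, headers=AUTH)

# Pre-emptively blocking every downvoter is a trivial loop:
for post_id in [101, 102, 103]:  # ids of our own posts (made up)
    for user_id in downvoters_of(post_id):
        block_user(user_id)
```

A handful of lines like this, run on a schedule, would silently build exactly the kind of blocklist described above.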
> Can’t we use the same argument other people make about Lemmy being a public forum, and thus the posts being public to everyone except the blocked accounts?

We could, but again, good faith users tend not to browse while logged out. They have little reason to, while bad faith users have every reason to.
> Bluesky isn’t really comparable, since it has a user-to-user interaction model, whereas Reddit / Lemmy have a community-based interaction model. In a sense, every Bluesky user is an admin of their own community.

We could say that every user can mod their own threads.

> We could, but again, good faith users tend not to browse while logged out. They have little reason to, while bad faith users have every reason to.

The way Reddit does it at the moment still allows good faith users to identify such behaviour: it shows [unavailable] when someone who blocked you comments, so you know you just have to open that link in a private tab to see the content. I actually have that at the moment, as a right wing user blocked me because I would usually call out their bullshit. It still lets me see their comments and post them to a meta community to call out their right wing sub.