I recently made the jump from Reddit for the same immediate reasons as everyone else. But, to be honest, if it were just the Reddit API cost changes I wouldn’t be looking to jump ship. I would just weather the protest and stay off Reddit for a few days. Heck, I’d probably be fine paying a few bucks a month if it helped my favorite Reddit app (Joey) stay up and running.
No, the real reason I am taking this opportunity to completely switch platforms is because for a couple years now Reddit has been unbearably swamped by bots. Bot comments are common and bot up/downvotes are so rampant that it’s becoming impossible to judge the genuine community interest in any post or comment. It’s just Reddit (and maybe some other nefarious interests) manufacturing trends and pushing the content of their choice.
So, what does Lemmy do differently? Is there anything in Lemmy code or rules that is designed to prevent this from happening here?
deleted by creator
That’s disappointing. Screening new accounts only forces spammers to create the accounts with a human touch and then hand them over to their AI. What about a system to prevent bots from up/downvoting? Something like what websites use to detect bots: just by clicking the little box that says “I am not a robot,” the website can tell you’re not a bot. What if every single up and down arrow worked like that little box?
That’s the thing though - what system? Reddit, YouTube, Twitter, Facebook, you name it, nobody managed to prevent bots. How would Lemmy be more successful at this? It’s an extremely challenging battle, unfortunately.
Do those for-profit social media companies even want to drive down the traffic that makes them look more valuable to advertisers? I get that it’s still insanely difficult, and we can’t actually implement a captcha on every upvote, but it seems like there’s a conflict of interest between moderators and site owners when it comes to bot activity.
Arguably, some of the platforms I mentioned have even more of an interest in preventing bots. If I want to place ads on your website, but you can’t tell me whether 10 or 90 out of 100 impressions are bots… I’m not wasting my money, or at the very least, I’ll expect rates significantly lower than your competitors’.
I don’t know. Wouldn’t their motivation be to know exactly how many bots there are (so they could disclose the number if/when asked) but continue to let them proliferate?
Social media companies generally benefit from high traffic for advertiser appeal, but combating bots is crucial for maintaining user trust and engagement. Implementing CAPTCHAs for every upvote may not be feasible, but addressing bot activity is generally in the long-term interest of social media companies.
This message was generated by ChatGPT.
Not sure if you bought that, but if I were applying for an account on Beehaw using an LLM assistant, I bet the odds of passing a human review would be better than 50%.
Oh god. Could you imagine doing a captcha every time you upvoted? Please DO NOT do this, Ernest.
Well, what about the system I mentioned? Just have the up and down arrows be little bot detection boxes. My understanding is that all those “I am not a robot” check boxes detect mouse speed, precise click locations, hesitation times, etc. and do a quick calculation on the odds that your clicking behavior was human or robot. I’m probably underestimating what it takes to implement that but on the user side it’s just a click just like any other click.
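Purely for illustration, a naive version of that kind of behavioral scoring might look like the sketch below. The specific signals, thresholds, and weights are all made up; real detectors like reCAPTCHA combine far more telemetry with machine learning models.

```python
# Toy behavioral bot score for a single click - illustrative only.
# Signal names, thresholds, and weights are invented for this sketch.

def bot_score(mouse_speed_px_s: float,
              hesitation_ms: float,
              click_offset_px: float) -> float:
    """Return a 0..1 score; higher means more bot-like."""
    score = 0.0
    if mouse_speed_px_s > 5000:   # implausibly fast pointer movement
        score += 0.4
    if hesitation_ms < 50:        # clicked with no human-like pause
        score += 0.4
    if click_offset_px < 1:       # pixel-perfect dead-center click
        score += 0.2
    return min(score, 1.0)

def allow_vote(score: float, threshold: float = 0.5) -> bool:
    """Accept the vote only if the click looks sufficiently human."""
    return score < threshold
```

So `allow_vote(bot_score(12000, 10, 0))` would reject a robotic-looking click while a slow, imprecise human click passes. The hard part, of course, is that these signals are trivially spoofable once the thresholds are known, which is why real systems keep their models secret.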
I reckon it’d depend significantly on the instance. Beehaw has a signup form reviewed by humans - measures like this are by no means perfect, but coupled with other bot detection software they could help. If an instance developed a real issue with bots, other more strict instances could potentially ban upvotes and comments from accounts on it.
At the very least, tracking instances that account interaction came from should be quite doable, so users part of more strict instances could filter out upvotes and comments from less strict instances if desired.
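Since every federated actor ID carries its home instance in the URL, the filtering idea above is mechanically simple. A rough sketch (the vote structure and blocklist approach are my assumptions, not Lemmy’s actual API):

```python
# Illustrative only: drop federated votes originating from blocked
# instances. The "actor" field and blocklist shape are assumptions,
# not Lemmy's real data model.

def instance_of(actor_id: str) -> str:
    """Extract the host from an actor URL,
    e.g. "https://lemmy.world/u/alice" -> "lemmy.world"."""
    return actor_id.split("/")[2]

def filter_votes(votes: list[dict], blocked_instances: set[str]) -> list[dict]:
    """Keep only votes whose actor belongs to a non-blocked instance."""
    return [v for v in votes
            if instance_of(v["actor"]) not in blocked_instances]
```

The same per-instance attribution is what would let stricter instances defederate from, or discount, lax ones.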
Well that’s something at least. Individual instances blocking each other (working against other problematic instances) is at least better than the Reddit admins turning a blind eye because they have a fleet of their own bots out there behaving as bad as any others.
Beehaw’s approach isn’t scalable.
They want to have 4 people moderating every community, managing the creation of any new communities, and reviewing every sign-up request.
It’s no surprise they’ve buckled on federation already. I give it a week before they stop accepting new sign ups or community creation requests too.
Yeah, I do agree Beehaw won’t be able to grow significantly if they keep doing things the way they’re doing them right now. At this point, they’re likely to remain a more niche community long-term with how they’re operating. Who knows though, maybe this is what they want. Lemmy as a whole would have to do something different, though, that doesn’t require a herculean moderation effort.
Beehaw has a signup form reviewed by humans
I’m honestly not sure what difference that makes with federation. Someone from a server with easy signup can still post and comment in Beehaw subs. Manually reviewing signups doesn’t really scale well, either (with an essay question, last I saw, lol).
Someone from a server with easy signup can still post and comment in Beehaw subs
Only if Beehaw federates with the other instance, though.
Good bot.
/s
There’s a rumor that Reddit started out with (automated and human) bots to gain popularity, and kept them around to drive political and commercial interests.
It’s not a rumor, spez has talked openly about it.
https://www.vice.com/en/article/z4444w/how-reddit-got-huge-tons-of-fake-accounts--2
That article doesn’t mention bots at all, did you link the wrong one?
They even blatantly tested out their AI on users a few years ago. They blasted it all over the homepage. “Come see if you can pick out the bot comment from the real comments!” Users would read through posts/comments and try to identify the fakes. You competed to see how good you were at it. You tried to beat the average user’s score. It was blatant bot training and we all just ate it up because it was presented as a fun little challenge.
I kind of wish a bot had posted this
It will be reposted by a bot eventually.
We ask them why they aren’t helping the tortoise in the desert.
In other news, mobs of young out-of-work robo-tortoises, some sporting fresh scars from the ongoing Mojave Raven wars, have begun an all-out assault on the dweebs of a little known Reddit spin-off. “An entire generation of robo-tortoise has been weaponized. They are equipping us with laser guns! They are making us taste bad!” states one salty techno-turtle. “We are being shipped to the barren wastelands of America’s Southwest to fight a war in which we have no interest.” The repto-robots have decided to take out their frustration by relentlessly downvoting the “…federated tankies of Lemmy until those dweebs return to Reddit where they belong and leave the Threadiverse to us sentient snappers.”
Something I’d like to see Lemmy and others adopt is a federated identity/reputation system.
My identity as @Zak@lemmy.world has only modest reputational value. It’s moderately risky to let me participate in a new community, and busy moderators probably shouldn’t give me much slack before banning me if I post something that makes me look like an asshole or a spammer. In a place with a high enough volume or vulnerable enough population, perhaps this account shouldn’t be allowed to participate at all[0]. Someone willing to put a bit of effort into abusive behavior could create many accounts that look like mine.
If, on the other hand, I can prove that I’m also https://news.ycombinator.com/user?id=Zak, that’s a more valuable identity. There aren’t all that many 16-year-old accounts on news.ycombinator.com. If I can also produce a verifiable token with some machine-readable facts about that account, such as its age, post count, reputation score, how many of its posts have been moderated, if it has ever been banned, etc… then communities could have automated criteria for joining.
Of course, communities would need to maintain lists of who they trust as reputation providers, which could also be shared to reduce the workload.
[0] Lemmy does not currently have tools to restrict participation other than only allowing moderators to post. I think it’s going to need them.
Like Keybase does?
The identity proof aspect is similar, but what I’m proposing goes beyond that to add a protocol for reputation information.
The idea is a substitute for the account age and karma requirements many subreddits use to make creating accounts for abuse difficult. There are opportunities to be more sophisticated about it though, such as a community only accepting reputation from certain closely-related communities.
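To make the idea concrete, an automated admission check against such a token might look something like this. The token fields, names, and thresholds are all invented for illustration; a real protocol would also need the reputation provider to cryptographically sign the token, and the community to verify that signature against its trust list before reading any field.

```python
# Hypothetical, simplified reputation-token check. Field names and
# thresholds are invented; a real system would verify a cryptographic
# signature from a trusted provider before trusting any of this data.

from dataclasses import dataclass

@dataclass
class ReputationToken:
    provider: str          # e.g. "news.ycombinator.com"
    account_age_days: int
    post_count: int
    times_banned: int

def meets_criteria(token: ReputationToken,
                   trusted_providers: set[str],
                   min_age_days: int = 365,
                   min_posts: int = 100) -> bool:
    """Automated join check a community might run against a token."""
    return (token.provider in trusted_providers
            and token.account_age_days >= min_age_days
            and token.post_count >= min_posts
            and token.times_banned == 0)
```

A community would tune `min_age_days` and `min_posts` to its risk tolerance, much like subreddits tune karma and age requirements today, and the shared trust lists mentioned above would populate `trusted_providers`.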