- cross-posted to:
- technology@lemmy.zip
A well-backed piece, as usual, by Benn Jordan on the basics of how misinformation farms work according to their own internal documentation, the goal of creating a post-truth world, and why a sizable percentage of Twitter users start talking about OpenAI’s terms of service every time they update it.
Absolutely incredible breakdown of the problem. In addition to Twitter, I strongly suspect Reddit is infested with a similar wave of bot accounts, which would explain how a sub I used to moderate there has some of the highest page visits it’s ever had, yet its actual user engagement hasn’t changed at all, or has even gone down.
Corporate websites, which have a financial incentive to allow the bots, have become completely unusable. The difference in interaction on Lemmy is incredibly stark, which suggests the fediverse is far more resilient against bots, since we can defederate from an instance that gets taken over, like cutting off an infected limb to stop the spread.
Hopefully, but I worry that no small part of it at the moment is just that we’re too small to be worth the bother. If the fediverse grows big enough to matter, well, I worry about what dedicated teams of people working a full-time job could do. One or two people can easily run a few dozen active accounts, which in turn could easily dominate conversation on an instance.
Hmm… That could be an issue, you’re right.
If it does get that bad, we’d have to act more defensively by only federating with instances that have reviewed sign-ups and have received an endorsement on Fediseer (roughly sketched below).
That would result in a more isolated experience, but if that’s the only way to combat it, then we’ll have to shift with the needs of the moment to keep it mostly humans we’re interacting with, and to make the moderation workload manageable.
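To make that concrete, here’s a minimal sketch, in Python rather than Lemmy’s actual Rust codebase, of what allowlist-only federation boils down to. The instance names are placeholders and the function is purely illustrative.

```python
# Illustrative sketch only, not Lemmy's real federation code: the idea of
# allowlist-based federation, where activities are accepted only from
# instances the admin has explicitly approved.

# Hypothetical allowlist, curated from instances with reviewed sign-ups
# and a Fediseer endorsement.
ALLOWED_INSTANCES = {
    "beehaw.org",
    "lemmy.dbzer0.com",
}

def accept_activity(origin_domain: str) -> bool:
    """Return True if an incoming federated activity should be processed."""
    return origin_domain.lower() in ALLOWED_INSTANCES

# An activity from an unknown or untrusted instance is simply dropped.
print(accept_activity("lemmy.dbzer0.com"))   # True
print(accept_activity("spam-farm.example"))  # False
```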
It’s very easy to spin up a new instance though, so I’m surprised there’s not a lot of spam. AFAIK most servers still federate with any new server by default as soon as a user on the new server subscribes to a person/community on an existing server. That’s important to ensure equal treatment so new servers aren’t disadvantaged, but it can also cause problems.
The Fediseer project from @db0@lemmy.dbzer0.com helps prevent bot farms from proliferating, since new servers require an endorsement from an already trusted instance to become ‘legit’. They can also be marked as untrustworthy, which gets them defederated fairly quickly and limits their reach (see the sketch after this comment).
We also have a MUCH higher moderator-to-user ratio than corpo sites, ranging from 100 to 2,500 users per mod depending on the instance, vs. 250,000 users per mod on sites like Twitter, so we can more adequately spot and deal with spam on the network.
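For the curious, here’s a rough sketch of how an admin script might ask Fediseer whether an instance has any endorsements before trusting it. The base URL, endpoint path, and response shape are assumptions from memory rather than verified API details, so check the Fediseer API documentation before relying on it.

```python
# Hedged sketch: look up an instance's endorsements on Fediseer before
# deciding whether to federate with it. Endpoint and response shape are
# assumptions; consult the Fediseer API docs for the real interface.
import json
import urllib.request

FEDISEER_API = "https://fediseer.com/api/v1"  # assumed base URL

def endorsement_count(domain: str) -> int:
    """Return how many instances have endorsed `domain` (0 on error)."""
    url = f"{FEDISEER_API}/endorsements/{domain}"  # assumed endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        # Assumed response shape: {"instances": [{"domain": ...}, ...]}
        return len(data.get("instances", []))
    except Exception:
        return 0

if endorsement_count("lemmy.dbzer0.com") > 0:
    print("Endorsed by at least one trusted instance.")
else:
    print("No endorsements found (or the lookup failed).")
```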
Thanks for the info! I didn’t know about the Fediseer project - I don’t think it existed when I created my Mastodon and Lemmy servers, or I just wasn’t aware of it.
It’s been out for the past year. It’s all word of mouth, but a lot of instances have Fediseer badges to draw attention to it.
Thanks for maintaining it! I signed up, now I just need to see if anyone’s willing to guarantee my single-user Lemmy instance. Not adding my Mastodon since I don’t know if I’ll even keep running it.
FYI when you request a guarantee, you can specify another instance to solicit a guarantee from. They will get a PM to inform them.
Generally, getting people to consistently guarantee for others has been the biggest struggle on Fediseer >_<
We absolutely have this problem on Lemmy too. Even on Beehaw. Hell, there’s a particularly high-profile user here who posted constantly, focused squarely on spoiling potential Democrat votes, and utterly disappeared the moment staff told him to knock it off. All his other engagement dropped off as well, and I still haven’t seen him post-election.
How many others were less prolific and didn’t shut down their activity? How many other accounts are literally just the same person?