Hello everyone,
We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance since we changed our registration policy.
We keep working on a solution, we have a few things in the works but that won’t help us now.
Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.
Edit: @Striker@lemmy.world the moderator of the affected community made a post apologizing for what happened. But this could not be stopped even with 10 moderators. And if it wasn’t his community it would have been another one. And it is clear this could happen on any instance.
But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.
Edit 2: removed that bit about the moderator tools. That came out a bit harsher than how we meant it. It’s been a long day and having to deal with this kind of stuff got some of us a bit salty to say the least. Remember we also had to deal with people posting scat not too long ago so this isn’t the first time we felt helpless. Anyway, I hope we can announce something more positive soon.
I’m afraid the fediverse will need a CrowdSec-like decentralized banning platform. Get banned on one platform for this shit, get banned everywhere.
I’m willing to participate in fleshing that out.
Edit: it’s just an idea, I do not have all the answers, otherwise I’d be building it.
What you’re basically talking about is centralization. And, as much as it has tremendous benefits of convenience, I think a lot of people here can cite their own feelings as to why that’s generally bad. It’s a hard call to make.
They didn’t say anything about implementation. Why couldn’t you build tooling to keep it decentralized? Servers or even communities could choose to ban from their own communities using a heuristic over the moderation actions published by other communities. At the end of the day it is still individual communities making their own decisions.
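A minimal sketch of what that heuristic could look like: each instance keeps its own trust weights for other servers and only acts on remote bans when the weighted evidence crosses a locally chosen threshold. All names, fields, and weights here are invented for illustration; this is not any real Lemmy or fediverse API.

```python
def should_ban(user: str, remote_bans: list[dict], trust: dict[str, float],
               threshold: float = 1.0) -> bool:
    """Each remote ban counts with the weight our instance assigns to the
    server that issued it; the decision itself stays local."""
    score = 0.0
    for ban in remote_bans:
        if ban["user"] == user:
            score += trust.get(ban["server"], 0.0)  # unknown servers weigh 0
    return score >= threshold

# Example: two trusted servers reporting the same account, plus one report
# from a server this instance has assigned zero trust.
trust = {"lemmy.world": 0.6, "lemmy.ml": 0.5, "bad-actor.example": 0.0}
bans = [
    {"user": "spammer@evil.example", "server": "lemmy.world"},
    {"user": "spammer@evil.example", "server": "lemmy.ml"},
    {"user": "innocent@nice.example", "server": "bad-actor.example"},
]
print(should_ban("spammer@evil.example", bans, trust))   # True (0.6 + 0.5 >= 1.0)
print(should_ban("innocent@nice.example", bans, trust))  # False (score 0.0)
```

The point of the threshold and per-server weights is exactly the one made above: the decision is still made by each community, using the federation’s published actions only as input.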
I just wouldn’t be so quick to shoot this down.
There is no way that could get abused… Like say, by hosting your own instance and banning anyone you want.
Anything can be abused, but you can also build proper safeguards.
I think a ban list is, unfortunately, contrary to the notion of decentralization; as otherwise warranted as it is in this instance.
What about a centralized list of data on major offenders, which could be made subscribable by instances? Perhaps a way of calculating probability of origin, matched with offenses made by that origin in the past? That way an instance could take it under advisory, and take actions suiting them?
This is only the beginning of an idea, but if it is possible to collaborate without centralizing, we should explore it.
It might be in the works.
Soo… You want to centralize a decentralized platform…
You can have a local banlist supplemented by a shared banlist containing these CSAM individuals for example.
That ban list could be a set of rich objects. The user that was banned, date of action, community it happened in, reason, server it happened at. Sysops could choose to not accept any bans from a particular site. Make things fairly granular so there’s flexibility to account for bad actor sysops.
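To make the "rich object" idea concrete, here is a rough sketch of what one entry and the per-site acceptance filter could look like. The field names and example values are made up for illustration, not an existing schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BanRecord:
    user: str        # federated handle of the banned account
    banned_on: date  # date of the moderation action
    community: str   # community where it happened
    reason: str      # stated reason for the ban
    server: str      # instance that issued the ban

def accepted_bans(records: list[BanRecord],
                  rejected_servers: set[str]) -> list[BanRecord]:
    """Keep only bans from servers this sysop hasn't chosen to ignore."""
    return [r for r in records if r.server not in rejected_servers]

records = [
    BanRecord("abuser@evil.example", date(2023, 8, 27), "memes",
              "CSAM", "lemmy.world"),
    BanRecord("someone@ok.example", date(2023, 8, 27), "news",
              "disagreed with admin", "rogue.example"),
]
kept = accepted_bans(records, rejected_servers={"rogue.example"})
print([r.user for r in kept])  # ['abuser@evil.example']
```

Because each record carries its issuing server, the granularity described above falls out naturally: a sysop can drop everything from a bad-actor instance without losing the rest of the shared list.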
But how do you know that these people actually spread CSAM and someone isn’t abusing their power?
[ptrck has been permanently banned from all social media]
I feel like this would be difficult to enforce
Maybe FIDO for identity purposes is a good idea. Maybe some process that takes a week to compute an identity token, plus an approval and rejection system for known tokens.
We already have that, it’s called prison. You can’t go on the internet from prison (at least I’d assume so; it wouldn’t make much sense if people could). That’s not 100%, since people need to be caught for it to work, but once they are, it certainly is.
Though other global ban solutions don’t really work well because they require a certain level of compliance that criminals aren’t going to follow through with (i.e. not committing identity theft). They can also be abused by malicious actors to falsely ban people (especially with the whole identity theft thing).