• azuth@lemmy.world

      Given new commercial entrants into the Fediverse such as WordPress, Tumblr and Threads, we suggest collaboration among these parties to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem

      In such a system, the server on which a post originates would submit imagery to PhotoDNA for analysis

      This same technique could also be applied to other hosted media analysis mechanisms (e.g. Google’s SafeSearch or Microsoft’s Analyze Image API).

      While large social media providers utilize signals such as browser User-Agent, TLS fingerprint, IP and many other mechanisms to determine whether a previously suspended bad actor is attempting to re-create an account, Mastodon admins have little to work with apart from a user’s IP and e-mail address, both of which are easily fungible.
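
      In other words, they want every originating server to run your uploads through something like the sketch below. The endpoint and response fields are made up for illustration; the real PhotoDNA cloud service is access-gated and does perceptual (not exact) matching.

      ```python
      # Hypothetical sketch of the report's proposal: the originating server
      # submits newly uploaded media to a hosted analysis service before
      # federating the post. The endpoint, payload and response shape here
      # are invented; the real PhotoDNA Cloud Service is access-restricted.
      import requests

      ANALYSIS_ENDPOINT = "https://hash-service.example/v1/analyze"  # placeholder
      API_KEY = "..."  # issued by the analysis provider

      def upload_is_flagged(image_bytes: bytes) -> bool:
          """Return True if the hosted service reports a match against known material."""
          resp = requests.post(
              ANALYSIS_ENDPOINT,
              headers={"Authorization": f"Bearer {API_KEY}"},
              files={"image": ("upload", image_bytes)},
              timeout=10,
          )
          resp.raise_for_status()
          return bool(resp.json().get("match", False))  # assumed response field

      def on_new_post(image_bytes: bytes) -> None:
          if upload_is_flagged(image_bytes):
              # Don't store or federate the post; queue it for moderator review.
              raise PermissionError("Upload matched a known-abuse hash list")
      ```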

      So basically, people may have joined the Fediverse largely for privacy reasons, but if the Fediverse is to be “ethical” it should share your images with big tech and track you more thoroughly.

      He also laments Tor and E2E messaging.

      • fubo@lemmy.world

        Anyone who’s on Lemmy for “privacy reasons” is probably not looking very closely at the technology. Everything you do here, including votes and DMs, is effectively public. All of it can be scraped, ingested, processed, etc. by absolutely anyone.

        • azuth@lemmy.world

          Votes are federated. They are tied to account names. Only your instance can tie them to your IP.

          DMs are insecure in that instance admins can read them. Most instances tell you not to use them for anything sensitive.

          Scraping is more resource-intensive than having the data submitted to you through an API. When you offer that data as a service, you can set terms on what can legally be done with it, whereas scraping can run into legal issues, and PR issues as well.

          In general, using corporate social media lets companies track you (or buy the tracking data from the social media company) far more thoroughly than scraping Lemmy ever could.
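
          For anyone curious what “votes are federated” means in practice: Lemmy sends an upvote to other instances as an ActivityPub Like activity, roughly shaped like this (values are illustrative):

          ```python
          # Roughly what a federated Lemmy upvote looks like on the wire (an
          # ActivityPub Like activity). Values are illustrative. The actor (your
          # public account) and the liked object travel to every federated
          # instance; your IP address never leaves your home instance.
          federated_upvote = {
              "@context": "https://www.w3.org/ns/activitystreams",
              "type": "Like",
              "actor": "https://lemmy.world/u/some_user",       # public account name
              "object": "https://lemmy.world/post/123456",      # the post being upvoted
              "id": "https://lemmy.world/activities/like/abc",  # activity id
          }
          ```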

    • Melpomene@kbin.social

      The article reads like a low-key hit piece; the report itself is good and offers food for thought.

      As an aside, always look at anything NCMEC says with a critical eye. They do great work in their space, but they are vehemently anti-decentralization and anti-privacy.

  • SJ0@lemmy.fbxl.net

    I feel like these are just establishment hit pieces. They do it every time to up-and-coming platforms…

    • dragontamer@lemmy.world

      I know enough about internet porn to know that online-porn communities will love something like the Fediverse, and furthermore, child-exploitation groups would also love something like this.

      But what’s surprising to me in this study is that they focused on the top 25 Mastodon servers. They list the specific keywords they were looking for (y’all know what keywords I mean), and they describe a practical methodology that involves hashing files and matching them against known CSAM databases, rather than forcing a human to go through this crap and pick out what they think is or isn’t CSAM.

      It seems like a good study from Stanford. I think you should at least read the paper discussed before discounting it. We all know that, even here on the Lemmy side of the Fediverse, we need to be careful about whom we federate with (or defederate from). It’s no surprise to me that there are creepos out there on the Internet.

      112 hits is pretty small in the grand scheme of things, but it’s also an automated approach that likely didn’t catch all the CSAM out there. The automated hits seem to have uncovered specific communities and keywords that help in searching for and moderating this stuff, and the paper includes some interesting methodologies (e.g. hashing files and comparing them against a known database) that could very well automate the process for a server like Lemmy.world.
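
      A minimal sketch of that hash-matching idea, assuming a plain list of known hashes in a text file (real tooling like PhotoDNA uses perceptual hashes that survive re-encoding, so exact SHA-256 matching here is only a stand-in):

      ```python
      # Simplified version of the automated approach described above: hash each
      # media file and check it against a set of known hashes. SHA-256 is an
      # exact-match stand-in; production systems use perceptual hashing.
      import hashlib
      from pathlib import Path

      def load_known_hashes(path: str) -> set[str]:
          """Read one lowercase hex digest per line."""
          return {line.strip().lower()
                  for line in Path(path).read_text().splitlines() if line.strip()}

      def sha256_file(path: Path) -> str:
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def scan_media(media_dir: str, known: set[str]) -> list[Path]:
          """Return files whose hashes appear in the known-hash set."""
          return [p for p in Path(media_dir).rglob("*")
                  if p.is_file() and sha256_file(p) in known]

      # e.g. print(len(scan_media("media/", load_known_hashes("known_hashes.txt"))), "hits")
      ```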

      I see this as a net-positive study. There’s actually a lot of good, important work that was done here.

  • flyoverstate@kbin.social

    The “report” is issued by something called the Stanford Internet Observatory, which is not in fact a telescope on a hill, but rather an operation by the guy who, from 2015-2018, was the “Chief Security Officer” of Facebook - an ironic title, considering that this was the period of the Cambridge Analytica machination, the Rohingya genocide, and the Russian influence operation that exposed 128 million Facebook users to pro-Trump disinformation.

    https://kolektiva.social/@ophiocephalic/110772380949893619

  • mrmanager@lemmy.today

    Sounds like they are getting worried about the growth of these networks and want to convince the general public to stay away.

    It’s a pretty standard tactic to paint a false picture of something, and they get away with it too. I bet people will now say “Mastodon, isn’t that where there’s child porn? No thanks.”

  • stravanasu@lemmy.ca

    I’m not fully sure about the logic and the hinted-at conclusions here. The Internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).

  • Melpomene@kbin.social

    A thought… given that these databases use known hashes of child exploitation media to identify offenders, could someone release an add-on to Mastodon etc. that uses the hash DB to delete matching media, flag the account, and notify whoever needs to be notified? It would help cut down on NCP too, if victims were willing to hash their own images.
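
    The plumbing for that is not the hard part; the hard parts are getting access to the hash databases (they’re tightly gated) and handling the legal reporting duties. The shape of such an add-on might look like this, with the action functions as stubs and exact-hash matching standing in for perceptual matching:

    ```python
    # Hypothetical moderation hook for the add-on described above: check an
    # upload against a known-hash DB, then delete, flag and notify. The three
    # action functions are stubs for whatever the server actually provides;
    # real hash databases are access-gated and use perceptual matching.
    import hashlib

    def delete_media(media_id: str) -> None:
        print(f"[stub] deleting media {media_id}")

    def flag_account(account_id: str) -> None:
        print(f"[stub] flagging account {account_id} for review")

    def notify_moderators(account_id: str, digest: str) -> None:
        print(f"[stub] reporting account {account_id}, hash {digest}")

    def on_media_upload(account_id: str, media_id: str, media_bytes: bytes,
                        known_hashes: set[str]) -> bool:
        """Return True if the upload was blocked."""
        digest = hashlib.sha256(media_bytes).hexdigest()
        if digest not in known_hashes:
            return False
        delete_media(media_id)
        flag_account(account_id)
        notify_moderators(account_id, digest)
        return True
    ```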

    Edit: Hey serial downvoter!

  • cyd@lemmy.world

    Why would anyone use Mastodon for this stuff? You’d expect it to live in private Telegram groups or something like that. This kind of “research” is barely a step above trolling or low-effort clickbait.

    • fubo@lemmy.world

      The researchers are looking at actual posts on actual servers. The research itself is not made up. The speculation and handwaving that the tech press feels the need to introduce into it? That’s made up.

      As for why anyone would use Mastodon for it: your typical Internet pedophile isn’t any smarter than your typical Internet user, and half of those are below average.

    • redcalcium@lemmy.institute

      While I don’t agree with the author’s use of “major CSAM problem”, if you browse an instance’s federation list you might notice a few Mastodon instances whose names suggest they are MAP communities at best (domain names like pedo-school, mapsupport, etc., usually with cute dolls in their banner images). They are closed communities, so you can’t see what’s happening inside.

    • Melpomene@kbin.social

      Maybe, or they just trade it via Google / iCloud etc. I’m absolutely sure Telegram and the like are used, but I tend to think that most predators use whatever is convenient.

      The report found a fair few examples on Mastodon instances, so it does happen.

  • MyOpinion@lemm.ee

    The Apache Foundation has a huge child sex abuse problem. They must be policed by Microsoft. /s

  • jocanib@lemmy.world

    They don’t seem to list the instances they trawled (just “the top 25” on a given day, with a link to the site they got the ranking from, but no list of the actual instances that I can see).

    We performed a two day time-boxed ingest of the local public timelines of the top 25 accessible Mastodon instances as determined by total user count reported by the Fediverse Observer…
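
    For what it’s worth, that kind of ingest only needs Mastodon’s ordinary public API; something along these lines (pagination and rate limiting omitted) pulls an instance’s local public timeline and the attached media URLs:

    ```python
    # Sketch of the ingest step the report describes: fetch an instance's
    # local public timeline via Mastodon's public API and collect media URLs
    # for later hashing. Pagination and rate limiting are omitted.
    import requests

    def fetch_local_timeline(instance: str, limit: int = 40) -> list[dict]:
        url = f"https://{instance}/api/v1/timelines/public"
        resp = requests.get(url, params={"local": "true", "limit": limit}, timeout=10)
        resp.raise_for_status()
        return resp.json()

    def media_urls(statuses: list[dict]) -> list[str]:
        return [att["url"] for status in statuses
                for att in status.get("media_attachments", [])]
    ```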

    That said, most of this seems to come from the Japanese instances, which most other instances defederate from precisely because of CSAM? From the report:

    Since the release of Stable Diffusion 1.5, there has been a steady increase in the prevalence of Computer-Generated CSAM (CG-CSAM) in online forums, with increasing levels of realism. This content is highly prevalent on the Fediverse, primarily on servers within Japanese jurisdiction. While CSAM is illegal in Japan, its laws exclude computer-generated content as well as manga and anime. The difference in laws and server policies between Japan and much of the rest of the world means that communities dedicated to CG-CSAM—along with other illustrations of child sexual abuse—flourish on some Japanese servers, fostering an environment that also brings with it other forms of harm to children. These same primarily Japanese servers were the source of most detected known instances of non-computer-generated CSAM. We found that on one of the largest Mastodon instances in the Fediverse (based in Japan), 11 of the top 20 most commonly used hashtags were related to pedophilia (both in English and Japanese).

    Some history for those who don’t already know: Mastodon is big in Japan. The reason why is… uncomfortable

    I haven’t read the report in full yet, but it seems to be a perfectly reasonable set of recommendations for improving moderators’ ability to prevent this stuff from being posted (beyond defederating from dodgy instances, which most if not all non-dodgy instances already do).

    It doesn’t seem to address the issue of some instances existing largely so that this sort of stuff can be posted.