• ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk · 13 points · edited · 2 days ago

    “We have done this in the past for quarantined communities and found that it did help to reduce exposure to bad content, so we are experimenting with this sitewide,” according to the main post. Reddit “may consider” expanding the warnings in the future to cover repeated upvotes of other kinds of content, as well as taking other types of actions in addition to warnings.

    Thoughtcrime time.

    Bigger picture - what if Xitter, Meta and Reddit (all run by Trump humpers) started centrally compiling this kind of thing to flag up “persons of interest”?
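
    Pure speculation on my part, but the mechanism they describe probably boils down to a simple counter: tally how often an account recently upvoted content that later got flagged as violent, and fire a warning past some threshold. A rough sketch — the window, threshold, and names are all made up, not anything Reddit has published:

    ```python
    from datetime import datetime, timedelta, timezone

    # Hypothetical values; Reddit has not published its actual window or threshold.
    WINDOW = timedelta(days=30)
    THRESHOLD = 5

    def should_warn(upvotes, now=None):
        """upvotes: list of (timestamp, content_was_flagged_violent) for one account."""
        now = now or datetime.now(timezone.utc)
        recent_flagged = sum(1 for ts, flagged in upvotes if flagged and now - ts <= WINDOW)
        return recent_flagged >= THRESHOLD

    # Six flagged upvotes inside the window -> warning.
    history = [(datetime.now(timezone.utc) - timedelta(days=d), True) for d in range(6)]
    print(should_warn(history))  # True
    ```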

  • Séän@lemmy.world · 6 points · 2 days ago

    The only subreddit I’ve been visiting is LeopardsAteMyFace and I got a warning. How is it inciting violence if it’s ALREADY happened?

    • Ledericas@lemm.ee · 6 points · 2 days ago

      Because an AI indiscriminately scanned your comment for keywords and issued a warning; that’s what they aren’t telling people. The same thing happened to me on another sub, except the mods were also involved, so it was a ban. Reddit’s rules are vague AF.
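
      To be clear, I’m guessing at how their filter works, but a bare keyword match would explain it: it can’t tell describing something that already happened apart from calling for it. Something roughly this dumb — word list and names invented for the example, not Reddit’s actual classifier:

      ```python
      # Toy keyword filter, invented for illustration only.
      VIOLENT_KEYWORDS = {"attack", "attacked", "shoot", "shot", "kill", "killed"}

      def looks_violent(comment: str) -> bool:
          words = {w.strip(".,!?").lower() for w in comment.split()}
          return bool(words & VIOLENT_KEYWORDS)

      # Both trip the filter, even though neither is a call to violence:
      print(looks_violent("He was attacked by the policy he voted for"))  # True
      print(looks_violent("The cuts killed his small business"))          # True
      ```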

  • yesman@lemmy.world · 11 up / 8 down · 3 days ago

    I mean when everyone else is jettisoning moderation, reddit is cracking down on bots and trolls? I don’t hate it.

    • Ledericas@lemm.ee · 3 points · 2 days ago

      The thing is, in these recent ban waves they’ve been going after the low-hanging fruit: small accounts and small advertisers, not the problematic ones like the state-sponsored political troll accounts (both Russian and American), at least not in large numbers, even though we know those represent a large amount of traffic on the site. Plenty of the accounts posting articles on Reddit have been pointed out as bots too.

      • Ledericas@lemm.ee · 1 point · 2 days ago

        Their AI detects a ban on one of your other accounts and decides to ban all your “connected” accounts, even ones you haven’t used for years.
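
        I’m only guessing at the plumbing, but it looks like plain account linking: group accounts that share a signal (email, device, IP), and when one gets banned, ban the whole group regardless of activity. Roughly like this — all names and signals are made up:

        ```python
        # Hypothetical sketch of a "connected accounts" cascade; not Reddit's actual system.
        from collections import defaultdict

        def build_links(accounts):
            """Group accounts that directly share an identifying signal (email, device, IP)."""
            by_signal = defaultdict(set)
            for acct, signals in accounts.items():
                for s in signals:
                    by_signal[s].add(acct)
            linked = defaultdict(set)
            for group in by_signal.values():
                for a in group:
                    linked[a] |= group
            return linked

        def cascade_ban(banned_acct, accounts):
            """Ban every account linked to the banned one, active or not."""
            return build_links(accounts).get(banned_acct, {banned_acct})

        accounts = {
            "main": {"mail@example.com", "ip:1.2.3.4"},
            "old_alt": {"mail@example.com"},   # untouched for years, still linked
            "unrelated": {"ip:9.9.9.9"},
        }
        print(cascade_ban("main", accounts))  # {'main', 'old_alt'} (order may vary)
        ```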

    • kat@orbi.camp · 10 points · 3 days ago

      I mean, they’re deciding what counts as violent based on whatever arbitrary classifiers they feel like using that day.

  • RightHandOfIkaros@lemmy.world · 1 up / 15 down · 3 days ago

    Honestly, I wouldn’t be surprised if this started happening on Lemmy too. It’s a lot easier to control what kind of content is on a platform when you do something like this.

    Now, I don’t particularly think this is a good idea, but I can see the benefit of this as well. People have the freedom to upvote whatever they choose, even if I think they are dumb for doing it, and they shouldn’t have to worry about anyone other than law enforcement or lawyers (in extreme edge cases) using that information against them.

    • chicken@lemmy.dbzer0.com · 8 points · 3 days ago

      One thing I like about Lemmy is that you can still upvote ‘removed by moderator’ comments, and I always do because it’s funny.

    • Hubi@feddit.org · 15 up / 2 down · 3 days ago

      > Honestly, I wouldn’t be surprised if this started happening on Lemmy too. It’s a lot easier to control what kind of content is on a platform when you do something like this.

      This wouldn’t even be possible on Lemmy.

        • CommanderCloon@lemmy.ml · 5 points · 2 days ago

          If Lemmy did this, you’d see forks ripping it out, not to mention that anything other than Lemmy wouldn’t have it, so only a very small subset of the fediverse would be subject to it at all, making it completely useless.