A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.

more than 1,700 comments made by AI bots

The bots made more than 1,700 comments over the course of several months and at times pretended to be a “rape victim,” a “Black man” who was opposed to the Black Lives Matter movement, someone who “work[s] at a domestic violence shelter,” and a bot that suggested that specific types of criminals should not be rehabilitated.

The experiment was revealed over the weekend in a post by moderators of the r/changemyview subreddit, which has more than 3.8 million subscribers. In the post, the moderators said they were unaware of the experiment while it was going on and only learned of it when the researchers disclosed it, after the experiment had already been run. The moderators told users they “have a right to know about this experiment,” and that posters in the subreddit had been subject to “psychological manipulation” by the bots.

  • atempuser23@lemmy.world · +12 · 7 hours ago

    First AI came for the artists, then coders, now the trolls and shitposters.

    How dare they take the jobs from hardworking Russians influencing online discourse.

  • RememberTheApollo_@lemmy.world · +23 · 8 hours ago

    Gotta love the hollow morality in telling users they’ve been psychologically manipulated this time, yet doing nothing about the tens of thousands of bots doing the exact same thing 24/7 the rest of the time.

  • LarmyOfLone@lemm.ee · +14/−2 · edited · 9 hours ago

    r/changemyview is one of those “fascist gateway” subs, or at least one of the subs that I suspect of that. The gateway works by introducing “controversial” topics which are really fascist, but they get upvoted because “look at this idiot!”. But slowly it moves the Overton window and gives those people who actually do believe in inequality a space to grow in. Opinions that are racist, anti-feminist, anti-trans, ultra-nationalist, authoritarian, or generally against equality and justice.

    Reddit was slowly boiled over the last decade like a frog.

    And you can be absolutely sure that researchers aren’t the only ones studying this for science; there are plenty of special interests doing the same thing. It really started with climate change denial.

  • dukeofdummies@lemmy.world · +16 · 21 hours ago

    … well how many deltas did it get?

    I hate it as an experiment on principle, but c’mon, how well did it do?

      • ArtificialHoldings@lemmy.world · +1 · edited · 6 hours ago

        Feels like AI would really excel at this. It’s personalized argumentation that can basically auto-complete to the most statistically likely (i.e. popular) version of an argument. CMV posts largely aren’t unique; there are a lot of prior threads that got deltas to draw from.

  • Grool The Demon@lemmy.world · +71 · 1 day ago

    I remember when my front page was nothing but r/changemyview for like a week and I just unsubscribed from the subreddit completely because some of the questions and the amount of hits felt like something fucky was going on. Guess I was right.

  • RightHandOfIkaros@lemmy.world · +30/−1 · edited · 1 day ago

    To be fair, I can see how it being “unauthorized” was necessary for collecting genuine data that isn’t poisoned by people intentionally trying to soil the sample.

    • notabot@lemm.ee · +25 · 1 day ago

      It’d be pretty trivial to do the same here: 1,700 or so comments over ‘several months’ is fewer than 25 a day. No need even for bot posting; have the LLM ingest the feed and spit out the posts, and have an intern make accounts and post them.

      • nectar45@lemmy.zip · +4/−1 · 23 hours ago

        Well, at least this place is exclusively where people who got banned from Reddit end up, so they’ll struggle to find us…

  • besselj@lemmy.ca · +38 · 1 day ago

    It’ll be interesting to find out whether this research got IRB approval prior to the study.

  • Ogmios@sh.itjust.works · +28 · 1 day ago

    I remember the good old days when the NSA had to physically fly an airplane over the border and spray one of our towns with chemicals if they wanted to run psychological experiments on people!

  • Dr. Bob@lemmy.ca · +20/−1 · edited · 1 day ago

    “…posters in the subreddit had been subject to ‘psychological manipulation’…”

    There are no users there. It’s just bots talking to other bots.

  • BilboBargains@lemmy.world · +1/−12 · 21 hours ago

    What difference does it make if you’re talking to a bot? We never meet our interlocutors anyway. Would these people have the same reaction if it were revealed they were talking to a role-playing person? Because I’m pretty sure we’ve already done that many times over.

    • TimewornTraveler@lemm.ee · +2 · 9 hours ago

      I’m guessing the problem with saying some of these things is the proliferation of fake news. Take the anti-BLM guy; I wonder how much that persona stuck to the facts. You can’t present an anti-BLM case without making up shit, and they projected it to millions of people.

      You’re right that for a perfect logician, only the argument matters, but for humans, no.

    • Flagstaff@programming.dev · +8 · 14 hours ago

      What kind of question is this? A role-playing person talking to others who don’t know they’re role-playing is deceitful, so are you saying there’s absolutely nothing wrong with deception?

      It’s generally expected outside of /r/jokes, /r/twosentencehorror, etc. that the people you’re talking to are telling the truth as they know it, or else why talk at all?

      • BilboBargains@lemmy.world · +1/−2 · 10 hours ago

        I don’t know what your experience is on Reddit but mine came to be that what I was reading couldn’t be trusted. I remember stumbling across a post on some technical subject that I happen to understand very well and couldn’t believe the twaddle that was advanced in the comments with utter conviction and certainty. It got me thinking about all the things I had read and just accepted because I know nothing about them. This is our information landscape, for better or worse.

        Why should it be any different in a role-playing scenario? These platforms are built to motivate engagement, and people love an emotional story, so that is what’s presented to us. If we loved true stories more, we would get them instead. I don’t think there’s any malice intended; we’re getting what we want because morons love their feels over their knowledge. It’s the reason the Americans have Trump and Elon and antivax. These people inhabit social media, but it is the last thing they should turn to for truth because they are dumb as a sack of rocks and are getting played, shorty.

        • Flagstaff@programming.dev · +2 · edited · 6 hours ago

          Sure, this is unfortunately our disinformation landscape now, but regardless, is deception okay (given how it’s always intentional by definition)?

          Going back to your original question, I do think people would be annoyed by bots and outed human liars alike; at least, I would be, since the bots are controlled by people anyway.

          As for my Reddit experience, I like /r/scams, among other similarly healthy and truly informative places to be.