A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.
more than 1,700 comments made by AI bots
The bots made more than 1,700 comments over the course of several months and at times pretended to be a “rape victim,” a “Black man” who was opposed to the Black Lives Matter movement, someone who “work[s] at a domestic violence shelter,” and a bot who suggested that specific types of criminals should not be rehabilitated.
The experiment was revealed over the weekend in a post by moderators of the r/changemyview subreddit, which has more than 3.8 million subscribers. In the post, the moderators said they were unaware of the experiment while it was going on and only learned of it when the researchers disclosed it, after the experiment had already been run. The moderators told users they “have a right to know about this experiment,” and that posters in the subreddit had been subject to “psychological manipulation” by the bots.
Ah, to freely give away my data to a corporation whilst acting as a lab rat…
… well how many deltas did it get?
I hate it as an experiment on principle but c’mon how well did it do?
Found it, 137 deltas.
Impressive, a bit worrisome
This is what should get departments defunded, not DEI 🫠
That place is mostly bots anyway, I’m sure no one noticed.
It’s like the MMO Erenshor. Every player other than yourself is a bot pretending to be a human being.
Why you gotta call me out like that
I remember when my front page was nothing but r/changemyview for like a week, and I just unsubscribed from the subreddit completely because some of the questions and the sheer volume of hits felt like something fucky was going on. Guess I was right.
Same thing happened to the relationshipsadvice and aita subreddits: the number of posts suddenly skyrocketed with incredibly long, overly detailed stories that smacked of LLM-generated content.
To be fair, I can see how keeping it “unauthorized” was necessary to collect genuine data that isn’t poisoned by people intentionally trying to skew the sample.
This is straight up unethical, at best.
It can be both.
Both
Ok maybe I don’t miss reddit after all…
It’d be pretty trivial to do the same here: 1,700 or so comments over “several months” is fewer than 25 a day. No need even for bot posting; have the LLM ingest the feed, spit out the posts, and have the intern make accounts and post them.
Well, at least this place is exclusively where people who got banned from reddit end up, so they’ll struggle to find us…
Hey, I didn’t get banned (yet), I just prefer the vibe here.
I created my account to have one ready, then when I got banned, it was easy to hop over
It’ll be interesting to find out if this research got IRB approval prior to the study
I hope for the sake of their careers they did
Given the highlighted positions the bots took, I’m not sure we should worry about their specific careers ending.
I remember the good old days when the NSA had to physically fly an airplane over the border and spray one of our towns with chemicals if they wanted to run psychological experiments on people!
…posters in the subreddit had been subject to “psychological manipulation”…
There are no users there. It’s just bots talking to other bots.
If you think this is shocking, just wait for the big reveal about r/StanfordBasement.
What difference does it make if you’re talking to a bot? We never meet our interlocutors anyway. Would these people have the same reaction if it were revealed they were talking to a role-playing person? Because I’m pretty sure that’s already happened many times over.
What kind of question is this? A role-playing person talking to others who don’t know they’re role-playing is deceitful, so are you saying there’s absolutely nothing wrong with deception?
It’s generally expected outside of /r/jokes, /r/twosentencehorror, etc. that the people you’re talking to are telling the truth as they know it, or else why talk at all?