• myliltoehurts@lemm.ee

    So they filled reddit with bot generated content, and now they’re selling back the same stuff likely to the company who generated most of it.

    At what point can we call an AI inbred?

            • BakerBagel@midwest.social

              But there were still bots making shit up back then. r/SubredditSimulator was pretty popular for a while, and repost and astroturfing bots were a problem for decades on Reddit.

              • FaceDeer@fedia.io

                Existing AIs such as ChatGPT were trained in part on that data, so obviously they’ve got ways to make it work. They filtered out some stuff, for example: the “glitch tokens” such as solidgoldmagikarp were evidence of that.

        • mint_tamas@lemmy.world

          That paper is yet to be peer reviewed or released. I think you are jumping to conclusions with that statement. How much can you dilute the data before it breaks again?

          • barsoap@lemm.ee

            That paper is yet to be peer reviewed or released.

            Never doing either (release as in submit to a journal) isn’t uncommon in maths, physics, and CS. That’s not to say it won’t be released, but it’s not a proper standard to measure papers by.

            I think you are jumping to conclusions with that statement. How much can you dilute the data before it breaks again?

            Quoth:

            If each linear model is instead fit to the generated targets of all the preceding linear models, i.e. data accumulate, then the test squared error has a finite upper bound, independent of the number of iterations. This suggests that data accumulation might be a robust solution for mitigating model collapse.

            Emphasis on “finite upper bound, independent of the number of iterations”, achieved by doing nothing more than keeping the non-synthetic data around each time you ingest new synthetic data. This is an empirical study, so of course it’s not proof; you’ll have to wait for the theorists to have their turn for that one. But it’s darn convincing and should henceforth be the null hypothesis.
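            To see why keeping the old data around behaves so differently from replacing it, here’s a toy, deterministic sketch (my own construction, not the paper’s actual setup): each generation fits a one-parameter linear model to targets produced by the previous model, with a small systematic bias standing in for estimation noise. Discarding old data lets the bias compound; keeping it all anchors the fit.

```python
# Toy illustration of replacement vs. accumulation under iterative
# training on synthetic data. Deterministic: a fixed bias stands in
# for the noise each generation of model-fitting would introduce.

TRUE_W, BIAS, GENERATIONS = 1.0, 0.1, 50
xs = [1.0, 2.0, 3.0]

def fit(points):
    """Least-squares slope through the origin: sum(x*y) / sum(x^2)."""
    return sum(x * y for x, y in points) / sum(x * x for x, _ in points)

def run(accumulate: bool) -> float:
    data = [(x, TRUE_W * x) for x in xs]       # generation 0: real data
    w = fit(data)
    for _ in range(GENERATIONS):
        # Each generation's synthetic targets carry a small systematic bias.
        synthetic = [(x, (w + BIAS) * x) for x in xs]
        data = data + synthetic if accumulate else synthetic
        w = fit(data)
    return abs(w - TRUE_W)                     # final error vs. ground truth

err_replace, err_accumulate = run(False), run(True)
print(err_replace, err_accumulate)
```

With 50 generations, the replace-everything run drifts to an error of 5.0 (the bias compounds once per generation), while the accumulate run stays around 0.35, because the original real data keeps dragging the pooled fit back. That’s the qualitative gap the quoted passage describes.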

            Btw, did you know that no one ever published a proof (or at least hadn’t, last I checked) that reversing, determinising, reversing, and determinising a DFA again minimises it? Not formally published, yet widely accepted as true. Crazy, isn’t it? But, wait, no, people actually have proved it on a napkin. It’s just not interesting enough to write a paper about.

            • mint_tamas@lemmy.world

              Peer review, for all its flaws, is a good minimum bar before a paper is worth taking seriously.

              In your original comment you said that model collapse can be easily avoided with this technique, which is notably different from it being mitigated. I’m not saying that these findings are not useful, just that you are overselling them a bit with this wording.

              • barsoap@lemm.ee

                It was someone different who said that. There’s a chance the authors got some claim wrong because their maths and/or methodology is shoddy, but it’s a large and diverse set of authors, so that’s unlikely. Fraud in CS empirics is generally unheard of; I mean, what are you going to do when challenged, claim that the dog ate the program you ran to generate the data? There are shenanigans, the equivalent of p-hacking, especially in papers from commercial actors trying to sell stuff, but that’s not the case here either.

                CS academics generally submit papers to journals more because of publish-or-perish than for the additional value formal peer review offers. It’s on the internet, after all. By all means, if you spot something in the paper that’s wrong, then be right on the internet.

    • restingboredface@sh.itjust.works

      I wonder if OpenAI or any of the other firms have thought to put in any kind of stipulations about monitoring and moderating Reddit content to reduce AI-generated posts and the risk of model collapse.

      Anybody who’s looked at Reddit in the past two years especially has seen the impact of AI pretty clearly. If I were running OpenAI, I wouldn’t want that crap contaminating my models.

  • Blackmist@feddit.uk

    They always were.

    Only now they’ve agreed to pay Reddit for it. This is what their third party lockdown was really all about.

    They’re helping themselves to your Lemmy comments for free, as that’s just how it’s designed. If you post anything publicly anywhere, it’s getting slurped up by a bot somewhere.

    • just another dev@lemmy.my-box.dev

      I’m not a lawyer, but isn’t the reason they had to go to Reddit for permission that users hand over ownership to Reddit the moment they post? And since there’s no such clause on Lemmy, they’d have to ask the actual authors of the comments for permission instead?

      Mind you, I understand there’s no technical limitation that prevents bots from harvesting the data; I’m talking about the legality. After all, public does not equate to public domain.

      • GamingChairModel@lemmy.world

        users hand over ownership to reddit the moment you post

        Not ownership, just permission to copy and distribute freely, which is basically necessary to run a service like this, where user-submitted content is displayed.

        And since there’s no such clause on Lemmy, they’d have to ask the actual authors of the comments for permission instead?

        It’s more of a fuzzy area, but simply by posting on a federated service you’re agreeing to let that service copy and display your comments, and sync with other servers/instances to copy and display your comments to their users. It’s baked into the protocol, that your content will be copied automatically all over the internet.

        Does that imply a license to let software be run on that text? Does it matter what the software does with it, like display the content in a third party Mobile app? What about when it engages in text to speech or braille conversion for accessibility? Or index the page for a search engine? Does AI training make any difference at that point?

        The fact is, these services have APIs, and the APIs allow for the efficient copying and ingest of user-created information, with metadata about it, at scale. From a technical perspective, scraping is obviously easy. But from a copyright perspective, submitting your content into that technical reality is implicit permission to copy, maybe even for things like AI training.

      • Blackmist@feddit.uk

        Well the legality seems to be something you can ignore when you have billions of dollars in VC money to fritter around.

        It certainly didn’t stop them hoovering up music and movies, and the owners of those have a lot more power than any of us do.

        Tech is fast, the law is slow, and you can make many times the cost of lawyers and fines by the time anybody gets around to telling you to stop it.

      • Alimentar@lemm.ee

        Well, even if there were a legal argument, they wouldn’t care. Like Facebook and all the rest: they say they don’t share your data, but we all know that’s a lie.

      • Blackmist@feddit.uk

        Well, they’ve probably got filters that remove all that before it teaches their AI to swear. So you need to be more subtle, for 𝑓ucks sake.

        • metaStatic@kbin.social

          yep they fuckin got us

          but it’s not like our posts are safe here either. This is the world we live in now.

          • andrew@lemmy.stuart.fun

            But here the API is open, and I can run my own copy and train my own LLM, same as anyone else. It’s not one asshole who decides to whom, and for how much, he’ll sell the content we all gave him for free, so he can justify his $193 million paycheck.

          • the_doktor@lemmy.zip

            We have to either make AI illegal or make it accountable by requiring references to where it gets its data, so it can properly cite its sources.

        • db2@lemmy.world

          They don’t keep multiple versions though: edit it and then delete it, and it’s gone. They disabled all the tools to do it automatically though, so it’s manually or nothing now.

          • Coasting0942@reddthat.com

            Damn, you outsmarted them well-paid data jockeys. That’s assuming your edits change the actual comment and don’t simply hide the original.

            I could be an idiot too, though. Reddit might have been running this whole shit show on the original version of the database system and be upselling it to buyers.

          • SchmidtGenetics@lemmy.world

            They just reload a previously cached comment; it doesn’t matter how many times you edit or delete, it’s all logged and backed up.

        • Imgonnatrythis@sh.itjust.works

          It will be interesting to see if they stoop so low as to allow this. It probably wouldn’t be a super wise move, as most deleted posts are likely material that wouldn’t be great to train on anyway. My first thought when I read this was “well, not on MY posts”: I’m clean off of Reddit.

          • mox@lemmy.sdf.org

            There have already been reports of people being banned and finding their posts restored in response to their attempts to delete them.

          • FaceDeer@fedia.io

            There are torrents of complete Reddit comment archives available for any random person who wants them, and I’m sure Reddit itself has a comprehensive edit history of everything.

      • bobs_monkey@lemm.ee

        I used redact.dev to mass-edit all my comments; it worked pretty well. The problem is that if you mass-delete, they’ll restore your comments pretty quickly, but so far they haven’t reverted my edits.

      • Rolando@lemmy.world

        Back when I deleted all my comments, I was told I could claim to be in Europe and make a request citing the European law that Reddit has to follow. I think Reddit had a page where you could make the request, but of course it was hard to find.

    • micka190@lemmy.world

      Realistically, when you’re operating at Reddit’s scale, you’re probably keeping a history of each comment for analytics purposes.

  • Everythingispenguins@lemmy.world

    Some day historians will look back at this moment and determine that it was what caused ChatGPT to become horny and weird.

  • AlexWIWA@lemmy.ml

    LLMs have been training on Reddit posts since at least 2012. Nothing really new here.

  • filister@lemmy.world

    What makes you think that they are not scraping Lemmy too? The only reason they might not be is probably how niche Lemmy and the fediverse are, but I am sure there have been people already doing it.

    • Dr. Moose@lemmy.world

      The Fediverse is designed to do exactly that. It’s the free flow of information, which is a good thing. Don’t let corporations hijack this beautiful concept. We all want information to be free.

    • olympicyes@lemmy.world

      I’m not mad about the scraping. The LinkedIn scraping case pretty much cemented that nothing can be done to stop it. I’m just mad that I can no longer use the app of my choice. No such problem with Lemmy.

    • AlexWIWA@lemmy.ml

      Lemmy is even easier to scrape. Just set up your own instance, then read the database after ActivityPub pushes everything to you.
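      For anyone curious what “ActivityPub pushes everything to you” looks like in practice: a federating instance receives JSON activities and just has to pull the fields out. A minimal sketch (the URLs and the exact object shape are made up for illustration, but Create/Note with a content field is the real ActivityStreams vocabulary):

```python
import json

# A hypothetical "Create" activity of the kind a federated instance
# receives when a comment is posted on another instance.
activity_json = """
{
  "type": "Create",
  "actor": "https://lemmy.example/u/alice",
  "object": {
    "type": "Note",
    "id": "https://lemmy.example/comment/12345",
    "content": "<p>Hello fediverse</p>",
    "published": "2024-05-17T12:00:00Z"
  }
}
"""

def extract_comment(raw: str) -> dict:
    """Keep the fields a scraper would care about from a pushed activity."""
    activity = json.loads(raw)
    note = activity["object"]
    return {
        "author": activity["actor"],
        "id": note["id"],
        "content": note["content"],
        "published": note["published"],
    }

print(extract_comment(activity_json))
```

No scraping infrastructure needed at all: federation delivers the data, and your instance’s database is the archive.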

    • kia@lemmy.ca

      I’m sure they are, but Reddit probably provides these companies with lots of personalized metadata they collect just for them, which they may not get from Lemmy.

  • Possibly linux@lemmy.zip

    They’re now paying Reddit? I thought they could just scrape it for free.

    Also, you can not delete anything on the internet. Once something is public there will always be a copy somewhere.

    • Fetus@lemmy.world

      Scraping a website at the scale they are talking about isn’t really viable. You need access to the API so that you can make very targeted requests.

      This is why reddit changed their API pricing and screwed over everyone using third party apps. They can make more money selling access to LLM trainers than they could from having millions of people using apps that rely on the API.
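      The “targeted requests” point is really about pagination: an API hands you a cursor so each request fetches only what you haven’t seen yet, instead of re-downloading whole pages of HTML. A sketch of that control flow (the endpoint here is faked with a dict; a real scraper would make one HTTP call per page):

```python
# Cursor-based pagination against a hypothetical comment API.
def fetch_all(fetch_page):
    """Drain a paginated endpoint; fetch_page(cursor) -> (items, next_cursor)."""
    cursor, out = None, []
    while True:
        items, cursor = fetch_page(cursor)
        out.extend(items)
        if cursor is None:          # server signals there are no more pages
            return out

# Stand-in for the real HTTP call, just to show the control flow:
# page None holds items [1, 2] and points at page "a", which is the last page.
PAGES = {None: ([1, 2], "a"), "a": ([3], None)}
print(fetch_all(PAGES.get))  # -> [1, 2, 3]
```

With an API you pay one cheap request per page of new data; without it you re-fetch and re-parse entire rendered pages, which is exactly the cost asymmetry Reddit monetized.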

      • Dr. Moose@lemmy.world

        Scraping at scale is actually cheaper than buying API access. It’s a massive rising market; try googling “web scraping service” and you’ll find hundreds of services that provide an API to scrape any public web page, bypassing the blocks for you and rendering all of the JavaScript.

        • BatrickPateman@lemmy.world

          Scraping is nice for static content, no doubt. But I wonder at what point it’s easier to request changes to a developing thread via the API than to request the whole page, with all its nested content, over and over to find the new answers in there.

          • Dr. Moose@lemmy.world

            Following a developing thread is a very tiny use case, I’d imagine, and even then you can just scrape the backend API that the public page uses and get the same results as from the private API.

    • micka190@lemmy.world

      There’s actually legal precedent against scraping a website through unofficial channels, even if the information is public: if you scrape a website and hinder its ability to operate, it falls under “virtual trespassing”.

      I’m assuming it would be even worse now that everyone is using the cloud, since scraping their site would cause a noticeable increase in resource usage (and thus directly cost them more money in cloud usage fees).

      It’s why APIs are such a big deal. They provide you with an official, controlled, entry point to a platform’s data.

      • Dr. Moose@lemmy.world

        It’s the opposite! There’s legal precedent that scraping public data is 100% legal in the US.

        There are a few countries where scraping is illegal, though, like Japan and China. European countries often also have “database protection” laws that forbid replicating a public database through scraping or any other means, but that only applies to a big chunk of the overall database. There are also personally identifiable information (PII) protection laws that forbid storing people’s data without their consent (like the GDPR).

        Source: I work with anti-bot tech, and we have to explain this to almost every customer who wants to “sue the web scrapers”: if LinkedIn couldn’t do it, you’re not suing anyone.

        • General_Effort@lemmy.world

          Refreshing to see a post on this topic that has its facts straight.

          EU copyright allows a machine-readable opt-out from AI training (unless the training is for scientific purposes). I guess that’s what’s behind these deals: it means AI companies will have to pay off Reddit and the other platforms for access to the EU market. Or, more accurately, EU customers will have to pay Reddit and the other platforms for access to AIs.
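          For what it’s worth, the closest thing to a machine-readable opt-out in the wild today is a robots.txt entry naming the training crawlers. GPTBot (OpenAI) and Google-Extended (Google’s AI-training token) are real user-agent tokens; whether this form satisfies the EU requirement is, as far as I know, untested:

```
# Disallow AI-training crawlers while still allowing normal search indexing.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Of course this only binds crawlers that choose to honor it, which is rather the point of the whole thread.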

  • Dark_Dragon@lemmy.dbzer0.com

    Reddit banned me by IP address or something. Whatever new account I create gets banned within 24 hours, even if I don’t upvote a single post or comment. I tried ten new accounts, all banned, all with new email addresses. So I gave up, randomly changed all my good comments, and shifted permanently to Lemmy. I miss some of the most niche communities, but not enough to return to Reddit.

    Edit: I didn’t even commit any rule violation. I just took too long to move off a modded Reddit app, and I only logged in once. That doesn’t warrant blocking me from ever using Reddit.

    • dumblederp@lemmy.world

      If you use a VPN and a disposable email, you can get about a week out of an account if you need to comment, though it’ll get quietly shadowbanned.

    • macrocephalic@lemmy.world

      All future AI will have autocorrect errors and will look like no one read it before hitting enter. You’re welcome.

  • Dr. Moose@lemmy.world

    This form of propaganda is my pet peeve. It’s not “your posts”: as soon as you put something out in public, you don’t get to have your cake and eat it too. It’s out there; you shared it. Don’t share it if you don’t want humanity to ingest and use it.

    • Dataprolet@lemmy.dbzer0.com

      You’re technically right, but nobody anticipated, and therefore nobody agreed to, their posts being used for training LLMs.

    • Azzu@lemm.ee

      It’s not about it being used to train AI. It’s about the AI either not being open source, so I don’t get access to it (i.e. it doesn’t benefit me), or Reddit getting paid for my comments (i.e. also not benefiting me).

      If this AI training would get me or the public access to the AI, or I would be paid for my comments instead of Reddit, I’d be fine with it.

      • Dr. Moose@lemmy.world

        Yeah, but you don’t get to choose that. You give away that right as soon as you participate in public discourse. It’s all or nothing: either it’s public for everyone or for no one.

        Don’t get me wrong, Reddit is a bitch, but I think people here want to cut off their noses to spite their faces. It’s much more important to have a free flow of information than to fuck Reddit.

        My fear is that people will vote in some really dumb rules to spite AI and accidentally restrict the free flow of information.

        • Azzu@lemm.ee

          That’s how it is currently, and maybe that’s also your opinion. But that doesn’t mean it has to be like that in a society. It’s your opinion that everything public can be taken private at any time (by training proprietary, private AI on it), but we can decide as a society that that’s not how we want to do things. We can require anything built on public data to be public as well.

          And yeah, I kinda do get to choose that. In a democratic society, whatever the public (i.e. including me) decides, goes. Of course, if there are people like you who don’t want stuff trained on public data to be required to be public, democracy also works in the sense that we don’t get that, as is currently the case.

    • gravitas_deficiency@sh.itjust.works

      Hate to break it to you, but the time to do that was over a year ago, and even then it was never really a sure thing; we don’t really know what their backup policies are for that stuff.

      This is what the former power user community that made an exodus from Reddit roughly a year ago has been trying to communicate, but a ton of people here seem to enjoy keeping their toes in the water over there, with rather predictable consequences (literally, the post we’re commenting on).

      All that said: I am very much looking forward to the absolutely titanic lawsuit around GDPR I’m sure is in the works over this.

      • AlexWIWA@lemmy.ml

        Not even a year ago: Reddit has been used for training data for well over a decade. We used it in an AI class in 2012.

    • snownyte@kbin.social

      Wish I had known this beforehand for the several accounts I’ve had on that shit-ass place.

      Then again, it’s likely that Reddit has everything archived, because Spez is one of them data-farmers, like Mark. Nothing is truly deleted from their sites; it’s just archived.

      There’s been lots of evidence proving this: people have dug up old comments, even down to who posted them originally. And even if your account is deleted, your comment body is still there. I know because I’ve deleted an account and checked back on where I’d commented before.

  • Kyrgizion@lemmy.world

    I didn’t delete my comments before nuking my account, but I’m pretty sure the grand majority were shitposts containing ample amounts of smut, gore, and other ridiculous over-the-top shit. So I consider this a win.