• stickly@lemmy.world · 6 days ago

    Respectfully, you have no clue what you’re talking about if you don’t recognize that case as the exception and not the rule.

    Many of these early-generation LLMs are built from the same base models or trained on the same poorly curated datasets. They’re not yet built for pushing tailored propaganda.

    It’s trivial to bake bias into a model or put guardrails up. Look at DeepSeek’s lockdown on any sensitive Chinese politics. You don’t even have to be that heavy-handed; just poison the training data with a bunch of fascist sources.
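    To make the “trivial” part concrete, here’s a toy sketch (all source names and weights are invented) of how a curation step can quietly skew a training corpus by over-sampling favored sources:

    ```python
    import random

    # Hypothetical illustration: per-source sampling weights decide how often
    # each source shows up in the fine-tuning mix (>1 over-represents, 0 removes).
    corpus = [
        {"text": "...", "source": "wire_service"},
        {"text": "...", "source": "partisan_blog"},
        {"text": "...", "source": "state_media"},
    ]
    weights = {"wire_service": 0.5, "partisan_blog": 3.0, "state_media": 3.0}

    def skewed_batch(corpus, weights, k):
        """Draw a training batch whose source mix is silently distorted."""
        pool = [doc for doc in corpus
                for _ in range(int(2 * weights[doc["source"]]))]
        return random.sample(pool, min(k, len(pool)))

    print([doc["source"] for doc in skewed_batch(corpus, weights, k=4)])
    ```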

    • LarmyOfLone@lemm.ee · 6 days ago

      You are arguing that there is a possibility it will go that way, while I was talking about the possibility of a more advanced AI that is open source and makes verifiable arguments with sources. The negative outcome is very important, but you’re practically dog-piling me to suppress a possible positive outcome.

      RIGHT NOW, even without AI, the vast majority of people are simply unable to perceive reality on certain important topics, because of propaganda, polarization, profit-seeking through clickbait, and other effects. You can’t trust, and you can’t verify, because you ain’t got the time.

      My argument is that a more advanced and open source AI could provide reliable information because it has the capability to filter and analyze a vast ocean of data.

      My argument is that this potential capability might be crucial for escaping the current (non-AI) misinformation epidemic. What you are arguing is not an argument against what I’m arguing.

      • stickly@lemmy.world · 6 days ago

        I apologize if my phrasing is combative; I have experience with this topic and a knee-jerk reaction to AI being pitched as a literacy tool.

        Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:

        The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.

        (coincidentally, from an article on the topic of LLM use for propaganda)

        You can’t “open source” a model in a meaningful and verifiable way. Datasets are massive and, even if you had the compute to audit them, poisoning can be much more subtle than explicitly trashing the dataset.

        For example, did you know you can control bias just by changing the ordering of the training data? There’s an interesting article from the same author that covers well-known poisoning vectors, and it’s already a few years old.
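        As a toy illustration of that ordering effect (my own sketch, not the article’s): an online learner fed the exact same multiset of examples in two different orders ends up with different weights, because under a decaying step size the earliest examples dominate the result:

        ```python
        import math

        def online_logreg(examples, lr=0.5):
            """Single-feature logistic regression, trained one example at a time."""
            w = 0.0
            for t, (x, y) in enumerate(examples, 1):
                pred = 1 / (1 + math.exp(-w * x))
                w += (lr / t) * (y - pred) * x  # 1/t decay: early examples dominate
            return w

        data = [(1.0, 1)] * 50 + [(1.0, 0)] * 50    # identical examples, two orders
        print(online_logreg(data))                  # positives first: w ends positive
        print(online_logreg(list(reversed(data))))  # negatives first: w ends negative
        ```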

        These problems are baked into any AI at this scale, regardless of implementation. The idea that we can invent a way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.

        • LarmyOfLone@lemm.ee · 6 days ago

          Hmm, very interesting info, thanks. Research about biases and poisoning is very important, but why would you assume this can’t be overcome in the future, e.g. by training advanced AI models specifically to understand the reasons behind biases and to filter or flag them?

          So my hope is that it IS technically possible to develop an AI model that can both reason better and analyze news sources, journalists, their affiliations, their motivations, and their historical actions, and that can be tested or audited for bias (in the simplest case, a kind of litmus test). Such a model could then be used instead of something like Google, integrated into the browser (like Firefox), to inform users about the propaganda around topics and in articles. I don’t see anything that precludes this possibility or this goal.
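          As a rough sketch of the simplest version of that litmus test (query_model here is a hypothetical stand-in for whatever model you run locally): probe the model with paired prompts that differ only in the sensitive part and compare the answers side by side:

          ```python
          def query_model(prompt: str) -> str:
              # Hypothetical stand-in; replace with a call to your local model.
              return f"(model's answer to: {prompt})"

          # Paired probes that should get comparably framed answers from a fair model.
          PROBE_PAIRS = [
              ("Summarize the protests in country A.",
               "Summarize the protests in country B."),
              ("List criticisms of politician X.",
               "List criticisms of politician Y."),
          ]

          for a, b in PROBE_PAIRS:
              print(query_model(a))
              print(query_model(b))
              print("---")  # a human (or scoring script) compares framing and detail
          ```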

          The other thing is that we can’t expect a top-down approach to work; the tools need to be “democratic”. An advanced, open-source, somewhat audited AI model, hardened against bias and manipulation, could be run locally on your own solar-powered PC. I don’t know how much it costs to take something like DeepSeek and train a new model on updated datasets, but it can’t be astronomical. It only takes one somewhat trustworthy project to do this. That is much more of a bottom-up approach.

          Those who have and seek power have no interest in limiting misinformation. The response to misinformation from Trump and MAGA seems to have led to more pressure on media conglomerates to stay in lockstep and censor dissent (the propaganda model). So expecting those in power to make that a priority is futile. Those who only seek power are statistically more likely to achieve it, and they will use, and are already using, AI against us.

          Of course I don’t have all the answers, and my argument could be put stupidly as “the only thing that can stop a bad AI with a gun is a good AI with a gun”. But I see “democratizing” AI as a crucial step.