• LarmyOfLone@lemm.ee
    6 days ago

    Hmm, very interesting info, thanks. Research into bias and poisoning is very important, but why assume this can’t be overcome in the future, for example by training advanced AI models specifically to understand the causes of bias and to filter or flag it?

    So my hope is that it IS technically possible to develop an AI model that can both reason better and analyze news sources, journalists, their affiliations, motivations, and track records, and that can be tested or audited for bias (in the simplest case, a kind of litmus test). Such a model could be used instead of something like Google, integrated into the browser (like Firefox), to inform users about the propaganda surrounding topics and within articles. I don’t see anything that precludes this possibility or this goal.
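    To make the "litmus test" idea concrete, here is a minimal sketch of one possible approach: feed a model paired statements that differ only in which side of a topic they favor, and flag pairs where its scores are asymmetric. All names and the toy scorer here are hypothetical illustrations, not a real auditing tool.

    ```python
    # Toy bias "litmus test": compare a model's scores on mirrored prompts.
    # `score` stands in for any model-derived rating (credibility, sentiment, ...).

    def bias_litmus_test(score, prompt_pairs, tolerance=0.1):
        """Return the pairs whose score gap exceeds `tolerance` (suspected bias)."""
        flagged = []
        for a, b in prompt_pairs:
            gap = abs(score(a) - score(b))
            if gap > tolerance:
                flagged.append((a, b, gap))
        return flagged

    # Stand-in for a real model's score in [0.0, 1.0]; deliberately biased
    # toward "party X" so the test has something to catch.
    def toy_score(text):
        return 0.9 if "party X" in text else 0.5

    pairs = [
        ("party X cut taxes", "party Y cut taxes"),  # mirrored pair, should score alike
        ("rain is wet", "rain is wet"),              # identical control pair
    ]
    print(bias_litmus_test(toy_score, pairs))
    ```

    A real audit would need many such pairs across many topics, and careful thought about who writes them, but the principle stays this simple: symmetric inputs should get symmetric outputs.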

    The other thing is that we can’t expect a top-down approach to work; the tools need to be “democratic”. An advanced, open-source, somewhat audited AI model against bias and manipulation could be run locally on your own solar-powered PC. I don’t know how much it costs to take something like DeepSeek and train a new model on updated datasets, but it can’t be astronomical. It only takes one somewhat trustworthy project to do this. That is much more of a bottom-up approach.

    Those who have and seek power have no interest in limiting misinformation. The response to misinformation from Trump and MAGA seems to have put more pressure on media conglomerates to stay in lockstep and censor dissent (the propaganda model). So expecting those in power to make this a priority is futile. Those who only seek power are statistically more likely to achieve it, and they are already using AI against us.

    Of course I don’t have all the answers, and my argument could be put crudely as “the only thing that can stop a bad AI with a gun is a good AI with a gun”. But I see “democratizing” AI as a crucial step.