This could be a tool that works across the entire internet, but in this case I’m mostly thinking about platforms like Lemmy, Reddit, Twitter, Instagram etc. I’m not necessarily advocating for such a thing; I’m mostly just thinking out loud.
What I’m imagining is something like a truly competent AI assistant that filters out content based on your preferences. As content filtering by keywords and blocking users/communities is quite a blunt instrument, this would be the surgical alternative that lets you be extremely specific about what you want filtered out.
Some examples of the kinds of filters you could set:
- No political threads. Applies only to threads, not comments. Filters out memes as well, based on the content of the media.
- No political content whatsoever. Also hides political comments in non-political threads.
- No right/left-wing politics. Self-explanatory.
- No right/left-wing politics, with the exception of good-faith arguments. Filters out trolls and provocateurs but still exposes you to good-faith arguments from the other side.
- No mean, hateful or snide comments. Self-explanatory.
- No karma-fishing comments. Filters out comments with no real content.
- No content from users who have said/done (something) in the past. Analyzes their post history and acts accordingly; for example, hides posts from people who have said mean things before.
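A minimal sketch of how layered rules like these could compose. All the names here are hypothetical, and the labels (which in a real system would come from some upstream AI classifier) are supplied by hand just to illustrate the thread-level vs. comment-level distinction:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    kind: str                          # "thread" or "comment"
    text: str
    labels: set = field(default_factory=set)  # labels a classifier would attach

# Each rule is a predicate: return True to hide the item.
def no_political_threads(item: Item) -> bool:
    return item.kind == "thread" and "political" in item.labels

def no_political_anywhere(item: Item) -> bool:
    return "political" in item.labels

def apply_filters(items, rules):
    return [i for i in items if not any(rule(i) for rule in rules)]

feed = [
    Item("thread", "Election results megathread", {"political"}),
    Item("comment", "Hot take on the election", {"political"}),
    Item("thread", "Best hiking trails near Oslo", set()),
]

# With only the thread-level rule, the political comment survives
# but the political thread is hidden.
visible = apply_filters(feed, [no_political_threads])
```

Swapping in `no_political_anywhere` would hide the comment too, which is the difference between the first two rules above.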
Now, obviously, with a tool like this you could build yourself the perfect echo chamber where you’re never exposed to new ideas, which is probably not optimal, but it’s also not obvious to me why this would be a bad thing if it’s something you want. There’s way too much content for you to pay attention to all of it anyway, so why not just optimize your feed to only contain stuff you’re interested in? With a tool like this you could quite easily take a platform that’s an absolute dumpster fire, like Twitter or Reddit, clean it up, and all of a sudden it’s usable again. This could also discourage certain types of behaviour online, because trolls, for example, could no longer reach the people they want to troll.
The problem with filtering political content is that people are pretty bad at identifying their blind spots. It’s a pretty common trap: what some people want as “non-political” conversation is actually just conversation that doesn’t challenge their political views.
The same people will then conclude that the tool is making “political” choices in what it’s hiding from them.
You’re also focusing a lot on left vs right. What about “third way” or centrist politics? What about fringe groups that people don’t really consider left or right? How does this work with different countries having different ideas of what’s left and what’s right (Overton window)? For example, I’d say the US doesn’t have a left and right, it has a centre-right and a far-right party.
Finally, plenty of people are happy in their echo chambers, despite echo chambers being terrible for a person. Challenging and reflecting on the way you fundamentally think about the world (basically what politics boils down to) is hard and sometimes unpleasant. It’s easy to see how many people take the easy road, doubling down on existing opinions and seeking out echo chambers.
I used the term “truly competent AI” because obviously something like “no politics” is quite a broad guideline, and the AI then has to figure out what you actually mean by it. An incompetent AI would also filter out discussions about things like vegan food, but obviously that’s not what you meant. This is just a thought experiment about what it would be like if it actually worked as intended.
What I’m imagining is something that also studies your own behaviour on the platform to learn what you’re actually into, and if it’s not sure it could either ask you or do some sort of A/B testing to see what you engage with and what that engagement was like. This would make it possible to have a platform where the unfiltered experience is a true wild west, but which you then optimize to your liking yourself.
I think the idea that we need to be more efficient in consuming content is quite dystopian. I agree that we should be trying to reduce not only echo chambers but content consumption as a whole. As a chronically online person in cybersecurity, I do not see a tenable future where humans continue to consume content at the rate they are. There needs to be a reduction in internet integration and online consumption. You’re right that there’s too much content for one person to reasonably sift through; the reasonable decision, then, is to reduce the amount of content rather than try to create a sieve. The amount of information that we try to consume on the internet is dangerous and harmful to us, and is destroying the foundations of society. I’m not some traditionalist nut or conspiracy theorist; it’s just easy to see that the benefits we get from globalized information sharing are very heavily offset by the constant influx of shit. I think people should have easy and free access to information and knowledge; I also think the current hierarchy of the internet was a mistake and that the majority of people do not need, and in fact should not have, computers.
Also what you’re asking for is an incredibly invasive AI that is used for massive data collection and aggregation to track and serve you the content that is most addictive for you. I see no reasonable world where that is a good thing. It is only a good idea in our current world, which I do not believe is reasonable.
Personally, the way I think about it is that since I’m going to spend a certain amount of time online anyway, why not at least enjoy that time. For example, I like discussing/debating ideas on platforms like Lemmy and Reddit, but too often I find myself wasting time with someone who’s not doing it in good faith; they’re not open to having their mind changed, and they’re not putting any effort into trying to change my mind either. They just want to dunk on what they deem a stupid idea, and more often than not they’re performing for their imagined audience. It would probably be better for us both if I didn’t engage with people like that to begin with. I really don’t need more than one decent person in order to have an interesting discussion. If there are 20 others shouting insults into the void because my content filtering has blocked them, I think that’s better than me relying on sheer willpower to resist the urge to reply to these people.
Idk why people expect viewers to want their ideas challenged when they just want a general idea of the happenings in the world.
Like, I want to know when the Houthis have hit another ship, causing an environmental disaster in the Red Sea (fertilizer). I DO NOT want to know about some bullshit law being passed in a no-name jurisdiction by some no-name judge in a no-name state that will get overturned in a month. It doesn’t affect me.
attabit.com is a good example of this AI summary done right.
But let me make myself clear. Nobody wants to be subjected to your ideology and echo chambers are fine. It’s not my responsibility to open up my attention to whatever it is you think is socially important at the time.
Echo chambers.
We already do that online. But yes, this would make it worse.
Filtering out opposing viewpoints like “no right/left politics” leaves people woefully uninformed and partisan.
People are already exposed to the opposing side and we’re more divided than ever. It’s not obvious to me that 1. most people would want to put themselves in a perfect echo chamber like that and 2. if they do, that it would be a bad thing and should be forbidden.
deleted by creator
My reasons are probably atypical, but there’s no way that I’d use it.
My issues are not competence or the fact that it’s AI; it’s transparency. I want to know exactly which rules are being used to curate my posts and comments, and I don’t trust other people or a filtering algorithm to do it. (Except if I’m the one creating said filtering algorithm out of simple rules).
The AI would definitely develop an implicit bias, as it has in many implementations already.
Plus, while I understand the motivation, it’s good to be exposed to dissenting opinions now and then.
We should be working to decrease echo chambers, not facilitate them.
OP is talking on hypothetical grounds of a “competent AI”. As such, let’s say that “competence” includes the ability to avoid and/or offset biases.
Assuming that was possible, I would probably still train mine to remove only extremist political views on both sides, but leave in dissenting but reasonable material.
But if I’m training it, how is it any different than me just skipping articles I don’t want to read?
Even if said hypothetical AI would require training instead of simply telling it what you want to remove, it would be still useful because you could train it once and use it forever. (I’d still not use it.)
weird how if i click your username, it takes me right to your comments…
almost as if you’re a dumb troll…
also not obvious to me why this would be a bad thing if it’s something you want.
Because as a democratic society we rely on consensus or compromise, which is only possible through understanding the other side. Also, never having your ideas challenged can’t really be useful for personal growth - but that’s obviously a choice.
Truly competent AI
Ya nope.
Why not?
They exist, but an LLM is only as good as your prompt engineering, and most people don’t know how to do that.
While it does sound like a good idea, I feel like most people would use it to make an echo chamber.
We got enough people using adblockers that companies started blurring the line between ads and content.
I think the line between memes and content has already been blurred quite a bit. Politics and content too.
There are probably newer packages, but crm114 is a trainable command-line text classifier that uses Markov chains that I’ve used before.
You could probably get a corpus of political discussion and train it to detect that.
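Since I can’t vouch for crm114’s exact command-line syntax, here’s the same “train on a labelled corpus, then classify” idea as a toy naive Bayes classifier over words rather than crm114’s actual Markov-chain model. Everything here (class name, labels, training sentences) is made up for illustration:

```python
import math
from collections import Counter

class TinyClassifier:
    """Minimal two-class trainable text classifier (naive Bayes over words)."""
    def __init__(self):
        self.counts = {"political": Counter(), "other": Counter()}
        self.totals = {"political": 0, "other": 0}

    def learn(self, label, text):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def classify(self, text):
        scores = {}
        for label in self.counts:
            score = 0.0
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero out the score
                p = (self.counts[label][w] + 1) / (self.totals[label] + 1)
                score += math.log(p)
            scores[label] = score
        return max(scores, key=scores.get)

clf = TinyClassifier()
clf.learn("political", "the senate passed the election bill today")
clf.learn("other", "new trail map for the hiking club weekend")
```

With a real corpus of political discussion as training data (and a lot more of it), the same train-then-classify loop is what a filter like the one OP describes would hang off.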
While not exactly the same, BlueSky (a twitter spinoff that has recently started federating, but not with ActivityPub so only federating with other BlueSky instances) has customizable feeds, so you pick the algorithm that suits you (or I assume make your own).
Microblogging isn’t my thing so I don’t know much about it, I just read the BlueSky (regular sized) blog post linked on Lemmy the other day.
I hate AI “discovery” feeds. IMO the best way to curate my feed is to explicitly follow and blocklist things I don’t want to see.
Instead of trying to shoehorn AI into doing this, we should let content creators tag their own posts. Then we can filter out specific tags.
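Tag-based filtering like that is trivially simple compared to AI classification. A sketch, with made-up post data, of what the reader-side blocklist would look like:

```python
# Creator-supplied tags instead of AI inference: each post carries the
# tags its author chose, and the reader keeps a set of blocked tags.
posts = [
    {"title": "Budget debate recap", "tags": {"politics", "news"}},
    {"title": "Sourdough starter tips", "tags": {"cooking"}},
]

blocked_tags = {"politics"}

def visible(posts, blocked):
    # Hide any post whose tag set intersects the blocklist.
    return [p for p in posts if not (p["tags"] & blocked)]

feed = visible(posts, blocked_tags)
```

The obvious trade-off is that it only works if creators tag honestly, which is exactly the ambiguity problem the next point raises.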
I especially don’t want an AI that tries to understand “political” posts, because what counts as political is ambiguous and confusing.
Is someone coming out as “they/them” a political statement? Does the person running the AI agree with you?
Does the person running the AI have your enjoyment of the platform as a priority, or just your engagement?
You’re imagining an incompetent AI. That is not what this thread is about. You don’t hate AI discovery feeds; you hate bad AI discovery feeds. This thought experiment is about one that actually does what it’s supposed to. If you don’t believe such a thing could exist, then fair enough, but that’s an entirely different discussion.
Fair enough.
Although, to be honest, even if such a magical AI did exist, I’d still be uncomfortable using it. I’m the kind of person who wants to understand and know how things work, and why it chose to show me what it did. But that’s probably just me.
Oh absolutely. I feel the same way. I’m sure there are ways around this. For example, you could from time to time see what has been filtered out (and why), and if there’s something you have no issue with, you could let it know and thus refine your feed even further. Alternatively, you could set it so that when it’s not sure it defaults to allowing the content, and then by downvoting, for example, you give it more information about your specific preferences.
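The review loop described here, where hidden items stay inspectable and approving one teaches the filter an exception, could be sketched like this (class and rule names are hypothetical):

```python
class FilterWithReview:
    """Filter that keeps hidden items in a review queue the user can audit."""
    def __init__(self, rules):
        self.rules = rules            # rule name -> predicate (True = hide)
        self.review_queue = []        # (item, name of rule that hid it)
        self.exceptions = set()       # items the user reviewed and un-hid

    def feed(self, items):
        shown = []
        for item in items:
            if item in self.exceptions:
                shown.append(item)    # user overrode the filter for this one
                continue
            hit = next((name for name, rule in self.rules.items()
                        if rule(item)), None)
            if hit:
                self.review_queue.append((item, hit))  # hidden, but auditable
            else:
                shown.append(item)
        return shown

    def approve(self, item):
        # User reviewed a hidden item and had no issue with it.
        self.exceptions.add(item)

f = FilterWithReview({"no politics": lambda s: "election" in s})
first = f.feed(["election thread", "cat pictures"])
f.approve("election thread")
second = f.feed(["election thread", "cat pictures"])
```

Recording the rule name alongside each hidden item is what answers the transparency objection above: you can always see which rule fired and why.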
Consider another market: businesses looking to identify current topics of interests / discussions that are relevant to what they are doing.
The AI could summarise the posts and offer suggestions on what to post, when to post, where to post, etc., with references to the posts / threads that they’re basing this information on.
This is all bundled as an online marketing tool, targeted towards small businesses focused on growth.
deleted by creator
It’s free and open source. It works perfectly and doesn’t have any of the issues you listed.
My question is not about any of that. It’s about whether such a feature would ultimately cause more good or harm. It’s a philosophical question above all else.
deleted by creator
Alright. This question clearly is not for you lol
deleted by creator
The question is clearly stated in the title
“Free to the user” means the user is the product and someone is making money pushing ads to users or selling user data.
wrong
deleted by creator
it literally takes one counterexample to prove you wrong, but i’ll start enumerating them if you like
audacity
https://www.audacityteam.org/
deleted by creator
Well if it’s AI based I don’t want it.
Why?
Because currently it’s a marketing buzzword for a not-even-half-matured technology that relies on privacy-invading data scraping. And even with that, the results are very mixed and can’t be relied on. So I’d rather not have a tool with a 30-50% false-positive rate censoring my content.
What you’re describing is an incompetent AI, and that is not what this thought experiment is about. It’s about one that actually does what it’s intended for and does it really well. If you don’t believe such an AI could exist, then fair enough, but that is not what this thread is about.
AI doesn’t do anything perfectly, it’s all based on statistical trends. The most control we could have over our feeds would be chronological displays of the stuff we choose to follow.
Sounds good. Go build it.
no mean, hateful, or snide comments
I think your idea actually sucks. Too bad my comment would be removed on your fairy-tale website.