The ban follows a long-running battle between Brazil’s supreme court and Elon Musk. It shows the country will no longer tolerate tech giants ignoring the rule of law.
An argument being made in another social media case (involving TikTok) is that algorithmic feeds of other users’ content are effectively new content, created by the platform. So if Twitter does anything other than chronological sorting, it could be considered to be making its own, deliberately produced content, since it’s now in control of what you see and when you see it. Depending on how the TikTok argument gets interpreted in the courts, it could affect how Twitter can operate in the future.
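To make the distinction concrete, here’s a rough sketch in Python. Everything in it (the scoring function, the weights) is invented for illustration and is not any platform’s actual algorithm; the point is just the difference between “newest first” and a feed where the platform picks the ranking:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    created_at: datetime  # timezone-aware timestamps assumed
    likes: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Pure reverse-chronological ordering: no editorial judgment
    # beyond "newest first".
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def ranked_feed(posts: list[Post], engagement_weight: float = 2.0) -> list[Post]:
    # Invented scoring: boost liked posts, decay older ones. Once the
    # platform chooses weights like these, it is deciding what you see
    # and when you see it, which is the crux of the argument above.
    now = datetime.now(timezone.utc)
    def score(p: Post) -> float:
        age_hours = (now - p.created_at).total_seconds() / 3600
        return engagement_weight * p.likes - age_hours
    return sorted(posts, key=score, reverse=True)
```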
Let’s say this goes through: how is a company going to prove it is not using an “algorithmic feed” unless it open-sources its code and/or provides some public interface to test and validate feed content?
Plus, even without an “algorithmic feed”, couldn’t some third party use bots to steer a simple chronological or upvote/like-based feed? Then those third parties, via contracts and agreements, would be the ones manipulating the content rather than the social media owner itself.
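Coming back to the validation question, here’s what a public test interface could look like, for the sake of argument. To be clear, nothing like this exists today; the endpoint URL, response shape, and field names are all hypothetical:

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical public audit endpoint; invented for illustration only.
FEED_AUDIT_URL = "https://api.example-platform.com/v1/feed-audit"

def feed_is_chronological(user_id: str) -> bool:
    """Fetch a user's feed from the (hypothetical) audit endpoint and
    check that items come back in reverse-chronological order."""
    resp = requests.get(FEED_AUDIT_URL, params={"user": user_id}, timeout=10)
    resp.raise_for_status()
    timestamps = [item["created_at"] for item in resp.json()["items"]]
    # Assumes ISO 8601 UTC timestamps, which sort lexicographically;
    # non-increasing timestamps means "newest first" with no reordering.
    return all(a >= b for a, b in zip(timestamps, timestamps[1:]))
```

Even then, this only proves the audit endpoint returns items in order; proving that what it returns matches the feed real users are actually served is the harder problem.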
> unless it open-sources its code and/or provides some public interface to test and validate feed content
This honestly seems like a good idea. I think one of the ways to mitigate the harm of algorithmically driven content feeds is openness and transparency.
> An argument being made in another social media case (involving TikTok) is that algorithmic feeds of other users’ content are effectively new content, created by the platform. So if Twitter does anything other than chronological sorting, it could be considered to be making its own, deliberately produced content, since it’s now in control of what you see and when you see it. Depending on how the TikTok argument gets interpreted in the courts, it could affect how Twitter can operate in the future.
It’s certainly arguable that the algorithm constitutes an editorial process, which would open them up to libel law and liability.
Fair point.
That argument is being made in the USA, not the UK.
https://www.motherjones.com/politics/2024/08/federal-court-tiktok-230-liable-blackout-challenge-nylah-anderson-death/
> Let’s say this goes through: how is a company going to prove it is not using an “algorithmic feed” unless it open-sources its code and/or provides some public interface to test and validate feed content?
> Plus, even without an “algorithmic feed”, couldn’t some third party use bots to steer a simple chronological or upvote/like-based feed? Then those third parties, via contracts and agreements, would be the ones manipulating the content rather than the social media owner itself.
> This honestly seems like a good idea. I think one of the ways to mitigate the harm of algorithmically driven content feeds is openness and transparency.
Well, for end users and any regulators it’s a great idea. But the companies aren’t going to go along with it.
Then they must be held liable for what they allow to spread on their platforms.