This is an opinion, and I believe it's wrong. You just have to look at the statute to see why [1]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (2) Civil liability

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

"in good faith" is key here. Here's another opinion [2]:

> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

So far the Supreme Court has sidestepped this issue, despite cases reaching the appeals courts. Until the Supreme Court addresses it, none of us can say with any certainty what is and isn't protected.

[1]: https://www.law.cornell.edu/uscode/text/47/230

[2]: https://www.naag.org/attorney-general-journal/the-future-of-...

reply
I don't expect that to work, but who knows. Editors have "ranked", curated, selected, and presented content to people for a long time, and it's always been understood to be speech.

Remember, according to that link, 230 does not give platforms any new rights. It simply lets them end cases faster and more cheaply, cases they would have won anyway on First Amendment grounds.

reply
Why do you believe that "Section 230 differentiates between publishers and platforms"?
reply
Section 230(c)(1) [1]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (1) Treatment of publisher or speaker

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This is a protection for being a platform for third-party (including user-generated) content.

Some more discussion on this distinction [2]:

> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.

It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".

By the way, my view here isn't a fringe view [3]:

> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

This is exactly my view.

[1]: https://www.law.cornell.edu/uscode/text/47/230

[2]: https://bipartisanpolicy.org/article/section-230-online-plat...

[3]: https://www.naag.org/attorney-general-journal/the-future-of-...

reply
This isn't good reasoning. According to your analysis, any website, ISP, or hosting provider that uses a firewall or Cloudflare is by definition a publisher, since they algorithmically shape traffic to prohibit suspicious IP addresses from accessing content.
reply
Not at all. Intent matters. Is Cloudflare trying to shape user behavior or push a particular position or piece of content? No.

Just look at the Cox decision from the Supreme Court today. As long as the (Internet) service isn't designed for or sold as a method of downloading copyrighted material, the provider isn't responsible for any actions by its users. In other words, intent matters.

I find that technical people really get stuck on this aspect of the law. They look for technical compliance or an absolute standard of proof, because we're used to doing things like proving mathematically that something works. But the law is subjective and holistic: it looks at the totality of the evidence and applies a subjective test.

And intent here is fairly easy to establish. We could take an issue like Russia, look at all the posts and submissions, and see how many views and interactions those posts got. We then divide them into pro-Russia and pro-Ukraine groups and establish a clear bias. We would also look at any modifications made to the algorithm to achieve those goals.
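As a toy illustration of the kind of tally described above: everything here (the posts, the stance labels, the numbers) is invented, and this is only a sketch of the comparison, not a real methodology for establishing legal intent.

```python
# Hypothetical sketch: group posts by stance and total up their engagement,
# then compare the totals to see whether one side is amplified far more.
# All data below is made up for illustration.
posts = [
    {"stance": "pro-russia", "views": 1_200_000, "interactions": 45_000},
    {"stance": "pro-ukraine", "views": 150_000, "interactions": 4_000},
    {"stance": "pro-russia", "views": 800_000, "interactions": 30_000},
    {"stance": "pro-ukraine", "views": 200_000, "interactions": 5_000},
]

totals = {}
for post in posts:
    bucket = totals.setdefault(post["stance"], {"views": 0, "interactions": 0})
    bucket["views"] += post["views"]
    bucket["interactions"] += post["interactions"]

# A lopsided ratio is the kind of "clear bias" signal the comment describes.
ratio = totals["pro-russia"]["views"] / totals["pro-ukraine"]["views"]
print(totals)
print(f"view ratio (pro-russia : pro-ukraine) = {ratio:.1f}")
```

Of course, a court would weigh this kind of tally alongside everything else (internal communications, algorithm changes, and so on), not treat it as proof on its own.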

This is nothing like Cloudflare DDoS protection.

reply