This is an opinion and I believe it's wrong. And you just have to look at the statute to see why [1]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (2) Civil liability

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

"in good faith" is key here. Here's another opinion [2]:

> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

So far the Supreme Court has sidestepped this issue, despite cases reaching the appeals courts. Until the Supreme Court addresses it, none of us can say with any certainty what is and isn't protected.

[1]: https://www.law.cornell.edu/uscode/text/47/230

[2]: https://www.naag.org/attorney-general-journal/the-future-of-...

I don't expect that to work, but who knows. Editors "rank", curate, select, and present content to people, and have for a long time, and that has always been understood to be speech.

Remember, according to that link, Section 230 does not give platforms any new rights. It simply lets them end cases faster and more cheaply, cases that they would have won anyway on First Amendment grounds.
