You own a restaurant where you knowingly and intentionally sell poisoned food. A group of people bands together for a class-action lawsuit over the poisoning, and their lawyers post a sign at your restaurant telling everyone who was poisoned there to reach out for compensation.
Should you be allowed to take the sign down?
I know this answer doesn't pass the vibes test, but it's how the law actually works. If you post a sign on someone's property without permission, you'll get in trouble for trespassing, vandalism, or both.
So get a judge to issue an order. In a serious situation, they very well might.
It’s a lawsuit against the platform, with the platform's users as the damaged party. Cutting off the ability to reach those users should result in an immediate default judgment with maximum damages.
If they went back to operating as “friends and family feed providers” then letting them keep their 230 immunity would be easier to justify.
When they are making editorial decisions about what content to promote to you and what content to hide from you, then they should lose it.
What it does say is you aren't liable for something someone else wrote.
It doesn't create liability for things not covered by it.
Guess who decides the order and contents of Facebook feeds? Facebook does. So they wouldn't be liable for someone writing a post saying "gas the jews", but they could still be liable for choosing to show it at the top of everyone's front page, if that placement was a choice, i.e. the front page was curated rather than chronological.
This is not how it works when you're found guilty of committing harm. Tobacco companies are a good example of this.
It's not just a Meta issue either.
You don't even have to invoke the idea that Meta is big enough to be regulated as a public utility for this to have broad precedent in favor of forcing a malicious actor to inform its victims that they might be entitled to a small fraction of their losses in compensation.
I get that the distinction matters a bit from time to time (court cases keep blurring the line in the US though), but:
1. With all the other shit that makes it through the filter, this was pretty clearly a targeted, strategic takedown rather than some sort of broad "we don't allow bad ads on the platform." Allowing "all ads" isn't the thing being argued; it's allowing "this ad."
2. The uncontroversial idea that "abusers shouldn't be allowed to deceive and gaslight their victims" strongly suggests this was a bad move on Meta's part if it was intentional. Maybe it shakes out fine for them legally in this particular instance, but as a society we routinely require companies and individuals to behave with more appearance of moral standing than this, which suggests that blocking this particular ad is over the line, and it's neither naive nor utopian to think so. Even if it's legally in the light-grey, it's an abuse of power worth talking about, and hopefully it inspires more people to leave their platform.
https://www.reuters.com/investigations/meta-is-earning-fortu...
Mine is that it could then well be required to do so by law. Companies are not individuals, so I don't think they are owed any freedoms beyond what best serves the utility they can provide.
Is their defence of Section 230 protections not in part rooted in that claim of impartiality?