I'm actually more at odds with HN than many people might be, because I think the lies surrounding covid and the censorship were absolutely wrong, and after things like that platforms could genuinely lay claim to being unfairly targeted. But you can tell Zuck doesn't actually care, because he immediately started doing the same thing.
Does Zuckerberg have some kind of clinical condition where he just can't imagine how other people might see him?
Sure, this will slow down the personal injury lawyers finding clients, but it won't stop them; meanwhile it's more ammunition for Facebook's enemies to use against it.
It is one thing to do shady business; it is another thing to incriminate yourself. If you were involved with weed and somebody sent you an email asking if they could come around and pick up a Q.P. next Saturday, I'd expect you to give the person a correction in person: they shouldn't do that again.
Not to say you should be like Epstein, but he and the people he corresponded with had some sense, so there is very little evidence of criminal activity in millions of emails.
At Facebook, on the other hand, people were constantly sending emails about things that could just as easily have been left as "dark matter": unexplained and minimally documented decisions. But no, it's like that M.F. Doom song "Rapp Snitch Knishes", like a bunch of children with no common sense at all.
Yeah, it's called having-too-much-money-to-careitis.
Cory Doctorow describes Mark Zuckerberg's and Elon Musk's attitude toward other people as billionaire solipsism [1].
[1] https://pluralistic.net/2026/01/05/fisher-price-steering-whe...
Not sure he cares. He's literally got hundreds of billions of dollars to his name, and the corporation he founded is worth trillions.
Nah, he just doesn't care. Nothing he does will ever get people (en masse, onesie, twosies don't matter) to stop using Meta products.
People can/will complain about him forever, but shitty people will continue to help him build things, and shitty people will continue to use them.
https://trends.google.com/explore?q=facebook&date=all&geo=US
And maybe Zuck doesn't think he can do anything about it. There are different theories, but I like this one:
-- originally you would put some imagination and elbow grease into using Facebook and get some attention in return, which made it very attractive and interesting to people around 2010
-- then it found a business model which was dependent on your not being able to use imagination and elbow grease to get attention which made it less interesting in general but still somewhat interesting because now you could put cash into the slot machine and get cash out
-- over time they lowered the payout of the slot machine, which made the game less interesting and more dependent on 100% profitable scams that could function no matter how bad the payout was; people lose trust in the platform and stop engaging with ads, and real advertisers don't want to be seen next to scam ads (lest they be seen as scams themselves), which further lowers the payout and makes the game less interesting over time
-- and now they won't even take your money... so who cares?
I think it is more like radioactive decay than say, cheese going bad, but maybe I'm wrong. You can't smell the radioactive decay!
The game is rigged, and the same goes for Instagram and WhatsApp. (Yeah, companies get acquired, but WhatsApp's Acton was very explicit: "delete Facebook." He was pissed off at what happened. Also, ever tried deleting Facebook? Almost impossible; more network effects.)
Once I started using the social features on my MQ3, I found it really was Zuck's worst nightmare. I met all these nice retirees who were fun to play Beat Saber with and who would go on cruises and post YouTube links to pano videos they took of the ship.
I suspect the emotional maturity of most billionaires is at the toddler level or below, and I mean that quite seriously and literally.
Kafkaism is natural and organic.
I believe we need to strengthen 230, but with an added caveat: affected platform owners must stop gaming the algorithms, and it must require user-driven curation. Let me curate my own feed; stop shoving shit in front of my eyes. When you do so, you're making heavy editorial decisions and should be open to liability.
Companies would not be liable if they have proper ID of the person who submitted the content and can provide it to a plaintiff. If they have not made a good-faith effort to know who submitted the content (by taking ID, not just an email address), then they take responsibility for the submitted content themselves.
Which means sites that have responsible moderation can still allow anonymous contributions.
The real problem is the inherent asymmetry of legal battles, where the wealthiest can fight forever with endless motions and have near-total impunity while a legal action would basically nuke a normal person's life. Not to mention the fact that an international border can often make this whole conversation moot.
Anonymous contributions, right up until somebody compromises the service? Given the quantity of password hash thefts, I suspect we'd see even more identity thefts this way.
I can't imagine using any service that asks for ID, except perhaps from the well-established giants, so an exception for identifiability would effectively be a gigantic moat granted to the largest internet companies to keep out competition. Anything like that would need to be paired with massive anti-trust changes, as well as perhaps government take-over of the giants as utilities, none of which sounds very appealing...
That said, don't take any of my rambling as discouragement. Your type of thinking is exactly what we need: massive amounts of policy discussion. Your suggestion is very innovative.
One of my issues is the lack of liability in practice. The poster is technically liable, but they're anonymous, behind proxies, foreign, etc., and so unaccountable. The result is people being harmed online without recourse.
These companies should have a duty to know who their users are.
I don't think you can in the US. Maybe elsewhere, but in the US AFAIK the author is responsible for the content they publish, not the bookstores carrying the books.
>This means that if a platform knows it hosts illegal or defamatory content and doesn't take it down, they aren't liable and any legal cases against them will get thrown out due to 230.
No, it doesn't. Section 230 doesn't allow sites to host illegal content (and of course only "legality" within the framework of US law matters).
All it says is that the liability for user-posted content lies with the user posting the content, not the platform hosting it. Which seems appropriate to me.
They analyze the video posts on Instagram. If they detect that a video has even a small amount of commercial value, they classify it as branded content, and you need to pay to get it promoted.