The solution is really easy: section 230 should not apply if there's a recommendation algorithm involved.
Treat the company as a traditional publisher,
because that's what they are: they're editorialising by selecting the content,
vs., say, the old-style Facebook wall (a raw feed from a user's friends), which should still qualify for section 230.
In other words, the filtering that keeps Nazis, child predators, doxing, etc. off your favorite platform only exists because of section 230.
Now, one could argue that the biggest platforms (Meta, YouTube, etc.) can, at this point, afford the cost of full editorial responsibility, but repealing section 230 under this logic only serves to put up a barrier to entry for any smaller competitor that might dislodge these platforms from their high, and lucrative, perch. I used to believe that the better fix would be to amend section 230 to shield filtering/removal but not selective promotion; TikTok, however, has shown (rather cleverly) that selective filtering/removal can be just as effective as selective promotion: remove everything except what you want boosted, and what remains is, in effect, promoted.
I am kind of rooting for the AI slop: the status quo is horrific, and maybe the AI slop cancer will put social media out of its misery.
That's not where it stops.