Distancing yourself from temptations is an effective and proven way to break an addiction; a program that constantly tries to get you to relapse is not a good feature. Imagine a fridge that constantly restocks itself with beer. That would be terrible for alcoholics, yet people would just say "just don't drink the beer?" even though it's a real problem with an easy fix.
“The algorithm” of social media is the opposite.
Why not let users opt into a less addictive algorithm? Older algorithms, like plain chronological feeds, were less addictive, so it's clearly not impossible to do, and many users would want it.
Consider air travel in the present day. Ticketing at essentially every airline breaks down into premium tickets that are dramatically expensive but offer comfortable seats, and economy tickets that are cramped and seem to impose new indignities every season. What could be the harm in legislation that changed that menu?
The harm would be fewer people able to travel: fewer young people taking their first trip to experience the other side of the world, fewer families visiting grandma, etc.
As much as people claim to hate the air travel experience, the tickets get snapped up, most of them chosen strictly on price, with nonstop routing the next biggest factor. That gives us a gauge of how much people actually hate air travel: they don't.
This doesn't mean airlines should face no regulation; it doesn't mean monopoly practices aren't harmful to happiness; it doesn't mean addictions don't drive people to make bad choices; it doesn't mean a lot of things.
I'm just trying to get you to see that subtle but significant harm to human thriving can easily come from regulations.
And about whitelisting, I honestly don’t think you’re comparing apples to apples. The point of the algorithm is dynamically recommending new content. It’s about discovery.
Governments could simply say: "if you are a social content platform with more than XX million users, you must offer these options on your recommendation algorithm: X, Y, Z." It is that easy.
> And about whitelisting, I honestly don’t think you’re comparing apples to apples. The point of the algorithm is dynamically recommending new content. It’s about discovery.
And some people want to turn off that pushed discovery and only be recommended videos from the set of channels they subscribe to. They still want to watch some TikTok videos; they just don't want the algorithm trying to push bad content on them.
You're right that you can't avoid such an algorithm when searching for new content, but I don't see why it has to drive the content pushed onto you when you haven't asked for anything new.
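To be concrete, a subscriptions-only mode is just a filter in front of the existing ranker. A minimal sketch of the idea in Python (all names and types here are hypothetical, not any platform's actual API):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Video:
        channel_id: str
        posted_at: datetime

    def subscriptions_only_feed(candidates: list[Video],
                                subscribed: set[str],
                                limit: int = 20) -> list[Video]:
        # Whitelist mode: keep only videos from channels the user
        # subscribed to, then order by recency instead of whatever
        # engagement-maximizing score the default ranker uses.
        allowed = [v for v in candidates if v.channel_id in subscribed]
        allowed.sort(key=lambda v: v.posted_at, reverse=True)
        return allowed[:limit]

The discovery algorithm can still run for explicit searches; it just stops being the thing that decides what lands in front of you unasked.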
https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does
Maybe it would be cleaner to say that a system has no purpose (at least not until it is sentient); instead, it has behaviors. Then one can observe that the purpose of a system's designers or maintainers simply happens to be at odds with (or, as AI safety researchers would say, "out of alignment with") the behavior of the system.
That all presupposes, of course, that one can accurately deduce the purposes of the designers/maintainers. In the case of TikTok, I'd bet we are all in agreement that their purpose is nothing more nor less than maximal value extraction: people wishing to express themselves with videos, multiplied against an audience of people who wish to view videos, multiplied again against advertisers who want to insert propaganda into eyeballs.