This is exactly how we got here, though. Technology is not passive. It changes incentives, procedures, and ideas, and it shapes the world. If we don't structurally limit what it's used for and how, then we are not in control, no matter what our personal choices are.
You’ll probably get one of three outcomes: regulatory capture by monopolies, self-dealing by bureaucrats looking to enrich themselves or gain power, or regulatory capture by self-absorbed ideologues who halt all progress or force it down some ideologically approved path.
In none of those scenarios is anything aligned with the best interest of the people.
AI isn't like that. One problem is that it's widely misunderstood at this point. "AI" is not "intelligence". It's intelligence-adjacent, and something like LLMs resembles only part of our psyche...the subconscious faculty that allows us to form sentences without really thinking about it.
At any rate, I have to agree with most of the points the blog author brings up.
Hate to break it to you but it's always been this way, and it was easier in the past when information was so much more expensive to distribute.
The whale's been eaten now. The broader Internet is mostly not trustworthy or convenient, and the information is not even very plentiful.
People are retreating, and will continue to retreat, into high-trust zones: in-person networks, product recommendations from real friends, and closed group chats.
It's not the end of the world, but things have changed. We'll have to put more work into finding information than we're used to.
I’ve had the same thoughts, but if you look deeper, it all circles back to what we already had: (open, transparent) public institutions, society, and government by the people. The foundation wasn't the problem; the environment was.
Along the way, social media noise, engagement-optimisation, and Kardashian-style "entertainment news" infecting real news created an attention economy where, no matter how scandalous you are, attention can be minted into dollars. That is what polluted our infosphere and led to the lack of trust.
Now nobody trusts these previously mentioned public entities any more - sometimes due to state-actor or ad-tech disinformation, and sometimes for good reason, as when the poisoned public allowed these 80s-telemarketer-style political weirdos and their cronies to take over public administration.
Ironically, this was the main reason LLMs were introduced in the first place: not to benefit the poor, but to widen the gap between the rich and the poor.
Folks in the "now" have always had a tendency to cling to their fictions as if they were truth for whatever reason; like nationalist exceptionalism, racial superiority, or religions rooted in "othering", etc. Humans seem to have an innate desire to fool themselves and trust in things they should not. Perhaps it's simply a sort of existential coping mechanism of living in a cold, unforgiving reality. We seek the comfort of lies.
Organizing around groups of trust tends to lead to factionalism and conflict. Knowing and trusting are, sadly, very different things in our species.
You are sounding the clarion call for community and cooperation, and it will not work. Not because people don’t want community or better things, but because incentives make the world go round.
The choice between making some money at the cost of polluting the information commons is no choice at all. That degradation of the commons means no one can escape. No community you form, no group you build, dodges the fallout when someone decides to set fire to shared infrastructure.
We are moving into the dark forest era of the information economy. As models improve, inference costs drop, and capacity increases, the primary organism creating content online will be the bot.
Instead of building communities of people, build communities based on rules of engagement. Participants - be they bots or humans - must follow prescribed rules of conflict and debate.
That way it doesn’t matter if you are talking to a machine or a person. All that matters is that the rules were followed.