Only someone who's lost the plot (or arrived late) would summarily conflate Barlow's 1996 Declaration with "one of those sovereign citizen TikToks where someone in traffic court is claiming diplomatic immunity under maritime law". The article itself has fallen victim to the weaponized co-optation whose framework it describes.
The author says "I remember thinking it was genius when I first read it. I was young enough [...]", attributing that to youthful impressionability, but it's more likely that they had lost something along the way. Or rather, it was stolen from them and they didn't even realize.
The Declaration was right; it was just naively optimistic, severely underestimated its opponent, and incorrectly presumed digital natives would automatically be on the "right" side. Now we are where we are. And this is just the beginning of the pendulum's counterswing.
We're far from achieving this goal, and we underestimated our opponents by a lot. But it would be foolish to blame the Barlows of the world instead of blaming the tyrants and corporate opportunists that go to great lengths [0] to sabotage and interfere.
[0] https://en.wikipedia.org/wiki/Edward_Snowden#Revelations
For public feeds, you seem to assume that only the propagandists can leverage bots effectively, which is the right assumption for the centrally-controlled social media platforms of today. But if we make a platform that is just some protocols that can't be controlled by anyone, you and I would be able to spin up anti-propaganda bots to pwn the propaganda bots without fear of repercussion. Anyone can try to push public opinion in a specific direction, but someone else will simply go the opposite way. There would be no moderator or algorithm to artificially boost one type of noise over another, so we would actually get a less corrupted feed that accurately represents what people are thinking, because the opposing noise cancels out. And if you want to customize the feed, we could make client-side filters and algorithms. There could be an open-source algorithm called "Hacker News" that you can just download and install into your open-source social media client.
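A client-side filter-plus-ranking plugin like the hypothetical "Hacker News" algorithm could be a small, auditable function. A minimal sketch in Python (all names and the exact scoring formula here are illustrative, not any real platform's implementation):

```python
from dataclasses import dataclass
import time

@dataclass
class Post:
    text: str
    points: int
    created_at: float  # unix timestamp

def hn_style_rank(post: Post, now: float, gravity: float = 1.8) -> float:
    """Classic time-decayed score: votes divided by age raised to a 'gravity' power."""
    age_hours = max((now - post.created_at) / 3600.0, 0.0)
    return (post.points - 1) / (age_hours + 2) ** gravity

def build_feed(posts, blocked_words=()):
    """Client-side pipeline: drop posts matching local word filters, then rank.
    The user owns both steps; no server-side moderator is involved."""
    now = time.time()
    visible = [p for p in posts
               if not any(w in p.text.lower() for w in blocked_words)]
    return sorted(visible, key=lambda p: hn_style_rank(p, now), reverse=True)
```

Because the filter and ranking both run in the client, swapping in a different algorithm is just installing a different function.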
As for keeping the powerful in check, don't forget that we've kind of lost equality before the law at this point, as shown by the Epstein saga. If we try to remove anonymity from the Internet right now, it will only be used to surveil regular citizens but not the people we need to keep in check. I would happily support a law that selectively enforces the other way around, though: let's mandate real identity for all government personnel online and expose their Polymarket accounts.
The people who need anonymity are the people who would be punished for saying things people in power don't like.
I don’t think that’s true, unfortunately. There are lots of cases of major propaganda accounts being found to be foreign actors, and pretty much nothing happened to them.
Even if it were so, it is still a win. Without anonymity there is no liberty for the weak at all. And so, for that liberty, we must endure all the crap.
But I also don’t expect that removing anonymity would in itself improve the current world. Things are at a point where people living in democracies are openly advocating for the destruction of every single liberal ideal. Sure, that’s in part astroturfed by anonymous accounts, but way too many people couldn’t care less if their real identity were linked to those claims.
And since technological anonymity and privacy are clearly moving us towards fascism, it's not a net good anymore.
There were, and will be, opposing voices even under the deepest fascism.
But more broadly, totalitarianism is the better term: the whole of society under the total control of one ideology. That can be fascism, but other ideologies strive for it too.
But yes, allowing anonymous voices is one way to counter it.
Looks like we're talking about different fascisms.
I don't want to offend you, it is just that your phrase is like straight from "1984" (or from Russia today) - "war is peace" and the likes.
I think you're completely ignoring the premise of the article's argument (as I understand it). The failure of the declaration was a feature, not a flaw. In other words, it was never about the freedom of the individual but the freedom of large corporations.
In the end governments (even totalitarian ones, in a limited sense) are vehicles of the people. Unregulated spaces will favor the person with the most resources and thus lead to more concentration of power. It's essentially an information-centric continuation of Reaganomics. The article argues that this could have been (and was, e.g. by Winner) anticipated in the 90s, and that in fact this was the intention of Barlow and co.
Google is back to pushing remote attestation (ie WEI), Apple has already had it for quite some time. "AI" is a great Schelling point excuse for capital structures to collude rather than compete, whether it's demanding identification / "system integrity" (aka computational disenfranchisement) for routine Web tasks or simply making computing hardware unaffordable (and thus even less practical for most people, whether it's GPUs, RAM, or RPis for IoT projects).
There are some silver linings like AI codegen empowering individuals to solve their own problems, and/or really go to town hacking/polishing their libre project for others to use.
But at best I see a future 5-10 years down the road where I've got a few totally-pwnt corporate-government-approved devices for accomplishing basic tasks (with whatever I/O devices are cost-effective from the subset we're allowed to use), and then my own independent network that cannot do much of what's required to interface with (ie exist in) wider society.
The 1990s vision of computing was a bicycle - or car - for the mind. It was libertarian in the sense that if you had a device it would empower you to get where you wanted to go more quickly.
And the rhetoric around it was very much about personal exploration on a new and exciting frontier.
The 2020s vision is more like a totalitarian transport network where you don't own the vehicle, you don't own the network, there's constant propaganda telling you how to structure your journey to the standard destinations, and deviation is becoming increasingly impossible.
The device is just an access port to the network. It's dumbed down, so even if you understand how it works you can't do much with it. And as AI becomes more prevalent, your ability to understand that will diminish further.
So the end result is very plausibly a state where you're completely reliant on AI to do anything. And AI is owned by the pseudo-state oligopoly - the same oligopoly which runs the propaganda networks that sell you ads, hype selected content while suppressing other content, and generally try to influence your behaviour.
It's the complete opposite of the original vision.
Will consumer AI fix this? Probably not. Even if the hardware keeps improving - debatable - a personal device is never going to be able to compete, in any sense, with an international network of data centres.
And this is where the geopolitical aspect comes in, and where an increasing number of studies call this 'Digital Authoritarianism', with the stated goal being a nation or company (or both in cooperation) keeping control of the population, the narrative, and access to information.
An overview of the literature and studies on the subject: https://www.tandfonline.com/doi/full/10.1080/02681102.2024.2...
A recent study that implicitly investigates the role of corporations in the trend: Digital Authoritarianism: from state control to algorithmic despotism https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5117399&... It's a bit long(ish), 29 pages (the last 10 are references), but worth a read.
The device becomes a magic artifact, like a palantír. Many fantasy stories read as if there were (or still are, somewhere out there) great people who made all the magical stuff, while the people in the story have no idea how that stuff works.
That is possibly the way our civilization is going. Especially once the datacenters are in space and only the "dumb" Starlink-like terminals remain on Earth.
There's one way to deal with this, but I doubt it'll be popular in these parts: Communal ownership of the means of production.
Don't use the oligarchy's AI. Your personal hardware is going to be too weak. But together, we can own our own server farms.
The quote is a direct reference to a core tenet of Marxist theory, socialism, and communism.
Historically, communal ownership at scale has almost always been implemented via a centralized state, which has tended to gravitate towards authoritarianism. The Soviet Union and East Germany, and many other countries along those lines, didn't really fit the "hippy co-op" image very well.
If AI can code, and can empower individuals to do it on a local device, it is already smart enough to educate the masses in matters of their self-interest, such as freedom and solidarity.
I don't think the powers will be able to gatekeep it. There might be some grief but overall human freedom will prevail.
They don’t know what they could have, or why the new captcha is funny, thus they can never come up with a prompt that leads to them being educated on the matter. They would have to know that they don’t know, and since there is no public discourse for such matters in their Facebook timelines, their thinly veiled right-wing digital news outlets, and their Viber and WhatsApp chats, they will never know that they don’t know.
This gets referred to as the "moderation issue" because its true cause is too inconvenient.
Algorithms that promote engagement also tend to promote conflict. The major services want people spending more time on their service looking at ads, so they promote engagement and therefore conflict.
The cause of it isn't the decentralized internet, it's the centralized corporate feed.
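As a toy illustration of that engagement-to-conflict mechanism (invented numbers and weights, not any platform's real formula): if replies are weighted higher than likes because arguments keep people on-site looking at ads, the most divisive post wins the top slot.

```python
# Toy model: three posts with made-up engagement stats.
posts = [
    {"text": "cute cat photo",      "likes": 50, "replies": 5},
    {"text": "divisive hot take",   "likes": 20, "replies": 80},
    {"text": "useful how-to guide", "likes": 40, "replies": 10},
]

def predicted_engagement(post, reply_weight=3.0):
    # Replies (arguments included) are weighted above likes because they
    # keep users on the service longer -- the assumed ad-revenue proxy.
    return post["likes"] + reply_weight * post["replies"]

# Ranking purely by predicted engagement puts the conflict bait on top:
# 20 + 3*80 = 260, versus 70 for the guide and 65 for the cat photo.
engagement_feed = sorted(posts, key=predicted_engagement, reverse=True)
```

No one at the hypothetical service has to *want* conflict; the objective function selects for it on its own.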
Lower-case internet is ok as a tool for making spaces. But I reckon humane-ness, or really, virtue, is a habit built from within. And the habits the Internet rewards are generally the wrong ones.
Honestly I think it mostly self selected based on who had the technical ability to participate, especially at that time.
For me, the "but" is that I would rather have someone be mean to me than have a corporation collecting all of my data and using it to try and advertise at me.
I never saw this as surprising because cyber-libertarianism reads like Gnosticism to me. Even in the sentence you quoted there's already the subtext of being left out "more human than your government" etc. (odd choice of possessive for a man who was campaign coordinator for Dick Cheney)
The people who were into this stuff tended to have an unhealthy relationship with their physical bodies and physical communities. They felt excluded, and tended to have an Ender's Game psychology of feeling both inferior and superior at the same time (an extremely bad combination for people with power), equipped with the secret cyber-knowledge that would give them access to some new space nobody else knew of. I was never surprised that you got Peter Thiel and Palantir out of this instead of a digital utopia.
I read Ender’s Game about 20 years ago, but I don’t remember that being a theme in the novel. Could you elaborate on what you mean here?
But in short: Ender is the archetypal victim-hero. He's always bullied, tormented, and abused, but also stronger, more intelligent, and more emotionally deep, and yet he always remains the victim who, even when inflicting planet-scale violence, remains ostensibly innocent. This is also the stereotypical young-adult anime protagonist, or the fantasy of the bullied high school nerd.
And that really is the psychology you'll find with a lot of folks of the 90s libertarian internet circle in particular those who amassed a lot of money and power.
In this account the U.S. State Department's Internet Freedom Agenda (which many of my friends and colleagues have been directly funded by) is about destabilizing other countries, while Russian or Chinese spies in turn relish American Internet freedoms because they can stir up conflicts here.
I have never endorsed this view but I've run into forms of it again and again and again. Adjacent to it is the idea that some of our prior social harmony was due to a more controlled or at least more homogeneous media landscape.
("Information and Communication Technology" does not make sense here)
I think "Information and communications technology (ICT)":
> https://en.wikipedia.org/w/index.php?title=Information_and_c...
That’s not some kind of crypto denunciation against cosmopolitan diversity, but it is what it is and I do think there’s a there, there.
The idea you mentioned is the mark of an authoritarian who considers expressed dissent a sign of weakness instead of a crucible for the strength of ideas. That they literally cannot conceive of a purpose of it other than propaganda or division because they see democracy as inherently a weakness and they think that a 'strong man' is needed to create unity.
It is a similar tell to bigots who cite a 'homogeneous society' as being inherently socially cohesive or responsible for low crime, because they cannot comprehend a cohesion based on something other than ethnic unity.
Or reflexive deceivers promising to 'restore a sense of trust', because the thought of actually being trustworthy never even comes to mind, other than as something to promise as a lie. I have seen that one from officials responding to corruption or abuse scandals far too many times. A cousin to that is expressing fear of 'turning into a low-trust society', where they promise parades of horribles to try to poison the well against people rightfully distrusting them.
For starters, that Putin was right when he was calling the internet a CIA project back in 2010, 2011, thereabouts.
Later edit: From 2013 [1]:
> Barlow: Let me give you an example: I have been advising the CIA and NSA for many years, trying to get them to use open sources of information. If the objective is really to find out what is going on, the best way to do this, is by trading on the information market where you give information to get information.
[1] https://www.huffpost.com/entry/i-want-to-tear-down-the-v_b_4...
When the corporation that runs as a planned economy with only a few unaccountable leaders at the top has as much power as any other existing government, what makes them any different in terms of morality or “goodness”?
I have never gotten a coherent answer, and a few times I’ve received violence in response to the question (also a lol, as one of the violent ones was also the one who introduced me to the concept of the NAP).
Libertarians seem incapable of rationality and are about as convincing as any true believer of a religion you don’t believe in as an outside observer.
Corporations work on markets, with customers, and need to dynamically adapt to the demands of the customers. Therefore the concept of planned economy goes out the window.
Leaders in a corporation are accountable to the share holders, so again, what you say makes no sense.
Morality relates to value carriers, in the form of conscious human beings; it has no relevance to "the corporation", so for ethical questions you ask the person.
I know you will never research this, but for others who are interested in the only ethical and realistic system to govern society, libertarianism, two great places to start are Johan Norberg's The Capitalist Manifesto and Ludwig von Mises' Liberalism.
Many libertarians and liberals believe that it's the freedoms one has that make a system anti-hierarchical.
But as you point out, when you have absolute freedom in a market-based society, you eventually end up with intensely deep hierarchies.
In other words you are free to do everything but there is no guarantee you can do anything - even the most basic things like get food or shelter. And most end up with the short side of the stick.
I disagree. By meaningful real-world standards, the average Internet space is in fact extremely humane and polite. People will bring up the random exceptions where groups of people absolutely hate one another and these hates eventually spill over into online spaces, but that's what these are, limited exceptions. By and large, the average online interaction is potentially far more reflective of desirable human values than the ways complete strangers usually interact offline. Perhaps this is a matter of pure self-selection among a tiny niche of especially intellectually-minded folks, but even if this was the case it would still be creating an affordance that wasn't there before.
At the same time there's the Cambridge Analytica/SCL strand where a corporation literally sells election fixing services that rely on data gathered from social media accounts.
To be fair these are all extensions of political and media trends that already existed, and which online tech could amplify by some orders of magnitude.
Even so. The damage is very real.
One standard technique is to use attack bots to find a wedge issue and weaponise it by raising the temperature from both sides.
This can easily be automated now, so we're well past the point where literal humanity is the most important element.
Real living standards have been stagnant or falling since 1971. We've been making time up by working more, buying plastic and filling our free time with distractions.
Blaming the internet for 50 years of policy is both stupid and pointless. In short: it's what governments want you to do instead of asking why your grandfather could buy a house at 20.
Propaganda is only effective when it's true.
Junk messages trying to use "wedge issues" for attention are nothing new, they existed in the 1990s too. You underestimate just how transparent they are, even on modern-day social media which in many ways is a highly favorable environment to such tactics.