> *enough people believe it will happen and act accordingly*

Here comes my favorite notion of "epistemic takeover".

A crude form: make everybody believe that you have already won.

A refined form: make everybody believe that everybody else believes that you have already won. That is, even if someone doubts that you have won, they believe that everyone else submits to you as the winner, and so they must act accordingly.

reply
This world where everybody’s very concerned with that “refined form” is annoying and exhausting. It causes discussions to become about speculative guesses about everybody else’s beliefs, not actual facts. In the end it breeds cynicism as “well yes, the belief is wrong, but everybody is stupid and believes it anyway,” becomes a stop-gap argument.

I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.

reply
IMO this is a symptom of the falling rate of profit, especially in the developed world. If truly productivity-enhancing investment is effectively dead (or, equivalently, there is so much paper wealth chasing a withering set of profitable investment opportunities), then capital's only game is to chase high valuations backed by future profits, which means playing the Keynesian beauty contest for keeps. This in turn means you must make ever-escalating claims of future profitability. Now, here we are in a world where multiple brand-name entrepreneurs are essentially saying that they are building the last investable technology ever, and getting people to believe it, because the alternative is to earn less than inflation on Procter and Gamble stock and never get to retire.

If outsiders could plausibly invest in China, some of this pressure could be dissipated for a while, but ultimately we need to order society on some basis that incentivizes dealing with practical problems instead of pushing paper around.

reply
Profit is a myth of epistemic collapse at this point. Productivity gains are also mythical and probably just anecdotal in the moment.
reply
What percentage of work would you say deals w/ actual problems these days?
reply
In a post-industrial economy there are no more economic problems, only liabilities. Surplus is felt as threat, especially when it's surplus human labor.

In today's economy disease and prison camps are increasingly profitable.

How do you think the investor portfolios that hold stocks in deathcare and privatized prison labor camps can further Accelerate their returns?

reply
Or just play into the fact that it's a Keynesian Beauty Contest [1]. Find the leverage in it and exploit it.

1. https://en.wikipedia.org/wiki/Keynesian_beauty_contest
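To make the contest concrete: here is a toy simulation of the classic p-beauty contest from the linked article, in which each player tries to guess p times the average guess. The best-response dynamics and all parameters below are my own illustrative assumptions, not anything claimed in the thread; the point is just that if everyone reasons about everyone else's reasoning, guesses collapse toward the Nash equilibrium of zero.

```python
# Toy p-beauty contest (p = 2/3): each round, players guess a number in
# [0, 100]; whoever is closest to p * (average guess) wins. If players
# best-respond to the previous round's average, the average guess decays
# geometrically toward 0, the unique Nash equilibrium.
import random

def simulate(players=100, p=2/3, rounds=10, seed=0):
    rng = random.Random(seed)
    guesses = [rng.uniform(0, 100) for _ in range(players)]
    history = []
    for _ in range(rounds):
        avg = sum(guesses) / len(guesses)
        history.append(avg)
        # Everyone best-responds to the last observed average.
        guesses = [p * avg] * players
    return history

averages = simulate()
print(averages[0], averages[-1])  # average guess shrinks round after round
```

The "leverage" the comment mentions amounts to knowing how many levels of iterated reasoning real players actually perform, and guessing one level deeper.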

reply
On the other hand, talking about those beliefs can also lead to real changes. Slavery used to be widely seen as a necessary evil, just like, for instance, war.
reply
I don’t actually know a ton about the rhetoric around abolitionism. Are you saying they tried to convince people that everybody else thought slavery was evil? I guess I assumed they tried to convince people slavery was in-and-of-itself evil.
reply
The "Silent Majority" - Richard Nixon 1969

"Quiet Australians" - Scott Morrison 2019

reply
We really need a rule in politics which bans you (if you're an elected representative) from stating anything about the beliefs of the electorate without reference to a poll of the population of adequate size and quality.

Yes, we'd have a lot of lawsuits about it, but it would hardly be a bad use of time to litigate whether a politician's statements about the electorate's beliefs are accurate.

reply
The thing is... on both the cited occasions (Nixon in 1968, Morrison in 2019), the politicians claiming the average voter agreed with them actually won that election.

So, obviously their claims were at least partially true – because if they'd completely misjudged the average voter, they wouldn't have won.

reply
I don’t recall the circumstances under which Morrison ended up Prime Minister.

Like most Australians, I’m in denial any of that episode ever happened.

But, using the current circumstances as an example, Australia has a voting system that enables a party to form government even though 65% of voting Australians didn’t vote for that party as their first preference.

If the other party and some of the smaller parties could have got their shit together Australia could have a slightly different flavour of complete fucking disaster of a Government, rather than whatever the fuck Anthony Albanese thinks he’s trying to be.

Then there’s Sussan Ley, the least preferred leader of the two major parties in a generation.

Sussan Ley is Anthony Albanese in a skirt.

I would have preferred Potato Head, to be honest.

reply
People vote for people they don't agree with.

When there are only two choices, and infinite issues, voters only have two choices: vote for someone you disagree with less, or vote for someone you quite hilariously imagine agrees with you.

EDIT: Not being cynical about voters. But about the centralization of parties, in number and operationally, as a steep barrier for voter choice.

reply
Combined with the quirk in Australia’s preferential voting system that enables a government to form despite 65% of voters having given their first preference to something else.

As a result, Australia tends to end up with governments formed by the runner-up, because no one party actually ‘won’ as such.

reply
Two options, not two choices. (Unless you have a proportional-representation voting system like Ireland’s, in which case you can vote for as many candidates as you like, in descending order of preference.)

Anyway, there’s a third option: spoil your vote. In the recent Irish presidential election, 13% of those polled afterwards said they spoiled their votes due to a poor selection of candidates to choose from.

https://www.rte.ie/news/analysis-and-comment/2025/1101/15415...

reply
Please don’t encourage people to waste their vote.

Encourage people to vote for the candidate they dislike the least, then try to work out ways to hold government accountable.

If you’re in Australia, at least listen to what people like Tony Abbott, the IPA, and Pauline Hanson are actually saying these days.

reply
That’s much more true for Nixon in 1968 than for Morrison in 2019.

Because the US has a “hard” two-party system - third-party candidates have very little hope, especially at the national level; voting for a third party is indistinguishable from staying home, as far as the outcome goes, with occasional exceptions.

But Australia is different - Australia has a “soft” two party system - two-and-a-half major parties (I say “and-a-half” because our centre-right is a semipermanent coalition of two parties, one representing rural/regional conservatives, the other more urban in its support base). But third parties and independents are a real political force in our parliament, and sometimes even determine the outcome of national elections

This is largely due to: (1) we use what Americans call instant-runoff voting in our federal House of Representatives, and a variation on the single transferable vote in our federal Senate; (2) the parliamentary system, in which the executive is indirectly elected by the legislature, means the choice of executive is less of a simplistic binary, and coalition negotiations involving third-party/independent legislators in the lower house can be decisive in close elections; (3) twelve senators per state, six elected at a time in an ordinary election, gives minor parties more opportunities to get into our Senate - of course, 12 senators per state is feasible when you only have six states (plus four more to represent our two self-governing territories); with 50 states it would produce 600 senators
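For anyone unfamiliar with instant-runoff counting, the mechanism is short enough to sketch in code. This is a minimal toy (candidate names and vote counts below are invented for illustration; real counts have formal tie-breaking and exhausted-ballot rules this sketch glosses over):

```python
# Minimal sketch of instant-runoff voting: repeatedly eliminate the
# candidate with the fewest first preferences among survivors and let
# those ballots flow to each voter's next surviving choice, until one
# candidate holds a majority of the live ballots.
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of candidate lists, ordered by preference."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(remaining) == 1:
            return leader
        # Eliminate the weakest candidate (ties broken arbitrarily here).
        remaining.discard(min(tally, key=tally.get))

# A candidate can win without leading on first preferences:
ballots = (
    [["Labor", "Green"]] * 35
    + [["Green", "Labor"]] * 20
    + [["Coalition"]] * 45
)
print(instant_runoff(ballots))  # Coalition leads on primaries; Labor wins
```

This is exactly the property discussed upthread: 45% of first preferences is not enough when the other 55% preference each other ahead of you.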

reply
And minor parties receive funding from the Australian Electoral Commission if they receive over a certain percentage of votes.

It was 5% the last time I cared to be informed, but it may be different now, and they would receive $x for each vote, or whatever it is now.

reply
Also, there is nothing centre-right about Sussan Ley.

She is the most left-leaning leader of the Liberal party I’ve ever had the misfortune of having to live through.

She was absolutely on board with this recent Hitlerian “anti-hate” legislation that was rammed through with no public consultation.

Okay, that’s a bit uncharitable. We had 48 hours.

reply
> We really need a rule in politics which bans you (if you're an elected representative) from stating anything about the beliefs of the electorate without reference to a poll of the population of adequate size and quality.

Except that assumes polls are a good and accurate way to learn the "beliefs of the electorate," which is not true. Not everyone takes polls, not every belief can be expressed in a multiple-choice form, little subtleties in phrasing and order can greatly bias the outcome of a poll, etc.

I don't think it's a good idea to require speech be filtered through such an expensive and imperfect technology.

reply
Just make it broad enough that we never get a candidate promoting themselves as “electable” again.
reply
That gets covered by the mechanisms of social credibility.
reply
Isn't that how Bitcoin "works"?
reply
err... how Bitcoin works, or how the speculative bubble around cryptocurrencies circa 2019-2021 worked?

Bitcoin is actually kind of useful for some niche use cases - namely illegal transactions, like buying drugs online (Silk Road, for example), and occasionally for international money transfers - my French father once paid an Argentinian architect in Bitcoin, because it was the easiest way to transfer the money due to details about money transfer between those countries which I am completely unaware of.

The Bitcoin bubble, like all bubbles since the Dutch tulip bubble in the 1600s, did follow a somewhat similar "well, everyone thinks this thing is much more valuable than it is worth; if I buy some now, the price will keep going up and I can dump it on some sucker" path, however.

reply
> Bitcoin is actually kind of useful for some niche use cases - namely illegal transactions, like buying drugs online (Silk Road, for example),

For the record - illegal transactions were thought to be advantaged by crypto like BTC because it was assumed to be impossible to trace the people engaged in the transaction. The opposite is true: public blockchains register every transaction a given wallet has made, which law enforcement agencies (LEAs) have used to prosecute people (and which has made prosecution easier in some cases).
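The traceability point can be made concrete with a toy ledger. All wallet names and amounts below are invented for illustration; the point is only that because every transaction publicly names a sender and a receiver, tying one address to a real identity lets a simple graph walk expose everything downstream of it:

```python
# Toy illustration of public-ledger traceability: transactions form a
# directed graph of (sender, receiver, amount). A breadth-first search
# from one known address finds every wallet its funds ever reached.
from collections import defaultdict, deque

transactions = [
    ("wallet_seized_by_lea", "wallet_A", 5.0),
    ("wallet_A", "wallet_B", 3.0),
    ("wallet_A", "wallet_C", 1.5),
    ("wallet_B", "wallet_D", 2.0),
    ("wallet_X", "wallet_Y", 9.0),  # unrelated flow, never visited
]

def downstream(ledger, start):
    graph = defaultdict(list)
    for sender, receiver, _amount in ledger:
        graph[sender].append(receiver)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(downstream(transactions, "wallet_seized_by_lea")))
```

Real chain analysis adds heuristics for mixers and address clustering, but the underlying data structure is this permanent, public graph.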

> and occasionally for international money transfers - my French father once paid an Argentinian architect in Bitcoin, because it was the easiest way to transfer the money due to details about money transfer between those countries which I am completely unaware of.

There are remittance companies that deal in local currencies that tend to make this "easier" - crypto works for this WHEN you can exchange the crypto for the currencies you have and want, which is, in effect, the same.

reply
Most bubbles have a peak and crash. "The Bitcoin bubble" keeps peaking and crashing and then going on to a higher peak.
reply
Mining rigs have a finite lifespan & the places that make them in large enough quantities will stop making new ones if a more profitable product line, e.g. AI accelerators, becomes available. I'm sure making mining rigs will remain profitable for a while longer but the memory shortages are making it obvious that most production capacity is now going towards AI data centers & if that trend continues then hashing capacity will continue diminishing b/c the electricity cost & hardware replenishment will outpace mining rewards.

Bitcoin was always a dead end. It might survive for a while longer but its demise is inevitable.

reply
The ontological version is even more interesting, especially if we're talking about a singularity (which may be in the past rather than the future, if you believe the simulation argument).

Crude form: winning is metaphysically guaranteed because it probably happened or probably will

Refined: It's metaphysically impossible to tell whether it has happened or will happen, so the distinction is meaningless: it has happened.

So... I guess Weir's Egg falls out of that particular line of thought?

reply
Refined 1.01 authoritarian form: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because it's become a habit and because dissenters seem to have "accidents" falling out of high windows.
reply
V 1.02: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because they believe that the others believe that you have enough power to crush the dissent. The moment this belief fades, you fall.
reply
Is that not the "Emperor's New Clothes" form? That would be like version 0.0.1
reply
it's a sad state these days that we can't be sure which country you're alluding to
reply
You ever get into logic puzzles? The sort where the asker has to specify that everybody in the puzzle will act in a "perfectly logical" way. This feels like that sort of logic.
reply
deleted
reply
It's the classic interrogation technique: "we're not here to debate whether you're guilty or innocent; we have all the evidence we need to prove your guilt, we just want to know why". Though I'm not sure whether it makes any difference that the interrogator knows they are lying.
reply
deleted
reply
Isn't talking about "here’s how LLMs actually work" in this context a bit like saying "a human can't be relevant to X because a brain is only a set of molecules, neurons, and synapses"?

Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what is IS, it can't DO anything"...

Saying an LLM is a statistical prediction engine for the next token is, IMO, sort of confusing what it is with the medium it is expressed in/built of.

For instance, consider those small experiments, mentioned in a sibling post, that train a network on addition problems. The weights end up forming an addition machine. An addition machine is what it is; that is the emergent behavior. The machine-learning weights are just the medium it is expressed in.

What's interesting about LLMs is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but training weights for that might well have the side effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as well as a median programmer). We don't really know this yet.

reply
It's pretty clear that solving AI is a software problem; I don't think anyone would disagree.

But that problem is MUCH MUCH MUCH harder than people make it out to be.

For example, you can reliably train an LLM to produce accurate output for assembly code that fits into its context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.

You can get around that with agentic frameworks, but all of those right now are manually coded.

So how do you train an LLM to correctly take any length of assembly code and produce the correct result? The only way is to essentially train the structure of the neurons inside it to behave like a computer, but the problem is that you can't do back-propagation with discrete zero-and-one values unless you explicitly code the architecture of a CPU inside. So obviously, error correction on inputs/outputs is not the way we get to intelligence.

It may be that the answer is pretty much a stochastic search, where you spin up x instances of trillion-parameter nets and make them operate in environments with some form of genetic algorithm until you get something that behaves like a human, and any shortcutting of this is not really possible because of essentially chaotic effects.


reply
> For example, you can reliably train an LLM to produce accurate output for assembly code that fits into its context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.

Fascinating reasoning. Should we conclude that humans are also incapable of intelligence? I don't know any human who can fit a terabyte of assembly into their context window.

reply
On the other hand, the average human has a context window of 2.5 petabytes that's streaming inference 24/7 while consuming the energy equivalent of a couple of sandwiches per day. Oh, and can actually remember things.
reply
>So obviously, error correction with inputs/outputs is not the way we get to intelligence.

This doesn't seem to follow at all, let alone obviously. Humans are able to reason through code without having to become a completely discrete computer, but probably can't reason through any length of assembly code, so why is that requirement necessary, and how have you shown LLMs can't achieve human levels of competence on this kind of task?

reply
> but probably can't reason through any length of assembly code

Uh what? You can sit there step by step and execute assembly code, writing things down on a piece of paper and get the correct final result. The limits are things like attention span, which is separate from intelligence.

Human brains operate continuously, with multiple parts active at once and weight adjustments done in real time, both in the style of backpropagation and as real-time updates for things like "memory". How do you train an LLM to behave like that?

reply
Couldn't you periodically re-train it on what it's already done and use the context window as more short-term memory? That's kind of what humans do - we can't learn a huge amount in a short time but can accumulate a lot slowly (school, experience).

A major obstacle is that they don't learn from their users, probably because of privacy. But imagine if your context window was shared with other people, and/or all your conversations were used to train it. It would get to know individuals and perhaps treat them differently, or maybe even manipulate how they interact with each other so it becomes like a giant Jeffrey Epstein.

reply
You're putting a bunch of words in the parent commenter's mouth, and arguing against a strawman.

In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.

This is separate from directly answering the question "is a singularity coming?"

reply
The problem is, there's two groups:

One says "well, it was built as a bunch of pieces, so it can only do the things the pieces can do", which is reasonably dismissed by noting that basically the only people who predicted current LLM capabilities are the ones who are remarkably worried about a singularity occurring.

The other says "we can evaluate capabilities and notice that LLMs keep gaining new features at an exponential, now bordering into hyperbolic rate", like the OP link. And those people are also fairly worried about the singularity occurring.

So mainly you get people using "here's how LLMs actually work" to argue against the singularity if and only if they are also the ones arguing that LLMs can't do the things they provably can do, today, or are otherwise making arguments that would also declare humans incapable of intelligence / reasoning / etc.

reply
This entire chain of reasoning takes for granted that there won't be a singularity.

If you're talking about "reforming society", you are really not getting it. There won't be society, there won't be earth, there won't be anything like what you understand today. If you believe that a singularity will happen, the only rational things to do are to stop it or make sure it somehow does not cause human extinction. "Reforming society" is not meaningful

reply
There will be earth!
reply
> “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”

And there are plenty of people that take issue with that too.

Unfortunately they're not the ones paying the price. And... stock options.

reply
History paints a pretty clear picture of the tradeoff:

* Profits now and violence later

OR

* Little bit of taxes now and accelerate easier

Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.

reply
Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.
reply
If you replace "taxes" with more general "investment", it's everywhere. A good example is Amazon that has reworked itself from an online bookstore into a global supplier of everything by ruthlessly reinvesting the profits.

Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.

If you're looking for periods of high taxes and growing prosperity, 1950s in the US is a popular example. It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.

reply
With the odd story that we paid the price for it in the long term.

This book

https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...

tells the compelling story that the Mellon family teamed up with the steelworker's union to use protectionism to protect the American steel industry's investments in obsolete open hearth steel furnaces that couldn't compete on a fair market with the basic oxygen furnace process adopted by countries that had their obsolete furnaces blown up. The rest of US industry, such as our car industry, were dragged down by this because they were using expensive and inferior materials. I think this book had a huge impact in terms of convincing policymakers everywhere that tariffs are bad.

Funnily enough, the Mellon family went on to further political mischief:

https://en.wikipedia.org/wiki/Richard_Mellon_Scaife#Oppositi...

reply
Ha, we gutted our manufacturing base, so if we bring it back it will now be state of the art! Not sure that will work out for us, but hey, there is some precedent.
reply
The dollar became the world's reserve currency because the idea of Bancor lost out to it, thus subjecting the US to the Triffin dilemma, which benefited US capital markets at the expense of a hugely underappreciated incentive to offshore manufacturing.

You can't onshore manufacturing and keep the dollar as the reserve currency. The only question then is: are you willing to de-dollarize to bring back manufacturing jobs?

This isn't a rhetorical question. If the answer is yes, great, let's get moving. But if the answer is no, sorry, dollarization and its effects will continue to persist.

reply
This is the silver lining in many bad stories: the pendulum will always keep on swinging because at the extremes the advantage flips.
reply
I'll take a look at that story later. I'm curious, though: why is US metallurgy consistently top-notch if the processes are inferior? When I use wrenches, bicycle frames, etc. from most other countries, I have no end of trouble with weld delamination, stress fractures compounding into catastrophic failures, and whatnot, even including enormous wrenches just snapping in half at forces far below what something a tenth the size made with American steel could handle.
reply
> I'm curious though, why is US metallurgy consistently top-notch if the processes are inferior?

I really wonder what you're comparing with.

Try some quality surgical steel from Sweden, Japan or Germany and you'll come away impressed. China is still not quite there but they are improving rapidly, Korea is already there and poised to improve further.

Metal buyers all over the globe are turning away from the US because of the effects of the silly tariffs, but they weren't going there because of the quality - they went because of the price.

The US could easily catch up if they wanted to, but the domestic market just isn't large enough.

And as for actual metallurgy knowledge, I think Russia still has an edge; they were always good when it came down to materials science, though they're sacrificing all of that now for very little gain.

reply
Which are these other countries? Have you tried something actually made in Japan, or in Germany, for instance?

What you describe seems like very cheap Chinese imports fraudulently imitating something else.

reply
> the state is usually a much more sloppy investor

I don’t find this to be true

The state invests in important things that have 2nd and 3rd order positive benefit but aren’t immediately profitable. Money in a food bank is a “lost” investment.

Alternatively the state plays power games and gets a little too attached to its military toys.

reply
State agencies are often good at choosing right long-term targets. State agencies are often bad at the actual procurement, because of the pork-barrelling and red tape. E.g. both private companies and NASA agree that spaceflight is a worthy target, but NASA ends up with the Space Shuttle (a nice design ruined by various committees) and SLS, while private companies come up with Falcon-9.
reply
Sounds like a false dichotomy. NASA had all these different subcontractors to feed, in all these different states, and they explicitly gutted MOL and Dyna-Soar and all the Air Force projects that needed weird orbits and reentry trajectories, so the Space Shuttle became a huge compromise. Perverse incentives and all that. It's not state organizations per se, but rather non-profits that need a clear goal, that create capabilities, tools, and utilities acting as multipliers for everyone. A pretty big cooperative. Like, I dunno, what societies are supposed to exist for.
reply
But the DoD with its weird requirements, and the Congress with its power to finance the project and its desire to bring jobs from it to every state, and the contracting rules that NASA must follow, are all also part of the state - the way the state ultimately works.
reply
Yeah, our use of military force provides some of the most obvious cases of "bad investment": Vietnam, Iraq, etc.

And there are many others that might've been a positive investment from a strictly financial perspective, but not from a moral one: see Banana Republics and all those times the CIA backed military juntas.

reply
> Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.

Be careful. The data does not confirm that narrative. You mentioned the 1950s, which is a poignant example of reality conflicting with the sponsored narrative. Pre-WOII, the wealthy class orbiting the monopolists, and by extension their installed politicians, had no ideas other than to keep lowering taxes for the rich, on and on, even if it only deepened the endless economic crisis. Many of them had fallen into the trap of believing their own narratives, something we know as the Cult of Wealth.

Meanwhile, average Americans lived on food stamps. Politically deadlocked in quasi-religious ideas of "bad government versus wise businessmen", America kept falling deeper. Meanwhile, with just 175,000 serving on active duty, the U.S. Army was the 18th biggest in the world[1], poorly equipped and poorly trained. Right-wing isolationism had put the country in a precarious position. Then two things happened: Roosevelt and WOII.

In a unique moment, the state took matters into its own hands. The sheer excellence in planning, efficiency, speed, and execution of the state baffled the Republicans, putting the oligarchic model of the economy to shame. The economy grew tremendously as well, something the oligarchy could not pull off. It is not well known that WOII depended largely on state-operated industries, because the former class quickly understood how much the state's performance threatened their narratives. So they invested in disinformation campaigns, claiming the efforts and achievements of the government as their own.

1. https://www.politico.com/magazine/story/2019/06/06/how-world...

reply
What does WOII mean?

I assume you are talking about WW2 and at first thought it was a typo.

reply
WOII is how dutch speaking/writing people would refer to WW2, it is literally 'wereld oorlog 2'.
reply
BTW, the New Deal tried central planning and quickly rejected it. I'd say that the intense application of antitrust law in the late 1930s was a key factor that helped end the Great Depression. The war, and wartime government powers, were also key: the amount of federal government overreach and reform does not compare to what e.g. the second Trump administration has attempted. It was mostly done by people who got their positions in the administration more due to merit and care for the country than loyalty, and it showed.

The post-war era, under Truman and Eisenhower administrations, reaped the benefits of the US being the wealthiest and most intact winner of WWII. At that time, the highest income tax rate bracket was 91%, but the effective rate was below 50%.

reply
> It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.

The US is also shaping up to be the principal winner in Artificial Intelligence.

If, like everyone is postulating, this has the same transformative impact to Robotics as it does to software, we're probably looking at prosperity that will make the 1950s look like table stakes.

reply
Are you sure that in today's reality the fruits of the AI race will be harvested by "the people"?
reply
> The US is also shaping up to be the principal winner in Artificial Intelligence.

There is no early mover advantage in AI in the same way that there was in all the other industries. That's the one thing that AI proponents in general seem not to have clued in to.

What will happen is that it eventually drags everything down, because it takes the value out of the bulk of the service and knowledge economies. So you'll get places that are 'ahead' in the disruption. But the bottom will fall out of the revenue streams, which is one of the reasons these companies are all completely panicked and are wrecking their products by stuffing AI into them in every way possible, hoping that one of them will take.

Model training is only an edge in a world where free models do not exist; once those are 'good enough', good luck with your AI and your rapidly outdated hardware.

The typical investor's horizon is short, but not that short.

reply
Early on in the AI boom, Nvidia was highly valued because it was seen as the shovel-maker for research and development. It certainly was instrumental early on, but now there are a few viable options for training hardware - and, to me at least, it's unclear whether training hardware is actually the critical infrastructure, or whether it will be something like power capacity (where the US is lagging significantly), education, or even cooling efficiency.

I think it's extremely early to try and call who the principal winner will be especially with all the global shifts happening.

reply
Violence was a moderating factor when people on each side were equally armed, and number was a deciding factor.

Nowadays you could squash an uprising with a few operators piloting drones remotely.

reply
Flying a drone around is easy. Identifying who is in the in-group and who is in the out-group, and then moving them, is the hard part.

I’m not sure you have really thought out what the drone part is meant to do. Militaries have outgunned populaces for decades at this point. You don’t need drones to kill civilians.

reply
It's actually quite easy. Whoever isn't in the bunker is the outgroup. You only needed to tell people apart when you needed some meatware to man the factories and work the fields.

Militaries can side with the crowd, or more likely decide to keep the power for themselves.

reply
Every possible example of "progress" has either an individual or a state-power purpose behind it

there is only one possible "egalitarian" forward-looking investment that paid off for everybody

I think the only exception to this is vaccines…and you saw how all that worked during Covid

Everything else from the semiconductor to the vacuum cleaner the automobile airplanes steam engines I don’t care what it is you pick something it was developed in order to give a small group and advantage over all the other groups it is always been this case it will always be this case because fundamentally at the root nature of humanity they do not care about the externalities- good or bad

reply
COVID has cured me (hah!) of the notion that humanity will be able to pull together when faced with a common enemy. That means global warming or the next pandemic are going to happen and we will not be able to stop it from happening because a solid percentage can't wait to jump off the ledge, and they'll push you off too.
reply
Yeah buddy we agree
reply
[flagged]
reply
I find it interesting that this is the conclusion you draw from this. I won’t go into a discussion on the efficacy of the various mandates and policies in reducing spread of the disease. Rather, I think it’s worth pointing out that a significant portion of the proponents of these policies likely supported them not because of a desire to follow the authority but because they sincerely believed that a (for them) relatively small sacrifice in personal freedom could lead to improved outcomes for their fellow humans. For them, it was never about blindly following authority or virtue signalling. It was only ever about doing what they perceived as the right thing to do.
reply
So if the arguments are rooted in medical reasons, it's okay to be inhumane? Nazi propaganda argued that getting rid of Jews helped prevent the spread of diseases, because we all know that Jews are disease carriers. See how slippery the slope is here? Certainly you have seen the MAGA folks point out the measles outbreaks are coming from illegal immigrants, right?

I am quite sure that people felt justified in their reasoning for their behavior. That just shows how effective the propaganda was, how easy it is to get people to fall in line. If it were a matter of voluntary self-sacrifice of personal freedoms, I wouldn't have made this comment. People decided to demonize anyone who did not agree with the "medical authority", especially doctors or researchers who did not toe the party line. They ruined careers, made people feel awful, and online the behavior was worse because of how easy it was to pile on. Over stuff where, to this day, the optimal strategy for dealing with infectious disease is still not very clear cut.

reply
Nazism is rooted in Jim Crow and slavecatchers.

COVID restrictions were public health, an overriding concern: "general welfare" is listed in the US Constitution as a reason for the US government to exist at all.

reply
Yea, closing beaches and parks is on par with what the Nazis did to the Jews.

The Covid measures were also totally targeted at certain groups of people with immutable characteristics and not at people who actively wanted to spread disease.

How are people like you still making arguments like this in 2026? Were you also one of the people claiming we’d all be dead in a year from the vaccines?

reply
It is so easy to critique the response in hindsight. Or at the time.

But critiques like that ignore uncertainty, risk, and unavoidably getting it "wrong" (on any and all dimensions), no matter what anyone did.

With a new virus successfully circumnavigating the globe in a very short period of time, with billions of potential brand new hosts to infect and adapt within, and no way to know ahead of time how virulent and deadly it could quickly evolve to be, the only sane response is to treat it as extremely high risk.

There is no book for that. Nobody here or anywhere knows the "right" response to a rapidly spreading (and killing) virus, unresponsive to current remedies. Because it is impossible to know ahead of time.

If you actually have an answer for that, you need to write that book.

And take into account that a lot of the people involved in the last response are very cognizant that we/they can learn from what worked, what didn't, etc. That is the valuable kind of 20-20 vision.

A lot of at-risk people made it to the vaccines before getting COVID. The ones I know are very happy about everything that reduced their risk. They are happy not to have died, despite those who wanted to let the disease "take its natural course".

And those that died, including people I know, might argue we could have done more, acted as a better team. But they don't get to.

No un-nuanced view of the situation has merit.

The most significant thing we learned: a lot of humanity is preparing to be a problem if the next pandemic proves ultimately deadlier. A lot of humanity doesn't understand risk, and doesn't care, when caring requires cooperative effort from individuals.

reply
It's usually the same people that would have been the loudest to shout if it had not worked as well as it did...
reply
It's the same people who don't even notice that we don't talk about acid rain anymore, because we solved it with government regulation for pretty cheap.

They even indignantly mention the ozone layer, insisting "Look, liberals told you to care but it's not a problem anymore", entirely ignorant of the immense global effort to fix that.

reply
You should study the prevention paradox.
reply
"Nazi", "Fascist", etc are words you can use to lose any debate instantly no matter what your politics are.

I think the sane version of this is that Gen Z didn't just lose its education, it lost its socialization. I know someone in the administration of my uni who tracks the general well-being of students; they said they were expecting it to bounce back after the pandemic and have found it hasn't. My son reports that if you go to any kind of public event, be it a sewing club or a music festival, people 18-35 are completely absent. My wife didn't believe him, but she went to a few events and found he was right.

You can blame screens or other trends that were going on before the pandemic, but the pandemic locked it in. At the rate we're going if Gen Z doesn't turn it around in 10 years there will not be a Gen Z+2.

So the argument that pandemic policy added a few years to elderly lives at the expense of the young, and of the children they might have had, is salient in my book. I had to block a friend of mine on Facebook who hasn't wanted to talk about anything but masks and long COVID since 2021.

reply
Never seen the attempt by governments to contain a global pandemic that killed millions and threatened to overwhelm healthcare compared to Nazism before, but why should I be surprised? Explains a lot about the sorry state of modern politics.
reply
Great zinger buddy, you really showed off your wit.
reply
If you edit your comment to add punctuation, please let me know: I would like to read that final pile of words.

I did try, I promise.

reply
Ok here: Everything from the semiconductor through the vacuum cleaner, automobile, airplanes and steam engines was developed to give a small group an advantage over all the other groups. It has always been the case, it will always be the case.

Fundamentally, at the root nature of humanity, humans do not care about the externalities, either good or bad.

reply
That's a slightly odd way of looking at it. I'm guessing the people developing airplanes or whatever thought of a number of things including - hey this would be cool to do - and - maybe we can make some money - and - maybe this will help people travel - and - maybe it'll impress the girls - and probably some other things too. At least that's roughly how I've thought when I make stuff, never this will give a small group an advantage.
reply
But the whole point is embedded in the task otherwise you wouldn’t do it

If somebody is using monetary resources to buy NFTs instead of handing out food to the homeless, then you get less food for the homeless

All of the things listed are competitive task situations and you’re looking for some advantage that makes it easier for you

Well, if it makes it easier for you, then it could make it easier for somebody else, which means you're crowding out other options in that action space

That is to say, the pie of resources on this planet is fixed, in terms of energy and resource utilization across the lifespan of a human

reply
Vacuum cleaner -> sell appliances -> sell electric motors

But there was a clear advantage in quality of life for a lot of people too.

Automobile -> part of industrialization of transport -> faster transport, faster world

Arguably also a big increase in quality of life but it didn't scale that well and has also reduced the quality of life. If all that money had gone into public transport then that would likely have been a lot better.

Airplanes -> yes, definitely, but they were also clearly seen as an advantage in war, in fact that was always a major driver behind inventions.

Steam engine -> the mother of all prime movers and the beginnings of the fossil fuel debacle (coal).

Definitely a quality of life change but also the cause of the bigger problems we are suffering from today.

The 'coffin corner' (one of my hobby horses) is a real danger. We have, as a society, achieved a certain velocity: if we slow down too much we will crash; if we speed up too much, the plane will come apart. Managing these transitions is extremely delicate work, and it does not look as though 'delicate' is in the vocabulary of a lot of the people in the driving seats.

reply
This is where the concept of trickle-down economics came from, though, and we know that that's not actually accurate

I used to hear about this with respect to how funding NASA would get us more inventions, because they funded Velcro

No, it's simply that there was a positive temporary externality for some subset of groups, but the primary long-term benefit went to the controller of the capital

The people utilizing them were marginally involved because they were only given the options that capital produced for them

reply
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe (edit: about whether or not the singularity will happen).

reply
I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.
reply
if people believe it's a threat and it is also real, then what matters is timing
reply
Which would also mean the accelerationists are potentially putting everyone at risk. I'd think a soft takeoff decades in the future would give us a much better chance of building the necessary safeguards and reorganizing society accordingly.
reply
This is a soft takeoff

We, the people actually building it, have been discussing it for decades

I started reading Kurzweil in the early 90s

If you’re not up to speed that’s your fault

reply
Depends on what a post singularity world looks like, with Roko's basilisk and everything.
reply
> If the singularity does happen, then it hardly matters what people do or don't believe.

Depends on how you feel about Roko's basilisk.

reply
God, Roko's basilisk is the most boring AI risk to catch the public consciousness. It's just Pascal's wager all over again, with the exact same rebuttal.
reply
The culture that brought you "speedrunning computer science with JavaScript" and "speedrunning exploitative, extractive capitalism" is back with their new banger "speedrunning philosophy". Nuke it from orbit; save humanity.
reply
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

We've already been here in the 1980s.

The tech industry needs to cultivate people who are interested in the real capabilities and the nuance around them, and eject the set of people who aim to turn the tech industry into a "you don't even need a product" cult of warmed-over Tony Robbins acolytes.

reply
All the discussion of investment and economics can be better informed by perusing the economic data in The Rise and Fall of American Growth. Robert Gordon's empirical finding is that American productivity compounded astonishingly from 1870-1970, but has been stuck at a very low growth rate since then.

It's hard to square with the computer revolution, but my take is that post-70s "net creation minus creative destruction" was large but spread out over more decades, whereas technologies like electrification, autos, mass production, the telephone, refrigeration, fertilizers, and pharmaceuticals produced incomparable growth over a century.

So if you were born in 1970s America, your experience of taxes, inflation, prosperity, and which policies work can all feel heavier than what folks experienced in the prior century. Of course, that's in the long run (i.e., a generation).

I question whether AI tools have great net positive creation minus destruction.

reply
> prior to reforming society into one that does not predicate survival on continued employment and wages

There's no way that'll happen. The entire history of humanity is 99% reacting to things rather than proactively preventing things or adjusting in advance, especially at the societal level. You would need a pretty strong technocracy or dictatorship in charge to do otherwise.

reply
The UK seems to be prototyping that. We're changing to a society where everyone lives by claiming benefits. (eg. https://www.gbnews.com/money/benefits-claimants-earnings-rev...)
reply
You would need a new sense of self and a life free of fear, raising children where they can truly be anything they like and teach their own kids how to find meaning in a life lived well. "Best I can do is treefiddy" though..
reply
"If men define situations as real, they are real in their consequences."

The Thomas theorem is a sociological theory formulated in 1928 by William Isaac Thomas and Dorothy Swaine Thomas.

https://en.wikipedia.org/wiki/Thomas_theorem

reply
I thought the Singularity had already happened when the Monkeys used tools to kill the other Monkeys and threw the bone into the sky to become a Space Station.
reply
> here’s how LLMs actually work

But how is that useful in any way?

For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.

reply
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Maybe you don't. To be clear, this benefits massively from hindsight (just as, if I didn't know how combustion engines worked, I probably wouldn't have dreamed one up), but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
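The "most common continuation" framing can be sketched with a toy frequency model. This is purely illustrative (a real transformer learns soft distributions over tokens and generalizes across paraphrases, not an exact-match lookup table):

```python
from collections import Counter, defaultdict

# Toy corpus: in real training data, answers overwhelmingly follow questions.
corpus = [
    ("What is 2+2?", "4"),
    ("What is 2+2?", "4"),
    ("What is 2+2?", "Four."),
    ("What is the capital of France?", "Paris"),
]

# Count which continuation follows each context.
follows = defaultdict(Counter)
for context, continuation in corpus:
    follows[context][continuation] += 1

def predict(context):
    # "Next-token prediction", caricatured: emit the continuation seen
    # most often after this context during training.
    return follows[context].most_common(1)[0][0]

print(predict("What is 2+2?"))  # → 4 (the majority continuation wins)
```

The interesting behavior of real models lives precisely in what this caricature leaves out: generalizing to contexts never seen verbatim in training.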

reply
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question.

No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context: it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.

To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
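The transcript framing described above can be sketched like this. `complete()` is a hypothetical stand-in for any base-model completion API (real APIs differ); the only real work is building a prompt the model will continue in character:

```python
# Sketch of the "transcript" framing for a raw completion model.

def build_prompt(history, user_message):
    # Declare, in plain English, that what follows is a human/AI dialogue.
    prompt = ("The following is a transcript of a dialogue between a human "
              "and a helpful AI assistant.\n\n")
    for role, text in history:
        prompt += f"{role}: {text}\n"
    # End mid-transcript so the model's natural continuation is the AI's turn.
    prompt += f"Human: {user_message}\nAI:"
    return prompt

# The base model just continues the text. Stopping at the next "Human:"
# line keeps it from also imagining the human's half of the conversation:
#
#   reply = complete(build_prompt(history, msg), stop=["\nHuman:"])

print(build_prompt([("Human", "Hello"), ("AI", "Hi there!")], "How do LLMs work?"))
```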

reply
>No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point.....

To be fair, only if you pose the question singularly, with no preceding context. If you want the raw LLM to answer your question(s) reliably, you can prepend the context with other question-answer pairs and it works fine. A raw LLM is already capable of being a chatbot, or anything else, given the right preceding context.
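A minimal sketch of that kind of prepended context, i.e. few-shot prompting (the example pairs here are made up; any base completion model would do):

```python
# Few-shot prompting: prepend worked question-answer pairs so that
# "answer the question" becomes the statistically obvious continuation
# for a raw language model.

examples = [
    ("What is the capital of France?", "Paris."),
    ("What is 12 * 12?", "144."),
]

def few_shot_prompt(question):
    # Render the demonstrations, then leave the final answer slot open.
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

print(few_shot_prompt("Who wrote Hamlet?"))
```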

reply
If such a simplistic explanation were true, LLMs would only be able to answer things that had been asked before, where at least a 'fuzzy' textual question/answer match was available. This is clearly not the case. In practice you can prompt the LLM with such a large number of constraints that the combinatorial explosion ensures nobody has asked that exact question before, and you will still get a relevant answer combining all of them. Think of combinations of features in a software request, including a module that fits into your existing system (for which you have provided source) along with a list of requested features. Or questions formed from a set of life experiences and interests that, combined, are unique to you. You can switch programming language, human language, writing style, or level as you wish, and discuss it in super-esoteric languages or Morse code.

So are we to believe these answers appear just because there happened to be similar questions in the training data where a suitable answer followed? Even if for the sake of argument we accept this explanation by "proximity of question/answer", it is immediately clear that it would have to rely on extreme levels of abstraction and mixing-and-matching going on inside the LLM. And it is then this process that we need to explain, whereas the textual proximity you invoke relies on it rather than explaining it.
reply
> Maybe you don't.

My best friend who has literally written a doctorate on artificial intelligence doesn't. If you do, please write a paper on it, and email it to me. My friend would be thrilled to read it.

reply
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

Obviously, that's the objective, but who's to say you'll reach a goal just because you set it? And more importantly, who's to say you have any idea how the goal has actually been achieved?

You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.

reply
We know exactly what is going on inside the box. The problem isn't knowing what is going on inside the box; the problem is that it's all binary arithmetic, and no human being evolved to make sense of binary arithmetic, so it seems like magic to you when in reality it's nothing more than a circuit with billions of logic gates.
reply
We do not know or understand even a tiny fraction of the algorithms and processes a Large Language Model employs to answer any given question. We simply don't. Ironically, only the people who understand things the least think we do.

Your comment about 'binary arithmetic' and 'billions of logic gates' is just nonsense.

reply
"Look man all reality is just uncountable numbers of subparticles phasing in and out of existence, what's not to understand?"
reply
Your response is a common enough fallacy to have a name: straw man.
reply
I think the fallacy at hand is more along the lines of "no true scotsman".

You can define understanding to require such detail that nobody can claim it; you can define understanding to be so trivial that everyone can claim it.

"Why does the sun rise?" Is it enough to understand that the Earth revolves around the sun, or do you need to understand quantum gravity?

reply
Good point. OP was saying "no one knows" when in fact plenty of people do know, but people also often conflate knowing and understanding without realizing that's what they're doing. People who have studied programming, electrical engineering, ultraviolet lithography, quantum mechanics, and so on know what is going on inside the computer, but that's different from saying they understand billions of transistors, because no one really understands billions of transistors, even though a single transistor is understood well enough to be manufactured in large enough quantities that almost anyone who wants one can have the equivalent of a supercomputer in their pocket for less than $1k: https://www.youtube.com/watch?v=MiUHjLxm3V0.

Somewhere along the way from one transistor to a few billion, human understanding stops, but we still know how it was all assembled to perform Boolean arithmetic operations.
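To make the one-transistor-versus-billions point concrete: each gate below is fully understood in isolation, and composing a handful of them already gives binary addition, long before you reach billions of gates. A minimal sketch:

```python
# Primitive gates: each is trivially understood on its own.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # One bit of binary addition, built from three gate types.
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add_bits(x, y):
    # Ripple-carry adder: chain full adders, least significant bit first.
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 (LSB-first: 1,1,0) + 5 (LSB-first: 1,0,1) = 8 (LSB-first: 0,0,0,1)
print(add_bits([1, 1, 0], [1, 0, 1]))  # → [0, 0, 0, 1]
```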

reply
Honestly, you are just confused.

With LLMs, the "knowing" you're describing is trivial and doesn't really constitute knowing at all. It's just the physics of the substrate. When people say LLMs are a black box, they aren't talking about the hardware or the fact that it's "math all the way down". They are talking about interpretability.

If I hand you a 175-billion parameter tensor, your 'knowledge' of logic gates doesn't help you explain why a specific circuit within that model represents "the concept of justice" or how it decided to pivot a sentence in a specific direction.

On the other hand, the very professions you cited rely on interpretability. A civil engineer doesn't look at a bridge and dismiss it as "a collection of atoms" unable to go further. They can point to a specific truss and explain exactly how it manages tension and compression, tell you why it could collapse in certain conditions. A software engineer can step through a debugger and tell you why a specific if statement triggered.

We don't even have that much for LLMs, so why would you say we have an idea of what's going on?

reply
It sounds like you're looking for something more than the simple reality that the math is what's going on. It's a complex system that can't simply be debugged through[1], but that doesn't mean it isn't "understood".

This reminds me of Searle's insipid Chinese Room; the rebuttal (which he never had an answer for) is that "the room understands Chinese". It's just not satisfying to someone steeped in cultural traditions that see people as "souls". But the room understands Chinese; the LLM understands language. It is what it is.

[1] Since it's deterministic, it certainly can be debugged through, but you probably don't have the patience to step through trillions of operations. That's not the technology's fault.

reply
No one relies on "interpretability" in quantum mechanics. It is famously uninterpretable. In any case, I don't think any further engagement is going to be productive for anyone here so I'm dropping out of this thread. Good luck.
reply
Quantum mechanics has competing interpretations (Copenhagen, Many-Worlds, etc.) about what the math means philosophically, but we still have precise mathematical models that let us predict outcomes and engineer devices.

Again, we lack even this much with LLMs, so why say we know how they work?

reply
I thought the Hinton interview with Jon Stewart gives a rough idea of how they work. Hinton got the Turing and Nobel prizes for inventing some of this stuff: https://youtu.be/jrK3PsD3APk?t=255
reply
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Uh, yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" with the way two people converse, and assume that because the two produce similar outputs they must be "doing the same thing", which makes it hard to see how LLMs could be doing it.
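That step-by-step picture is literally the generation loop. A schematic sketch, with the model's forward pass replaced by a canned stand-in (`next_token` here is a toy; a real model returns the most probable next token given the full context):

```python
# Schematic of autoregressive generation: the reply is produced one token
# at a time, each step conditioned on everything emitted so far.

def next_token(context):
    # Toy stand-in for a model's forward pass: walk through a fixed reply.
    canned_reply = ["I", " am", " fine", ".", "<eos>"]
    return canned_reply[len(context) - 1]  # tokens emitted after the prompt

def generate(prompt):
    context = [prompt]
    while context[-1] != "<eos>":
        context.append(next_token(context))  # one "step toward there"
    return "".join(context[1:-1])            # drop prompt and end marker

print(generate("How are you? "))  # → "I am fine."
```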

Sometimes things seem unbelievable simply because they aren't true.

reply
> It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating.

It's funny how, in order to explain one complex phenomenon, you took an even more complex phenomenon as if it somehow simplifies it.

reply
"'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".
reply
I just point to the COVID lockdowns: how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences, real or imagined, etc. Humans need something to do. I don't think it should be work all the time, but we need something to do or we just lose it.

It's somewhat simplistic, but I find it gets the conversation rolling. Then I go, "It's great that we want to replace work, but what are we going to do instead, and how will we support ourselves?" It's a real question!

reply
It's true people need something to do, but I don't think the COVID shutdown (lockdowns didn't happen in the U.S. for the most part though they did in other countries) is a good comparison because the entire society was perfused with existential dread and fear of contact with another human being while the death count was rising and rising by thousands a day. It's not a situation that makes for comfortable comparisons because people were losing their damn minds and for good reason.
reply
That’s a fair point. I don’t mean to trivialize the actual fears and concerns surrounding the pandemic.
reply
Just say it simply,

1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as "equally good" to engineers for the C-suite to lay everyone off and rehire at 25-50% of previous wages, repeating this cycle over a decade.

2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, since anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas, and that startup you think is going to save you, aren't going to be worth anything if someone with minimal skills can copy them.

3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.

I used about 1.8 billion Anthropic tokens last year; I won't be using it again, and I won't be participating in this experiment. I've likely lost years of my life in "potential learning" to the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.

reply
You may be throwing the baby out with the bathwater. I learned more last year from ChatGPT Pro than I'd learned in the previous 5, FWIW.
reply
Just say 'LLMs'. Whenever someone name drops a specific model I can't help but think it's just an Ad bot.
reply
The "Pro" part is particularly suspect
reply
I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting them to learn themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.

A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.

You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.

reply
I agree, you're probably right! Thanks!
reply
Currently, everything suggests the torment nexus will happen before the singularity.
reply
> It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

Here's the fallacy you fell into, and it's important to understand: neither you nor I understand "how LLMs actually work", because nobody really does. Not even the scientists who built the (math around the) models. So you can't really use that argument; it would be silly to think you know something the rest of the scientific community doesn't. There's actually a whole new field of science developing around understanding how models arrive at the answers they give us. The thing is that we are only observers of the results of the experiments we run by training those models, and it just so happens that the result of this experiment is something we find plausible, but that doesn't mean we understand it. It's like a physics experiment: we can see that something behaves in a certain way, but we can't explain how or why.

reply
Pro tip: call it a "law of nature" and people will somehow stop pestering you about the why.

I think in a couple decades people will call this the Law of Emergent Intelligence or whatever -- shove sufficient data into a plausible neural network with sufficient compute and things will work out somehow.

On a more serious note, I think the GP fell into an even greater fallacy of believing reductionism is sufficient to dissuade people from ... believing in other things. Sure, we now know how to reduce apparent intelligence into relatively simple matrices (and a huge amount of training data), but that doesn't imply anything about social dynamics or how we should live at all! It's almost like we're asking particle physicists how we should fix the economy or something like that. (Yes, I know we're almost doing that.)

reply
In science these days, the term "law" is almost never used anymore; the term "theory" has replaced it, e.g. the theory of special relativity rather than a law of special relativity.
reply
Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.

Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?

reply
>Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.

If you train a transformer on (only) lots and lots of addition pairs, e.g. '38393 + 79628 = 118021', and nothing else, the transformer will, during training, discover an algorithm for addition and employ it in service of predicting the next token - which in this instance would be the sum of two numbers.

We know this because of tedious interpretability research, the very limited problem space and the fact we knew exactly what to look for.
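To make the framing above concrete, here is a toy sketch (purely illustrative, not the actual experiment from the interpretability literature) of how addition-only training data is typically set up for a next-token-prediction model - the sum is just more characters to predict:

```python
import random

# Illustrative sketch: addition examples framed as next-token prediction.
# Function names and the character-level tokenization are assumptions
# for this toy; real studies vary in tokenization and scale.

def make_example(rng: random.Random, max_val: int = 99999) -> str:
    a, b = rng.randrange(max_val), rng.randrange(max_val)
    return f"{a} + {b} = {a + b}"

def to_tokens(example: str) -> list[str]:
    # Character-level tokens: digits, spaces, '+', '='.
    return list(example)

rng = random.Random(0)
sample = make_example(rng)
tokens = to_tokens(sample)
# The model is trained to predict tokens[i+1] from tokens[:i+1];
# "discovering an addition algorithm" means this prediction becomes
# reliable even on pairs never seen during training.
pairs = [(tokens[:i + 1], tokens[i + 1]) for i in range(len(tokens) - 1)]
```

The point of the sketch is only that the "algorithm" is never written down anywhere - it has to be reverse-engineered out of the trained weights, which is exactly the tedious interpretability work described above.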

Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:

"Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"

What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.

Let's revisit your statement.

"the mechanics of how LLMs work to produce results are observable and well-understood".

Observable, I'll give you that, but how on earth can you look at the above and sincerely call that 'well-understood'?

reply
It's pattern matching, likely from typography texts and descriptions of umbrellas. My understanding is that the model attempts some permutations in its thinking, and eventually one permutation's tokens catch enough attention to trigger a solution: once it is attending to "everyday object", "arc", and "hook", it will reply with "umbrella".

Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong: https://claude.ai/share/497ad081-c73f-44d7-96db-cec33e6c0ae3 . Here's me specifically asking for the three key points above: https://claude.ai/share/b529f15b-0dfe-4662-9f18-97363f7971d1

I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.

Edit: I poked at it a little longer and I was able to get some more specific matches to source material binding the concept of umbrellas being drawn using the letter J: https://claude.ai/share/f8bb90c3-b1a6-4d82-a8ba-2b8da769241e
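For what it's worth, "attending to" has a precise mechanical meaning. Here's a minimal sketch of the scaled dot-product attention weights at the heart of transformers (the 2-d feature vectors are made up for illustration; real models use learned, high-dimensional keys and queries):

```python
import math

# Toy sketch of scaled dot-product attention weights.
# The vectors below are hypothetical; the mechanics are the point.

def attention_weights(query: list[float], keys: list[list[float]]) -> list[float]:
    d = len(query)
    # Similarity scores, scaled by sqrt(dimension) as in transformers.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# The query is most similar to the first key, so it gets the most weight -
# this is the mechanical sense of a token "catching attention".
weights = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

This is why the mechanics can be "observable and well-understood" even when the learned features themselves resist interpretation.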

reply
>It's pattern matching, likely from typography texts and descriptions of umbrellas.

"Pattern matching" is not an explanation of anything, nor does it answer the question I posed. You basically hand waved the problem away in conveniently vague and non-descriptive phrase. Do you think you could publish that in a paper for ext ?

>Why am I confident that it's not actually doing spatial reasoning? At least in the case of Claude Opus 4.6, it also confidently replies "umbrella" even when you tell it to put the parenthesis under the J, with a handy diagram clearly proving itself wrong

I don't know what to tell you, but a J with the parenthesis upside down still resembles an umbrella. To think that a machine would recognize it's just a flipped umbrella and a human wouldn't is amazing, but here we are. It's doubly baffling because Claude quite clearly explains it in your transcript.

>I feel like I have a pretty good intuition of what's happening here based on my understanding of the underlying mathematical mechanics.

Yes I realize that. I'm telling you that you're wrong.

reply
I don't have much more to add to the sibling comment other than the fact that the transcript reads

> When you rotate ")" counterclockwise 90°, it becomes a wide, upward-opening arc — like ⌣.

but I'm pretty sure that's what you get if you rotate it clockwise.

reply
>Do you think you could publish that in a paper?

You seem to think it's not 'just' tensor arithmetic.

Have you read any of the seminal papers on neural networks, say?

It's [complex] pattern matching as the parent said.

If you want models to draw composite shapes based on letter forms and typography then you need to train them (or at least fine-tune them) to do that.

I still get opposite (antonym) confusion occasionally in responses to inferences where I expect the training data is relatively lacking.

That said, you claim the parent is wrong. How would you describe LLMs, or generative "AI" models, within the confines of a forum post, in a way that demonstrates their error? Happy for you to make reference to academic papers that can aid in understanding your position.

reply
From Gemini: When you take those two shapes and combine them, the resulting image looks like an umbrella.
reply
You can't keep pushing the AI hype train if you consider it just a new type of software / fancy statistical database.
reply
Yes, there is - the benefit of the doubt.
reply
Agreed. I think it's just that people have their own simplified mental models of how it works. However, there is no reason to believe these simplified mental models are accurate (otherwise we would have gotten here 20 years earlier with HMM models).

The simplest way to stop people from thinking is to give them a semi-plausible, "makes-me-feel-smart" but incorrect mental model of how things work.

reply
Did you mean to use the word "mental"?
reply
> [...] prior to reforming society [...]

Well, good luck. You have "only" the entire history of humankind on the other side of your argument :)

reply
I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.
reply
Literally nobody’s trying because there is no solution

The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly

and so there is no solution because humans can’t plan or execute on a plan

reply
The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.
reply
FWIW, you'd probably be able to buy a lot of goods and services for $7/day, if robots were doing literally all the work.
reply
> if robots were doing literally all the work

Let me know when ChatGPT can do your laundry.

reply
Agreed. The quality of life bar will be higher for sure. But it will still technically be a "subsistence" lifestyle, with no prospect of improvement. Perhaps that will suffice for most people? We're going to find out.
reply
I thought the answer was "42"
reply
>It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.

reply
> Folks vibe with the latter

I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or may even appreciate them - realistic or not).

reply
Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.

It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.

reply
It seems pretty obvious to me the ruling class is preparing for war to keep us occupied, just like in the 20s, they'll make young men and women so poor they'll beg to fight in a war.

It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be fewer of us.

reply
Boy, will they be annoyed if the result of the AI race is something considerably less than AGI, so all the people are still needed to keep the numbers going up.
reply
I don't think so; I think they know there's no AGI, or complete replacement. They are using those hyperbolic statements to get people to buy in. The goal is just to depress the value of human labor: they will lay people off and hire them back at 50% wages (over time), and gaslight us with "well, you have AI, there isn't as much skill required."

Ultimately they just want to widen the inequality gap and remove as much bargaining power from the working class. It will be very hard for people not born of certain privileges to climb the ranks through education and merit, if not impossible.

Their goal will be to accomplish this without causing a French Revolution V2 (hence all the new surveillance being rolled out), which is where they'll provide wars for us to fight in that will be rooted in false pretenses that appeal to people's basest instincts, like race and nationalism. The bunkers and private communities they build in far off islands are for the occasion this fails and there is some sort of French Revolution V2, not some sort of existential threat from AI (imo).

reply
Reality won't give a shit about what people believe.
reply
You’re “yaas queen”ing a blog post that is just someone’s Claude Code session. It’s “storytelling” with “data,” but not storytelling with data. Do you understand? I mean, I could make up a bunch of shit too and ask Claude Code to write something I want to say with it too.
reply
What is your argument for why denecessitating labor is very bad?

This is certainly the assertion of the capitalist class, whose well-documented behavior clearly conveys that it is not because the elimination of labor fails to be a source of happiness and freedom to pursue indulgences of every kind.

It is not at all clear that universal, life-consuming labor is necessary for a society's stability and sustainability. The assertion, IMO, is rooted rather in the fact that it is inconveniently bad for the maintenance of the capitalists' control and primacy, inasmuch as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.

reply
For ages most people believed in a religion. People are just not smart, and are sheepish followers.
reply
Most still do.
reply
romans 1:20
reply
The goal is to eliminate humans as the primary actors on the planet entirely

At least that’s my personal goal

If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled

As it stands today and in all the annals of history there does not exist a system that does what I just described.

Bell Labs existed for the purpose of Bell Telephone... until it wasn't needed by Bell anymore. Google's moonshots existed for the shareholders of Google... until they were not useful for capital. All the work done at Sandia and White Sands labs was done to promote the power of the United States globally.

Find me some egalitarian organization that can persist outside the hands of some massive corporation or some government and actually help people, and I might give somebody a chance - but that does not exist.

And no, Mondragon is not one of these.

reply
This looks like a very comfortable, pleasant path to civilization suicide.

Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])

Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, is going to become more and more doubtful as humans would lose agency.

[1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...

reply
Civilization suicide is the ideal
reply
Your ideal. Definitely not mine.

Get rid of everyone else so your life is easier and more sustainable... I guess I need to make my goal to get rid of you? Do you understand how this works yet?

reply
No, you should make your goal to teach AndrewKemendo to appreciate his existence as the inscrutable gift it is, and to spend his brief time in this universe helping others appreciate the great gift they've been given and using it to the fullest.

See how it works?

reply
AndrewKemendo (based on his personal website) looks to be older than me. If he hasn't figured out the miracle of getting to exist yet, unfortunately I don't think he's going to.
reply
Only looked at his website because it was mentioned and wow. This is not quite at timecube levels but it’s closer to timecube than it is to coherence.

The man seems unwell if, based on his other comments, he has kids and is still talking about “civilization suicide” and “obviating humans”.

reply
So why are you wasting your time being a miracle on anything other than building the successor to us?
reply
Because I don't believe humans need to be succeeded by machines? You're obviously a Curtis Yarvin / Nick Land megafan. I'm of the opinion that these people are psychopaths, and I think most people would agree with my sentiment.
reply
I’m a father of three, I already know all about that. There’s nothing you’re gonna teach me there; I’m fully integrated.
reply
Somebody probably ought to take your kids away from you if they haven’t already
reply
It’s mildly amusing to see someone with the username ‘tinfoilhatter’ arguing with someone else who definitely needs one.
reply
Sounds like we both have our tasks then

Good luck

reply
Well, demonstrably you have at least some measure of interest in interaction with other humans based on the undeniable fact that you are posting on this site, seemingly several times a day based on a cursory glance at your history.
reply
Because every effort people put into anything else is a waste of resources and energy, and I want others to stop using resources to make bullshit and put all of them into ASI and human obviation.

There is no problem more important to solve than this one.

Everything else is purely coping strategies for humans who don’t want to die, wasting resources on bullshit.

reply
Nobody can stop you from having this view, I suppose. But what gives you the right to impose this (lack of) future on billions of humans with friends and families and ambitions and interests who, to say the least, would not be in favor of “human obviation”?
reply
You should probably build an organization that can counter it
reply
Bell Labs was pushed aside because Bell Telephone was broken up by the courts. (It's currently part of Nokia, of all things - yeah, despite your storytelling here, it's actually still around :-)
reply
I don't see a credible path where the machines and robots help you...

> "eliminate humans as the primary actors on the planet entirely"

...so they can work with you. The hole in your plan might be bigger than your plan.

reply
Most people need more social contact, not less. Modern tech is already alienating enough.
reply
Why would the machines want to work with you or any other human?
reply
In the meantime, your use of resources has an opportunity cost for other people. So expect backlash.
reply
Man, I used to think exactly like you do now, disgust with humans and all. I found comfort in machines instead of my fellow man, and sorely wanted a world governed by rigid structures, systems, and rules instead of the personal whims and fancies of whoever happened to have inherited power. I hated power structures, I loathed people who I perceived to stand in the way of my happiness.

I still do.

The difference is that I realized what I'd done was build up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand-new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.

Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.

But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - but rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find in eking out an existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.

To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophical about our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.

Now all that being said, the gap between you and I is less one of personal growth and more of opinion of agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.

But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.

reply
...Man, men really will do anything to avoid going to therapy.
reply
While I agree that working with machines would help dramatically in achieving science, in your world there would be no one who truly understands you. You would be alone. I can't imagine how you could prefer that.
reply
Now this is transhumanism! Don't let the cope and seething from this website dissuade you from keeping these views.
reply
Thank you!
reply
Ah yes, because the majority of people pushing for transhumanism aren't complete psycho- or sociopaths! You're in great company! /sarcasm
reply
I don’t think you’re rational. Part of being unbiased is being able to see bias in yourself.

First of all: nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs, because we simply don’t understand LLMs. This is unequivocal. I am not saying I don’t understand LLMs; I’m saying humanity doesn’t understand LLMs, in much the same way we don’t understand the human brain.

So saying whether the singularity is imminent or not imminent based off of that reasoning alone is irrational.

The only thing we have is the black-box input and output of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline is sloped toward the singularity. Whether the line actually gets there is an open question, but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don’t understand LLMs, period. No one does.

reply
> Nobody knows how LLMs work.

I'm sorry, come again?

reply
Nobody knows how LLMs work.

Anybody who claims otherwise is making a false claim.

reply
I think they meant "Nobody knows why LLMs work."
reply
Same thing? The how is not explainable either. This is just pedantry. Nobody understands LLMs.
reply
Because they encode statistical properties of the training corpus. You might not know why they work, but plenty of people know why they work and understand the mechanics of approximating probability distributions with parametrized functions - well enough to sell it as a panacea for stupidity and the path to an automated, luxurious communist utopia.
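"Approximating probability distributions with parametrized functions" can be made concrete with a toy sketch (illustrative only - nothing like a real LLM's scale or architecture): a softmax over learned scores defines a next-token distribution, and a training update nudges the parameters to raise the probability of the observed token.

```python
import math

# Toy sketch: a "language model" over a 3-token vocabulary is just a
# parametrized function producing a probability distribution.
# The logit values and learning rate here are made up for illustration.

def softmax(logits: list[float]) -> list[float]:
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 0.5, -1.0]                # learned scores for each token
probs = softmax(logits)

# One gradient-style update toward an observed token (index 0):
# the cross-entropy gradient w.r.t. logits is (one-hot - probs),
# so stepping along it raises the observed token's probability.
lr, target = 0.1, 0
grads = [(1.0 if i == target else 0.0) - p for i, p in enumerate(probs)]
logits = [x + lr * g for x, g in zip(logits, grads)]
new_probs = softmax(logits)
```

Of course, the open question in the thread is not this mechanism - which is well understood - but what algorithms emerge inside the parametrized function at scale.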
reply
Nobody can know how something non-deterministic works - by its very definition.
reply