I think the thin-client/fat-client split is a pendulum that swings every few years.

Main-frame (thin) -> PC (fat) -> Internet/Cloud (thin) -> Mobile (fat) -> AI (thin)

I expect this to continue until the next technology transition.

In each of these shifts (and there have been others), things are never completely fat or thin; it's more of an in-between state, leaning toward either local or cloud.

reply
We won't be in a supply crunch forever. We'll have a demand crunch. The demand for powerful consumer hardware will shrink so much that producing it will lose the economies of scale. It was always bound to happen, just delayed by the trend of pursuing realistic graphics for games.

People who are willing to drop $20k on a computer might not be affected much tho.

reply
> People who are willing to drop $20k on a computer might not be affected much tho.

They probably won't, but those willing to drop $3-10k will be if consumer and data-center computing diverge at the architectural level. It's the classic hollowing-out of the middle - most of the offerings end up in a race to the bottom chasing volume from price-sensitive customers, the quality options lose economies of scale and disappear, and the high end becomes increasingly bespoke/pricey, or splits off into a distinct market with an entirely different type of customer (here: DC vs. individuals).

reply
My bet is that phone hardware will be used more and more in mini PCs and laptops, keeping costs down and volume up. We already see it with Apple and with many of the Chinese mini PC makers I've looked at.
reply
This is so true. Convergence will continue. H/W miniaturization will keep increasing. In fact, new brands could easily appear and even overtake the largest players. For example, have you seen this massive range of docking technology?

https://us.ugreen.com/collections/usb-c-hubs - these docks only require a single USB port to connect to. That could be an SBC working as a handheld. These docks could end up being the largest cost component in the new era of all-in-ones. UGreen could be the next Apple as screens and processors snap onto these hubs, in addition to their own range of power banks and SSD enclosures. Their quality is high too.

In fact, I would go so far as to say we are entering a tinkering culture, and free-energy technologies are upon us as a response to oppressive economic times. Sort of like how the largest leaps in religious and esoteric thought have occurred in the most oppressive of circumstances.

People will reject their crappy thin clients, start tinkering and build their own networks. Knowledge and currency will stay private and concentrated - at least at first.

reply
RAM is going to be the most expensive component, I suppose.

But indeed, once you have USB-C support on your device, you can connect all kinds of peripherals through it, from keyboards to 4K screens. Standardized device classes obviate the need for most drivers.

reply
Yep. I was thinking that as crypto miners pivot into AI https://catenaa.com/markets/cryptocurrencies/jpmorgan-morgan... - there must also be a case for miners (anyone really) liquidating their hardware, including memory. So the price of memory has its own limits-to-growth - latent availability, but that's another topic.
reply
If this ends up being true, desktop Linux adoption might make inroads. Windows apps run like crap on ARM and no one is bothering to make ARM builds of their software.
reply
Because ARM Windows is locked down tightly. The same will interfere with Linux adoption on similar hardware.
reply
The original Raspberry Pi was built around an overstock phone chip. Modern alternatives built around Rockchip and similar high-end phone chips venture into the territory of lower-end laptops. Aliexpress is full of entry-level laptops based on ARM phone chips (apparently running Android).

This will likely extend further and further, more into the "normie" territory. MS Windows is, of course, the thing that keeps many people pinned to the x64 realm, but, as Chromebooks and the Steam Deck show us, Windows is not always a hard requirement to reach a large enough market segment.

reply
No, a set-top-box chip.
reply
All we need is for HDMI to be unlocked so it works on phones, or maybe VGA adapters that work on phones. And a way to "sideload" our own apps. Hackers please make this happen.
reply
Some modern phones do DisplayPort over USB C.
reply
Unified hardware helps some and hurts some. See: same gpus for gaming and for AI.
reply
Apple just launched a $600 amazing laptop and the top models have massive performance. What are we talking about here?
reply
I don't think personal computers will go away, but I think the era of "put it together yourself" commodity PC parts is likely coming to an end. I think we're going to see manufacturers back out of that space as demand decreases. Part selection will become more sparse. That will drive further contraction as the market dries up. Buying boxed motherboards, CPUs, video cards, etc. will still be possible, but prices will never recover to "golden age" levels.

The large PC builders (Dell, HP, Lenovo) will continue down the road of cost reduction and proprietary parts. For the vast majority of people pre-packaged machines from the "big 3" are good enough. (Obviously, Apple will continue to Apple, too.)

I think bespoke commodity PCs will go the route, pricing-wise, of machines like the Raptor Talos machines.

Edit: For a lot of people the fully customized bespoke PC experience is preferred. I used to be that person.

I also get why that doesn't seem like a big deal. I've been a "Dell laptop as a daily driver" user for >20 years now. My two home servers are just Dell server machines, too. I got tired of screwing around with hardware and the specs Dell provided were close enough to what I wanted.

reply
There are upsides here as well! I think of things like the NUC or Mac Mini - ATX is from 1995, I'm hopeful computers will become nicer things as we trend away from the bucket-o-parts model.

I'm very excited about the Steam Machine for the reasons you mention - I want to buy a system, not a loose collection of parts that kind-of-sort-of implement some standard to the point that they probably work together.

reply
Reading some of the doomer comments in this thread feels like taking a glimpse into a different world.

We're out here with amazing performance in $600 laptops that last all day on battery and half of this comment section is acting like personal computing is over.

reply
They don't run the software I want to run (Linux, Windows games) and/or with the performance I want.

Raspberry Pi is way cheaper than those things, and I'm sure you could hook one up with an all-day battery for $100-200. Doesn't mean it's "better".

reply
They trade blows performance-wise with the M1 MacBook Pro sitting on my desk. And there's nothing stopping Asahi Linux from running on them except driver support. They look like fantastic machines.

They’re not ideal for all use cases, of course. I’m happy to still have my big Linux workstation under my desk. But they seem to me like personal computers in all the ways that matter.

reply
Two different populations — those interested in computing, and those interested in computers.
reply
Personal computing and IBM PC clones are not the same thing. The fall of PC clones can happen while other personal computing devices continue to be produced. The $600 laptop is not a PC.
reply
Apple laptops are PCs (Personal Computers). They are not IBM PCs. But IBM hasn't made PCs in years, and there hasn't been any IBM PC hardware to clone in years.
reply
But I don't want a $600 amazing laptop, I want a powerful desktop x86 machine with loads of RAM and disk space. As cheap as it was a couple of years ago.
reply
> As cheap as it was a couple of years ago.

I also want housing as cheap as it was a couple of years ago.

reply
You can have both. You just have to undo the forced bail-in of Millennial and Gen-Z/Alpha/Beta productivity to cover the debts and lifestyles of Silent Gen/Boomer/Gen-X asset holders. The insanity of contemporary markets doesn't reflect anything natural about the world's economic priorities, but instead the privileging of the priorities of that cohort. They've cornered control until enough people call bullshit. So, call bullshit.
reply
x86 going away wouldn't be surprising. Ignoring David Patterson was a mistake to begin with.
reply
Looking at AMD's x64 server offerings, I don't see why that would go away.

But I can imagine that it would become less prevalent on personal machines, maybe even rare eventually.

reply
Not sure about the memory, but Xeon Scalable/Max ES/QS chips and their boards are still not horribly expensive.

Prior to the crunch, you could have anything from 48-64 cores and a good chunk of RAM (128GB+). If you were inordinately lucky, 56 cores and 64GB of onboard HBM2e was doable for 900-1500 USD.

They're not Threadrippers or EPYCs, but sort of an in-between - a server chip that can also make a stout workstation.

reply
8GB isn't an "amazing" laptop, it's a budget laptop. It's also thermally constrained quite a bit, so not even as "amazing" as it could be.
reply
The point about Apple is that everyone from Zoom, Slack, etc. will be forced to optimize for that 8GB. (Same as getting rid of the awful Flash player.)

Many people need only a basic device for Netflix, YouTube, Google Docs, email, or searching for and buying flight tickets. That will be amazing.

Many have a job-supplied laptop/desktop for great performance (made rubbish by AV scanners, but that's a different issue).

reply
>(Same as getting rid of the awful Flash player.)

I was looking up an old video game homepage the other day for some visual design guidance. It was archived on the Wayback Machine, but with Flash gone, so was the site. Ruffle can't account for every edge case.

Flash was good. It was the bedrock of a massive chunk of the Old Net. The only awful thing is the people who pushed and cheered for its demise just so that Apple could justify their walled garden for the few years before webdev caught up. Burning the British Museum to run a steam engine.

reply
If they choke the consumer PC market long enough, the segment will die.
reply
> We'll have a demand crunch

This is what I'm afraid of. As more stuff moves to the cloud, helped in part by the current prices of HW, the demand for consumer hardware will drop. This will keep turning the vicious cycle of rising consumer HW prices and more moves to the cloud.

I can already see Nvidia rubbing their hands together in expectation of the massive influx of customers to their cloud gaming platform. If a GPU is so expensive, you move to a rental model, and the subsequent drop in demand makes GPUs even more expensive. They're far from the only ones with dollar signs in their eyes, between the money and the total control over customers this future could bring.

Being entirely reliant on someone else's software and hardware is a bleak thought for a person used to some degree of independence and self sufficiency in the tech world.

reply
>Being entirely reliant on someone else's software and hardware is a bleak thought for a person used to some degree of independence and self sufficiency in the tech world.

It's also a nightmare from any sort of privacy perspective, in a world that's already becoming too much like a panopticon.

reply
> I can already see Nvidia rubbing their hands together in expectation of the massive influx of customers to their cloud gaming platform.

Roblox is not popular because of its graphics. Younger gamers care more about having fun than having an immersive experience.

reply
I love it when I get my Robloxhead daughter to test drive some of the games I play on my 5090 box. "Ooooh these graphics are unreal" "Can we stop for just a moment and admire this grass" :-D
reply
I think we're talking about 2 different things. I'm not sure where Roblox fits into what I said.

The problem I describe is companies pushing towards the "rent" model vs. "buy to own". Nvidia was just an example by virtue of their size. Microsoft could be another, they're also eying the game streaming market. Once enough buyers become renters, the buying market shrinks and becomes untenable for the rest, pushing more people to rent.

GPUs are so expensive now that many gamers were eying GeForce Now as a viable long term solution for gaming. Just recently there was a discussion on HN about GeForce Now where a lot of comments were "I can pay for 10 years of GeForce Now with the price of a 5090, and that's before counting electricity". All upsides, right?

In parallel, Nvidia is probably seeing more money in the datacenter market, so would rather focus the available production capacity there. Once enough gamers move away from local compute, the demand is unlikely to come back, so future generations of GPUs would get more and more expensive to cater to an ever-shrinking market. This is the vicious cycle: expensive GPUs + cheap cloud gaming -> shrinking GPU market and higher GPU prices -> more of step 1.

Roblox is one example; there are many popular games that aren't graphics-intensive or don't rely on eye candy. But what about all the other games that require a beefy GPU to run? Gamers will want to play them, and Nvidia, like most other companies, sees more value in recurring revenue than in one-time sales. A GPU you own won't bring Nvidia money later; a subscription keeps doing that.

The price hikes come only after there's no real alternative to renting. Look at the video streaming industry.

reply
Yeah, this gamer conspiracy theory never made sense to me.

Also, if gamers demand infinitely improving graphics so much that they would rather pay for cloud gaming than relax their expectations and be happy with, say, current gen graphics, then that is more a claim about modern self-pwned gamer behavior than megacorp conspiracy.

But I don't buy that either. The biggest games on Steam Charts and Twitch aren't AAA RTX 5090 games.

reply
> then that is more a claim about modern self-pwned gamer behavior than megacorp conspiracy.

Riddle me this: does anyone pursue a self-pwn intentionally?

"Conspiracy theory" is just dehumanizer talk for falling prey to business as usual.

reply
As someone who has been buying computers for 40+ years, including the 1st gen 3dfx card, etc, this is where I NOPE out of the next upgrade cycle. I am not renting hardware. It's bad enough ISPs are renting modems.
reply
The problem is that there is a very large incentive for three large companies to corner the market on computing components, forcing consumers to rent access instead of owning.
reply
> We won't be in a supply crunch forever.

This is what always happens in capitalism. Scarcity is almost always followed by glut.

reply
I don't believe we are seeing the investments that would indicate this will happen.

Memory makers, for example, have sold out their inventory for several years, but instead of investing to manufacture more, they’re shutting down their consumer divisions. They’re just transferring their consumer supply to their B2B (read AI) supply instead.

That's likely because they don't expect this demand to last past a few years.

reply
They have seen boom and bust cycles previously and are understandably wary of expanding capacity for expected demand that may fizzle. If they stay too conservative, China’s CXMT is chomping at the bit to eat their lunch, backed by the Chinese government, but that’s not going to help until late 2027 at best.
reply
How much capital would you invest in a capacity expansion for a trend that may or may not yet be durable? Now, how much would you invest when two major state-backed Chinese entities, which essentially aren't allowed to go bankrupt and have infinite money, are competing with you?
reply
If the demand lasts for a few years, I’m doubtful that all of the consumer capacity will come back.
reply
Consumer demand likely depends on how local models end up working out. Nothing else really needs serious local computing power anymore. My guess is that even high-end games will probably stagnate for a while.

Many users will not want to risk their privacy, data, and workflow on someone else's rapidly-enshittifying AI cloud model. Right now we don't have much choice, but there are signs of progress.

reply
High-end games are far from stagnating when viewed in terms of usable performance.

Many new games cannot run at max settings, 4K, 120Hz on any modern GPU. We probably need to hit 8K before we max out on the returns higher resolution can provide. Not to mention most game devs are targeting an install base of $500, 6-year-old consumer hardware, in a world where the 5090 exists.

reply
That's what I mean by stagnating... most players already can't run with max settings, or even close to them. From the developers' point of view there's not much point raising the bar any higher right now, while the best GPU hardware is so far out of reach of your average PC gamer.
reply
The thing is, other than AI stuff, where does a non powerful computer limit you?

My phone has 16gigs of ram and a terabyte of storage, laptops today are ridiculous compared to anything I studied with.

I'm not arguing mind you, just trying to understand the usecases people are thinking of here.

reply
> other than AI stuff, where does a non powerful computer limit you?

Running Electron apps and browsing React-based websites, of course.

reply
For real. Once I've opened Spotify, Slack, Teams, and a browser about 10GB of RAM is in use. I barely have any RAM left over for actual work.
reply
I keep wondering why we can't have 2000s software on today's hardware. Maybe because browsers are de facto required to build apps?
reply
We could, but most of the 2000s developers are gone. Or, we no longer have developers left with 2000s attitudes and approaches to software development.
reply
I think that is a little bit unfair. Plenty of developers, myself included, wouldn't mind or would even like to build native applications. Every time someone does, a mountain of people ask "why" and say "this shoulda/coulda been a web app." And some of that is somewhat reasonable: it's easier to achieve decent-ish cross-platform support on the web. But tons of consumers also just don't want to download and install applications unless they come from an app store, and even then it's iffy. Or, most often, it's a requirement of the founders/upper management/C-suite. And let's be honest: when tons of jobs ask for React or Vue.js experience, what motivates developers to learn GTK or Qt or WinForms or WinUI 3?
reply
Yep. I graduated in 2017 and jobs were already mostly web. I'd love to work on native applications, but nobody is hiring for that, and of course because nobody is hiring for that, I don't have a job like that. The Qt I learnt in university is not going to get any more relevant over time, but I don't have a good reason to keep that skill up to date, and if I have to solve a problem I might as well write a TUI or CLI application, because that's easier than Qt or whatever…
reply
It's also reasonable from a business point of view to say "we can't justify the investment to optimize our software in the current environment." I assume this is what's happening - people are trying to get their products into customers' hands as quickly as possible, and everything else is secondary once it's "good enough." I suspect it's less about developers and more about business needs.

Perhaps the math will change if the hardware market stagnates and people are keeping computers and phones for 10 years. Perhaps it will even become a product differentiator again. Perhaps I'm delusional :).

reply
Real talk.

Well, some of the "old school" has left the market due to natural causes since the 2000s.

That only leaves the rest of 'em. Where'd they go, and what are your top 3 reasons for how the values of the 2000s era failed to transmit to the next generation of developers?

reply
There's no market for it.
reply
deleted
reply
That's why I only run those on work computers (where they are mandated by the company). My personal computers are free of such software.
reply
I rarely dodge a chance to shit on Microslop and its horrible products, but you don't use a browser? In fact, running all that junk in a single Chromium instance is quite a memory saver compared to individual Electron applications.
reply
It's not just electron apps or browsers, as I'd argue modern .NET apps are almost as bad.

I have an example.

I use Logos (a Bible study app, library ecosystem, and tools) partially for my own faith and interests, and partially because I now teach an adult Sunday school class. The desktop version has gotten considerably worse over the last 2-3 years in terms of general performance, and I won't even try to run it under Wine. The mobile versions lack many of the features available for desktop, but even there, they've been plagued by weird UI bugs for both Android and iOS that seem to have been exacerbated since Faithlife switched to a subscription model. Perhaps part of it is their push to include AI-driven features, no longer prioritizing long-standing bugs, but I think it's a growing combination of company priorities and framework choices.

Oh, for simpler days, and I'm not sure I'm saying that to be curmudgeonly!

reply
I use a browser at home, but I don't use the heaviest websites. There are several options for my hourly weather update; some are worse than others (sadly I haven't found any that are lightweight - I just need to know whether there will be a thunderstorm when I ride my bike home from work, which would mean I shouldn't ride in now).
reply
Yr.no [1] is free, and available in English. Thanks to Norway. Apps available as well.

https://en.wikipedia.org/wiki/Yr.no

reply
Try Quickweather (with OpenMeteo) if you're on Android. I love it.

https://f-droid.org/en/packages/com.ominous.quickweather/

reply
I'm giving up on weather app bullshit at this point, and am currently (literally this moment) making myself a Tasker script to feed hourly weather predictions into a calendar, so I can see them displayed inline with events on my calendar and, most importantly, on my watch[0] - i.e. in the context where they actually matter.

--

[0] - Having https://sectograph.com/ as a watch face is 80%+ of value of having a modern smartwatch to me. Otherwise, I wouldn't bother. I really miss Pebble.
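For the curious, the data-fetching half of that can be pretty simple. A minimal sketch using the Open-Meteo API mentioned elsewhere in this thread (no API key needed; the lat/lon and the 6-hour window here are placeholders, and the Tasker/calendar half is left out):

    # next 6 hours of precipitation probability for a hardcoded location
    curl -s "https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&hourly=precipitation_probability" \
      | jq '.hourly.precipitation_probability[:6]'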

reply
Fun fact: you can kill all Firefox background processes and basically hand-crash every tab, then just reload the pages in the morning. I do this every evening before bed. `pkill -f contentproc` and my CPU goes from wheezing to idle, as well as releasing ~8GB of memory on busy days.

("Why don't you just close firefox?" No thanks, I've lost tab state too many times on restart to ever trust its sessionstore. In-memory is much safer.)

reply
Yeah, I found this out the other day when my laptop was toasting. In hindsight, probably related to archive.today or some Firefox extension.

You have to close Firefox every now and then for updates, though. The issue you describe seems better dealt with at the filesystem level, with a CoW filesystem such as ZFS. That way, versioning and snapshots are a breeze, and your whole homedir could benefit.

reply
FWIW: the Tab Stash extension has worked well for me.
reply
Why would I need a browser to play music? Or to send an email? Or to type code? My browser usage is mostly for accessing stuff on someone else’s computer.
reply
The only subscription I have is Spotify, since there's no easy way that I know of to get the music discoverability that Spotify provides.

For the rest: I agree with you.

reply
Last.fm is still a great way to discover music - countless times I've gotten great recs from a music neighbor.
reply
Plex or Jellyfin client access.
reply
mpv + sshfs is the way.
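Something like this, roughly (host name and paths are placeholders):

    # mount the remote library over SSH and play straight from the mount
    mkdir -p ~/media
    sshfs user@homeserver:/srv/media ~/media
    mpv ~/media/music/
    fusermount -u ~/media   # unmount when done (umount ~/media on macOS)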
reply
I kind of hate how the www has become this lowest common denominator software SDK. Web applications are almost always inferior to what you could get if you had an actual native application built just for your platform. But we end up with web apps because web is more convenient for software developers and it's easier to distribute. Everything is about developer convenience. We're also quickly running out of software developers who even know how to develop and distribute native apps.

And when, for whatever reason, having a "desktop application" becomes a priority to developers, what do they do? Write it in Electron and ship a browser engine with their app. Yuuuuuuck!

reply
We have an open, universal application platform. That alone is something to celebrate.
reply
Yeah it's awful. Web apps are slower, they don't integrate well with the system, they are inaccessible if the network is down. A native app has to be truly abysmal to be worse than a web app. But far too many developers simply do not care about making something good any more. There's no pride in one's work, just "web is easier for the developer". And of course the businesses producing software are all about that, because they are run by people with a business ethic of "make the product as cheaply as possible, ignore quality". It's a very sad state of affairs.
reply
Seems like the perfect target for ESG.
reply
Companies love externalizing the cost of inefficient software onto consumers, who then need to purchase more powerful computing hardware.
reply
If only. At work I've got a new computer, replacing a lower-end 5-yo model. The new one has four times the cores, twice the RAM, a non-circus-grade ssd, a high-powered cpu as opposed to the "u" series chip the old one has.

I haven't noticed any kind of difference when using Teams. That piece of crap is just as slow and broken as it always was.

reply
> If only. At work I've got a new computer, replacing a lower-end 5-yo model. The new one has four times the cores, twice the RAM, a non-circus-grade ssd, a high-powered cpu as opposed to the "u" series chip the old one has.

> I haven't noticed any kind of difference when using Teams.

If the device is a laptop, also the thermal design (or for laptops that are in use: whether there is dust in the ventilation channels (in other words: clean the fans)) is very important for the computer to actually achieve the performance that the hardware can principally deliver.

reply
Yeah, people love to shit on Electron and such, but they're full of crap. It doesn't matter one bit for anything more powerful than a Raspberry Pi. Probably not even there. "Oh boo hoo, Chrome uses 2 gigs of RAM" - so what, you have 16+; it doesn't matter. I swear people have some weird idea that the ideal world is one where 98% of their RAM just sits unused. The whole point of RAM is to use it, yet whenever an application does use it, people whine about it. And it's not even "this makes my PC slow", it's literally just "hurr durr RAM usage is X". Okay, but is there an actual problem? Crickets.
reply
I have no issue with browsers specifically having to use a bunch of resources. They are complicated-as-fuck software, basically their own operating system. Same for video games or programs that do heavy data processing.

The issue is with applications that have no business being entitled to large amounts of resources. A chat app is a program that runs in the background most of the time and is used for sporadic communication. Same for music players etc. We've had these sorts of things since the '90s, when high-end consumer PCs had 16MB of RAM.

reply
"chrome uses 2gb of ram"

These days, individual _tabs_ are using multiple GB of RAM.

reply
Don't know about Chrome, but Firefox has an about:memory special page that will let you know which tabs are using the most RAM. Of all the sites I use, YouTube is the only culprit. When I am done watching a video, I use about:memory to kill the associated process (it doesn't destroy the tab, in case I want to come back to it). I assume it is all the JavaScript cruft.
reply
The issue isn't usage, it's waste. Every byte of RAM that's used unnecessarily because of bloated software frameworks used by lazy devs (devs who make the same arguments you're making) is a byte that can't be used by the software that actually needs it, like video editing, data processing, 3D work, CAD, etc. It's incredibly short sighted to think that any consumer application runs in a vacuum with all system resources available to it. This mindset of "but consumers have so much RAM these days" just leads to worse and worse software design instead of programmers actually learning how to do things well. That's not a good direction and it saddens me that making software that minimizes its system footprint has become a niche instead of the mainstream.

tl;dr, no one is looking for their RAM to stay idle. They're looking for their RAM to be available.

reply
I dunno man, I have 32GB and I'm totally fine playing games with 50 browser tabs open along with Discord and Spotify and a bunch of other crap.

I'm not trying to excuse crappy developers making crappy, slow, and wasteful apps; I just don't think Electron itself is the problem. Nor do I think it's a particularly big deal if an app uses some memory.

reply
You're right, Electron is not inherently bad and apps need RAM. There's no getting around that.

The issue with Electron is that it encourages building desktop apps as self-contained websites. Sure, that makes it easier to distribute apps across systems and OSes, but it also means you've got front end web devs building system applications. Naturally, they'll use what they're used to: usually React, which exacerbates the problem. Plus it means that each app is running a new instance of a web browser, which adds overhead.

In real life, yeah, it's rare that I actually encounter a system slowdown because yet another app is running on Electron. I just think that it's bad practice to assume that all users can spare the memory.

I'll admit that my concern is more of a moral one than a practical one. I build software for a living and I think that optimizing resource usage is one way to show respect to my users (be they consumers, ops people running the infra, or whatever). Not to mention that lean, snappy apps make for a better user experience.

reply
Lazy developers can make bad apps that waste RAM no matter what framework. But even conscientious developers cannot make an app with Electron that compares favorably to a native app. Electron is inherently a problem, even if it isn't the only one.
reply
The problem with having 32GB of RAM is that there is no mechanism to power off part of it when it is unneeded (plus RAM constitutes a significant fraction of a device's total power consumption), so if the device is running off a battery and is designed to keep weight to a minimum (e.g., battery as small as practical), then battery life is not as good as it would be if the device had only 16GB.

This is why the top model of the previous generation of the iPhone (the iPhone 16 Pro Max) has only 8 GB of RAM, bumped to 12 GB for the current top model (the iPhone 17 Pro Max at the higher tiers of additional storage). If Apple had decided to put more RAM than that into any iPhone, even the models where the price is irrelevant to most buyers, they would not have been serving their customers well.

So, now you have to pay a penalty in either battery life or device weight for the duration of your ownership of any device designed for maximum mobility if you ever want to have a good experience running Electron apps on it.

reply
The web browser on my phone instantly gets killed the moment I switch to another app because it eats up so much ram.
reply
I think it's a correlation vs. causation type thing. Many Electron apps are extremely, painfully slow. Teams is pretty much the poster child for this, but even Spotify sometimes finds a way to lag, when it's just a freaking list of text.

Are they slow because they're Electron? No idea. But you can't deny that most Electron apps are sluggish for no clear reason. At least if they were pegging a CPU, you'd figure your box is slow. But that's not even what happens. Maybe they would've been sluggish even using native frameworks. Teams seems to do 1M network round-trips on each action, so even if it were perfectly optimized assembly for my specific CPU it would probably make no difference.

reply
Nearly all apps are sluggish for a very clear reason: the average dev is ass. It's possible to make fast apps using Electron, just like it's possible to make fast apps using anything else. People complain about React too; React is fast as fuck. I can make React apps snappy as hell. It's just crappy devs.
reply
Yea, these applications are typically not slow just because they use Electron (although it's often a contributor). But the underlying reason why they are slow is the same reason why they are using Electron: developer skill.
reply
The people I trust to give good security recommendations (e.g., the leader of the Secureblue project) tell me I should completely avoid Electron (at least on Linux) because of how insecure it is. E.g., the typical Electron app pulls in many NPM packages, for which Electron does zero sandboxing.
reply
It seems like as hardware gets cheaper, software gets more bloated to compensate. Or maybe it’s vice versa.

I wonder if there’s a computer science law about this. This could be my chance!

reply
Is your name Wirth?
reply
Dangit! Always the bridesmaid, never the bride
reply
deleted
reply
Sorry to burst your bubble:

https://en.wikipedia.org/wiki/Wirth%27s_law

Not exactly the same (it's about power rather than price), but close enough that when you said it, I thought, "oh! there is something like that." There are also more fundamental economic laws at play for supply and demand of a resource / efficiencies at scale / etc. Given our ever-increasing demand for compute compared to the increasing supply (cheaper, more powerful compute), I expect the supply will bottleneck before the demand does.

reply
Ah, so you think there’s a point where actually bloat slows because we eventually can’t keep up with demand for compute?

I guess this might be happening with LLMs already

reply
That's actually a good point, haha. The worst-case scenario of computers being thin clients for other people's servers dissolves when you realize that chromium/electron IS, nominally, a thin client for HTTP servers, and it'll gladly eat up as much memory as you throw at it. In the long term, modulo the current RAM shortage, it turns out it's cheaper to ship beefy hardware than it is to ship lean software.
reply
This is the way
reply
The big one for me is ballooning dependency trees in popular npm/cargo frameworks. I had to trade a perfectly good i9-based MacBook Pro up to an M2, just to get compile times under control at work.

The constant increases in website and electron app weight don't feel great either.

reply
3D CAD/CAM is still CPU (and to a lesser extent memory) bound --- I do joinery, and my last attempt at a test joint for a project I'm still working up to was a 1" x 2" x 1" area (two 1" x 1" x 1" halves which mated) which took an entry-level CAM program some 18--20 minutes to calculate and made a ~140MB file including G-code toolpaths.... (really should have tracked memory usage....)
reply
That sounds like pretty degenerate behavior. I typically have CAM toolpaths generate in seconds using Fusion or PrusaSlicer.
reply
It's a very complex joint (which is why it's never been done before that I could find --- hopefully will be patentable), and the tool definition probably wasn't optimal, nor the CAM tool being used appropriate to the task, hence my working on developing the toolpaths more directly.
reply
Is that by convention or is there a good reason that it’s so CPU bound? I don’t have experience with CAD, so I’m not sure if it’s due to entrenched solutions or something else.
reply
> Is that by convention or is there a good reason that it’s so CPU bound?

A lot of commercial CAD software has existed for a very long time, and it is important for industrial customers that backward compatibility is very well kept. So, the vendors don't want to make deep changes in the CAD kernels.

Additionally, such developments are expensive (because novel algorithms have to be invented). I guess CAD applications are not so incredibly profitable that as a vendor you would want to invest a huge amount of money into the development of such a feature.

reply
My understanding is that the problems being worked on do not yield to breaking down into parallelizable parts in an efficient/easily-calculated/unambiguous fashion.
reply
Simulation, analysis, rendering... all of those gobble memory, CPU, and sometimes the graphics card. Real-time work too: huge data sets in real time - sensors for a production line or environmental monitoring, for example.

For word processing, basic image manipulation, and Electron apps (well...), even the "cheap" MacBook Neo is good enough, and it's last year's phone CPU. But that's not enough for a lot of use cases.

reply
> My phone has 16gigs of ram and a terabyte of storage, laptops today are ridiculous compared to anything I studied with.

Most affordable laptops have exactly that, 16gigs of ram and a terabyte of storage. Think about THAT!

reply
I've never had a personal computer that came even close to being powerful enough to do what I want. Compiles that take 15 minutes are really annoying, for instance.
reply
>My phone has 16gigs of ram and a terabyte of storage

That's "non powerful" to you?

reply
The opposite. I meant that if this is what consumer grade looks like nowadays, even with a fraction of current flagships we seem well covered - this was less than 800 bucks.
reply
I’d love it if a clean build and test on the biggest project I work in would finish instantly instead of taking an hour.
reply
[dead]
reply
> "I personally dropped $20k on a high end desktop"

This absolutely boggles my mind. Do you mind if I ask what type of computing you do in order to justify this purchase to yourself?

reply
Any and all. It's not particularly justifiable. It's more like, I'm a software engineer, and this is my home workshop. I run dozens of services, experiment with a bunch of different LLMs, tune my Postgres instance for good performance on large datasets, run ML data prep pipelines. All sorts really.

I'm also into motorcycles. Before I owned a house with a garage, I had to continuously pack my tools up and unpack them the next day. A bigger project meant schlepping parts in and out of the house. I had to keep track of the weather to work on my bikes.

Then, when I got a house, I made sure to get one with a garage and power. It transformed my experience. I was able to leave projects in situ until I had time. I had a place to put all my tools.

The workstation is a lot like that. The alternative would be renting. But then I'd spend a lot of my time schlepping data back and forth, investing in setting things up and tearing them down.

YMMV. I wouldn't dream of trying to universalize my experience.

reply
I'm thinking the same. My total computing purchases in the last 25 years, including desktops, laptops, monitors, phones, and tablets is way under 20k.

I would bet it continues to be more affordable to buy reasonable specs with current consumer hardware, rather than buying a top system once.

reply
I haven't purchased a new computer in at least 10 years. I take pride (i.e., I have a sickness) in purchasing used laptops off eBay, beefing them up, and loading Debian on them. My two main computers are a Dell E5440 and a Lenovo ThinkPad T420. I, too, am a software developer, but [apparently] not as much of a rock star software developer as this gentleman. :-D
reply
> I personally dropped $20k on a high end desktop - 768G of RAM, 96 cores, 96 GB Blackwell GPU - last October, before RAM prices spiked […]

768GB of RAM is insane…

Meanwhile, I’ve been going back and forth for over a year about spending $10k on a MacBook Pro with 128GB. I can’t shake the feeling I’d never actually use that much, and that, long term, cloud compute is going to matter more than sinking money into a single, non-upgradable machine anyway.

reply
Your battery is going to suffer because of the extra ram as well.

I don't know your workloads, but for me personally 64 GB is the ceiling on RAM - I can run an entire k8s cluster locally with that, and the M5 Pro with top cores is the same CPU as the M5 Max. I don't need the GPU - the local AI story and OSS models are just a toy for my use-cases, and I'm always going to shell out for the API/frontier capabilities. I'm even thinking of the 48 GB config because they already have those at 8% discounts/shipped by Amazon, and I never hit that even on my workstation with 64 GB.

reply
> Your battery is going to suffer because of the extra ram as well.

No, it won't. The power drain of merely refreshing DRAM is negligible, it's no higher than the drain you'd see in S3 standby over the same time period.

reply
Given the DRAM refresh is part of S3 standby, I'm afraid this is circular reasoning.
reply
I suspect this is one of those "it depends" situations; does the 128gb vs 64gb sku have more chips or denser chips? If "more chips" probably it'll draw a tiny bit more power than the smaller version. If the "denser" chips, it may be "more power draw" but such a tiny difference that it's immaterial.

Similarly, having more cache may mean less SSD activity, which may mean less energy draw overall.

If I had a chip to put on the roulette table of this "what if" I'd put it on the "it won't make a difference in the real world in any meaningful way" square.

reply
I thought my Z620 with 128GB of RAM was excessive! Actually, HP says they support up to 192GB of RAM, but for whatever reason the machine won't POST with more than 128GB (4Rx4) in it. Flawed motherboard?
reply
Look at the way age gating is going in a global coordinated push. Can control of compute be far behind?

It wasn't my primary motivator but it hasn't made me regret my decision.

I hummed and hawed on it for a good few months myself.

reply
Just look at ITAR and the various attempts at legislating 3D printing and CNC machining of firearms parts to see one justification point of that.
reply
> Can control of compute be far behind?

How is this going to work? You need uncontrolled compute for developing software. Any country locking up that ability too much will lose to those who don't.

reply
> How is this going to work? You need uncontrolled compute for developing software.

I've read about companies where all software developers have to RDP to the company's servers to develop software, either to save on costs (sharing a few powerful servers with plenty of RAM and CPU between several developers) or to protect against leaks (since the code and assets never leave the company's Citrix servers).

reply
Even for tiny crews doing nothing of fatal significance, this is unironically superior to "throw it on GitHub"
reply
>You need uncontrolled compute for developing software

Oh you sweet summer child :(

You think our best and brightest aren't already working on that problem?

In fact they've fucking aced it, as has been widely celebrated on this website for years at this point.

All that remains is getting the rest of the world to buy in, hahahaha.

But I laugh unfairly and bitterly; getting people to buy in is in fact easiest.

Just put 'em in the pincer of attention/surveillance economy (make desire mandatory again!).

And then offer their ravaged intellectual and emotional lives the barest semblance of meaning, of progress, of the self-evident truth of reason.

And magic happens.

---

To digress. What you said is not unlike "you need uncontrolled thought for (writing books/recording music/shooting movies/etc)".

That's a sweet sentiment, innit?

Except it's being disproved daily by several global slop-publishing industries that exist since before personal computing.

Making a blockbuster movie, recording a pop hit, or publishing the kind of book you can buy at an airport, all employ millions of people; including many who seem to do nothing particularly comprehensible besides knowing people who know people... It reminds me of the Chinese Brain experiment a great deal.

Incidentally, those industries taught you most of what you know about "how to human"; their products were also a staple in the lives of your parents; and your grandparents... if you're the average bougieprole, anyway.

---

Anyway, what do you think the purpose of LLMs even is?

What's the supposed endgame of this entire coordinated push to stop instructing the computer (with all the "superhuman" exactitude this requires); and instead begin to "build" software by asking nicely?

Btw, no matter how hard we ignore some things, what's happening does not pertain only to software; also affected are prose, sound, video, basically all electronic media... permit yourself your one unfounded generalization for the day, and tell me - do you begin to get where this is going?

Not "compute" (the industrial resource) but computing (the individual activity) is politically sensitive: programming is a hands-on course in epistemics; and epistemics, in turn, teaches fearless disobedience.

There's a lot of money riding on fearless disobedience remaining a niche hobby. And if there's more money riding on anything else in the world right now, I'd like an accredited source to tell me what the hell that would be.

Think for two fucking seconds and once you're done screaming come join the resistance.

reply
> 768GB of RAM is insane.

Before this price spike, it used to be you could get a second-hand rack server with 1TB of DDR4 for about $1000-2000. People were massively underestimating the performance of reasonably priced server hardware.

You can still get that, of course, but it costs a lot more. The recycling company I know is now taking the RAM out of every server and selling it separately.

Apple hardware is incredibly overpriced.

reply
My home server has 512GB of RAM and 48 cores; my 4 desktops are 16 cores, 128GB, and a 4060 GPU each. The server is second-hand and I paid around $2500 for it. Just below $3000 each for the desktops when I built them. All prices are in Canadian Pesos.
reply
Canadian Pesos?
reply
Jokes because the Canadian dollar’s value isn’t very high right now.

See a $1100 GPU on eBay, but it’s in the US? Actually a $1900 GPU.

A colleague and I were just talking about how well he timed the purchase of his $700 24GB 3090.

reply
Please, it's actually Cambodian Dollhairs or Canuckistan Pesos.
reply
It is sarcasm. Our dollar which used to be on par with US is no more.
reply
> spending $10k on a MacBook Pro with 128GB.

As someone who just bought a completely maxed out 14" Macbook Pro with an M5 Max and 128GB of RAM and 8TB SSD, it was not $10k, it was only a bit over $7k. Where is this extra $3k going?

reply
Tangential, I bought a nearly identically-spec'd (didn't spring for the 8 TB SSD - in retrospect, had I kept it, I would've been OK with the 4 TB) model, and returned it yesterday due to thermal throttling. I have an M4 Pro w/ 48 GB RAM, and since the M5 Max was touted as being quite a bit faster for various local LLM usages, I decided I'd try it.

Turns out the heatsink in the 14" isn't nearly enough to handle the Max with all cores pegged. I'd get about 30 seconds of full power before frequency would drop like a rock.

reply
I haven't really had a problem with thermal throttling, but my highest-compute activity is inferencing. The main performance fall-off I've observed is that the cache/context size to token output rate curve is way more aggressive than I expected given the memory bandwidth, compared to GPU-based inferencing I've done on PC. But other than spinning up the fans during prompt processing, I'm able to stay at peak CPU usage without the clock speed reducing. Generally, though, this only maintains peak compute utilization for around 2-3 minutes.

I'm wondering if there was something wrong with your particular unit?

reply
It could be a different country?
reply
With the way legislation is going these days, self-hosting is becoming ever more important. RAM for ZFS + containers on k3s doesn't end up being that crazy if you assume you need to do everything on your own. (At home I've got one machine with 1TB of RAM, one with 512GB, and three with 128GB, all in a k3s cluster with various GPUs and about half a PB of storage - before ~last Sept this wasn't _that_ expensive to do.)
reply
> Most consumers are using laptops and laptops are not keeping pace with where the frontier is in a singular compute node.

How can you say this when Apple is releasing extremely fast M5 MacBook Pros? Or the $600 MacBook Neo that has incredible performance for that price point?

Even x86 is getting some interesting options. The Strix Halo platform has become so popular with LLM users that the parts are being sold in high numbers for little desktop systems.

reply
They're ultimately laptops, you won't be able to squeeze out the same amount of performance from a laptop compared to a desktop, regardless of the hardware.

If you haven't tried out a desktop CPU in a while, I highly recommend you giving it a try if you're used to only using laptops, even when in the same class the difference is obvious.

reply
I have a recent MacBook Pro and a high end Zen 5 desktop.

For CPU-bound tasks like compiling they’re not that different. For GPU tasks my desktop wins by far but it also consumes many times more power to run the giant GPU.

If you think laptops are behind consumer desktops for normal tasks like compiling code you probably haven’t used a recent MacBook Pro.

reply
> I have a recent MacBook Pro and a high end Zen 5 desktop.

What are the exact CPU models used here though? Since my point was about CPUs in the "same class", and it's really hard to see if this is actually the case here.

And yes, I've played around with the recent Apple CPUs, all the way up to M4 Pro (I think, thinking about it I'm not 100% sure) and still I'd say the same class of CPUs will do better in a desktop rather than a laptop.

If you want to compare it in the Apple ecosystem, compare the CPUs of a laptop to one of the Mac Mini/Mac Studio, and I'm sure you'll still see a difference, albeit maybe smaller than other brands.

reply
> If you want to compare it in the Apple ecosystem, compare the CPUs of a laptop to one of the Mac Mini/Mac Studio, and I'm sure you'll still see a difference, albeit maybe smaller than other brands

The same chip perform basically the same in the different form factors.

For all of the definitive statements you're making in this thread, you don't seem to know much about Apple M-series silicon.

reply
They're fast, but they'll never even remotely reach what a mid-range desktop PC with dedicated graphics burning 500W is able to do.

A 300W GPU released in 2025 is about 10x M5 perf. The difference is going to be smaller for CPU perf, but also not close.

reply
> The difference is going to be smaller for CPU perf, but also not close.

This is not true. The recent MacBook Pros are every bit as fast as my Zen 5 desktop for most tasks like compiling.

For GPU there is a difference because both are constrained by thermal and power requirements where the desktop has a big advantage.

For CPU compute, the laptop can actually be faster for single threaded work and comparable for multi threaded work.

Anyone claiming laptop CPUs can’t keep up with desktop CPUs hasn’t been paying attention. The latest laptops are amazing.

reply
> The recent MacBook Pros are every bit as fast as my Zen 5 desktop for most tasks like compiling.

Bad example. That's highly parallel, so a higher core-count die is going to destroy the base M5 here.

I don't typically compile Linux on my M5, so I don't really care, but at least online available clang benchmarks put it at roughly half the LOC/s of a 9950X, which released in 2024.

Anything single threaded it should match or even edge ahead though.

It gets worse for multi-threaded perf if you leave consumer-grade hardware behind and compare professional/workhorse-level CPUs like EPYC/Threadripper/Xeon to Apple's "pro" lines. That's just a slaughter. They're roughly 3x a 9950X die for these kinds of workloads.

reply
> Bad example. That's highly parallel, so a higher core-count die is going to destroy the base M5 here.

The base M5 starts at 10 cores and scales to 18 cores. The performance is similar to high-end desktop consumer CPUs.

> I don't typically compile Linux on my M5, so I don't really care,

If you don't compile large codebases, why do you care then?

I do compile large codebases and I'm speaking from experience with the same codebase on both platforms. Not "LOC/s" benchmarks.

reply
I don't compile Linux or other large C projects on my M5 (why would I). The only thing I have numbers for on both desktop and mobile is your typical JS/TypeScript/webpack shitshow that struggles to keep a high core count CPU remotely busy. Might as well do that on the M5.

There's a large C++ codebase I need to compile, but it can't compile/run on OSX in the first place, hence the desktop that I use remotely for that. Since it's also kind of a shitshow, that one has really terrible compile times: up to 15 minutes on a high powered Intel ThinkPad I no longer use, ~2 minutes on desktop.

I could do it in a VM as well, but let's be real: running it on the M5 in front of me is going to be nowhere near as nice as running it on the water cooled desktop under my desk.

reply
deleted
reply
For batch jobs there isn't much competition. 9995wx has 3 to 4x throughput of M5 max.

And then, if your laptop is busy, your machine is occupied - I hate that feeling. I never run heavy software on my laptop. My machine is in the cellar, I connect over ssh. My desktop and my laptop are different machines. I don't want to have to keep my laptop open and running. And I don't want to drag an expensive piece of hardware everywhere.

And then you need to use macOS. I'm not a macOS person.

reply
> For batch jobs there isn't much competition. 9995wx has 3 to 4x throughput of M5 max.

I would hope so, given that you can buy multiple M5 laptops for the price of that CPU alone.

I made a comment about how impressive the M5 laptops were above, so these comments trying to debunk it by comparing to $12,000 CPUs (before building the rest of the system) are kind of an admission that the M5 is rather powerful. If you have to spend 3-4X as much to build something that competes, what are we even talking about any more?

reply
We are on borrowed time. Most of the world is running on oil, and this resource is not unlimited at all. A lot of countries have gone past their production peak, meaning it's only downhill from here. Everything is going to be more costly, more expensive; our lavish "democratic" lifestyles are only possible because we have (had) this amazing, freely available resource, and without it things are going to change. Even at a geopolitical scale you can see this pretty obviously: countries that talked about free markets and free exchange are now starting to close their doors and play individually. Anyway, my point is, we are in for decades, if not a century, of slow decline.
reply
Doubt it. Renewables are expanding much faster than oil output is decreasing. Wind and solar will enable energy to remain cheap everywhere that builds it.
reply
Energy production is only part of the bill, though. The oil shortage is having an effect on a mind-boggling variety of consumer goods where crude oil is used in manufacturing. For many products we don't have good alternatives. A lot of oil is needed to build an electric car.
reply
Malthusians have been sounding the alarm for longer than Protestant revivalists have been claiming the end of the world is next month at lunchtime. If there is a prediction market for such things, betting on any Malthusian is patently foolish.

(Of course, I don't disagree with the notion that consumerism produces an extraordinary amount of worthless trash, but that's a different matter. The main problem with consumerism is consumerism itself as a spiritual disease; the material devastation is a consequence of that.)

reply
People gloating about Malthusians being wrong keep forgetting that it only takes for them to be right ONCE in the entirety of human history and when they are - you'll be too busy trying to survive rather than posting on internet forums.

The planet has a certain resource-bound carrying capacity. It's a fact of physics. Just because we aren't there yet as of (checks time) 2026-03-27, doesn't mean Malthusians are wrong.

Although to be fair to the other side, I think with abundant renewable energy we'll be able to delay resource depletion for a very long time thanks to recycling (and lower standards of living of course).

reply
There will be very dramatic growing pains with this switch, especially for A: nations manufacturing renewables but still running that manufacturing on oil and B: nations that face political and economic barriers for renewables.

Also C: nations that are both A and B, needlessly causing oil volatility with unplanned military dickheadedness.

reply
Renewables provide electricity only, but planes, boats, trucks, basically the whole supply chain, still run on oil for the moment. The ease of use of oil has not been replaced yet. Do you realize how easy it is to handle oil? You can just put it in a barrel and ship it anywhere in that barrel. No need for wires or complex batteries as with electricity, nor complex pipelines as with gas.

And even if we figured out how to electrify everything (which we haven't, as I just said), we would still run into resource shortages for batteries, wires (copper etc.), nuclear fuel (uranium)...

reply
Expanding renewables to the easily replaceable items like power plants, generators, and most consumer vehicles would radically reduce oil usage to where it becomes a minor concern. Also things like biodiesel exist. A more sustainable, renewable-forward, electrified reality is easily possible.

There is not a risk of resource shortage of copper. The doomer and prepper talking points you're parroting are not based in reality.

reply
The risk of a copper shortage is a very well known fact https://press.spglobal.com/2026-01-08-Substantial-Shortfall-... You're into renewables, yet you can't grasp the fact that resources on this planet are limited? That's peculiar.

And I don't even understand your other points, to be honest. What do you mean by "consumer vehicles"? Are you talking about individuals' cars? I'm not talking about that; those don't matter that much. I'm talking about trucks, boats, planes, the stuff actually shipping you your lifestyle.

reply
It makes sense that you don't understand the other points. Based on how you approach conversation, I suspect it's an issue you run into frequently.

Look up what it means to have a conversation in "good faith" vs in "bad faith" and you might learn something useful about conversation tools. For example, lying about what someone says and calling it "peculiar" is "bad faith".

reply
[flagged]
reply
Hilarious how comments like this consistently get downvoted; there's a lot of special-interest lurkers on this forum
reply
"I personally dropped $20k on a high end desktop . . . "

This is where I think current hackers should be headed. I grew up with lots of family who were backyard mechanics, wrenching on cars and motorcycles. Their investment in tools made my occasional PC purchase look extremely affordable. Based on what I read, senior mechanics often have five-figure US dollar investments in tools. Of course, I guess high quality torque wrenches probably outlast current GPU chips? I'd hate to be stuck making a $10K investment every 24 months on a new GPU . . .

I have been renting GPU resources and running open weight models, but recently my preferred provider simply doesn't have hardware available. I'm now kicking myself a little for not simply making a big purchase last fall when prices were better.

reply
Professional mechanics might do that, but a home mechanic can get very far on one $200 set, and then another $300 spent over the years on several useful things for each project.

I've replaced transmissions, head gaskets, and done all the work on our family cars for two decades with a Costco toolkit, plus 20 trips to the auto parts store or Walmart when I needed something to help out.

Maybe I'm being a little forgetful: yes, I bought a jack, and jack stands, and have a random pipe as a breaker bar, and other odds and ends. But you can go very far for $1k as a DIYer.

reply
> Most consumers are using laptops and laptops are not keeping pace with where the frontier is in a singular compute node. Laptops are increasingly just clients for someone else's compute that you rent, or buy a time slice with your eyeballs, much like smartphones pretty much always have been.

It really feels like we're slowly marching back to the era of mainframe computers and dumb terminals. Maybe the democratization of hardware was a temporary aberration.

reply
It seems like you largely agree with the article - people shall own nothing and be happy. Perhaps the artificially induced supply crunch could go on indefinitely.

Also, I wonder how many of us, even here on HN, have the ability to spend that amount of money on a computer for personal use. Frankly I wouldn't even know what to do with all the RAM - should I just ramdisk every program I use and every digital thing I've made in the last five years?

Anyhow, I suppose for the folks who can't afford hardware (perhaps by design), one ought to own nothing and be happy.

reply
People spend a lot more than that on a car they use less, especially if they're in tech.

The RAM choice was because I have never regretted buying more RAM - it's practically always a better trade than a slightly faster CPU - and 96GB DIMMs were at a sweet spot compared to 128GB DIMMs.

That, and the ability to have big LLMs in memory, for some local inference, even if it's slow mixed CPU/GPU inference, or paged on demand. And if not for big LLMs, then to keep models cached for quick swapping.
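
As a rough, illustrative sketch of the sizing math (the parameter counts, quantization level, overhead factor, and DIMM configuration below are arbitrary assumptions, not a description of any particular box):

```python
# Illustrative napkin math only: parameter counts, quantization, overhead
# factor, and the 4x96GB DIMM configuration are all assumptions.
def model_footprint_gb(params_b: float, bits_per_weight: float,
                       overhead: float = 1.2) -> float:
    """Approximate resident size of a quantized model, in GB."""
    return params_b * 1e9 * bits_per_weight / 8 * overhead / 1e9

ram_gb = 4 * 96  # e.g. four 96GB DIMMs
for params_b, bits in [(70, 4.5), (120, 4.5), (235, 4.5)]:
    need = model_footprint_gb(params_b, bits)
    verdict = "fits" if need <= ram_gb else "does not fit"
    print(f"~{params_b}B @ ~{bits} bits/weight ~= {need:.0f} GB -> {verdict} in {ram_gb} GB")
```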

reply
I bought a 4 year old car for significantly less than that. And I can get a computer that can do 99% of what your monster can do for like 10% of the price. And if I want LLM inference I can get that for like $20 a month or whatever.

I don't mean to judge, it's your money but to me it seems like an enormous waste. Just like spending $100k on a car when you can get one for $15k that does pretty much exactly the same job.

reply
Sure. You're right, it is my money. And I pay even more for inference on top; I have OpenRouter credits, OpenAI subscription, Claude Max subscription.

It's not so easy to get nice second-hand hardware here in Switzerland, and my HEDT is nice and quiet, doesn't need to be rack-mounted, plugs straight into the wall. I keep it in the basement next to the internet router anyway.

The "sensible" choice is to rent. It's the same with cars; most people these days lease (about 50% of new cars in CH, which will be a majority if you compare it with auto loan and cash purchase).

reply
I don't think leasing cars is sensible. Last time I checked (for cheaper cars, mind you), I would essentially pay 60% of the sticker price over a few years and then not have a car at the end of it. It would be better to buy a new car and then sell it after the same time. But what's even better is to not buy new at all: let some other sucker take the huge value loss, then snatch the car up at a 30-60% discount a few years later. Then you can sell it a few years after that for not much less than you paid for it. I've had mine a year and right now they're going for more than I paid.

I think leasing might be okayish if you find a really good deal, but it's really not much different than buying new which is just a shit deal no matter how you turn it. A 1-4 year old car is pretty much new anyway, I don't see any reason to buy brand new.
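
As a toy version of that comparison (every number below is a made-up illustration, not market data):

```python
# Toy lease-vs-buy comparison; all figures are made-up illustrations.
sticker = 30_000                      # hypothetical new-car price
lease_cost = 0.60 * sticker           # ~60% of sticker paid, no car at the end
buy_new   = sticker * (1 - 0.45)      # buy new, resell later at ~45% residual
buy_used  = sticker * (0.60 - 0.45)   # buy a 1-4 year old car at ~60%, resell at ~45%

print(f"lease:    {lease_cost:>8,.0f} spent, nothing owned afterwards")
print(f"buy new:  {buy_new:>8,.0f} net depreciation")
print(f"buy used: {buy_used:>8,.0f} net depreciation")
```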

reply
Solarpunk + https://permacomputing.net/

That’s for everyone

reply
I've always gone way over on RAM, for the most part: 32, 64, then 128GB of memory.

Never really used it all, usually only about 40%, but it's one of those "better to have it and not need it" things, and better than selling and re-buying a larger-memory machine (when it's something you can't upgrade, like a Mac or certain other laptops).

reply
Superficially speaking, you could be right. But I think it's been realised that engineering scarcity of products and commodities is a power move.

We live in a world optimised for globalization: industry in China, oil in the Middle East, etc.

This approach proved fragile in the hands of people with enough money and/or power to tilt the scale.

reply
It's not a power move, it's a cartel, and they've done this before. Gamers Nexus did a fantastic piece on how today's situation closely mirrors the DRAM price fixing and market manipulation of just a couple of decades ago [0]. This is the big players taking full advantage of an opportunity for profit.

[0] https://youtu.be/jVzeHTlWIDY?si=cRJ6C7jPxLIpKTyF

reply
This will be me. Bestowing upon my descendants a collection of Mighty Beanz, a few unkillable appliances, and the best consumer computing hardware the early 2020s could buy.

And I fear they will be equally confused and annoyed when it comes time to dispose of it all.

reply
Care to share some more detailed specs?
reply
>we're at an inflection point where DC hardware is diverging rapidly from consumer compute.

I thought the trend was in the opposite direction, with the RTX 5x series converging with server architectures (Blackwell-based, such as the RTX 6000 Pro+). Just less VRAM and fewer tensor cores, artificially.

Where is the divergence happening? Or you don't view RTX 5x as consumer hardware?

reply
Blackwell diverges within Blackwell itself… SM121 on the GB10, the consumer RTX 5000 parts, and the actual full-fat B100 hardware all have surprisingly different capabilities. The GB10 has been hamstrung by this a bit, too.
reply
I thought I was crazy with a $7k Threadripper w/ 128GB of RAM
reply
You're responding to an LLM authored article that doesn't know anything. "Let that sink in for a moment."
reply
I think you're probably right, but I'm not so confident the supply crunch will end.

Tech feels increasingly fragile with more and more consolidation. We have a huge chunk of advanced chip manufacturing situated on a tiny island off the coast of a rising superpower that hates that island being independent. Fabs in general are so expensive that you need a huge market to justify building one. That market is there, for now. But it doesn't seem like there's much redundancy. If there's an economic shock, like, I dunno, 20% of the world's oil supply suddenly being blockaded, I worry that could tip things into a death spiral instead.

reply
> Laptops are increasingly just clients for someone else's compute

Are you kidding? Apple's mobile chips are now delivering perf that AMD and Intel desktop chips never could or did.

reply
> Apple's mobile chips are now delivering perf that AMD and Intel desktop chips never could or did.

Most applications don't make aggressive use of the SIMD instructions that modern x86 chips offer, thus you get this impression. :-(

reply
Users do not care why the perf they get is what they get. What good is AVX2048 if nobody uses it?
reply
> Laptops are increasingly just clients for someone else's compute that you rent, or buy a time slice with your eyeballs, much like smartphones pretty much always have been.

What are you talking about?

My laptops are, and always have been, primarily places where I do local computing. I write code there, I watch movies there, I listen to music there, I play games there...all with local storage, local compute, and local control (though I do also store a bunch of my movies on a personal media server, housed in my TV stand, because it can hold a lot more). My smartphone is similar.

If you think that the vast majority of the work most people do on their personal computers is moving to LLMs, or cloud gaming, then I think you are operating in a pretty serious bubble. 99.9% of all work that most people do is still best done locally: word processing, spreadsheets, email, writing code, etc. Even in the cases where the application is hosted online (like Google Docs/Sheets), the compute is still primarily local.

The closest to what you're describing that I think makes any sense is the proliferation of streaming media—but again, while they store the vast libraries of content for us, the decoding is done locally, after the content has reached our devices.

It doesn't matter if a cutting-edge AI-optimized server can perform 10, 100, or 1000 times better than my laptop at any particular task: if the speed at which my laptop performs it is faster than I, as a human, can keep up (whatever that means for the particular task), then there's no reason not to do the task locally.

reply
> We won't be in a supply crunch forever.

I don't share the article's opinion 1:1, but it is absolutely clear that RAM prices have gone up enormously. Just compare them. That is a fact.

It may be cheaper later on, but... when will that happen? Is there a guarantee? A supply crunch can also mean that fewer people can afford something because prices are now much higher than before. Add to this the oil crisis Trump started, and we are suddenly having to pay more just because a few mafiosi benefit from it. (See Krugman's analysis of the recent stock market flow of money/stocks.)

reply
The increase looks higher because we were at an all-time price low. RAM has been this expensive at least twice before, and it always dropped way down again after.
reply
General predictions are that things will return to normal in 3-5 years: 3 years if the current AI crunch is a short-term thing, 5 years if it isn't and we have to build new RAM factories.
reply
[dead]
reply
Local is a dead end.

Open source efforts need to give up on local AI and embrace cloud compute.

We need to stop building toy models to run on RTX and instead try to compete with the hyperscalers. We need open weights models that are big and run on H200s. Those are the class of models that will be able to compete.

When the hyperscalers reach takeoff, we're done for. If we can stay within ~6 months, we might be able to slow them down or even break them.

If there were something 80-90% as good as Opus or Seedance or Nano Banana, more of the ecosystem would switch to open source, because it offers control and sovereignty. But we don't have that right now.

If we had really competitive open weights models, universities, research teams, other labs, and other companies would be able to collaboratively contribute to the effort.

Everyone in the open source world is trying to shrink these models to fit on their 3090 instead, though, and that's such a wasted effort. It's short term thinking.

An "OpenRunPod/OpenOpenRouter" + one click deploy of models just as good as Gemini will win over LMStudio and ComfyUI trying to hack a solution on your own Nvidia gaming card.

That's such a tiny segment of the market, and the tools are all horrible to use anyway. It's like we learned nothing from "The Year of Linux on Desktop 1999". Only when we realized the data center was our friend did we frame our open source effort appropriately.

reply
> We need open weights models that are big and run on H200s.

We have this class of models already, Kimi 2.5 and GLM-5 are proper SOTA models. Nemotron might also release a larger-sized model at some time in the future. With the new NVMe-based offload being worked on as of late you can even experiment with these models on your own hardware, but of course there's plenty of cheap third-party inference platforms for these too.
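
To see why the NVMe route is "experiment with" territory rather than serious inference, here's a hedged back-of-the-envelope bound. Every figure below (total/active parameter counts, quantization, drive bandwidth) is an assumption, not the spec of any particular model or drive:

```python
# Hedged upper bound for NVMe-offloaded MoE inference; every figure is assumed.
total_params_b  = 1000   # assume a ~1T-parameter MoE
active_params_b = 32     # assume ~32B parameters active per token
bits_per_weight = 4      # assume ~4-bit quantization
nvme_gb_per_s   = 7      # assume one PCIe 4.0 NVMe drive's sequential read

total_gb = total_params_b * 1e9 * bits_per_weight / 8 / 1e9
print(f"model weights ~= {total_gb:.0f} GB -> far larger than typical system RAM, hence offload")

bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
# Worst case: every active expert is streamed from disk for every token.
tok_per_s_upper = nvme_gb_per_s * 1e9 / bytes_per_token
print(f"~{bytes_per_token / 1e9:.0f} GB read per token -> "
      f"at most ~{tok_per_s_upper:.2f} tok/s; caching hot experts in RAM raises this")
```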

reply
> Open source efforts need to give up on local AI and embrace cloud compute.

Oh god no, please not more slop, you're already consuming over 1 percent of human energy output, could you, like, chill a bit?

reply
In a similar vein: seek efficiency.

I.e., /if/ I am going to consume LLM tokens, I figure that a local LLM with tens of billions of parameters running on commodity hardware at home will still consume far more energy per token than a frontier model running on commercial hardware that is very strongly incentivized to be as efficient as possible. Do the math; it isn't even close. (Maybe it'd be closer in your local winter, where your compute heat could offset your heating requirements. But that gets harder to quantify.)

Maybe it's different if you have insane and modern local hardware, but at least in my situation that is not the case.

reply
But commodity hardware that's right-sized for your own private needs is many orders of magnitude cheaper than datacenter hardware that's intended to serve millions of users simultaneously while consuming gigawatts in power. You're mostly paying for that hardware when you buy LLM tokens, not just for power efficiency. And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.
reply
>And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.

^ Fair. Yep, I agree the calculus changes if you don't have _any_ local hardware and you're needing to factor in the cost of acquiring such hardware.

When I did this napkin math, I was mostly interested in the energy aspect, using cost as a proxy. I was calculating the $/token (taking into account the cost of a kWh from my utility, the measured power draw of my M1 work machine, and the measured tokens per second of a ~20B-parameter open-weight model). I then compared this to the published $/token rate of a frontier provider, and it was something like two orders of magnitude in favor of the frontier model. I get it, they're subsidizing, but I've got to imagine there's some truth in the numbers.

I wonder, does (or will) the $/token ratio fall asymptotically toward the cost of electricity? In my mind I'm drawing a parallel to how the value of mined cryptocurrency approximately tracks the cost of electricity... but I might be misremembering that detail.
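
For reference, the shape of that calculation looks something like the sketch below. The wattage, throughput, tariff, and hosted rate are placeholders, not the figures from the comment above, and the resulting ratio is extremely sensitive to them, so plug in your own measurements:

```python
# Hedged sketch of the electricity-only $/token napkin math; all inputs are
# placeholder assumptions.
def local_cost_per_mtok(power_w: float, tok_per_s: float,
                        price_per_kwh: float) -> float:
    """Electricity-only cost of local generation, in $ per million tokens."""
    kwh_per_token = (power_w / 1000.0) / 3600.0 / tok_per_s
    return kwh_per_token * price_per_kwh * 1e6

# Illustrative only: ~5 tok/s from a ~20B model at ~40 W wall draw, $0.30/kWh.
local = local_cost_per_mtok(power_w=40, tok_per_s=5, price_per_kwh=0.30)
hosted = 0.50   # assumed hosted list price, $ per million output tokens
print(f"local ~= ${local:.2f} /Mtok (electricity only) vs hosted ~= ${hosted:.2f} /Mtok")
```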

reply
I doubt it because you aren't going to get the utilisation that a commercial setup would. No point wasting tons of money on hardware that is sat idle most of the time.
reply
If you're running agentic workloads in the background (either some coding agent or personal claw-agent type) that's enough utilization that the hardware won't be sitting idle.
reply
Y'all aren't seeing the same future I am, I guess.

- Our career is reaching the end of the line

- 99.9999% of users will be using the cloud

- if we don't have strong open source models, we're going to be locked into hyperscaler APIs for life

- piddly little home GPUs don't do squat against this

Why are you building for hobby uses?

Build for the freedom to make and scale businesses. To remain competitive. To have options in the future, independent of the hyperscalers.

We're going to be locked out of the game soon.

Everyone should be panicking about losing the ability to participate.

Play with your RTXes all you like. They might as well be Raspberry Pis. They're toys.

Our future depends on our ability to run and access large scale, competitive, open weights. Not stuff you run with LM Studio or ComfyUI as a hobby.

reply
I don't agree that we are being left behind with regards to AI, I believe it's simply not worth participating in. I hope it all comes crashing down.
reply
That's not the right perspective to have.

Also, the only thing crashing down will be the economic participation of everyday people if we don't have ownership over the means of creation. Hyperscalers will be just fine.

reply
Man, going to personal computing was a mistake, we should’ve stayed jacked to the mainframes /s
reply
Entire device categories, like smartphones, are locked down. That's our future.

Here's my retort: https://news.ycombinator.com/item?id=47543367

reply
$20k?

People laugh at young men for looksmaxxing. And then there’s this. I dunno. As someone who has been playing computer games since the 70s, I clearly do not understand the culture anymore. But what forces would drive a young man to spend the price of a used car to play a derivative FPS? It seems heartbreaking. Just like the looksmaxxer.

reply
Alas, I'm not a young man any more. And my HEDT is headless, it has no monitor with which to play FPSes.
reply