so it looks fine during basic testing, but it scales really badly.
like for example the Claude/OpenAI web UIs: at first they would literally lag badly because they used a naive update mechanism that re-rendered the entire conversation history every time the response text streamed in
and with those console UIs, one thing that might be happening is that they're basically multiple webapps layered on top of each other (per team/component/product), all loading the same stuff multiple times etc...
I don't understand, though, why performance (i.e. doing it properly) isn't a consideration at these companies valued above $100 billion
like, do these poor pitiful big tech companies only get the resources to do so once they hit the $2 trillion mark or something?
One of the reasons Vue has such a loyal community is because the framework continues to improve performance without forcing you to adopt new syntax every 18 months because the framework authors got bored.
It's also not a problem with the React Compiler.
The problem with performance in web apps is often not the "omg, too many renders". It's actually processing and memory use. Chromium loves to eat as much RAM as possible, and the state-management world of web apps loves immutability. What happens when you create new state any time something changes, V8 then needs to recompile an optimized structure for that state, and on top of that you're thrashing the GC? You already know.
I hate the immutable trend in web apps. I get it, but the performance is dogshit. Most web apps I have worked on spend about 10% of their CPU time… garbage collecting, and the rest doing complicated deep state comparisons every time you hover over a button.
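A rough sketch of the allocation pattern being complained about (the field count and update count are made up for illustration): an immutable update copies the whole state object on every change, so N updates to a K-field object produce N fresh objects for the GC to chase, while in-place mutation produces none after the first.

```javascript
// Compare object allocations under immutable vs. mutating state updates.

const K = 50;
const initial = Object.fromEntries(
  Array.from({ length: K }, (_, i) => [`field${i}`, i])
);

function immutableUpdates(state, n) {
  let allocations = 0;
  for (let i = 0; i < n; i++) {
    state = { ...state, field0: i }; // fresh object + K property copies per update
    allocations++;
  }
  return { state, allocations };
}

function mutatingUpdates(state, n) {
  const s = { ...state }; // one copy so the caller's object is untouched
  let allocations = 1;
  for (let i = 0; i < n; i++) {
    s.field0 = i; // in place, no new allocation
  }
  return { state: s, allocations };
}

console.log(immutableUpdates(initial, 10000).allocations); // 10000
console.log(mutatingUpdates(initial, 10000).allocations);  // 1
```

Ten thousand short-lived 50-field objects per interaction is exactly the kind of garbage that keeps minor GC cycles busy, even before any deep-equality comparisons run.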
Rant over.
It’s astonishing how bad the experience was.
It has to do with websites essentially baking in their own browser, written in JavaScript, to track as much user behavior as possible.
When it comes to DeepL specifically, I once opened their main page and left my laptop for an hour, only to come back to it being steaming hot. Turns out there's a video near the bottom of the page (the "DeepL AI Labs" section) that got stuck in a SEEKING state, repeatedly triggering a pile of Next.js/React crap that would seek the video back, firing the SEEKING event and thus triggering itself again.
I wish Google would add client-side resource use to Web Vitals and start demoting poorly performing pages. I'm afraid this isn't going to change otherwise; with the first complaints dating back to the mid-2010s, browsers and Electron apps hogging RAM are far from new, and yet web developers have only been getting increasingly disconnected from reality.
Moved the backend to Tauri v2 and decoupled heavy dependencies (like ffmpeg) so they hydrate via Rust at launch. The macOS payload dropped to 30MB, and idle RAM settled under 80MB.
Skipping the default Chromium bundle saves an absurd amount of overhead.
It's quite insane
>> AWS has a similar RAM consumption.
Makes no sense to me...
The ‘dashboard’, the ‘interface’? Reminds me of coworkers who used to refer to desktop PC cases as the hard drive, or people who refer to the web as ‘Google’.