Others carefully select the ingredients, construct the parts they don't already have, spend the time to get the temperatures and oxygenation aligned, and then sit down to a humble meal for one.
Not many programmers, these days, do code-reading like baddies, as they should.
However, kids, the more you do it the better you get at it, so there is simply no excuse for shipping someone else's bloat.
Do you know how many blunt pointers are lined up underneath your BigFatFancyFeature, holding it up?
Christ. Drop the greybeard act, man. You’re not getting any trophies for being the most annoying one to chime in.
The savings there would be negligible (in modern terms) but the development cost would be significantly increased.
> Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?
Safety nets are not a waste. They’re a necessary cost of working with modern requirements. For example, if your personal details were stolen via a MITM attack, I’m sure you’d be asking why that piece of software wasn’t encrypting the data.
The real waste in modern software is:
1. Electron: but we are back to the cost of hiring developers
2. Application theming. But few actual users would want to go back to plain Windows 95 style widgets (many, like myself, on HN wouldn’t mind, but we are a niche and not the norm).
> This demo [0] is a 4kB executable. 4096 bytes. A single file. All assets, graphics, music and whatnot, and can run at high resolutions with real time rendering.
You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.
When you wrote audio data in the 80s, you effectively wrote MIDI files in machine code. Obviously it wasn’t literally MIDI, but you’d describe notes, envelopes, etc. You’d very, very rarely store that audio as a waveform, because audio chips of the era simply didn’t support a high enough bitrate to make it sound good (nor did you have the storage space to save it). Whereas these days, PCM (e.g. WAV, MP3, FLAC, etc.) sounds waaaay better than MIDI and is much easier for programmers to work with. But even a 2-second 16-bit mono PCM waveform is going to be far more than 4KB.
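To put a number on that claim, here's a back-of-the-envelope sketch (Python, assuming uncompressed PCM; 44.1 kHz CD quality and 8 kHz telephone quality are illustrative choices, not anything from the thread):

```python
def pcm_size_bytes(seconds, sample_rate_hz, bit_depth, channels=1):
    """Uncompressed PCM size: samples x bytes-per-sample x channels."""
    return int(seconds * sample_rate_hz * (bit_depth // 8) * channels)

# 2 seconds of 16-bit mono at CD-quality 44.1 kHz:
print(pcm_size_bytes(2, 44_100, 16))  # 176400 bytes -- ~43x a 4 KB demo

# Even at lo-fi 8 kHz telephone quality it overflows 4 KB many times over:
print(pcm_size_bytes(2, 8_000, 16))   # 32000 bytes
```

So even the stingiest usable sample rate blows through the 4KB budget, which is why demos synthesise audio instead of storing it.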
And modern graphics aren’t limited to 2-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you’re increasing the colour depth by literally 32 times. And that’s before you scale again from 64 pixels to millions of pixels.
You’re then talking multiplicative memory growth across every dimension.
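The multiplication works out like this (a sketch; the 8x8 1-bit sprite and a single 1080p 32-bit frame are illustrative sizes, not figures from the thread):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Raw bitmap size: pixels x bits-per-pixel, converted to bytes."""
    return width * height * bits_per_pixel // 8

sprite = framebuffer_bytes(8, 8, 1)          # 8 bytes: 8x8, 1-bit (2 colours)
frame = framebuffer_bytes(1920, 1080, 32)    # 8294400 bytes: one 1080p RGBA frame
print(frame // sprite)                       # 1036800 -- a million-fold jump
```

Every axis (width, height, colour depth) multiplies into the total, which is why asset sizes grew so much faster than intuition suggests.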
I’ve written software for those 80s systems and modern systems too. And it’s simply ridiculous to compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.
Software 30 years ago was more amenable to theming. The more system widgets you use, the more effectively theming works, since swapping the widgets re-themes the whole application.
Now, we have grudging dark-mode toggles that aren't consistent or universal, not even rising to the level of configurability you got with Windows 3.1 themes, let alone things like libXaw3d or libneXtaw where the fundamental widget-drawing code could be swapped out silently.
I get the impression that since about 2005, theming has been on the downturn. Windows XP and OSX both were very close to having first class, user-facing theming systems, but both sort of chickened out at the last minute, and ever since, we've seen less and less control every release.
I think what you're describing as "theming" is more "custom UI". It used to be reserved for games, where stock Windows widgets broke immersion in a medieval fantasy strategy simulator and you were legally obliged to make the cursor a gauntlet or sword. But Electron said to the entire world "go to town, burn the system Human Interface Guidelines and make a branded nightmare!" when your application is a smart-bulb controller or a text editor that could perfectly well fit with native widgets.
This also isn’t a trend that Electron started. Software has been shipping with bespoke UIs for nearly as long as UI toolkits have been a thing.
TBH this sounds pretty medieval too.
A word of praise for Go: it is pretty performant while using very little memory. I inherited a few Django apps, and each thread just grows to 1GB. Running something like Celery quickly eats up all memory and starts thrashing. My Go replacements idle at around 20MB, and are a lot faster. It really works.
...and this effort and small savings here and there is what brings the massive savings at the end of the day. Electron is what "4KB here and there won't hurt", "JS is a very dynamic language so we can move fast", and "time to market is king, software is cheap, network is reliable, YOLO!" banged together. It's a big "Leeroy Jenkins!" move in the worst possible sense, making users pay everyday with resources and lost productivity to save a developer a couple of hours at most.
Users are not cattle to milk, they and their time/resources also deserve respect. Electron is doing none of that.
> You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.
Did you watch or run any of these demos? Some (if not all) of them scale to 4K, and all of them have more than two colours. All are hardware accelerated, too.
> And modern graphics aren’t limited to 2 colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32bits (not colours, bits) and you’re increasing the colour depth by literally 32 times. And that’s before you scale again from 64 pixels to thousands of pixels.
Sorry to say, but I know what graphics and high-performance programming entail. I had two friends develop their own engines, and I manage HPC systems. I know how much memory matrices need, because everything is matrices after some point.
> Safety nets are not a waste.
I didn't say they are a waste. That quote is out of context. Quoting my comment's first paragraph, which directly supports the part you quoted: "Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need."
So, what I argue is, you don't have to bring in everything including the kitchen sink if all you need is a knife and a cutting board. By all means bring a countertop and some steel gloves to keep from cutting yourself.
> I’ve written software for those 80s systems and modern systems too. And it’s simply ridiculous to compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.
Me too. I also record music and work on high performance code. While they are not moving much, I take photos and work on them too, so I know what happens under the hood.
Just watch the demos. It's worth your time.
I agree. I even said in my comment that Electron was one piece of bloat I didn’t agree with. So it wasn’t factored into the calculations I was presenting to you.
> Did you watch or ran any of these demos? Some (if not all) of them scale to 4K and all of them have more than two colors.
You mean the ones you added after I replied?
> I didn't say they are waste. That quote is out of context.
Every part of your comment was quoted in my comment. Bar the stuff you added after I commented.
> Had two friends develop their own engines
I have friends who are doctors but that doesn’t mean I should be giving out medical advice ;)
> Just watch the demos. It's worth your time.
I’m familiar with the demo scene. I know what’s possible with a lot of effort. But writing cool effects for the demo scene is very different to writing software for a business which has to offset developer costs against software sales and delivery deadlines.
I’m also not advocating that software should be written in Electron. My point was modern software, even without Electron, is still going to be orders of magnitude larger in size and for the reasons I outlined.
> writing cool effects for the demo scene is very different to writing software for a business which has to offset developer costs against software sales and delivery deadlines.
The point is not "cool effects" and "infinite time" though. If we continue talking about farbrausch, they are not a bunch of nerds who pump out raw assembly for effects. They have their own framework, libraries and whatnot. Not dissimilar to business software development. So, their code is not that different from a business software package.
For the size, while you can't fit a whole business software package into 64kB, you don't need to choose the biggest and most inefficient library "just because". By spending a couple of hours more, you might find a better library/tool that allows you to create a much better software package.
Again, for the third time, while safety nets and other doodads make software packages bigger, cargo culting and worshipping deadlines and ROI more than the product itself contributes more to software bloat. That's my point.
Oh I overlooked this gem:
> I have friends who are doctors but that doesn’t mean I should be giving out medical advice ;)
Yet, we designed some part of that thing together, and I had the pleasure of fighting GPU drivers alongside them, trying to understand what the driver was doing while it ignored our requests.
IOW, yep, I didn't write one, but I was neck deep in both of them, for years.
Which isn’t the same thing as what I said.
I’m not suggesting you did it maliciously, but the fact remains they were added afterwards so it’s understandable I missed them.
> Yet, we designed some part of that thing together, and I had the pleasure of fighting GPU drivers alongside them, trying to understand what the driver was doing while it ignored our requests.
That is quite a bit different from your original comment though. This would imply you also worked on game engines and it wasn’t just your friends.
That C64 demo doing sprite wizardry and 8088MPH come to mind. The latter, as you most probably know, can't be emulated since it (ab)uses the hardware directly. :D
As a bit of trivia: after watching .the .product, I declared "if a computer can do this with a 64kB binary, and people can make a computer do this, I can do this", and high-performance/efficient programming became my passion.
From any mundane utility to something performance sensitive, that demo is my north star. The code I write shall be as small, performant and efficient as possible while cutting no corners. This doesn't mean everything is written in assembly, but utmost care is given to how something I wrote works and feels while it's running.
They are highly dynamic programs, and not very different from game engines in that regard.
> misleadingly minimalistic.
That's the magic of these programs or demoscene in general. No misleading. That's the goal.
If your point is that it uses gigabytes of VRAM instead of system memory, then I think that is an extremely weak argument for how modern software doesn’t need much memory, because all you’re doing is shifting that cost from one stack of silicon to a different stack of silicon. But the cost is still the same.
The only way around that is to dynamically generate those assets on the fly and stream them to the video card. But then you’re sacrificing CPU efficiency for memory efficiency. So the cost is still there.
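The trade-off can be sketched in a few lines (Python; the 440 Hz sine tone and 44.1 kHz rate are illustrative — real demo synths are far more elaborate):

```python
import math

def synth_tone(freq_hz, seconds, sample_rate=44_100):
    """Generate 16-bit PCM samples from a few bytes of parameters --
    spending CPU at playback time instead of storing the waveform."""
    n = int(seconds * sample_rate)
    return [int(32767 * math.sin(2 * math.pi * freq_hz * i / sample_rate))
            for i in range(n)]

samples = synth_tone(440, 2)   # 2 s of an A4 tone, described by ~3 numbers
print(len(samples) * 2)        # 176400 bytes if stored as 16-bit PCM
```

The asset costs almost nothing on disk but must be recomputed every time it is needed — the cost moved from memory to CPU; it didn't disappear.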
And I’ve already discussed how data compresses better as vectors than as bitmaps or PCM, but is significantly harder to work with. Using vectors / trackers is another big trick for demos that isn’t really practical for a lot of day-to-day development, because it takes more effort and the savings in file sizes are negligible for people with multi-GB (not even TB!!!) disks.
As the saying goes: there’s no such thing as a free lunch.
Instead, as you guessed, these demos generate assets on the fly and stream them to the respective devices. You cite inefficiencies; I'm telling you they run at more than 60 FPS on these constrained systems. Remember, these are early-2000s systems. They are not that powerful by today’s standards, yet these small binaries use them efficiently and generate real-time rendered CG on the fly.
Nothing about them is inefficient or poor. Instead they are marvels.
That’s not what I said. I said you’re trading memory footprint for CPU footprint.
This is the correct way to design a demo but absolutely the wrong way to design a desktop application.
They are marvels, I agree. But, as I said before, there’s no such thing as a free lunch. At the risk of stating the obvious: if there weren’t a trade-off to be made, then all software would be written that way already.