What happens when I send data at an extremely high throughput and the scheduler postpones garbage collection because my process is being interrupted too often by incoming network events? (Interrupt-driven delivery is a common way network data is handed off to an application in many Linux distros.)

Are there any concerns that the extra array overhead will make the application even more vulnerable to out-of-memory errors while GC is held off so it can process the big stream (or multiple streams)?

I am mostly curious; maybe this is not a problem for JS engines, but I have sometimes seen paused GC cause a lot of headaches in high-throughput systems written in Go, C#, and Java.

reply
Yeah, I don't think that's generally a problem for JS engines, because of the incremental garbage collector.

If you keep all your memory usage patterns collectible by the incremental collector, you won't experience noticeable hangups, because the incremental collector doesn't stop the world. This was already pretty important for JS, since full collections would (and do) show up as hiccups in the responsiveness of the UI.
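A rough sketch of what an incremental-collector-friendly pattern looks like in practice (the function names and the newline-delimited framing are just illustrative assumptions, not anything from a specific engine): keep per-chunk allocations short-lived and retain only a small summary, so temporaries die young and can be reclaimed in cheap incremental steps rather than piling up for a full collection.

```javascript
// Hypothetical example: process a stream chunk-by-chunk so allocations
// are short-lived. Young, quickly-dead objects are exactly what
// incremental/generational collectors reclaim without long pauses.
function processChunk(chunk) {
  // These temporaries become garbage as soon as the function returns.
  const lines = chunk.split("\n").filter(Boolean);
  return lines.length;
}

function consumeStream(chunks) {
  // Only a small running total stays alive, not the whole stream.
  let total = 0;
  for (const chunk of chunks) {
    total += processChunk(chunk);
  }
  return total;
}
```

The anti-pattern, by contrast, would be buffering every chunk into one ever-growing array before processing: that keeps everything reachable, which is what feeds the OOM concern above.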

reply
Interesting, thanks for the info, I'll do some reading on what you're describing. I agree that JS has issues with UI hiccups because everything is scheduled on a single thread.

Makes a lot of sense, cool that the garbage collector can run independently of the call stack and function scheduler.

reply
It's not blazingly fast, no, but it's also not as much overhead as people imagine when they picture what it would cost to do the same thing with malloc. TC39 knew all this when they picked { value, done } as the API for iteration, and they picked it anyway, so I'm not really introducing new risk so much as trusting that they knew what they were doing when they designed string iterators.
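For anyone who hasn't poked at it directly, this is the iteration protocol being referred to: every call to next() on a string iterator returns a fresh { value, done } result object, which is where the per-step allocation cost comes from.

```javascript
// The ECMAScript iteration protocol: next() yields { value, done }
// result objects, one allocation per step.
const iter = "hi"[Symbol.iterator]();

console.log(iter.next()); // { value: "h", done: false }
console.log(iter.next()); // { value: "i", done: false }
console.log(iter.next()); // { value: undefined, done: true }
```

This is the same protocol for...of, spread, and destructuring all desugar to, which is why its cost shows up in so many places.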

At the moment the consensus seems to be that these language features haven't been worth much optimization investment because they aren't widely used in perf-critical pathways. So there's a chicken-and-egg problem, but one that gives me some hope that these APIs will actually get faster as their usage becomes more common and important, which it should if we adopt one of the proposed solutions to the current DevX problems.

reply