The difference between 2ms and 0.2ms may sound negligible, or even silly, to you. But somebody, somewhere, is doing stream processing of TB-sized JSON objects, and they will care. This news is for them.
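For the TB-scale case, the key is never holding the whole document in memory. As a rough illustration (Python stdlib, not jq; the helper name is made up), `json.JSONDecoder.raw_decode` can walk a buffer of concatenated JSON values one value at a time:

```python
import json

# Illustrative sketch: parse a stream of concatenated JSON values
# incrementally instead of loading everything at once.
def iter_json_values(text):
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        # skip whitespace between values
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        value, pos = decoder.raw_decode(text, pos)
        yield value

stream = '{"id": 1} {"id": 2}\n{"id": 3}'
print([v["id"] for v in iter_json_values(stream)])  # [1, 2, 3]
```

In a real pipeline you'd feed this from a chunked file read rather than one big string, but the shape of the approach is the same.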
People would say, "Why use this when it's harder to read and only saves N ms?" He'd reply that you'd care about those ms when you had to read a database from 500 remote servers. (I'm paraphrasing; he probably had a much better example.)
Turns out, he wrote a book that I later purchased. It appears to have since been taken over by a different author, but the first release was all him, and I bought it immediately when I recognized the name / unix.com handle. Though it was over my head when I first bought it, I later learned enough to love it. I hope he's on HN and knows that someone loved his posts / book.
https://www.amazon.com/Pro-Bash-Programming-Scripting-Expert...
Also, performance improvements on heavily used systems unlock:
Cost savings
Stability
Higher reliability
Higher throughput
Fewer incidents
Lower scale-out requirements.
For example, doing a dangerous thing might be faster (no bounds checks, weaker consistency guarantees, etc.), but it clearly tends to be a reliability regression.
And how does performance improve reliability? Well, a more performant service is harder to overwhelm with a flood of requests.
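A back-of-the-envelope sketch of that point (assumed numbers, single CPU-bound worker): cutting per-request time multiplies the request rate you can absorb before falling over.

```python
# Assumed model: one CPU-bound worker, so per-request time caps throughput.
def max_rps(per_request_ms):
    return 1000.0 / per_request_ms

print(max_rps(2.0))   # 500.0 requests/s per worker
print(max_rps(0.2))   # ~5000 requests/s per worker
```

Same hardware, ten times the headroom before a traffic spike becomes an outage.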
Of course this is a very artificial and almost nonsensical example, but that is how you optimize bounds checks away - you just make it impossible for the bounds to be exceeded through means other than explicitly checking.
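A loose sketch of the "by construction" idea (in Python, where the checks are implicit; real bounds-check elimination happens in compiled languages, so treat this as an analogy only):

```python
# Analogy: restructure the code so an out-of-range index
# cannot be produced at all, rather than checking each access.

def sum_every_other_indexed(xs):
    # index arithmetic that *could* run past the end,
    # so every xs[i] conceptually implies a bounds check
    total = 0
    i = 0
    while i < len(xs):
        total += xs[i]
        i += 2
    return total

def sum_every_other_by_construction(xs):
    # a step slice can only yield in-range elements,
    # so no per-element check is needed by construction
    return sum(xs[::2])

assert sum_every_other_indexed([1, 2, 3, 4, 5]) == \
       sum_every_other_by_construction([1, 2, 3, 4, 5]) == 9
```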
That's crazy to think about. My JSON files can be measured in bytes. :-D
https://developer.nvidia.com/blog/accelerating-json-processi...
So why not compare that case directly? We'd also want to see the performance of the assumed overheads, i.e. how it scales.
Either way, I really doubt there will ever be a significant number of people who'd choose jq for that.
It’s the same sentiment as “Individuals don’t matter, look at how tiny my contribution is.”. Society is made up of individuals, so everybody has to do their part.
> 9/10 whatever tooling you are using now will be perfectly fine.
It is not, though. Software is getting slower faster than hardware is getting quicker. We have computers that are easily 3–4+ orders of magnitude faster than what we had 40 years ago, yet everything has somehow gotten slower.
"Fast enough" will always bug me. "Still ahead of network latency" will always sound like the dog ate your homework. I understand the perils of premature optimization, but not a refusal to optimize.
And I doubt I'm alone.
Out of curiosity, have you read the jq manpage? The first 500 words explain more or less the entire language and how it works. Not the syntax or the functions, but what the language itself is/does. The rest follows fairly easily from that.
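For anyone who hasn't read it: the core idea the manpage gets across is that every jq expression is a filter mapping a stream of values to a stream of values, and `|` composes filters. A rough Python analogy (illustrative only; this is not how jq is implemented):

```python
# Each "filter" maps one input value to zero or more output values;
# pipe() is composition, like jq's `|`.

def get(key):             # jq: .key
    def f(v):
        yield v[key]
    return f

def each(v):              # jq: .[]
    yield from v

def pipe(*filters):       # jq: f | g | h
    def composed(v):
        values = [v]
        for f in filters:
            values = [out for x in values for out in f(x)]
        yield from values
    return composed

doc = {"users": [{"name": "ada"}, {"name": "linus"}]}
prog = pipe(get("users"), each, get("name"))   # jq: .users[].name
print(list(prog(doc)))  # ['ada', 'linus']
```

Once you see the language as stream-of-values in, stream-of-values out, most of the builtins stop looking arbitrary.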
If I/you were working with JSON of that size where this was important, I'd say you probably need to stop using JSON and move to some other binary or structured format... so long as it has some kind of tooling support.
And further, if you are doing important stuff in the CLI that needs a big chain of commands, you probably should be writing a program to do it anyway...
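E.g. instead of a `jq | sort | uniq -c`-style chain, a few lines of an actual program (the field names and data here are made up for illustration):

```python
import json
from collections import Counter

# Illustrative replacement for a shell pipeline like:
#   jq -r '.events[].type' data.json | sort | uniq -c
def count_event_types(raw):
    doc = json.loads(raw)
    return Counter(e["type"] for e in doc["events"])

raw = '{"events": [{"type": "click"}, {"type": "view"}, {"type": "click"}]}'
print(count_event_types(raw))  # Counter({'click': 2, 'view': 1})
```

You also get error handling, tests, and readability for free, which the one-liner never will.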
That's even before we get to the fact that JSON isn't really a good data format at all... and there are many better ways, old and new. One day I will get to use my XSLT skills again :D
Say a number; make a real argument. Don't just wave your hand and say "just imagine how right I could be about this vague notion if we only knew the facts."
I don't think I remember one case where jq wasn't fast enough
Now what I'd really want is a jq that's more intuitive and easier to understand
Unfortunately I don’t recall the name, but there was something submitted to HN not too long ago (I think it was still 2026) which was like jq but used JavaScript syntax.
> 9/10 whatever tooling you are using now will be perfectly fine
Are you working in frontend? On non-trivial webapps? Because this is entirely wrong in my experience. Performance issues are the #1 complaint of everyone on the frontend team. Be that in compiling, testing, or (to a lesser extent) the actual app.
Either the team I worked on was horrible, or you are from Google/Meta/Walmart, where either everyone is smart or frontend performance is directly tied to $$.
It is. Company size is moot. See https://wpostats.com for starters.
Given that, I completely agree with your statement. However, you're not addressing the point he makes, which kinda makes your statement unrelated to his point.
99.99% of all performance issues in the frontend are caused by devs doing dumb shit at this point
The frameworks' performance benefits are not going to meaningfully impact this issue anymore, hence no matter how performant yours is, that's still going to be their primary complaint across almost all complex rwcs.
And the other issue is that we've decided that complex transpiling (TypeScript) is the way to go in the frontend; without that, all build-time issues would magically go away too. But I guess that's another story.
It was a different story back when e.g. MeteorJS was the default, but nowadays they're all fast enough not to be the source of the performance issues.
Opencode, Claude Code, etc., feel slow. Whatever makes them faster is a win :)
The vast majority of Linux kernel performance improvement patches probably have way less of a real world impact than this.
Unlikely, given that the multiplier on every kernel improvement (how often that code runs) is far higher than "times jq is run in some pipeline". Even a 0.1% improvement in the kernel probably has a far, far higher impact than this.