From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?
That sounds like a perfectly functional project, to me.
Then I try to understand and extract the actual formulas, and there isn't a clean formula layer anywhere. Everything is procedural; in `b4v6temp.c`, for example, the formulas are tangled with branching, caching, and model-state mutation. Extracting the computation, embedding it cleanly, and exposing it through a sane API feels like a hair-pulling exercise.
So yeah, it's maintained, but not in the sense of the 'modern, embeddable, understandable software component' I'd be hoping for from a rewrite. You wouldn't even need to touch the simulation core: just rewriting the embedding/API layer and the UX would already be a big deal.
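To illustrate what I mean by a "clean formula layer", here is a rough, purely hypothetical sketch (in Rust, not ngspice's actual code or API): the device equation as a pure function, with caching, limiting, and model-state mutation left to the caller instead of being tangled into the formula.

```rust
/// Shockley diode equation: I = Is * (exp(V / (n * Vt)) - 1)
fn diode_current(v: f64, i_sat: f64, n: f64, v_thermal: f64) -> f64 {
    i_sat * ((v / (n * v_thermal)).exp() - 1.0)
}

fn main() {
    // Illustrative parameters (roughly 1N4148-like), room-temperature Vt ≈ 25.85 mV.
    let (i_sat, n, vt) = (2.5e-9, 1.8, 0.02585);
    for &v in &[0.3_f64, 0.5, 0.7] {
        println!("V = {:.1} V -> I = {:.3e} A", v, diode_current(v, i_sat, n, vt));
    }
}
```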
So, that would be an awesome project!
Why are you not using this through KiCad? That's what I would expect an amateur to do, especially since it handles the UX that you are complaining about.
And you are complaining about tangled code, but that code is almost certainly hyper-optimized, since performance actually mattered a LOT to people running SPICE simulations. ngspice (and Spice3 and Spice2 before it) were not written for programming ease; they were written to get a real job, worth real money, done.
In addition, any change you make to that code needs to be run back through numerical regression tests to make sure you didn't break things since this is software that people expect to get correct answers.
However, if the legacy seems to bother you so much, perhaps you should look at Xyce from Sandia?
They sound like an amateur at circuit design, not software engineering (which is how I'd describe myself too).
The original point stands. Ngspice shows its heritage from the days of Fortran far more than a modern code base would or should. Its sole great virtue (from my point of view) is that it integrates with KiCad and only falls over for no reason about 5% of the time.
I would suspect that some of the simulation systems coming out of the Julia community or Xyce would be a better base.
That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.
Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.
However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, each of which is its own unique set of differential equations).
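To make the stiffness point concrete, here is a toy sketch (in Rust, and emphatically not how ngspice or Xyce actually integrate): an RC node with a 1 µs time constant stepped at 10 µs. Explicit forward Euler blows up while implicit backward Euler stays stable, which is the basic reason SPICE-class simulators rely on implicit methods and careful step control.

```rust
fn main() {
    let (r, c, vin) = (1.0e3, 1.0e-9, 1.0); // 1 kΩ, 1 nF => tau = 1 µs
    let tau = r * c;
    let h = 1.0e-5; // 10 µs step, much larger than tau

    let (mut v_explicit, mut v_implicit) = (0.0_f64, 0.0_f64);
    for _ in 0..20 {
        // Forward Euler: v += h * (vin - v) / tau   (unstable when h >> tau)
        v_explicit += h * (vin - v_explicit) / tau;
        // Backward Euler: solve v_new = v + h * (vin - v_new) / tau for v_new
        v_implicit = (v_implicit + h * vin / tau) / (1.0 + h / tau);
    }
    println!("forward Euler:  {:e}", v_explicit);  // astronomically large
    println!("backward Euler: {:.6}", v_implicit); // ~1.0, the correct steady state
}
```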
If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.
Solving sets of differential equations is something that's parallelizable, though.
See for example how there's physics engines running on GPU. That's mechanics and not electric circuits, however it's differential equations all the same.
Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.
A serious non-linear solver that handles legacy Spice models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, you take that to a higher level.
These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.
Another example of related problems that parallelize poorly even though they are linear are the FDTD formulations for Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always the memory bandwidth because it is so hard to parallelize.
Hyper-optimized for '70s-era Fortran is not going to be all that optimized on modern CPUs.
I'd bet that just the compiler optimizations LLVM could apply to clean code would already make it faster.
But that's exactly the sort of exotic domain knowledge that AI models have that I don't.
Perhaps the 16,000 just measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.
Rust reference lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore, compiler messages are highly contextual, and lifetime patterns are sparse in the training set.
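A minimal illustration of that cascade (hypothetical names, nothing from Bun's codebase): a struct that borrows its input forces every struct and function that touches it to carry the same lifetime, so one mismatch surfaces as errors far from the line that caused it.

```rust
// `Parser` borrows its input, so everything that holds or returns one
// must thread the same lifetime through its signature.
struct Parser<'a> {
    input: &'a str,
}

struct Session<'a> {
    parser: Parser<'a>, // forced to repeat the lifetime
}

fn make_session<'a>(input: &'a str) -> Session<'a> {
    Session { parser: Parser { input } }
}

fn first_token<'a>(session: &Session<'a>) -> &'a str {
    session.parser.input.split_whitespace().next().unwrap_or("")
}

fn main() {
    let text = String::from("let x = 1;");
    let session = make_session(&text);
    println!("{}", first_token(&session));
    // If `text` were dropped before `session`, every function above would
    // report a borrow error, not just the line that created the reference.
}
```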
how long does it take to compile?
@jarredsumner: It's basically the same as in Zig using our faster Zig compiler. If we were using the upstream Zig compiler, the Rust port would compile faster.
https://x.com/jarredsumner/status/2053050239423312035

Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take the law itself into account.
I am a Rust developer myself, but I really love Zig and Bun. I am just overly curious about all this.
Even LLMs themselves can't accurately estimate this (though this may be out of distribution stuff)
Haven't used Zig (only used Rust),
but Zig doesn't solve those problems, does it?
I am of the opinion that it is horses for courses and not a universal better proposition.
Because my needs don't fit in with Rust's decisions very well, I will use Zig for personal projects when needed. I just need linked lists, graphs, etc.
While hopefully someone can provide a more comprehensive explanation, here are the two huge wins for my use case.
1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.
2) defer[0] allows you to colocate the freeing of resources with the relevant code.
That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.
I was working on some eBPF code in C and really did miss Zig.
For me it fits the Pareto principle, but Zig is also just a "sometimes food" for me, so take that for what it's worth.
I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.
E.g. look at a Python list. Is it safe? In Python sure, but that's abstracting a C implementation which definitely isn't safe.
If you look at Rust's std::Vec you'll find a very similar story - safe interface over an unsafe implementation.
It isn't as binary as you think.
It's true that safe wrappers around unsafe code sometimes have bugs in them, but it's orders of magnitude easier to get the abstraction right once than to use unsafe correctly in many places sprawled across a large codebase.
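To make that concrete, here is a tiny hypothetical sketch of the pattern (not from any real crate): the only `unsafe` block lives behind a safe method that does its bounds check once, so callers can't reach undefined behaviour through the public API.

```rust
pub struct Samples {
    data: Vec<f64>,
}

impl Samples {
    pub fn new(data: Vec<f64>) -> Self {
        Samples { data }
    }

    /// Safe public API: returns None instead of ever reading out of bounds.
    pub fn get(&self, i: usize) -> Option<f64> {
        if i < self.data.len() {
            // SAFETY: `i` was just checked against `len`, so the
            // unchecked access cannot read past the buffer.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }
}

fn main() {
    let s = Samples::new(vec![1.0, 2.0, 3.0]);
    assert_eq!(s.get(1), Some(2.0));
    assert_eq!(s.get(9), None);
}
```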
His point was that, for his kind of programming, he wants to be able to use real pointers and real linked lists without memory safety, which Rust makes difficult or opaque. With a linked list, for example, you can simulate one (to avoid unsafe) either by boxing everything (so all refs are actually smart pointers) or by using a container with a scoped memory lifetime and storing integers in an array as the "next" pointers. Beyond the extra complexity, the "integers as edges" approach doesn't actually remove the complexity; it just means you can't get a bad memory error (you can still have 'pointers' that point to the wrong index if you're rolling your own).
Same with your graph code. Using a COO representation for a graph does in theory make it "memory safe" (albeit more clumsy to use if you are doing pointer-following logic), and it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but actually those nodes were removed, so now you're pointing at the wrong node).
I think the point (which I agree with for things like linked lists, graphs, and compilers) is that, depending on your use case, the "safety" guarantees of Rust just make it harder to write the simplest, most understandable code. Now instead of `Node* next` I have lifetimes, integer references, two collections (nodes and edges) to keep in sync, smart pointers, etc. Previously my complexity was making sure `next != null`; now it's a ton of boilerplate and abstractions, performance hits, or more subtle bugs (like 'next' indices getting out of sync with the array of 'nodes').
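For what it's worth, here is a rough sketch of that "integers as edges" style in Rust (hypothetical, just to show the trade-off): the borrow checker is happy because `next` is an index into a Vec rather than a pointer, but a stale or wrong index is still a logic bug the compiler can't see.

```rust
struct Node {
    value: i32,
    next: Option<usize>, // index into `nodes`, not a pointer
}

struct List {
    nodes: Vec<Node>,
    head: Option<usize>,
}

impl List {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None }
    }

    fn push_front(&mut self, value: i32) {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, next: self.head });
        self.head = Some(idx);
    }

    fn iter_values(&self) -> Vec<i32> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(self.nodes[i].value);
            cur = self.nodes[i].next;
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(3);
    list.push_front(2);
    list.push_front(1);
    assert_eq!(list.iter_values(), vec![1, 2, 3]);
}
```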
If there were a way to explicitly track the lifetime of an arbitrary graph/tree of pointers at compile time, we wouldn't need garbage collection -- it's not solvable at compile time, and the complexity has to live somewhere.
What are you asking for exactly?
I guess you are making the point that the user does not have to concern themselves with the unsafe declarations?
You're correcting someone, so it's clear that your understanding isn't universal, and example code is the absolute minimum.
And you can't forget to type defer
The fact that you can explicitly invoke the destructor to happen later is simply syntactic sugar, just like if/else/while, or any other control construct more powerful than a conditional jump instruction.
When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)
>The fact that you can explicitly invoke the destructor to happen later
You don't specify where the `defer`-red "destructor" will be invoked.
It gives you a few more tools than C - like a debug allocator, bounds-checked array slices and so on. But it's not a memory-safe language like Rust.
It's an interesting idea. But if you want static memory safety in a low-level systems language, it's probably much easier to just use Rust.
you can make a no-op function that gets compiled out but survives AIR
> rust knows when it can Drop.
and it's possible to cause problems if you aren't aware of where Rust picks to drop.
> And rust can put noalias everywhere in emitted code.
Zig has noalias, and it should be possible to do alias tracking as a refinement.
> But if you want static memory safety in a low level systems language, its probably much easier to just use rust.
don't use that attitude to suck oxygen out of the air. rust comes with its own baggage, so "just using rust because it's the only choice" keeps you in a local minimum.
Can you give some examples? I've never run into problems due to this.
> don't use that attitude to suck oxygen out of the air. rust comes with its own baggage
Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is that it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile-time memory safety. Rust has gone down one particular road in that forest. But there are probably loads of other options that nobody has tried yet. Some of them will probably be better than Rust - but nobody has thought them through yet.
I wish you luck in your project! If you land somewhere interesting, I hope you write it up.
If it's doing a drop in the hot loop, that may be an unexpected performance regression that could be carefully lifted.
thank you. Unfortunately in the last few weeks i've been too busy with my startup to put as much work into it. We'll see =D
Yeah, I've heard of people making massive collections of Box'ed entries and then being surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions. Reducing heap allocations is an essential tool for optimisation.
The solution to this "unexpected performance regression" in rust is the same as it is in C, C++ and Zig: Stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in rust) or memory arenas. Drop isn't the problem.
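A rough, unscientific sketch of the point (assumed names; numbers will vary wildly by allocator and machine): dropping a Vec of boxed values performs one free per element, while dropping a flat Vec frees a single block. Treat this as an illustration, not a benchmark.

```rust
use std::time::Instant;

fn main() {
    const N: usize = 5_000_000;

    let boxed: Vec<Box<u64>> = (0..N as u64).map(Box::new).collect();
    let flat: Vec<u64> = (0..N as u64).collect();

    let t = Instant::now();
    drop(boxed); // N individual deallocations
    println!("drop Vec<Box<u64>>: {:?}", t.elapsed());

    let t = Instant::now();
    drop(flat); // one deallocation
    println!("drop Vec<u64>:      {:?}", t.elapsed());
}
```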
Zig is still under development and in beta. Stability issues, crashes, and leaks should not be surprising; they should even be expected. To stick with a beta language, companies and developers usually need to be philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they have not only committed to using the language (despite it being beta) in their products, but have directly hired GingerBill.
Team Bun appears to have "alignment and relationship issues" with Zig, to the point that they have decided to extensively explore their options. Now Bun has been rewritten in Rust. They are seeing whether Rust meets their requirements. As with any relationship, if one ignores or takes a partner for granted, don't be surprised if they want a divorce or jump to someone else.
This maneuver was arguably obfuscated by the anti-LLM stance and the finger-pointing at Microsoft, but many have noticed nevertheless. Zig had, for a long time, been falling behind and doing poorly on its open-to-close ratio for resolving issues. It should be embarrassing to leave so many issues open.
Even while not accepting new GitHub issues, they have demonstrated an inability to resolve existing ones except at an extremely slow pace. Considering there are just about no new issues on their GitHub repo, it is understandable that some find the closing pace and the number of open issues unacceptable or questionable, on top of the clearly bad open-to-close ratio.
Bun: Hold my beer
These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.
> In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.
In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic. This means that people who have an axe to grind against Anthropic (admittedly a reasonable position) will take the most antagonistic position they possibly can because of personal bias.
Insert something about monkeys, typewriters, and Shakespeare here.
The AI companies and their associates are beginning to surpass that level of denials and lies.
I would have agreed with this like 15 years ago, but the very existence of Twitter (and the acquisition saga) proves this to not be true.
How does one accomplish change? Even being a martyr doesn't get traction. As far as I can tell, you need to already be powerful. Nobody lets you into that group if you're not aligned with said group.
Protests (at least in their current form) don't work. Trying to assassinate someone doesn't move the needle (also not the play, I don't support murder), vocal grassroots leaders are no longer relevant at all, if they ever were.
How does one accomplish any change?
Protests don’t immediately solve everything, but I think looking at 2026 and concluding they don’t move the needle at all is a weird take. There are recent examples of protest movements (especially long-term ones) working all over the world.
> It’s engineering.
Significantly, but not totally. The marketing value can't be ignored.
More handwaving about the LLM hype machine is incredibly boring and enough of it is spewed everywhere that whatever social good it was going to accomplish must have already happened by now. If you want to inject reality into the situation, talk about reality (like Anthropic is at least pretending to).
So cash out before that.
Also, I already cashed out; joke's on you.
Saying you have no intention of doing something and then doing it is not engineering, it's being dishonest. He could have said "we'll decide when we see the results"; why didn't he?
I'm guessing that if I said it ... that we have no intention of re-writing in Rust ... what I mean is "we have no intention of spending the extreme cost it would take to rewrite". When I discover the cost model is completely different, that changes things.
Saying you don't intend to do something and then doing it is free will.
It's also lying. They are not mutually exclusive.
One must stick to old assertions forever!
"Giant foot is gonna squish us!"
...this forum is as bad as a single backwater subreddit.
I am so sick of emotionally frail software engineers. I don't know why I keep bothering floating back here every once in a while to see what is up.
Same old rustled jimmies over technology evolution, like back during emacs vs vi! Tabs vs spaces! SysV init vs systemd!
Super hero power scaling message boards are more engaging than this site.
AI save us from these needlessly economically empowered labor exploiting non-contributor script kiddies. Such an unserious community.
Changing your mind is okay. For example, if someone said it was impossible to do the migration with current LLMs and it turns out they did it in four days, that person can and should admit they were wrong. That's not what he did, though. What he did is say he had no intention of doing it, and then he did it. That is lying. If he was testing and didn't know whether the change was going to be worth it, he could have said, for example:
"This branch is a test, it's not a given it will work so until we see the results we won't decide if we'll be migrating or not."
He didn't say anything like that though, he basically said:
"We have no intention to migrate."
Why did he say the latter and not the former? Because he wasn't being honest; he was just trying to get people off his back, and so he didn't say what he was actually doing, which best served his own interests. We have a saying in my country: "it's easier to catch a liar than someone who's lame".
Also, before you come and say that he only said he had no "intention", not that he wasn't going to do it: a five-year-old might think that's a valid argument, but this person is an adult and we're all adults here, so it's not. It's equivocation, and it's a logical fallacy.
> I am so sick of emotionally frail software engineers.
Then don't look in the mirror, you're probably being the biggest crybaby in this thread so far.
What would the emerging odds be? My guess is 19/20 in favor of ditching Zig.
I have followed many initial denials on a wide range of topics, not only rewrites, over the years. Like clockwork, most of them were lies.
Even if it passed the full test suite, there are a ton of software qualities that are not captured by tests, and I think it's unlikely the AI made the right trade-off in every such case.
* We haven't seen the benchmarks yet.
* It hasn't seen wide usage. Zig Bun has had tons of bugs ironed out; Rust Bun has a different set of bugs to iron out.
* The developers know the Zig codebase well; they don't know the Rust codebase.
Would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js?
They know their AI can't sell without the noise that it's now on the edge of the frontier. This is hype.
Zig adopting a strict 'no LLM' policy affects the LLM vendors.
Jarred the hacker has now been replaced by Jarred the millionaire, soon to be billionaire, as Anthropic's valuation keeps going up.
I've been thinking about setting up a non-trivial project to use as a benchmark for any plugins and/or harness changes I make.
Having a prebuilt verification suite is great. You can use it to assess things like token usage and time across different harnesses, models, and plugins.
The marketing opportunity here is in promoting Claude Code, not giving a smackdown to Andrew Kelley (who vanishingly few people who throw around millions of dollars on AI contracts have heard of).
> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.
We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology, just thoroughly disgusted by how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.
Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.
Believe it or not, for some of us it’s not “the whole damn point”.
I don't think it is fair to claim computers are about putting people out of jobs.
Computer used to mean "human who does math". Before machine computers, we had human computers. Machine computers replaced all of these human computers.
- Video games
- Medical device firmware
- Synthesizers
- Detailed universe-scale physics simulations
- Mars rover control software
- The Linux kernel
- Medical device firmware - hardware control layer for medical devices, which are used to aid in medical procedures.
- Synthesizers - help to make music.
- Detailed universe-scale physics simulations - help to make certain physics problems more tractable.
- Mars rover control software - helps to remote control rovers.
- The Linux kernel - control layer that sits between firmware and actual applications, pretty much just a common shared library so apps don't have to each ship with a full stack.
I don't really see your point here. None of these examples counter the argument that software is created to automate human labour as much as is practical.
Video games are an interesting category since they're entirely enabled by software: I can't imagine anyone driving a video game manually (note I don't consider things like Chess, etc software to be video games in this context; more things like FPS, racing, etc). I do remember as a kid I thought that there were actually little people doing the stuff in video games though.
All of these things existed in pre computer form.
A scheduler used to be a person putting punch cards into a machine.
It’s not that anthropic/google/openai/etc are unavoidable
Every tech you mentioned is absolutely governed by multibillion-dollar companies. Something like 75-85% of OSS code is contributed by employees doing their day job. Most Linux and Postgres contributions come from those same employees. HTTP and TCP/IP are managed by standards bodies and industry working groups that, you guessed it, are governed by multibillion-dollar companies. Red Hat and IBM are responsible for 40-60% of contributions to QEMU.
Some of the inner circle move to corporations to increase their power and are joined by corporate developers (sometimes their bosses) to take over the project.
A lot of corporate OSS development is entirely unnecessary rewrites or simple things like release management. So I'd put the amount of useful code contributed by employees much lower.
But governed, hell yeah, I agree. The corporations crack the whip and oppress real contributors.
Let's take this to a different domain: self-driving cars. Would you equally argue for human driving? I'm pretty sure that over time it will become clear to everyone that machines can outperform humans consistently at this task, to the degree that human driving will become illegal. But for now the press likes to focus on any failure of machine driving, taking for granted that human drivers are the largest or second-largest cause of premature death in many countries.
Coding (in many ways, but not all) is a more open-ended and versatile task than driving, so it's natural that current iterations seem untrustworthy, but ignoring the trajectory is erring on the side of conservatism and doesn't seem to me to be grounded in any sound reasoning.
Seems like that would make open source entirely controlled by OpenAI, Anthropic, et al.
That is actually a very plausible scenario!
Your kind of negativity is pathological.
This is one of my problems with academia: people only sharing results when they're positive and complete. I want to hear about what people tried that didn't work, and see the string of failures. People are already inclined to avoid sharing their work out of concern that they'll be judged--let's not encourage that behavior, please.
Underestimating how quickly a non-trivial project will come together is an almost unheard-of phenomenon. It used to invariably be the other way around, to the point that there are laws about it, like Hofstadter's Law, which says that projects always take longer than anticipated, even when accounting for the law itself. Or Fred Brooks' work, which puts limits on how much the development of software projects can be sped up.
The sane takeaway here is that if what's being reported is true (keeping in mind it's coming from a newly minted Anthropic employee), it implies an astonishing, unheard of improvement in software development speed, at least for certain kinds of tasks, enabled by LLMs.
To somehow twist that into "experts may not be as skilled and knowledgeable as they appear" or "not skilled in the tools they’re using" makes me think of the Charles Babbage quote, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such [an opinion]."