upvote
As an amateur in the space: I download it on a Mac, run `ngspice`, and get "Error: Can't open display: :0". I look in the code - hardcoded X11-era assumptions. Not exactly modern affordances...

Then I try to understand and extract the actual formulas, and there isn't a clean formula layer anywhere. It's all procedural: in `b4v6temp.c`, for example, the formulas are tangled with branching, caching, and model-state mutation. Extracting the computation, embedding it cleanly, and exposing it through a sane API feels like a hair-pulling exercise.

So yeah, maintained, but not in the sense of the 'modern, embeddable, understandable software component' I'd be looking for in a rewrite. Maybe don't even touch the simulation core; just rewriting the embedding/API layer and the UX would already be a big deal.

reply
This explains a lot. But you only need to look at the family of SPICE forks, strangely limited to certain operating systems and embedded inside certain proprietary IDEs, to realise that there's something very wrong with the code architecture.

So, that would be an awesome project!

reply
> As an amateur in the space

Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

And you are complaining about tangled code, but that code is almost certainly hyper-optimized, since performance actually mattered a LOT to people running SPICE simulations. ngspice (and Spice3 and Spice2 before it) were not written for programming ease; they were written to get a real job worth real money done.

In addition, any change you make to that code needs to be run back through numerical regression tests to make sure you didn't break things since this is software that people expect to get correct answers.

However, if the legacy seems to bother you so much, perhaps you should look at Xyce from Sandia?

reply
> Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

They sound like an amateur at circuit design, not software engineering (which is how I'd describe myself too).

reply
KiCad is still the preferred interface.

The original point stands. Ngspice shows its heritage from the days of Fortran far more than a modern code base would or should. Its sole great virtue (from my point of view) is that it integrates with KiCad and only falls over for no reason about 5% of the time.

I would suspect that some of the simulation systems coming out of the Julia community or Xyce would be a better base.

reply
deleted
reply
I see "sourceforge" and immediately think "this project is way behind the times and is going to pose a lot of issues to new users, if it's still active".
reply
I could have linked the GitHub repo, which has been abandoned for 11 years and ranks higher on Google than the SourceForge page, but that would maybe have been disingenuous. (https://github.com/ngspice/ngspice)
reply
+1, a project presenting at FOSDEM certainly does not need a "revive".
reply
The SPICE core that ngspice is built on is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.
reply
That's not a revival though; "revive" (at least to me) implies it's dead.
reply
> The SPICE core that ngspice is built on is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.

That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).
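
To illustrate the stiffness point, a minimal sketch (not ngspice code): for a fast time constant like dy/dt = -1000·y, an explicit integrator with a step sized for the slow dynamics blows up, while an implicit one stays stable. This is why SPICE-class simulators use implicit methods (trapezoidal, Gear) rather than explicit ones:

```c
/* Forward Euler on dy/dt = -lambda*y: with h = 0.01 and lambda = 1000
 * the per-step growth factor is |1 - lambda*h| = 9, so it diverges. */
double forward_euler(double y, double lambda, double h, int steps)
{
    for (int i = 0; i < steps; i++)
        y = y + h * (-lambda * y);         /* y += h * f(y) */
    return y;
}

/* Backward Euler: solve y_new = y + h*f(y_new); here that gives a
 * per-step factor of 1/(1 + lambda*h) = 1/11, unconditionally stable. */
double backward_euler(double y, double lambda, double h, int steps)
{
    for (int i = 0; i < steps; i++)
        y = y / (1.0 + lambda * h);
    return y;
}
```

With y(0) = 1, after 50 steps the explicit solution has a magnitude around 9^50 while the implicit one has decayed toward zero, as the true solution should.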

If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.

reply
> and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).

Solving sets of differential equations is parallelizable, though.

See, for example, the physics engines running on GPUs. That's mechanics rather than electric circuits, but it's differential equations all the same.

reply
Which differential equations are you talking about? Linear ones have standard solutions and are definitely parallelisable (though you can basically just write the solution down by hand). Non-linear ones vary from those that can basically be approximated by a linear solution with corrections, to those needing relaxation methods (which are obviously not parallelisable).

Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.

reply
To be specific, a linear solver can be (as in, I have done it) written in a week.

A serious non-linear solver that handles legacy SPICE models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, you take that to a higher level.

These are not partial differential equations such as you find in Navier-Stokes; these are sparse non-linear differential equations that do not parallelize nearly as simply.

Another example of a related problem that parallelizes poorly even though it is linear is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always memory bandwidth, which is what makes them so hard to parallelize effectively.
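
A one-dimensional sketch of the FDTD (Yee) update makes the bandwidth point visible: each pass streams large arrays doing roughly two floating-point operations per value loaded, so arithmetic units sit idle waiting on memory. (Normalized units here; the physical coefficients are folded into `ce` and `ch` for brevity.)

```c
/* Illustrative 1-D FDTD leapfrog step: interleaved E and H updates.
 * Each loop is a streaming stencil with ~2 flops per array element,
 * so performance is bounded by memory bandwidth, not compute. */
void fdtd_step_1d(double *ez, double *hy, int n, double ce, double ch)
{
    for (int i = 0; i < n - 1; i++)          /* H update from E gradient */
        hy[i] += ch * (ez[i + 1] - ez[i]);
    for (int i = 1; i < n; i++)              /* E update from H gradient */
        ez[i] += ce * (hy[i] - hy[i - 1]);
}
```

Adding more cores mostly just adds more contenders for the same memory bus, which is why these codes scale with bandwidth rather than core count.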

reply
The type of people who need SPICE are dead serious about accuracy; sometimes even 1 ppm of error is not tolerable. So an optimization from a game engine is definitely not suitable for engineering simulation.
reply
> That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Hyper-optimized for '70s-era Fortran is not going to be all that optimized on modern CPUs.

I bet that just the compiler optimizations LLVM could do with clean code would make it faster.

reply
That code was optimized for performance for 1980s hardware. It’s very far from optimized for modern CPUs.
reply
> Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

But that's exactly the sort of exotic domain knowledge that AI models have that I don't.

reply