That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.
Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.
However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, each a unique set of non-linear differential equations).
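To see what "stiff" means in practice, here's a minimal sketch on the scalar test equation dy/dt = -lam*y (the numbers are made up for illustration): an explicit method blows up unless the step resolves the fastest time constant, while an implicit method stays stable, which is why circuit simulators pay the price of implicit integration.

```python
# Stiffness in one variable: dy/dt = -lam * y with a fast decay rate.
lam = 1000.0   # fast time constant (1 ms)
h = 0.01       # step sized for the *slow* dynamics we actually care about

# Explicit (forward) Euler: y_{n+1} = y_n * (1 - h*lam)
y_explicit = 1.0
for _ in range(10):
    y_explicit *= (1.0 - h * lam)   # |1 - h*lam| = 9 > 1, so it diverges

# Implicit (backward) Euler: y_{n+1} = y_n / (1 + h*lam)
y_implicit = 1.0
for _ in range(10):
    y_implicit /= (1.0 + h * lam)   # bounded for any h > 0, like the true decay
```

The explicit iterate grows by a factor of 9 per step while the exact solution has long since decayed to essentially zero; the implicit iterate shrinks monotonically.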
If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.
Solving sets of differential equations is parallelizable, though.
See for example how there are physics engines running on GPUs. That's mechanics, not electric circuits, but it's differential equations all the same.
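The structure that makes game physics GPU-friendly is that each body's update touches only that body's own state. A toy sketch (made-up values) of N independent point masses under gravity with explicit Euler:

```python
# N *independent* point masses under gravity. Each explicit Euler update
# reads and writes only particle i's own state, so every iteration of the
# inner loop is independent of the others -- embarrassingly parallel,
# which is exactly the structure GPU physics engines exploit.
N = 4
g = -9.81    # gravitational acceleration, m/s^2
dt = 0.01    # timestep, s
pos = [0.0] * N
vel = [float(i) for i in range(N)]   # different launch speeds per particle

for _ in range(100):                 # simulate 1 second
    for i in range(N):               # each i could run on its own GPU thread
        vel[i] += g * dt
        pos[i] += vel[i] * dt
```

A coupled circuit has the opposite structure: every node's update depends on its neighbors through the system matrix, so the loop body can't be split up this way.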
Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.
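For reference, the poster child in question is the Quake III fast inverse square root: a bit-level hack that trades accuracy (relative error around 0.2%) for speed. Here's a Python transcription of the classic C version:

```python
import struct

def fast_inv_sqrt(x):
    """Quake III-style fast inverse square root: fast over correct.
    Reinterpret the float's bits as an integer, apply the famous magic
    constant for a first guess, then polish with one Newton step."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # float bits -> int
    i = 0x5F3759DF - (i >> 1)                          # magic first guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]   # int bits -> float
    y = y * (1.5 - 0.5 * x * y * y)                    # one Newton iteration
    return y
```

The single Newton step gets close enough for lighting and physics, but nowhere near the tolerances a circuit simulator needs for convergence.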
A serious non-linear solver that handles legacy Spice models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, that takes it to a higher level still.
These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.
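To make that concrete, here's a minimal sketch of what a SPICE-like solver does at every timestep: Newton iteration on a non-linear nodal equation. This toy has a single node (a resistor feeding a diode, with made-up component values); a real netlist yields a large sparse system that needs a sparse LU factorization on every iteration, and the iterations themselves are inherently sequential because each guess depends on the previous one.

```python
import math

Vs, R = 5.0, 1000.0     # voltage source and series resistor (illustrative)
Is, Vt = 1e-12, 0.025   # diode saturation current and thermal voltage

def f(v):
    """KCL residual at the diode node: resistor current in, diode current out."""
    return (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)

def df(v):
    """Jacobian -- a scalar here, a sparse matrix for a real circuit."""
    return -1.0 / R - (Is / Vt) * math.exp(v / Vt)

v = 0.6                  # initial guess near a typical diode drop
for _ in range(50):
    step = f(v) / df(v)  # for a real netlist: a sparse linear solve
    v -= step
    if abs(step) < 1e-12:
        break
```

The sparse linear solve inside each iteration has limited parallelism (the factorization is full of dependencies), which is a big part of why these problems resist GPUs.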
Another example of a related problem that parallelizes poorly, even though it is linear, is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but each step sweeps the whole grid while doing only a few arithmetic operations per cell, so the bottleneck is almost always memory bandwidth rather than compute, and throwing more cores at it helps little.
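A 1-D FDTD (Yee) update illustrates the shape of the problem, in normalized units with arbitrary grid size and step count: each sweep reads and writes every cell while doing only a couple of flops per cell, the classic bandwidth-bound stencil.

```python
import math

n = 200
ez = [0.0] * n   # electric field samples
hy = [0.0] * n   # magnetic field samples (offset half a cell in space/time)

for t in range(100):
    for i in range(n - 1):    # update H from the spatial difference of E
        hy[i] += 0.5 * (ez[i + 1] - ez[i])
    for i in range(1, n):     # update E from the spatial difference of H
        ez[i] += 0.5 * (hy[i] - hy[i - 1])
    ez[0] = math.exp(-((t - 30.0) / 10.0) ** 2)  # Gaussian hard source at the left edge
```

Each cell update needs only two memory loads and a couple of additions and multiplies, so a modern CPU or GPU spends most of its time waiting on memory traffic, not computing.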
Hyper-optimized for '70s-era Fortran isn't going to be all that optimized on modern CPUs.
I bet that just the compiler optimizations LLVM can do on clean code would make it faster.
But that's exactly the sort of exotic domain knowledge that AI models have that I don't.