Solving sets of differential equations is something that's parallelizable, though.
See, for example, how there are physics engines running on GPUs. That's mechanics rather than electric circuits, but it's differential equations all the same.
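As a toy illustration of why that kind of mechanics maps so well to GPUs: each body's state can be advanced independently, so an explicit integration step is embarrassingly parallel. A minimal Python sketch (the particle layout and numbers are invented for illustration):

```python
# Explicit Euler step for a batch of independent point masses under gravity.
# Each particle's update reads and writes only its own state, so the loop
# body could be dispatched as one GPU thread per particle with no
# synchronization at all.
def step_particles(particles, dt=0.01, g=-9.81):
    return [
        (x + vx * dt, y + vy * dt, vx, vy + g * dt)
        for (x, y, vx, vy) in particles
    ]

# Drop a particle from rest at height 10 for 100 steps (1 simulated second).
state = [(0.0, 10.0, 0.0, 0.0)]
for _ in range(100):
    state = step_particles(state)
```

The key property is the absence of coupling between particles within a step; a circuit's nodal equations do not have that luxury.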
Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.
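For reference, the fast-inverse-square-root trick mentioned above trades accuracy for speed with a bit-level hack plus one Newton step. A Python rendition of the classic Quake III version (same magic constant, accurate to roughly 0.2%):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) the Quake III way: reinterpret the float's
    bits as a 32-bit integer, apply the magic-constant hack, then refine
    with a single Newton-Raphson step."""
    i = struct.unpack("<I", struct.pack("<f", x))[0]  # float32 bits as uint32
    i = 0x5F3759DF - (i >> 1)                         # the magic
    y = struct.unpack("<f", struct.pack("<I", i))[0]  # bits back to float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton step

approx = fast_inv_sqrt(4.0)  # close to 0.5, but deliberately not exact
```

"Fast but slightly wrong" is a perfectly good trade in a game loop; it's exactly the trade a circuit simulator cannot make.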
A serious non-linear solver that handles legacy SPICE models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, that takes it to yet another level.
These are not partial differential equations like the ones you find in Navier-Stokes. These are sparse systems of non-linear differential equations, and those do not parallelize nearly as simply.
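To make that concrete: the core of a SPICE-style solver is a Newton-Raphson iteration on the non-linear nodal equations, and each iterate depends on the previous one, so the outer loop is inherently serial. A single-node sketch, assuming a hypothetical circuit of a 1 V source feeding a diode through a 1 kΩ resistor (component values invented for illustration):

```python
import math

def solve_diode_node(Vs=1.0, R=1e3, Is=1e-12, Vt=0.025, tol=1e-12):
    """Newton-Raphson on the nodal equation of a source-resistor-diode
    circuit:  f(v) = (v - Vs)/R + Is*(exp(v/Vt) - 1) = 0.
    A real simulator does this with a sparse Jacobian over thousands of
    nodes, but the structure is the same: strictly one iterate after
    another, with a sparse linear solve inside each step."""
    v = 0.6  # initial guess near diode turn-on
    for _ in range(100):
        f = (v - Vs) / R + Is * (math.exp(v / Vt) - 1.0)
        df = 1.0 / R + (Is / Vt) * math.exp(v / Vt)
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v

v_node = solve_diode_node()  # settles around half a volt for these values
```

The parallelism that does exist lives inside each iteration (the sparse linear solve), not across iterations, which is a much harder thing to feed to a GPU than a particle system.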
Another example of a related problem that parallelizes poorly even though it is linear is the FDTD formulation of Maxwell's equations. The update equations are relatively simple, but the bottleneck is almost always memory bandwidth rather than compute, which caps how much extra parallel hardware actually helps.
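A 1-D sketch of why: each Yee update is only a couple of adds and multiplies per cell, yet every step has to stream both field arrays through memory end to end, so the flop-to-byte ratio is tiny. This is a pure-Python toy in normalized units, with grid size and source parameters invented for illustration:

```python
import math

def fdtd_1d(n_cells=400, n_steps=200, courant=0.5):
    """Leapfrog Yee scheme for 1-D free-space Maxwell in normalized
    units. Note the work per cell: one subtract, one multiply, one add --
    while each step reads and writes the whole E and H arrays, which is
    why memory bandwidth, not arithmetic, dominates."""
    Ez = [0.0] * n_cells
    Hy = [0.0] * (n_cells - 1)
    for t in range(n_steps):
        for i in range(n_cells - 1):        # H half-step
            Hy[i] += courant * (Ez[i + 1] - Ez[i])
        for i in range(1, n_cells - 1):     # E half-step
            Ez[i] += courant * (Hy[i] - Hy[i - 1])
        # Soft Gaussian source injected at the center of the grid.
        Ez[n_cells // 2] += math.exp(-((t - 30.0) ** 2) / 100.0)
    return Ez

field = fdtd_1d()  # a pulse has propagated out from the center
```

Throwing more cores at this mostly means more cores waiting on the same memory bus.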
Code hyper-optimized for '70s-era Fortran isn't going to be all that optimized on modern CPUs.
I bet that just the compiler optimizations LLVM can do with clean code would end up faster.
But that's exactly the sort of exotic domain knowledge that AI models have that I don't.