Are we not teaching kids how to publish desktop applications these days or what?
For cross-platform development we barely have any decent, free development tools. It's a lot easier to find JavaScript developers in most places than C++/C# developers.
A PCB can be reworked by hand on site, and those revs can be incorporated cheaply, as you say. So in most cases the need to do all of this programmatically falls below the economic threshold where it would be worthwhile. This presupposes that modern PCB tooling is itself semi-automated and includes simulation capabilities, but an expert operator is still doing a lot of the decision making.
I don't agree with this. Circuits aren't any more complex than anything else humanity has had to figure out. Most of the problems in this area seem solvable.
Maxwell's equations have been known for over 150 years.
For whatever reason, software engineering and hardware engineering, even though they rely on the same fundamental physics, are supposedly so very different? And apparently can't be reconciled? No. I don't believe it.
High-speed boards cannot be simulated well, because they are far from deterministic. That's what makes them so different from coding.
A lot of automagic "AGI achieved" LLM projects have this same problem: the assumption that a brief, literal prompt will fully constrain the end result, so long as it is well thought out. And that's just not how it works - not reality, and not animal brains.
It hasn't even been three years since GitHub Copilot was released to developers. And now we're all complaining about "vibe-coding".
I once thought software's constraints were so hard that a machine would never be able to program within them.
But on the other hand, there are tons of circuit boards designed day after day. If it were super hard, we wouldn't have the tens of thousands of high-speed motherboards that come out year after year.
Software and hardware are fundamentally different in the ability of the engineer to isolate working segments. You can take a piece of code and set up unit tests for it, and if you feel good about your test suite, you can be fairly certain that it will serve your engineering and product goals.
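To make that concrete, here's a toy sketch (the function and values are made up, obviously): a pure function can be isolated and pinned down by a unit test suite, and nothing outside the code can change its behavior.

```python
import unittest

def saturating_add(a, b, limit=255):
    """Add two values, clamping the result at an upper limit."""
    return min(a + b, limit)

class SaturatingAddTest(unittest.TestCase):
    def test_normal_add(self):
        self.assertEqual(saturating_add(10, 20), 30)

    def test_clamps_at_limit(self):
        # No layout, no EMI, no hand hovering over the board can change this.
        self.assertEqual(saturating_add(200, 100), 255)

if __name__ == "__main__":
    unittest.main()
```

If these tests pass once, they pass forever. That's the isolation software gets for free.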
In hardware engineering, that kind of isolation is a liability. As a working electrical design engineer, you should be collaborating tightly with your mechanical and SW/FW/GW teams to optimize what you're building. The massive context and knowledge base you collectively synthesize a design from is a huge benefit, and things like your phone or laptop, or any piece of spaceflight hardware, would not be possible without it.
Example - you can take something like a motor controller. Easy peasy, you say. Grab the best stocked and reasonably priced TI IC off of Digikey and slap its reference design into your copy of Altium Designer. If you give it its own power, thermal, and packaging solution, you can absolutely silo that component and hand it off to an AI agent that builds that piece for you.
Congrats, you've built a standalone motor control module, which you can also buy off of Digikey for a reasonable price that is much cheaper than the time you spent thinking about this.
Also congrats, systems engineering wants your head on a pike and mechanical engineering has taped a picture of your face to a football and is kicking it around in the parking lot.
If you're designing into a product, you're working with the mech and systems teams to create an integrated product that meets the systems/module requirements. The context for this includes not just circuit function, but also thermal performance, the EMI situation, whether there's room to push back on systems and product as you weigh thermal performance and device longevity against module volume, and global industrial geopolitics and their effect on part availability (there's a tariff tickbox in Digi-Key now, and during COVID I had to redesign parts several times before being able to actually build them, because parts became unavailable overnight due to panic buying)... the list is huge.
The cost of "compiling and running against the test suite" is also huge: it typically involves weeks of answering questions and issues from the fab/assembly house, waiting for them to build and ship the boards, doing electrical bring-up, and then actually running the tests you care about...
It is also hard to catch design issues in schematic or layout reviews. We don't have comprehensive and ubiquitous models for electronic devices, so we can't economically simulate this stuff.
This huge cost means that "mashing GO until the LLM spits out the right code" can't work at all.
If you really do want to apply AI to EDA software, I think there's actually a really good use case in catching small issues in a board: things that are too small to address in design reviews but have a meaningful impact on bring-up timelines for R&D test articles. Stupid things like having a footprint flipped, or drawing the schematic symbol for a slightly different version of the part that has a subtly different power pin configuration (my latest fuck-up). That is a fairly tightly containable problem, because our schematics all have links to vendor data and PDF datasheets that should be easily ingestible, and in practice a lot of EDA users are copying pin configs into their tools by hand. I think AI would actually be good at catching the "dumb" errors that are hard for humans to see.
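A sketch of what that kind of checker could look like (the part pinouts here are invented, and a real tool would parse the EDA netlist and the vendor datasheet rather than hardcoded dicts): diff the pin map in your schematic symbol against the pin map for the exact part number on the BOM.

```python
# Hypothetical pin maps: one from the schematic symbol, one extracted
# from the datasheet of the exact part variant being ordered.
pins_symbol    = {1: "VDD", 2: "GND", 3: "EN", 4: "OUT"}
pins_datasheet = {1: "VDD", 2: "EN", 3: "GND", 4: "OUT"}  # power pins swapped

def pin_mismatches(symbol, datasheet):
    """Return the pin numbers whose names differ between the two maps."""
    all_pins = symbol.keys() | datasheet.keys()
    return sorted(p for p in all_pins if symbol.get(p) != datasheet.get(p))

print(pin_mismatches(pins_symbol, pins_datasheet))  # -> [2, 3]
```

Mechanical to check, easy for a tired human to miss, and exactly the kind of "dumb" error that costs a board spin.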
Your response doesn't really add to the conversation so I'll stop here.
Software engineering isn't a thing besides being an ego title.
Software is "ship now, patch later."
Hardware is engineered: it must be designed correctly from the beginning, and it cannot be easily modified in the field.
They are completely different. Software is pure mathematics: you know exactly what goes in, you know exactly what operations it will perform, and you know exactly what will come out. There are no surprises here; it's just a mechanical translation. If you want to, you can even prove that your code does the right thing.
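A tiny illustration of that determinism (the example is mine, not the parent's): for a pure function over a finite domain, you can literally check every input.

```python
def bit_reverse8(x):
    """Reverse the bit order of an 8-bit value."""
    return int(f"{x:08b}"[::-1], 2)

# Exhaustively verify a property over the entire input domain:
# reversing twice is the identity, for all 256 possible inputs.
assert all(bit_reverse8(bit_reverse8(x)) == x for x in range(256))
```

Try doing that to a PCB trace.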
Hardware is physical. Your components don't neatly follow mathematical models - the model is just a rough approximation. Everything interacts with everything else: a signal in one trace will impact a signal in the next trace over - or even on the other side of the board. Your PCB will behave differently if you hold your hand above it - without even touching it. Worst of all, most of your components are black boxes, and you don't have accurate models describing them! What good are Maxwell's equations if there's no way you're ever going to solve them?
You can make a reasonable estimate of how a PCB is going to behave, and you can occasionally do a reasonably-close simulation of some part of your circuit-to-be in isolation. But you need to physically manufacture it to find out whether it behaves the same in practice, and it takes weeks of time and thousands of dollars to manufacture a single prototype. You can't "move fast and break things". You can't afford to do the usual "hit a bug, change tiny thing, recompile, check" cycle you're used to from software programming, and some fancy tooling isn't going to change that reality.