I don't know why anyone wants to shove big heavy applications into browsers. Are they imagining you'd use your phone for this?

Are we not teaching kids how to publish desktop applications these days or what?

reply
It's not that we imagine people will build PCBs on their phones; we also have a CLI, which performs better than the browser playground!

https://docs.tscircuit.com/intro/quickstart-cli

reply
My guess is the cross-platform story.

For cross-platform development we barely have any decent, free development tools. It's a lot easier to find JavaScript developers in most places than C++/C# developers.

reply
It's probably a simple economics thing. You can hire out a contract PCB design for a reasonable cost, and the long pole is getting back physical prototypes. Contrast to HDLs displacing schematic-based designs for ASICs and programmable logic, where simulation allows for rapid development.
reply
I'd say it's significantly different because you can get a physical PCB prototype in days for a few dollars while an ASIC prototype takes months and millions.
reply
I'm not sure if you missed the point. HDL design work is done in simulation for countless iterations before ever making it to a physical prototype. ASIC prototypes come very late in the cycle and are usually a low single-digit number of revs. So the point is that simulation carries you through most of the design feedback cycle, and significant economic and technical effort went into industrial automated place and route, DRC/LVS, etc. I am also ignoring the human-heavy side of layout, especially around analog and RF, which is still more like PCB design.

A PCB can be reworked by hand on site. And those revs can be incorporated cheaply as you say. So the need to do all this programmatically is lowered below the economic threshold to make it all plausible in most cases. This presupposes that modern PCB tooling is itself semi-automated and includes simulation capabilities, but an expert operator is doing a lot of the decision making.

reply
> I don't see circuit-as-code taking off with humans anytime soon

I don't agree with this. Circuits aren't fundamentally more complex than anything else humanity has had to figure out. Most problems in this area seem solvable.

Maxwell's equations have been known for over a century and a half.

For whatever reason, Software Engineering and Hardware Engineering, even though they rely upon the same fundamental physics, are supposedly so very different? And apparently can't be reconciled? No. I don't believe it.

reply
PCB layout is as much art and black magic as it is science. I'm not sure why you dismiss the complexity so easily; this definitely is not just a matter of applying Maxwell's equations.
reply
Layout is a puzzle, especially with high-density layouts, but some of this is ameliorated by high layer counts and fine trace/space boards becoming cheaper. Definitely not black magic. RF layout is black magic; let's not steal their thunder here.
reply
High speed PCBs are RF. At high enough frequencies, traces become waveguides, and the result cannot be predicted analytically. Simulation is your only light in this mess.
reply
I have been lucky to not have had to lay out anything with frequencies of interest over 1 GHz or so. What's your experience been? E.g. types of signals, frequency range, issues you ran into?
reply
Signals that arrive faster than the speed of light should physically allow for that trace length, because you made the corners too sharp, and then, instead of flowing along your path, the electricity creates a magnetic field which induces a current and lets the signal tunnel through non-conductive walls.

High-speed boards cannot be simulated well, because they are far from deterministic. That's what makes them so different from coding.

reply
What was the context you had that issue in? RAM bus?
reply
It's just cross-modal. The list of components is linear, the connections between components form a graph, placements are geometrically constrained, and the overall shape is both geometric and external to the board. So you can't mechanically derive the board from a mere linear textual description of it.

A lot of automagic "AGI achieved" LLM projects have this same problem: it is assumed that a brief literal prompt will fully constrain the end result, so long as it is well thought out. And that's just not how it - reality, or animal brains - works.
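
To make the "cross-modal" point concrete, here is a minimal sketch (all names and data are hypothetical, not from any real EDA tool) of the three different shapes a single board design spans: a linear parts list, a graph of nets, and geometric placements.

```python
# Hypothetical sketch: the mixed data shapes one board design spans.
from dataclasses import dataclass

@dataclass
class Placement:
    x_mm: float
    y_mm: float
    rotation_deg: float

# 1. Linear list of components (the bill of materials)
components = ["U1", "R1", "C1"]

# 2. Connections form a graph: each net joins pins across components
nets = {
    "VCC": [("U1", 1), ("C1", 1)],
    "SIG": [("U1", 2), ("R1", 1)],
}

# 3. Placements are geometric, constrained by outline and clearances
placements = {
    "U1": Placement(10.0, 12.5, 0.0),
    "R1": Placement(14.0, 12.5, 90.0),
    "C1": Placement(8.0, 12.5, 0.0),
}

def pins_on_net(net_name):
    """Walk the graph side of the design."""
    return nets[net_name]

def within_outline(p, width_mm, height_mm):
    """A geometric constraint the linear text form can't express directly."""
    return 0 <= p.x_mm <= width_mm and 0 <= p.y_mm <= height_mm
```

A flat textual prompt constrains the first list well, the graph somewhat, and the geometry barely at all - which is the mismatch being described.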

reply
You need a LOT of context about what the components are and how they're being used in order to route them. Extreme case is an FPGA where a GPIO might be a DAC output or one half of a SERDES diff pair.
reply
Doesn't even have to be that extreme: there is no way the port placement of a Mac Mini can be mathematically derived from a plain English natural-language prompt, and yet that's what they're trying to do. It's just the reality that not everything happens, or could be done, in literal language. I guess it will take a few more years before everyone accepts that.
reply
There's nothing new in EE under the sun. Hasn't been for 40 years, really. EEs min/max a bunch of mathematical equations. There are a lot of them, but it's not nearly as difficult as people think it is. They end up being design constraints, which can be coded, measured, and fed back into the AI.

It's not even been three years since GitHub Copilot was released to developers. And now we're all complaining about "vibe-coding".

reply
Design constraints that have so many factors that people still don't use autorouters for most stuff. You're not getting it: drawing the wires isn't the hard part; understanding the constraints is.
reply
I think we agree with that part.

I once thought software constraints were so hard that a machine would never be able to write programs.

But on the other hand, there are tons of circuit boards designed day after day. If it were super hard, we wouldn't have the tens of thousands of high-speed motherboards that come out year after year.

reply
Are you in HW design by chance?

Software and hardware are fundamentally different in the ability of the engineer to isolate working segments. You can take a piece of code and set up unit tests for it, and if you feel good about your test suite, you can be fairly certain that it will serve your engineering and product goals.
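
As a minimal illustration of that isolation (a hypothetical example, not from this thread): a pure function can be verified in a vacuum, with no system context at all.

```python
# A pure function: no hardware, no teammates, no system context needed.
def duty_cycle(on_us, period_us):
    """Fraction of a PWM period the signal is high."""
    return on_us / period_us

# A unit test that runs in complete isolation:
assert duty_cycle(250, 1000) == 0.25
```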

In hardware engineering, that kind of isolation is a liability. As a practicing electrical design engineer, you should be working tightly with your mechanical and SW/FW/GW teams to optimize what you're building. The massive context and knowledge base you collectively synthesize a design from is a huge benefit, and things like your phone or laptop, or any piece of spaceflight hardware, would not be possible without it.

Example - you can take something like a motor controller. Easy peasy, you say. Grab the best stocked and reasonably priced TI IC off of Digikey and slap its reference design into your copy of Altium Designer. If you give it its own power, thermal, and packaging solution, you can absolutely silo that component and hand it off to an AI agent that builds that piece for you.

Congrats, you've built a standalone motor control module, which you can also buy off of Digikey for a reasonable price that is much cheaper than the time you spent thinking about this.

Also congrats, systems engineering wants your head on a pike and mechanical engineering has taped a picture of your face to a football and is kicking it around in the parking lot.

If you're designing into a product, you're working with the mech and systems teams to create an integrated product that meets the systems/module requirements. The context for this includes not just circuit function, thermal performance, and the EMI situation, but also whether there's room to push back on systems and product as you weigh thermal performance and device longevity against module volume, plus global industrial geopolitics and its effect on part availability (there's a tariff tickbox in Digikey now, and during COVID I had to redesign parts several times before being able to actually build them, because parts became unavailable overnight due to panic buying)... the list is huge.

The cost of "compiling and running against the test suite" is also huge, because it involves typically weeks of answering questions/issues from the fab/assy, waiting for them to build and ship it, doing electrical bring up, actually running the tests you care about...

It is also hard to catch design issues in schematic or layout reviews. We don't have comprehensive and ubiquitous models for electronic devices, so we can't economically simulate this stuff.

This huge cost means "mashing GO until LLM spits out the right code" can't work, at all.

If you really do want to apply AI to EDA software, I think there's actually a really good use case in catching small issues in a board, things that are too small to address in design reviews but have a meaningful impact on bring-up timelines for R&D test articles - stupid things like having a footprint flipped, or drawing the schematic symbol for a slightly different version of the part that has subtly different power pin configurations (my latest fuck-up). That is a fairly tightly containable problem, because our schematics all have links to vendor data and PDF datasheets that should be easily ingestible, and in practice a lot of EDA users are copying pin configs into their tools by hand. I think AI would actually be good at catching the "dumb" errors that are sort of hard for humans to see.
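
That kind of "dumb error" check is mechanical once the data is extracted. Here is a sketch (function name and pin data are invented for illustration; no real EDA tool or datasheet is being modeled) of comparing a schematic symbol's pin map against one pulled from the vendor datasheet, catching exactly the swapped-power-pin case described above.

```python
# Hypothetical check: does the symbol's pin map match the datasheet's?
def pin_mismatches(symbol_pins, datasheet_pins):
    """Return (pin, symbol_function, datasheet_function) for each conflict."""
    issues = []
    for pin, func in datasheet_pins.items():
        if symbol_pins.get(pin) != func:
            issues.append((pin, symbol_pins.get(pin), func))
    return issues

# Symbol drawn for a sibling part variant: pin 4 is GND there, VDDIO here
symbol = {1: "VDD", 2: "OUT", 3: "EN", 4: "GND"}
datasheet = {1: "VDD", 2: "OUT", 3: "EN", 4: "VDDIO"}

print(pin_mismatches(symbol, datasheet))  # [(4, 'GND', 'VDDIO')]
```

The hard part in practice is the extraction from PDF datasheets, which is where an LLM could plausibly earn its keep; the comparison itself is trivial.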

reply
So "not everything happens, or could be done, in literal language" is the part that got you?
reply
18 GHz circuits have been around since 1973; that was the part that got me.

Your response doesn't really add to the conversation so I'll stop here.

reply
> For whatever reason, Software Engineering and Hardware Engineering even though they rely upon the same fundamental physics

Software engineering isn't a thing besides being an ego title.

Software is "ship now, patch later"

Hardware is engineered: it must be correctly designed from the beginning and cannot be easily modified in the field

reply
> For whatever reason, Software Engineering and Hardware Engineering even though they rely upon the same fundamental physics

They are completely different. Software is pure mathematics: you know exactly what goes in, you know exactly what operations it is going to do, and you know exactly what will come out. There are no surprises here, it's just a mechanical translation. If you want to, you can even prove that your code is doing the right thing.

Hardware is physical. Your components don't neatly follow mathematical models - the model is just a rough approximation. Everything interacts with everything else: a signal in one trace will impact a signal in the next trace over - or even on the other side of the board. Your PCB will behave differently if you hold your hand above it - without even touching it. Worst of all, most of your components are black boxes, and you don't have accurate models describing them! What good are Maxwell's equations if there's no way you're ever going to solve them?

You can make a reasonable estimate of how a PCB is going to behave, and you can occasionally do a reasonably close simulation of some part of your circuit-to-be in isolation. But you need to physically manufacture it to find out whether it behaves the same in practice, and it takes weeks of time and thousands of dollars to manufacture a single prototype. You can't "move fast and break things". You can't afford the usual "hit a bug, change tiny thing, recompile, check" cycle you're used to from software programming, and some fancy tooling isn't going to change that reality.

reply