Or: depth-first search of the solution space vs breadth-first (or balanced) search of the solution space.

> Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier

The ideal solution increasingly seems to be encoding everything that differentiates a good engineer from a bad engineer into your prompt.

But at that point the LLM isn’t really the model as much as the medium. And I have some doubts that LLMs are the ideal medium for encoding expertise.

reply
I really don't relate to this...

The way you apply the expert loop is to be the expert. "Can we try this...", "have you checked that...", "but what about...".

To some degree you can try to get agents to work like this themselves, but it's also totally fine (good, actually) to be nudging the work actively.

reply
As you practice, it becomes apparent: you simply keep working on the application architecture yourself.
reply
> However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out

The Pragmatic Programmer book has whole chapters about this. Ultimately, you either solve the problem in analog fashion (whiteboard, deep thinking on a sofa), or you get fast at trying stuff out AND keeping the good bits.

reply
> However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out.

That's not my experience... mostly it's about first interrogating the actual problem with the customer and the conditions under which it occurs. Maybe we even have appropriate logging in our production application? We usually do, because, you know, we usually need to debug things that have already happened.

(If it's new/unreleased code, sure fine, let's find a debugger.)

reply