> Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier
The ideal solution increasingly seems to be encoding everything that differentiates a good engineer from a bad engineer into your prompt.
But at that point the LLM isn’t really the model as much as the medium. And I have some doubts that LLMs are the ideal medium for encoding expertise.
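Concretely, "encoding expertise into the prompt" tends to end up looking something like this. A minimal sketch; the guidelines and `build_messages` are illustrative, not any particular framework's API:

```python
# Sketch of encoding engineering heuristics into a system prompt.
# The guidelines are examples; call/transport to the model is left out.

ENGINEERING_GUIDELINES = """\
Before writing code:
- Restate the problem and the conditions under which it occurs.
- Prefer the smallest change that passes the existing test suite.
- If a change is awkward, first propose the refactor that would make it easy.
After writing code:
- Run the tests; never declare success on a failing suite.
"""

def build_messages(task: str) -> list[dict]:
    # Prepend the encoded expertise to every task the agent sees.
    return [
        {"role": "system", "content": ENGINEERING_GUIDELINES},
        {"role": "user", "content": task},
    ]

if __name__ == "__main__":
    for m in build_messages("Fix the off-by-one in pagination."):
        print(m["role"], "->", m["content"][:60])
```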
The way you apply the expert loop is to be the expert. "Can we try this...", "have you checked that...", "but what about...".
To some degree you can try to get agents to work like this themselves, but it's also totally fine (good, actually) to be nudging the work actively.
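Wired into a loop, that nudging might look like the sketch below, assuming a `run_agent_turn` callable supplied by whatever agent framework you use (a stand-in, not a real API):

```python
# Sketch of the "expert loop": between agent turns, the human injects
# the expert questions ("can we try X?", "have you checked Y?").
# run_agent_turn is a hypothetical hook into your agent framework.

def expert_loop(task: str, run_agent_turn) -> list[dict]:
    history = [{"role": "user", "content": task}]
    while True:
        history.append({"role": "assistant",
                        "content": run_agent_turn(history)})
        nudge = input("nudge (blank to accept): ")
        if not nudge:
            return history  # human is satisfied; stop steering
        history.append({"role": "user", "content": nudge})
```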
The Pragmatic Programmer book has whole chapters about this. Ultimately, you either solve the problem in analog (whiteboard, deep thinking on a sofa), or you get fast at trying stuff out AND keeping the good bits.
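The "try stuff and keep the good bits" route is mechanizable. A rough sketch: the patches come from wherever you generate variants (several agent runs, say), and each one is applied in a throwaway copy of the repo and kept only if the tests pass:

```python
# Sketch: apply each candidate patch in a scratch copy of the repo,
# run the tests, and keep the first one that passes.
import shutil, subprocess, tempfile

def tests_pass(repo: str) -> bool:
    return subprocess.run(["pytest", "-q"], cwd=repo).returncode == 0

def first_passing_patch(repo: str, patches: list[str]) -> str | None:
    for patch in patches:
        scratch = tempfile.mkdtemp()
        shutil.copytree(repo, scratch, dirs_exist_ok=True)
        applied = subprocess.run(["git", "apply", "-"], cwd=scratch,
                                 input=patch.encode())
        if applied.returncode == 0 and tests_pass(scratch):
            shutil.rmtree(scratch)
            return patch  # the good bit worth keeping
        shutil.rmtree(scratch)  # discard the failed attempt
    return None
```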
That's not my experience... mostly it's about first interrogating the actual problem with the customer and the conditions under which it occurs. Maybe we even have appropriate logging in our production application? We usually do, because, you know, we usually need to debug things that have already happened.
(If it's new/unreleased code, sure, fine, let's find a debugger.)
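That log-first step is scriptable too. A minimal sketch, assuming structured JSON logs with illustrative field names (`customer_id`, `error`, `timestamp` are assumptions about your log schema):

```python
# Sketch: pull the events matching the customer and failure conditions
# out of a JSON-lines log file before reaching for a debugger.
import json, sys

def matching_events(path: str, customer_id: str):
    with open(path) as f:
        for line in f:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip non-JSON lines
            if event.get("customer_id") == customer_id and "error" in event:
                yield event

if __name__ == "__main__":
    for e in matching_events("app.log", sys.argv[1]):
        print(e.get("timestamp"), e["error"])
```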