That's not entirely true. For coding, I specifically want the LLM to tell me that my design is the issue and to stop helping me pour more code onto the pile of brokenness.
Ideally, sure, the LLM could point out that your line of questioning stems from a bad design, but has anyone ever actually experienced that?
How would it even know whether a given line of reasoning fails to terminate at all?
I just found that ChatGPT refuses to prove something by reverse conclusion.
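To make concrete what I was asking for (assuming "reverse conclusion" here means arguing via the contrapositive), here's a minimal LaTeX sketch of the sort of argument I had in mind; the even-squares statement is just my own illustrative example, not the original prompt:

\documentclass{article}
\usepackage{amsmath,amsthm}
\begin{document}
% Contraposition: to show P implies Q, prove not-Q implies not-P instead.
\begin{proof}[Proof by contraposition]
To show that if $n^2$ is even then $n$ is even, prove the contrapositive:
if $n$ is odd, then $n^2$ is odd. Write $n = 2k+1$; then
\[ n^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1, \]
which is odd. Hence, if $n^2$ is even, $n$ must be even.
\end{proof}
\end{document}

ChatGPT would happily prove the statement directly, but pushed back when asked to argue from the negated conclusion like this.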