That's the answer from his LLM, which decomposed the question and built the answer following the OP's prompt, obviously. I think you didn't get it.

> Your objective has an explicit instruction that the car has to be present for a wash.

Which is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?

In this case, with such a simple task, why even bother prompting an LLM at all?

The whole point of this question is to show that, quite often, the LLM fails to discover implicit assumptions.
