Your objective has an explicit instruction that the car has to be present for a wash. Quite a difference from the original phrasing, where the model has to figure that out on its own.
reply
That's the answer from his LLM, which decomposed the question and built the answer following the OP's prompt, obviously. I think you didn't get it.
reply
> Your objective has explicit instruction that car has to be present for a wash.

Which is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?

reply
In this case, with such a simple task, why even bother to prompt it?

The whole point of this question is to show that, quite often, implicit assumptions are not discovered by the LLM.

reply
Interesting, what were the instructions if you don't mind sharing?
reply
"You're holding it wrong."
reply