That is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?
The whole point of this question is to show that, quite often, the LLM fails to surface implicit assumptions.