I've been wondering for years how to get whatever LLM I'm using to ask me questions instead of just filling the gaps with assumptions and sprinting off.
User-configurable agent instructions haven't worked consistently for me, and the underlying system prompts may even instruct the model not to ask questions.
Sure, there's a practical limit to how much clarification it ought to request, but never asking at all is just annoying.
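For concreteness, here's a minimal sketch of the kind of user-configurable instruction I mean, wired up through the OpenAI Python client. The model name and the exact wording are placeholders, not a recommendation, and in my experience an instruction like this is exactly what gets ignored as often as not.

```python
# Sketch: asking the model to clarify before answering.
# Assumes the OpenAI Python client; model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

ASK_FIRST = (
    "Before answering, check whether the request is ambiguous or missing "
    "key details. If it is, ask me one or two clarifying questions instead "
    "of guessing. Only answer once the requirements are clear."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": ASK_FIRST},
        {"role": "user", "content": "Write a script that cleans up my data."},
    ],
)

print(response.choices[0].message.content)
```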