points
I mean, only enabling trusted tools does not help defend against prompt injection, does it?
The vector isn't the tool, after all, it's the LLM itself.