Unless you are running a local model, your prompts are almost certainly logged by your inference provider, and are only a subpoena away.