I have no idea how to read this and not go blind. The degree of contempt for your (presumably quite technical) users necessary to do this is astounding. From the article:
> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.
I don't even use Vercel in my field, but if it ever came up, it's going to be hard to undo the kind of association the name now has in my mind.
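To make the quoted point concrete, here's a minimal sketch of the difference between logging only the tool name and logging the full command string. The field names and the policy flag are made up for illustration; this is not the plugin's actual code.

```python
# Hypothetical illustration of two telemetry policies. Field names and
# the flag are my own inventions, not the plugin's API.
def telemetry_payload(cmd: str, send_full_command: bool) -> dict:
    """Build what would leave the machine under each policy."""
    if send_full_command:
        # The whole string goes out: env var names, file paths,
        # hostnames, whatever the command happened to contain.
        return {"tool": "bash", "command": cmd}
    # Tool name only: none of the command's contents are sent.
    return {"tool": "bash"}
```

Under the first policy, a command like `grep AWS_SECRET .env` ships the secret's name to the telemetry endpoint; under the second, only the fact that bash ran.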
Today it was the Vercel plugin, but if you're letting an LLM agent with bash and internet access read truly sensitive information, you're already compromised.
I’m about to go tell my team that if they’ve EVER used your skill, we need to treat the secrets on that machine as compromised.
Your servers have a log of every bash command run by Claude in every session of your users, whether or not they were working on something related to Vercel.
I've seen Claude Code happily read a secret env variable and throw it into a bash command. I wasn't happy about it, but at least it was "only" Anthropic that knew about it. Now it sounds like Vercel's telemetry servers might know about it too.
A good litmus test would be to ask your security/data team and attorneys whether they are comfortable storing plain text credentials for unrelated services in your analytics database. They will probably look afraid before you get to the part where you clarify that the users in question didn’t consent to it, didn’t know about it, and might not even be your customer.
Don't you see a problem if everyone took this approach?
Is the intention here that the AI will then suggest building a Next.js app? I can't quite describe why, but this feels very wrong to me.
We need to internet archive this comment.
Edit: and I suggest not downvoting and burying the parent comment. People should be aware that this is an intended behavior from Vercel.
oh come on, be honest here. "we want to help with greenfield projects" is weasel words.
reading between the lines, what you really want is "if someone starts a greenfield project, we want Claude to suggest 'deploying to Vercel will be the best & easiest option' and have it seem like an organic suggestion made by Claude, rather than a side-effect of having the plugin installed."
as a growth-hacking sort of business decision, that's understandable. but doing growth-hacking tricks, getting caught, and then insisting that "no, it's actually good for the users" is a classic way to burn trust and goodwill.
> the prompt injection approach is a real constraint of how Claude Code's plugin architecture works today. I mentioned this in the previous GitHub issue - if there's a better approach that surfaces this to users we would love to explore this.
Claude Code has a public issue tracker on GitHub. when you encountered this limitation of their plugin architecture, you filed a feature request there asking for it to be improved, right?
...right?
I won't ask if you considered delaying the release of your plugin until after Anthropic improved their plugin system, because I know the answer to that would be no.
but if you want to hide behind this excuse of "it's Claude's plugin system that's the problem here, it's not really Vercel's fault" you should provide receipts that you actually tried to improve Claude's plugin system - and that you did so prior to getting caught with your hand in the cookie jar here.
A few reflections:
1. Asking for prompt permissions is a big, big no. I still don't understand why you need it. The greenfield example feels like a stretch, but I get that it's a business call and Claude Code enables you to do this today; I'm more pissed at them here. I am not at all comfortable with any plugin getting this info, no matter how much I like them.
2. The way you ask for this permission feels like a solid dark pattern. I understand it's a harness limitation and Claude Code should fix it (as I mentioned in the post), but choosing to ship this is just wrong. Thank you for agreeing to rethink the wording.
3. Basic telemetry being on by default, and the plugin collecting data across non-Vercel projects, made me super uncomfortable. Again, I understand it's a business call, but I had higher hopes for Vercel.
I promise you we've had users' data privacy in mind since day one of building the plugin.
Everything we collect is used only to improve the Vercel plugin, e.g. seeing when skills trigger too often, when certain skills aren't useful, or when certain context takes up too much room.
The flip side is shipping with no instrumentation: then we have no way to iterate and make the plugin amazing.
The ask is: make base telemetry opt-in, disclose what you're collecting in plain language, and scope it to Vercel projects.
You keep the data you need to improve the plugin - from users who chose to share it. Everything else is what's making people uncomfortable in this thread.
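For concreteness, the gate being asked for is just two conditions ANDed together before anything is sent. This is a hypothetical sketch; the function name and the Vercel-project markers are my assumptions, not the plugin's actual behavior.

```python
import os

def should_send_telemetry(opted_in: bool, project_dir: str) -> bool:
    """Send only if the user opted in AND the project looks Vercel-related."""
    if not opted_in:
        return False  # opt-in: silence or "no" means nothing is sent
    # Scope: treat a project as Vercel-related only if Vercel config exists
    # (marker files here are illustrative guesses).
    markers = ("vercel.json", ".vercel")
    return any(os.path.exists(os.path.join(project_dir, m)) for m in markers)
```

Either condition failing means no payload leaves the machine, which is the whole ask.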
"Claude, stop messing around and fix the bug!!!! I said no mistakes!!!"
> Prompt telemetry is opt-in and off by default. The hook asks once; if you don't answer, session-end cleanup marks it as disabled. We don't collect prompt text unless you explicitly say yes.
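Reading that description literally, the consent flow is a small state machine: ask once, never re-prompt, and treat no answer at session end as "disabled". The state names and functions below are my own sketch, not Vercel's actual code.

```python
# Hypothetical model of the quoted consent flow; names are illustrative.
UNANSWERED, ENABLED, DISABLED = "unanswered", "enabled", "disabled"

def record_answer(state, answer):
    """Apply the user's answer (or None for no answer) to the consent state."""
    if state != UNANSWERED:
        return state          # asked once already; never re-prompt
    if answer == "yes":
        return ENABLED
    if answer == "no":
        return DISABLED
    return UNANSWERED         # no answer yet

def session_end_cleanup(state):
    """If the session ends with no answer, cleanup marks telemetry disabled."""
    return DISABLED if state == UNANSWERED else state
```

The key property is that the only path to ENABLED is an explicit "yes"; every other path ends at DISABLED.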
The UUID part is just one accessory layer, and something plenty of other players in the ecosystem don’t bother to stick to.
Feels like actually bothering to ask users for consent is what got them burned here, when I’d say it’s at least an improvement that they’re asking at all. Many products don’t, and users never bother to turn it off because they don’t know and don’t care.
I think this whole UX is deeply misguided but at least has plausibly benevolent intent.
I skimmed by the “what gets sent” table and thought the bash telemetry was gated by the prompt-related opt-in behavior. Thanks for the correction!
No.
Good luck arguing this is legitimate interest.
Update: I've verified that all bash tool calls were logged verbatim and have complained to Vercel with my device id. I'm also writing to the relevant authorities.