> NEVER include in commit messages or PR descriptions:
> - The phrase "Claude Code" or any mention that you are an AI
> - Co-Authored-By lines or any other attribution
>
> BAD (never write these):
> - 1-shotted by claude-opus-4-6
> - Generated with Claude Code
> - Co-Authored-By: Claude Opus 4.6 <…>
This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories. [0]

[0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...
Per https://code.claude.com/docs/en/settings#attribution-setting... you can put

  "includeCoAuthoredBy": false,

in your settings.json.
~/.claude/settings.json:

  {
    "attribution": {
      "commit": "",
      "pr": ""
    }
  }
The rest of the prompt is pretty clear that it's talking about internal use. Claude Code users aren't the ones worried about leaking "internal model codenames", an "unreleased model opus-4-8", or Slack channel names. Though nobody would want that crap in their generated docs/code anyway.
Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.
- "Fix bug found while testing with Claude Capybara"
- "1-shotted by claude-opus-4-6"
- "Generated with Claude Code"
- "Co-Authored-By: Claude Opus 4.6 <…>"
This makes sense to me as an explanation of what they intend by "UNDERCOVER":
"Write commit messages as a human developer would — describe only what the code change does."
VS Code has a setting that promises to change the prompt it uses to generate commit messages, but it mostly ignores my instructions, even very literal ones like “don’t use the words ‘enhance’ or ‘improve’”. And oddly having it set can sometimes result in Cyrillic characters showing up at the end of the message.
Ultimately I stopped using it, because editing the messages cost me more time than it saved.
/rant
Writing and reading paragraphs of design discussion in a commit message is not something that seems common.
What you're describing here is a design. The most important parts of a design are the decisions and their reasoning.
e.g. "we decided on tool/library pattern X over tool/library/pattern Y because Z" – that is a design, usually discussed outside (and before) a commit message.
You discuss these decisions with others, document the discussion and decision, and then you have a design and can start writing code.
Let me ask you this: suppose you have a task that needs to be done eventually, and you want to write down some ideas for it, but don't want to start coding right now. Where do you put those ideas? How do you link them to that specific task?
> Commit f9205ab3 by dkenyser on 2026-3-31 at 16:05:
> Fixed the foobar bug by adding a baz flag - dkenyser
Because it already identified you in the commit description. The reason to add a signature to the message is that someone (or something) that isn't you is using your account, which seems like a bad idea.
[edit] Never mind, find in page fail on my end.
https://code.claude.com/docs/en/settings#attribution-setting...
Edit: Everyone is responding "comments are good" and I can't tell if any of you actually read TFA or not
> “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”
This is just revealing operational details the agent doesn't need to know to set `MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3`
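A minimal sketch of the guard that stat motivates; the constant is the one from the prompt, but the surrounding names and the session API are assumptions:

  MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3  # the one detail the agent needs

  def autocompact_with_limit(session) -> None:
      """Stop retrying auto-compaction after a run of consecutive failures.

      `session` and its `autocompact()` method are hypothetical stand-ins.
      """
      failures = 0
      while failures < MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES:
          try:
              session.autocompact()
              return  # success; stop retrying
          except RuntimeError:
              failures += 1
      # Give up here rather than looping into 50+ failures and wasted API calls.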
Why? An agent may or may not read docs. It may or may not use skills or tools. It will always read comments "in the line of sight" of the task.
You get free long term agent memory with zero infrastructure.
Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.
This only gets worse when the LLM somehow captures all that information better than certain human colleagues do, rewarding the additional effort.
That's revealing waaaay more than the agent needs to know.
The agent is not going to know to look for a separate file to update unless instructed, so now your file is out of sync. Code comments keep everything in the line of sight, which makes it easy and foolproof.
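A toy example of that pattern: design context interleaved as a comment exactly where an agent (or a colleague) will read it. Every specific below is invented for illustration:

  # CONTEXT (kept inline so anything reading this file sees it):
  # Retries are capped at 3 because the upstream API throttles bursts;
  # raising the cap caused timeouts before. If you change it, update the
  # backoff below too. (Values and history here are hypothetical.)
  MAX_RETRIES = 3
  BACKOFF_SECONDS = 2.0  # keep >= the upstream's minimum retry interval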
Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.
> This was one of the first things people noticed in the HN thread.
> The obvious concern, raised repeatedly in the HN thread
> This was the most-discussed finding in the HN thread.
> Several people in the HN thread flagged this
> Some in the HN thread downplayed the leak
When the original HN post is already at the top of the front page... why do we need a separate blog post that just summarizes the comments?
Or, more simply: Because folks wanted it enough to upvote it.
On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.
The pet you get is generated based off your account UUID, but the algorithm is right there in the source, and it's deterministic, so you can check ahead of time. Threw together a little app to help, not to brag but I got a legendary ghost https://claudebuddychecker.netlify.app/
(I didn't think to include a UUID checker though - nice touch)
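For the curious, a sketch of how this kind of deterministic generation usually works: hash the account UUID and index into fixed tables. The real algorithm is in the Claude Code source; the tables and weights below are invented:

  import hashlib
  import uuid

  SPECIES = ["cat", "dog", "capybara", "ghost"]             # hypothetical table
  RARITIES = ["common"] * 7 + ["rare"] * 2 + ["legendary"]  # hypothetical weights

  def pet_for_account(account_uuid: str) -> tuple[str, str]:
      """Same UUID in, same pet out, so you can check ahead of time."""
      digest = hashlib.sha256(uuid.UUID(account_uuid).bytes).digest()
      return SPECIES[digest[0] % len(SPECIES)], RARITIES[digest[1] % len(RARITIES)]

  print(pet_for_account("12345678-1234-5678-1234-567812345678"))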
So much for LangChain and LangGraph!! I mean, if Anthropic themselves aren't using them and are just using a prompt, what's the big deal about LangChain?
LangGraph is for multi-agent orchestration as state graphs. That isn't useful for Claude Code, since there is no multi-agent chaining: it uses a single coordinator agent that spawns subagents on demand, which is basically too dynamic to constrain to state graphs.
What's the value-add over doing it with just Python code? I mean, you can represent any logic in terms of graphs and states...
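To make the question concrete, here's the coordinator/subagent pattern in plain Python with no graph framework; `call_model` is a hypothetical stand-in for a real LLM API call:

  def call_model(prompt: str) -> str:
      """Stand-in for a real LLM API call."""
      return f"<response to: {prompt[:40]}>"

  def run_coordinator(task: str) -> str:
      """Spawn subagents chosen at runtime; the 'graph' is ordinary control flow."""
      plan = call_model(f"Break this task into subtasks:\n{task}")
      results = [
          call_model(f"Complete this subtask:\n{line}")
          for line in plan.splitlines()
          if line.strip()
      ]
      return call_model("Combine these results:\n" + "\n".join(results))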
Interesting based on the other news that is out.
I’d argue that in this case, it isn’t. Exhibit 1 (from the earlier thread): https://github.com/anthropics/claude-code/issues/22284. The user reports that this caused their account to be banned: https://news.ycombinator.com/item?id=47588970
Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).
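A sketch of that two-stage approach; the pattern and the sentiment check here are toy placeholders, not Anthropic's actual ones:

  import re

  # Toy placeholder; the actual frustration regex in the prompt is more elaborate.
  FRUSTRATION_HINTS = re.compile(r"\b(wtf|ffs|useless|garbage)\b", re.IGNORECASE)

  def run_sentiment_model(message: str) -> bool:
      """Stand-in for an actual sentiment classifier."""
      return "thanks" not in message.lower()

  def looks_frustrated(message: str) -> bool:
      """Cheap regex prefilter: no match, no expensive model call."""
      if not FRUSTRATION_HINTS.search(message):
          return False
      # Only matches pay for real analysis, weeding out false positives
      # like "this flag is useless here" that a bare regex would ban on.
      return run_sentiment_model(message)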
I don’t get it. What does this mean? I can use Claude Code now without anyone knowing it is Claude Code?
It's written to _actively_ avoid any signs of AI-generated code when "in a PUBLIC/OPEN-SOURCE repository".
Also, it's not about you. Undercover mode only activates for Anthropic employees (it's gated on USER_TYPE === 'ant', which is a build-time flag baked into internal builds).
I'm still inclined to think people might be overreacting to that bit since it seems to be for anthropic-only to prevent leaking internal info.
But I did read the prompt, and it did say to hide the fact that you are AI.
But, I also get Anthropic's side that when they're contributing they don't want their internals leaked. If it had been left at that, that's fine, but having it pretend like it's not AI at all rubs me a little bit the wrong way. Why try to hide it?
Are you referencing the use of Claude subscription authentication (oauth) from non-Claude Code clients?
That’s already possible, nothing prevents you from doing it.
They are detecting it on their backend by profiling your API calls, not by guarding with some secret crypto stuff.
At least that’s how things worked last week xD
https://alex000kim.com/posts/2026-03-31-claude-code-source-l...
Ah, it seems that Bun itself signs the code. I don't understand how this can't be spoofed.
They are most likely using these as after-the-fact indicators and have automation that kicks in after a threshold is reached.
Now that the indicators have leaked, they will most likely be rotated.
Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Had anyone seen fake tool calls working with this model?
They would need to lie about consuming the tokens at one point in order to use them at another, so that the token counting stayed precise.
But that doesn't make sense, because if someone counted the tokens by capturing the session, the count would certainly not match what was charged.
Unless they charged for the fake tools anyway, so you'd never know they were there.
They don't want you using your subscription outside of Claude Code. Only API key usage is allowed.
Google has also doubled down on this; OpenAI is the only one that explicitly allows you to do it.
>Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.
I'm reading an LLM written write up on an LLM tool that just summarizes HN comments.
I'm so tired man, what the hell are we doing here.
Approximately how much would this actually save?
Granted, there's a small counterargument for mythos, which is that it's probably going to be API-only, not subscription.
The more code gets generated by AI, won’t that mean taking source code from a company becomes legal? Isn’t it true that works created with generative AI can’t be copyrighted?
I wonder if large companies have thought of this risk. Once a company’s product source code reaches a certain percentage of AI generation, it no longer has copyright. Any employee with access can just take it and sell it to someone else, legally, right?
You've got a business, and you sent me junk mail, but you made it look like some official government thing to get me to open it? I'm done, just because you lied on the envelope. I don't care how badly I need your service. There's a dozen other places that can provide it; I'll pick one of them rather than you, because you've shown yourself to be dishonest right out of the gate.
Same thing with an AI (or a business that creates an AI). You're willing to lie about who you are (or have your tool do so)? What else are you willing to lie to me about? I don't have time in my life for that. I'm out right here.
Similarly, would you consider it to be dishonest if my human colleague reviewed and made changes to my code, but I didn’t explicitly credit them?
Even if the code is line-for-line identical, the difference is in how much trust I am willing to give the code. If I have to work in the neighborhood of that code, I need to know what degree of skepticism I should be viewing it with.
ISTM the most efficient and objective solution is to invest in AI more on both sides of the fence.
Do you not think it is an overreaction to panic like this if I can do exactly what the undercover mode does by simply asking Claude?