upvote
I cringe every time I see Claude trying to co-author a commit. The git history is expected to track accountability and ownership, not your Bill of Tools. Should I also co-author my PRs with my linter, intellisense and IDE?
reply
A whole lot of people find LLM code to be strictly objectionable, for a variety of reasons. We can debate the validity of those reasons, but I think that even if those reasons were all invalid, it would still be unethical to deceive people by a deliberate lie of omission. I don't turn it off, and I don't think other people should either.
reply
My tools just don't add such comments. I don't know why I would care to add that information. I want my commits to be what and why, not what editor someone used. It seems like cruft to me. At least at my workplace though, it's just assumed now that you are using the tools.
reply
For the purpose of disclosure, it should say “Warning: AI generated code” in the commit message, not an advertisement for a specific product. You would never accept any of your other tools injecting themselves into a commit message like that.
reply
My last commit is literally authored by dependabot.
reply
If a whole lot of people thought that running code through a linter or formatter was objectionable, I'd probably just dismiss their beliefs as invalid rather than add the linter or formatter as a co-author to every commit.
reply
Like frying a veggie burger in bacon grease. Just because somebody's beliefs are dumb doesn't mean we should be deliberately tricking them. If they want to opt out of your code, let them.
reply
Likewise. I don’t mind that people use LLMs to generate text and code. But I want any LLM generated stuff to be clearly marked as such. It seems dishonest and cheap to get Claude to write something and then pretend you did all the work yourself.
reply
The reason I want it to be marked as such is because I review AI code differently than human code - it just makes different kinds of mistakes.
reply
You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc.
reply
I guess if enough people use it, doesn’t the tag become kind of redundant?

Almost like writing “Code was created with the help of IntelliSense”.

reply
Well is it actually being used as a tool where the author has full knowledge and mental grasp of what is being checked in, or has the person invoked the AI and ceded thought and judgment to the AI? I.e., I think in many cases the AI really is the author, or at least co-author. I want to know that for attribution and understanding what went into the commit. (I agree with you if it's just a tool.)
reply
I have worked with quite a few people committing code they didn't fully understand.

I don't mean this as a drive-by bazinga either; the practice of copying code, or of thinking you understand it when you don't, is nothing new.

reply
Pre-LLM, it was much easier for reviewers to discern that. Now, the AI-generated code can look like it was well thought out by somebody competent, when it wasn't.
reply
Have you ever reviewed an AI-generated commit from someone with insufficient competence that was more compelling than their work would be if done unassisted? In my experience it's exactly the opposite: AI generation aggravates existing blind spots, because (excluding malicious incompetence) devs will generally try to understand what they're doing if they're doing it without AI.
reply
I think the issue is not that the patches are more compelling, but that they're significantly larger and more frequent.
reply
I try to understand what the LLM is doing when it generates code. I understand that I'm still responsible for the code I commit even if it's LLM-generated, so I may as well own it.
reply
If you accept the code generated by them nearly verbatim, absolutely.

I don't understand why people consider Claude-generated code to be their own. You authored the prompts, not the code. Somehow this was never a problem with pre-LLM codegen tools, like macro expanders, IPC glue, or type bundle generators. I don't recall anybody desperately removing the "auto-generated do not edit" comments those tools would nearly always slap at the top of each file or taking offense when someone called that code auto-generated. Back in the day we even used to publish the "real" human-written source for those, along with build scripts!

reply
[delayed]
reply
Sent from my iPhone
reply
> Should I also co-author my PRs with my linter, intellisense and IDE?

Absolutely. That would be hilarious.

reply
I suspect vibe coders might actually want you to consider turning to Claude for accountability and ownership rather than the human orchestrator.

If your linter is able to action requests, then it probably makes sense to add it too.

reply
You have copyright to a commit authored by you. You (almost certainly) don't have copyright (nobody has) to a commit authored by Claude.
reply
Where is there any legal precedent for that?

In some jurisdictions (e.g. the UK) the law is already clear that you own the copyright. In the US it is almost certain that you will be the author. The reports of cases saying otherwise have been misreported - the courts found that the AI could not own the copyright.

reply
It's beyond obvious that a LLM cannot have copyright, any more than a cat or a rock can. The question is whether anyone has or if whatever content generated by a LLM simply does not constitute a work and is thus outside the entire copyright law. As far as I can see, it depends on the extent of the user's creative effort in controlling the LLM's output.
reply
It is not "beyond obvious" that a cat cannot have copyright, given the lawsuit about a monkey holding copyright [1], and the way PETA tried to use that case as precedent to establish that any animal can hold copyright.

[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

reply
It may be obvious to you, but it has led to at least one protracted court case in the US: Thaler v. Perlmutter.

> The question is whether anyone has or if whatever content generated by a LLM simply does not constitute a work and is thus outside the entire copyright law.

It is going to vary with copyright law. In the UK, the question of computer-generated works is addressed directly by copyright law, and the answer is that "the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken".

It's also not a simple case of LLM-generated vs. human-authored. How much work did the human do? What creative input was there? How detailed were the prompts?

In jurisdictions where there are doubts about the question, I think code is a tricky one. If the argument is that prompts are just instructions to generate code, and therefore the code is not covered by copyright, then you could also argue that source code is just instructions to a compiler, and the resulting binary is therefore not covered by copyright either.

reply
According to the law, if I use Claude to generate something, I hold the copyright, provided Claude didn't verbatim copy another project.
reply
>Where is there any legal precedent for that?

Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.

And in the US constitution,

https://constitution.congress.gov/browse/article-1/section-8...

Authors and inventors, courts have ruled, mean people - only people. A monkey taking a selfie with your camera doesn't mean you own the copyright. An AI generating code on your computer is likewise devoid of any copyright protection.

reply
The Thaler ruling addresses a different point.

The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.

The position with a monkey using your camera is similar, and you may or may not hold the copyright depending on what you did - was it pure accident, or did you set things up? Opinions on the well-known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus) they do hold the copyright.

reply
Guidance on AI is unambiguous.

https://www.copyright.gov/ai/

AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.

reply
Can you tell me where exactly in the documents you link to it says that?
reply
Anthropic could at least make a compelling case for the copyright.

It becomes legally challenging with regard to ownership if I ever use work equipment for a personal project. If the project later takes off, my employer could very well try to claim ownership in its entirety simply because I ran a test on their machine once (yes, there's a whole Silicon Valley season about it).

I don't know if they'd win, but Anthropic absolutely could claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers who very likely never read what we agreed to in the signup flow.

reply
Using work equipment for a personal project only matters because you signed a contract giving all of your IP to your employer for anything you did with (or sometimes without) your employer's equipment.

Anthropic's user agreement does not have a similar agreement.

reply
I think all you need to do is claim that your girlfriend is your laptop. /s
reply
Eh, there are some very good reasons[0] why you would do better to track your usage of LLM-derived code (primarily legal ones)

[0]: https://www.jvt.me/posts/2026/02/25/llm-attribute/

reply
Legally speaking, if you're not sure of the risk, you don't document it.
reply
[dead]
reply
I would have expected people (maybe a small minority, but that includes myself) to have already instructed Claude to do this. It’s a trivial instruction to add to your CLAUDE.md file.
reply
It's a config setting (probably the same end result though):

https://code.claude.com/docs/en/settings#attribution-setting...

reply
I always assumed that if I tried to tell it, that it'd say "I'm sorry, Dave. I'm afraid I can't do that"
reply
I guess our system prompt didn't work, if folks are having to add it manually into their own CLAUDE.md files...
reply
My mistake - it was the configuration setting that did it. Nevertheless, you can control many other aspects of its behavior by tuning the CLAUDE.md prompt.
reply
It's less about pretending to be a human and more about not inviting scrutiny and ridicule toward Claude if the code quality is bad. They want the real human to appear responsible for accepting Claude's poor output.
reply
That’s how I’d want it to be honestly. LLMs are tools and I’d hope we’re going to keep the people using them responsible. Just like any other tools we use.
reply
It’s also pretty damn obvious when LLMs write code. Nobody out here is commenting every method in perfect punctuation and grammar.
reply
That’s ultimately the right answer, isn’t it? Bad code is bad code, whether a human wrote it all, or whether an agent assisted in the endeavor.
reply
Yeah in my team everyone knows everyone is using LLMs. We just have a rule. Don't commit slop. Using an LLM is no excuse for committing bad code.
reply
The code has a stated goal of avoiding leaks, but then the actual implementation becomes broader than that. I see two possible explanations:

* The authors made the code very broad to improve its ability to achieve the stated goal

* The authors have an unstated goal

I think it's healthy to be skeptical but what I'm seeing is that the skeptics are pushing the boundaries of what's actually in the source. For example, you say "says on the tin" that it "pretends to be human" but it simply does not say that on the tin. It does say "Write commit messages as a human developer would" which is not the same thing as "Try to trick people into believing you're human." To convince people of your skepticism, it's best to stick to the facts.

reply
By "says on the tin," I was referring to the name ("undercover mode") and the instruction to "not blow your cover." If pretending to be a human is not the cover here, what is? Additionally, does Claude Code still admit that it's an LLM when this prompt is active, as you suggest, or does it pretend to be a human, as the prompt tells it to?
reply
Why are you assuming the actual implementation was authored by a human?
reply
My comment makes no such assumption.
reply
I've seen it say "co-authored by Claude Code" on my PRs... and I agree, I don't want it to do that
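For what it's worth, you can audit a repo for those trailers after the fact; a rough sketch (assumes git 2.22+ for trailer interpolation in `--format`):

```shell
# Show each commit hash alongside any Co-authored-by trailer values
git log --format='%h %(trailers:key=Co-authored-by,valueonly)'
```

Pipe it through grep for "Claude" to find the commits in question.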
reply
But I want to see Claude on the contributor list so that I immediately know if I should give the rest of the repo any attention!
reply
So turn it off.

"includeCoAuthoredBy": false,

in your settings.json.
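For completeness, a minimal settings.json containing only that option would look roughly like this (a sketch; `includeCoAuthoredBy` is the key the older docs used):

    {
      "includeCoAuthoredBy": false
    }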

reply
They changed it to `attribution`, but yes you can customize this
reply
Why not? What's wrong with honesty?
reply
Yeah I much prefer it commit the agent, and I would also like if it committed the model I was using at the time.
reply
I guess I'm sometimes dishonest when it suits me
reply
deleted
reply
My first reaction is that they are using this to take advantage of OSS reviewers for in the wild evals.
reply
You can already turn off "Co-Authored-By" via Claude Code config. This is what their docs show:

~/.claude/settings.json

    {
      "attribution": {
        "commit": "",
        "pr": ""
      }
    }
The rest of the prompt is pretty clear that it's talking about internal use.

Claude Code users aren't the ones worried about leaking "internal model codenames" nor "unreleased model opus-4-8" nor Slack channel names. Though, nobody would want that crap in their generated docs/code anyways.

Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.

reply
None of this is really worrying; this is a pattern implemented in a similar way by every single developer using AI to write commit messages, after noticing how exceptionally noisy the self-attribution is. Anthropic's views on AI safety and alignment with human interests don't suddenly get thrown out with the bathwater because of leaked internal tooling that is functionally identical to a basic prompt in a mere interface (and not a model). I don't really buy all the forced "skepticism" on this thread, tbh.
reply
People make fun of the idea that we should say magic words when interacting with LLMs. How frustrated can Claude get? /s
reply