For the purpose of disclosure, it should say “Warning: AI generated code” in the commit message, not an advertisement for a specific product. You would never accept any of your other tools injecting themselves into a commit message like that.
reply
My last commit is literally authored by dependabot.
reply
well, you 100% know what dependabot does
reply
Leaves you open to vulnerabilities in overnight builds of NPM packages that increasingly happen due to LLM slop?
reply
My tools just don't add such comments. I don't know why I would care to add that information. I want my commits to be what and why, not what editor someone used. It seems like cruft to me. Why would I add noise to my data to cater to someone's neuroticism?

At least at my workplace though, it's just assumed now that you are using the tools.

reply
well, if I know a specific LLM has certain tendencies (e.g. some model is likely to introduce off-by-one errors), I would know what to look for in code review

I mean, of course I would read most of the code during review, but as a human, I often skip things by mistake

reply
If a whole lot of people thought that running code through a linter or formatter was objectionable, I'd probably just dismiss their beliefs as invalid rather than add the linter or formatter as a co-author to every commit.
reply
A linter or a formatter does not open you up to compliance and copyright issues.
reply
Linters and formatters are different tools than LLMs. There is a general understanding that linters and formatters don't alter the behavior of your program. And even so, most projects require a particular linter and formatter to pass before a PR is accepted, and the CI pipeline will flag the PR if either one fails on the code you wrote. That particular linter and formatter are very likely to be mentioned somewhere in the project's configuration, or at least in its README.
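A minimal sketch of the kind of CI gate described here, with a trivial stand-in "linter" (rejecting trailing whitespace) run over a demo tree; a real project would substitute its configured formatter or linter:

```shell
# Stand-in lint gate: fail the pipeline if any file in the tree
# has trailing whitespace, the way a real formatter check gates a PR.
mkdir -p demo-src
printf 'a clean line\n' > demo-src/main.txt

if grep -rnI ' $' demo-src/; then
  echo "lint failed: trailing whitespace found" >&2
  exit 1
fi
echo "lint passed"
```

With a clean tree this prints "lint passed" and exits 0; any hit makes the pipeline fail, which is the behavior PRs are gated on.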
reply
Like frying a veggie burger in bacon grease. Just because somebody's beliefs are dumb doesn't mean we should be deliberately tricking them. If they want to opt out of your code, let them.
reply
> frying a veggie burger in bacon grease

hmm gotta try that

reply
I'm not really sure that's any of their business.
reply
Likewise. I don’t mind that people use LLMs to generate text and code. But I want any LLM generated stuff to be clearly marked as such. It seems dishonest and cheap to get Claude to write something and then pretend you did all the work yourself.
reply
So if I use Claude to write the first pass at the code, make a few changes myself, ask it to make an additional change, change another thing myself, then commit it — what exactly do you expect to see then?
reply
The reason I want it to be marked as such is because I review AI code differently than human code - it just makes different kinds of mistakes.
reply
You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc.
reply
+1. If we’re at an early enough stage in the agentic curve that reading commit messages still matters, I don’t want those cluttered with meaningless boilerplate (“co-authored by my tools!”).

But at this point I am more curious whether git will continue to be the best tool.

reply
I'm only beginning to use "agentic" LLM tools atm because we finally gained access to them at work, and the rest of my team seems really excited about using them.

But for me at least, a tool like Git seems pretty essential for inspecting changes and deciding which to keep, which to reroll, and which to rewrite. (I'm not particularly attached to Git but an interface like Magit and a nice CLI for inspecting and manipulating history seem important to me.)

What are you imagining VCS software doing differently that might play nicer with LLM agents?

reply
I guess if enough people use it, doesn’t the tag become kind of redundant?

Almost like writing “Code was created with the help of IntelliSense”.

reply