If only we had this tech back when managers were looking at how many lines of code you were committing weekly as a performance metric.
reply
Now they're looking at your token consumption, which is even more gameable (and stupid).
reply
That is a skill issue, though. I have rules for my agents to write compositional, reusable, modular, small files and to avoid boilerplate: config-driven, single source of truth, with other agents reviewing that the rules are followed. Any API, UI, or other entry point stays very light, just proxying to the modular logic, so that logic can easily be reused by any entry point.

UI components are always presentational only, with their logic abstracted out modularly, etc.
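A minimal sketch of that shape, with hypothetical names (this is an illustration of the pattern described, not the commenter's actual setup): the business logic lives in one reusable function, and every entry point only parses input and proxies to it.

```python
# Reusable, entry-point-agnostic logic: single source of truth, no I/O concerns.
def create_user(name: str, email: str) -> dict:
    """Pure business logic shared by every entry point."""
    if "@" not in email:
        raise ValueError("invalid email")
    return {"name": name, "email": email.lower()}

# Entry points stay thin: parse their own input format, then delegate.
def cli_entrypoint(argv: list[str]) -> dict:
    """Hypothetical CLI wrapper: positional args -> core logic."""
    name, email = argv
    return create_user(name, email)

def api_entrypoint(payload: dict) -> dict:
    """Hypothetical API wrapper: JSON-like payload -> same core logic."""
    return create_user(payload["name"], payload["email"])
```

Because both wrappers proxy to `create_user`, a new entry point (a queue consumer, a test harness) reuses the same logic without duplicating it.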

reply
How do you make it so that the model doesn't forget to follow those rules and skills? How do you make it actually understand the architecture and constraints? You can't; current models don't work in a way that makes that possible.
reply
Can you share your rules and some example PRs that it auto-generates and reviews?

The number of times I’ve seen Claude say “this test was failing already, so it's ignored” when it _wasn't_, despite my telling it never to do that, makes me doubt it.

reply
Ah, the make_no_mistakes.md
reply
I mean, quite frankly, I have seen enough code that was definitely written by humans that had exactly this "style".

Then again, I don't want to pay for AI to give me the coding style of the worst people I've ever worked with, either.

reply