I don't let AI touch git anyway, and I always review the diff after it generates stuff. If it modifies my documentation, I always want to check whether it messed with the text instead of just adding formatting.
I use Magit, and up until I started using LLM agents it was mostly a nice-to-have that I relied on casually. (I was definitely under-utilizing its power.) But for reviewing, selectively staging, and selectively rejecting the changes of an LLM agent? I feel like I'd die without it. Idk how others manage.
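For anyone without Magit, plain git can approximate the same review-and-selectively-stage workflow. A minimal sketch, with hypothetical file names and a throwaway repo; `git add -p` would do the same thing at the hunk level instead of per file:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email a@example.com
git config user.name a

# Baseline before the agent runs
printf 'original\n' > notes.md
printf 'original\n' > code.py
git add . && git commit -qm 'baseline'

# The "agent" edits both files
printf 'good edit\n' > code.py
printf 'mangled text\n' > notes.md

git diff --stat                 # review everything it touched
git add code.py                 # stage only the change you accept
git checkout -- notes.md        # reject the other, restoring from the index
git diff --cached --name-only   # confirm what will actually be committed
```

For hunk-level decisions within a single file, `git add -p` (stage interactively) and `git checkout -p` (discard interactively) cover most of what selective staging in Magit does, just with a clunkier interface.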
The LLM will come up with stupid ways to do things; common sense doesn't exist for AI.
The latest big change is probably how feasible local models are becoming. Models like Qwen 3.6 and Gemma 4 no longer get stuck in loops and repetition so easily, although at lower quantizations they still pretty much suck for agentic use.
I think it's always been obvious where an LLM can be used effectively and where it can't, if you understand how they work and don't see them as magical.
The "increase in proficiency" is mostly people coming back to reality and being more intentional about LLM usage. There are no surprise discoveries here. One does not need to use LLMs a lot to become effective with them. A total noob could become effective on day 1 with proper guidance.
Does it actually though?
I've used agents for quite some time now. If someone who has never used agents before wants to put this to the test somehow, I'm open to trying to measure it; reach out via email :)