> And for in-place edits, you can review "git diff" for surprises.

I don't let AI touch git anyway, and I always review the diff after it has generated stuff. If it modifies my documentation, I always want to check whether it messed with the text instead of just adding formatting.

reply
This. I know the LLM agents often have their own little diff viewers and edit approval workflows, but for a high volume of code, I cannot imagine actually reviewing everything without leaning on much more capable Git tooling.

I use Magit, and up until I started using LLM agents it was mostly a nice-to-have that I relied on casually. (I was definitely under-utilizing its power.) But for reviewing, selectively staging, and selectively rejecting the changes of an LLM agent? I feel like I'd die without it. Idk how others manage.
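For anyone without Magit, the same review loop can be approximated with plain git porcelain. A sketch, demoed in a throwaway repo so it's safe to run as-is (file names and contents are invented for illustration; Magit wraps these same steps in an interactive UI):

```shell
#!/bin/sh
# Review loop after an agent edit session, in a disposable repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com   # placeholder identity
git config user.name  reviewer
printf 'original line\n' > notes.md
git add notes.md
git commit -qm 'baseline'

# Simulate an agent editing the file.
printf 'original line\nagent-added line\n' > notes.md

git diff --stat      # quick overview: which files changed, by how much
git diff             # full hunk-by-hunk review of unstaged changes
git add notes.md     # accept the change (interactively per hunk: git add -p)
git diff --cached    # final look at exactly what will be committed
git commit -qm 'apply reviewed agent change'
# To reject a hunk instead of accepting it: git restore -p notes.md
```

The `-p` variants (`git add -p`, `git restore -p`) are the non-Magit way to selectively stage or throw away individual hunks of an agent's output.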

reply
If you’re using LLMs for agentic work it is absolutely essential that you have a robust set of tools for them to use and the correct instructions to prompt their use.

The LLM will come up with stupid ways to do things, common sense doesn’t exist for AI.

reply
Isn't this the whole reason they became viable in the last 6 months? The system prompt and harness are improving. It's less and less essential every day to roll your own.
reply
I don't think there is a single reason. Models are improving, and so are the harnesses and prompts; those of us who use them a lot also get more proficient and learn where they can be used effectively vs not. Lots of improvements all over the ecosystem, brought together.

The latest big change is probably how feasible local models are becoming, like Qwen 3.6 and Gemma 4: they're no longer easily getting stuck in loops and repetition, although at lower quantizations they still pretty much suck for agentic usage.

reply
> we who use them a lot also get more proficient and learn where they can be used effectively vs not

I think it’s always been obvious where an LLM could be used effectively and where it cannot, if you understand how they work and don’t see them as magical.

The “increase in proficiency” is mostly people coming back to reality and being more intentional about LLM usage. There are no surprise discoveries here. One does not need to use an LLM a lot to get effective with them. A total noob could become effective on day 1 with proper guidance.

reply
I think you hit the nail on the head. I had been in this space for a little bit before it really became popular. I haven’t seen incredible gains in model competency. What I have seen though is people figuring out what works and what doesn’t.
reply
It’s pretty telling that ignoring LLMs entirely for a few years and then jumping in last minute after everyone has struggled through figuring out how to use them still puts you on the same level very quickly.
reply
> then jumping in last minute after everyone has struggled through figuring out how to use them still puts you on the same level very quickly

Does it actually though?

I've used agents for quite some time now. If someone who has never used agents before wants to put this to the test somehow, I'm open to trying to measure it; reach out via email :)

reply
The models also have far more intelligence built in. For example, the pi.dev agent harness has a system prompt which fits on a single page, and includes only 4 or 5 tools. Running with a small coding model like Qwen3.6 27B, this setup is completely capable of agentic coding.
reply
They still aren't viable. Nothing changed within the last 6 months.
reply
My favorite is when Claude will build a completely new application to load and inspect a .dll file using reflection instead of just googling the library's interfaces.
reply
It did this during one of the recent outage periods. It was unjarring deps left and right instead of googling for them. What an easy way for me to own the tokenmaxxing leaderboard, I remember thinking.
reply
“Use all of the tools at your disposal, including searching the internet” is my claude-specific common instruction.
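That kind of standing instruction can live in Claude Code's global memory file instead of being retyped each session. A sketch, assuming the default `~/.claude/CLAUDE.md` location that Claude Code reads at startup (the instruction wording is the commenter's; the rest is an assumption about your setup):

```shell
#!/bin/sh
# Append the standing instruction to Claude Code's global memory file.
mkdir -p ~/.claude
cat >> ~/.claude/CLAUDE.md <<'EOF'
- Use all of the tools at your disposal, including searching the internet.
EOF
```

A project-local `CLAUDE.md` in the repo root works the same way if you only want the instruction for one codebase.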
reply