I agree with most of what you wrote except for this:

>Frequent LLM users already know not to do that.

And I think that’s the biggest problem. Amid the current push to use LLMs across orgs and groups, there is a large share of people (maybe even a majority) who are using them every day but who have never approached anything as technical as a “harness” before, let alone an entire setup.

For them the behavior mentioned here is a major issue.

reply
Exactly - I am a lawyer, and we are told to use dedicated AI products as much as we want, however we want. There will be errors made.
reply
Much to the often-reported chagrin of judges across the country.
reply
Exactly. When I use scissors, I don't want them to stop working just because I'm not a "frequent scissors user," and then get told by someone who makes their breakfast with scissors that I'm doing it wrong. Most people will not be "frequent" users of anything.
reply
Most people also understand that, because they're not "frequent" users of a thing, they absolutely suck at using it, and they set their expectations accordingly. In particular, they realize that doing anything non-trivial with the thing requires them to spend some time learning and practicing, or to ask or hire a "frequent" user to do it for them.

So the reasonable response to being told you're holding your scissors wrong is to realize that yes, you most likely are holding your scissors wrong[0], and to ask the other person for advice (or just to do the thing), or look up a YouTube video and learn, or sign up for a class, or such.

Expecting mastery in 30 seconds is not a reasonable attitude, but it's unfortunately the lie the software industry has been trying to sell people for the past 15 years or so.

--

[0] - There's much more to it than one would think.

reply
I’m interested in the “non-trivial” point as well. It seems to be a common refrain from the anti-LLM tech crowd that “LLMs aren’t good at doing anything non-trivial.” Is that really the case, or is it just harder, so that one needs to put in more practice for more complicated tasks?

I don’t have an example off hand, but I know that it’s easy to dismiss something an LLM does as trivial if your own work is extremely niche. Most devs aren’t creating their own programming languages. I can’t help but think people who hold this opinion also consider the work most software professionals do “trivial” (“you’re just moving strings around, that’s trivial, not impressive”).

reply
If you make the tool in the example even slightly more complicated, the analogy makes sense.

A lathe operator isn’t any good if they don’t frequently operate lathes.

An articulated-robot implementer needs frequent experience implementing robots to be any good.

That doesn’t mean lathes or robots are useless. Nor does it mean they have failed as products because they require expertise.

I do think it raises questions as to whether vast swathes of the population will be effective at using LLMs. Are they scissors, or a lathe?

reply
Everybody seems to want them to be scissors, or at least to treat them as such, but even so, the reason everyone can use scissors so well is that they’ve practiced with them, right? You’re probably a lot better at using scissors now than the first time you tried; the functionality is just so simple that it’s harder to notice.

To me, learning to use LLMs is the same as learning anything else: you have to practice and put in the hours to get good. Maybe some harnesses will eventually allow LLMs to function more as scissors than lathes. This seems to be what Microsoft is trying to do by embedding Copilot in all their products and saying “choose the UI that works best for you”. If that doesn’t end up working, we’ll need another paradigm for “non-technical” users to effectively operate computer assistants.

reply
Only sort of related, but I would love to see a harness with ed as the primary file editing / reading tool. Half the bash Claude runs seems to be sed anyway; having some state persist in ed would seem to help.

What does one do when a full editor consumes too much bandwidth^H tokens? Use ed, the standard editor!
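
Just for fun, here is a minimal sketch of what the harness side might look like (Python, the file name, and the batch-mode approach are my assumptions; a truly stateful version would keep a long-lived ed session open instead):

    import subprocess

    # Hypothetical helper: pipe a batch of ed(1) commands into a file.
    # The model emits only command lines like ",s/teh/the/g"; the file's
    # text never round-trips through the model's output.
    def ed_apply(path: str, commands: list[str]) -> None:
        script = "\n".join(commands + ["w", "q"]) + "\n"  # write, then quit
        subprocess.run(["ed", "-s", path], input=script, text=True, check=True)

    ed_apply("notes.txt", [",s/teh/the/g"])  # e.g. fix every "teh"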

reply
It's worth noting that Claude Code itself doesn't use the `insert` tool. (It also uses a custom edit tool, not the suite's predefined str_replace.)

Also, as a person who has been developing agentic code tools since before Claude Code, I'm skeptical that str_replace provides an accuracy improvement over just a full rewrite.

Back in the day, when SOTA models would do lazy coding like `// ... rest of the code ...`, a full rewrite wasn't easy. Search/replace was fast and efficient, and it avoided the lazy coding. However, it came with a slight accuracy drop.

Today that accuracy drop might be minimal or absent, but I'm not sure whether search/replace could still lead to improvements like preventing doc corruption.
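
(For reference, the str_replace contract is roughly "the old string must match the file exactly once, or the edit is rejected". A sketch of the harness side, my reconstruction rather than Anthropic's actual implementation:)

    # Sketch of a str_replace-style tool as harnesses typically enforce it:
    # requiring a unique match forces the model to quote enough surrounding
    # context, and a failed match bounces back to the model as an error.
    def str_replace(path: str, old: str, new: str) -> None:
        with open(path, encoding="utf-8") as f:
            text = f.read()
        n = text.count(old)
        if n == 0:
            raise ValueError("no match; the model should re-read the file")
        if n > 1:
            raise ValueError(f"{n} matches; the model must add more context")
        with open(path, "w", encoding="utf-8") as f:
            f.write(text.replace(old, new, 1))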

reply
I've tested this extensively in a workflow (not agentic) context, and you're right: the underlying models are good both at full rewrites of code files and at doing search/replace.

They've been decent at full rewrites for two years. I don't think they were good at search/replace until about a year ago, but I'm not so sure.

It's true that the models two years ago would sometimes make errors in whole rewrites - e.g. removing comments was fairly common. But I've never seen one randomly remove a single character or anything like that. These days they're really good.

The main reason agentic harnesses use search/replace is surely speed and cost! Whole-file output is expensive for small changes.

reply
I think your argument makes sense, but my understanding is that adding the document to the context and spitting it back is prone to corruption in any scenario.

I think this is closely related to other sources saying that even if you have a huge context, the attention mechanism itself is not reliably back-referencing, so any task over a bigger context is prone to errors.

Because I have some preconception of this, maybe I am assuming this is what they were saying. Am I missing something?

reply
Any rando can publish research nowadays. It means nothing. Just like "X country published N research papers last year" - it is noise. If it were required to attach age, experience level, and country of origin to every comment, research paper, or post on the internet, it would shatter the conviction we mistakenly have in the information we receive.

This team is inexperienced and it shows.

The noise-to-signal ratio will get worse, even in "academia". Brace yourselves. The kids are growing up in this new world.

reply
Yeah, this is a bit of a strawman of an LLM task.

On editing tasks, one should only allow programmatic editing commands; the text shouldn't flow back out through the LLM at all. The LLM should analyze the text and emit commands to achieve a feedback-directed goal, along the lines of the sketch below.
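
Something in this spirit (the command format is made up for illustration; the point is that the model's reply contains only commands, never the document body):

    import json, re

    # Hypothetical harness side: the model reads the document but replies
    # only with edit commands, e.g. [{"op": "sub", "pat": "teh\\b", "rep": "the"}],
    # so the edited text never passes back through the model's output.
    def apply_commands(doc: str, model_reply: str) -> str:
        for cmd in json.loads(model_reply):
            if cmd["op"] == "sub":
                doc = re.sub(cmd["pat"], cmd["rep"], doc)
            elif cmd["op"] == "del":
                doc = re.sub(cmd["pat"], "", doc)
        return doc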

reply
People love to interpret the results in the most negative way possible because it's a threat to their occupation and identity. I refer to HN specifically.

The fact of the matter is, if you want to edit a document by reading the document and then regurgitating the entire document with said edits... a human will DO worse then a 25% degradation. It's possible for a human to achieve 0% degradation but the human will have to ingest the document hundreds of times to achieve a state called "memorization". The equivalent in an LLM is called training. If you train a document into an LLM you can get parity with the memorized human edit in this case.

But the above is irrelevant. The point is LLMs have certain similarities with humans. You need to design a harness such that an LLM edits a document the same way a human would: Search and surgical edits. All coding agents edit this way, so this paper isn't relevant.

reply
> People love to interpret the results in the most negative way possible because it's a threat to their occupation and identity.

OR it could be because their concerns are genuine but are ignored in favour of a good-sounding story.

reply
But no one in this thread addressed the inaccuracy of the experiment. The experiment did not test HOW LLMs are actually used in reality.

So that is definitely a biased interpretation. This is independent of how accurate my POV or your POV is on whether LLMs degrade documents. I am simply saying the experiment conducted is COMPLETELY DIFFERENT from how LLMs AND humans edit papers.

reply
> a human will DO worse then a 25% degradation.

* than

reply
See, that’s an example of degradation by a human. Not even an LLM will make that kinda mistake.
reply
deleted
reply
[flagged]
reply
> a human will DO worse then a 25% degradation

As I was reading this article, a similar thought occurred to me: "I wonder if that's better or worse than a human?" Unfortunately, there was no human baseline in this study. That said, there are studies that compare LLM to human performance. Usually, humans perform much better (like 5-7x better) at long-running tasks.

In other words, a human would probably do better than an LLM on this task.

Humans lose to LLMs in narrow, well-specified text/symbolic reasoning tasks where the model can exploit breadth, speed, and search. Usually, the LLM performed ~15% better than humans, but I saw studies that were as high as 80%. To my surprise, these studies were usually about "soft skills" like creativity and persuasion.

reply
You can do a baseline study right now. Read this entire thread and make an edit changing every E to an I.

Show your edit by regurgitating this entire thread by hand on paper. Don't use any additional tools like find-and-replace.

Boom there's your baseline. I can simulate the result in my head.

Guys, I'm basically saying the experiment is inaccurate to the practical reality of how LLMs are actually used.

reply
[flagged]
reply
[flagged]
reply
[dead]
reply
The incomprehensible methodology, whether due to resource constraints or straight up for simplicity's sake, unfortunately makes these papers worthless.
reply
It could also be that, much like most large orgs now, you've made LLMs your entire personality, so you don't see the inherent bias.

Most LLM users who are not touching code are certainly not going to be using a harness. They're going to take all the documents, slam all those tokens into the context window, see they have only used 500k out of their 1M tokens and say "summarize".

reply
Wouldn't they be more likely to give ChatGPT access to a Google Drive folder or some such? The tools the agent has for editing documents will be whatever the app they're using has implemented.
reply