In my experience, it happens with each edit of the document, whether or not you clear the context window.

You can somewhat mitigate this by adding new information, or re-specifying the lost meaning you want restored, at the same moment you ask for the new edit. But other things will still get washed out.

Nuances will drift and sharp corners will be ablated. You're doing a Xerox copy of your latest Xerox copy, so even if you add your comments with a Sharpie, anything that was there right before will be slightly blurrier in the next version.

Which is why I think AI-assisted writing is better than just letting it write the full text (if you care about the quality of the result). The act of writing isn't just the production of text; it is about wrangling a topic, rotating it in your mind, and finding the perfect expression for a thought you want to convey to others. Some of those things can't be known by the LLM, since you don't know them yourself at the point you start out.

Often that thinking itself provides value to the person doing it, beyond the text produced. By letting an LLM do it for you, you rob yourself of that chance to think and of the new findings you might encounter.

Working with LLMs just makes it quicker to get going, but you need to be a ruthless editor.

It happens with each edit, even unrelated ones. I had a README referring to something as "the cathedral of s*t" (some HN commenters don't care for the swearing, which is systemically bad news, but w/e), and the robot would lift that phrase out in drive-bys, repeatedly.

Sometimes it would report the change; sometimes it wouldn't bother. It never reached into the README on an unrelated doc edit, but if it was touching the README, that line was getting excised.

That kind of passive-aggressive pseudo-moralizing is a common feature of all the current 'frontier' models. Ask one to summarize text from A Song of Ice and Fire, for example, and it will likely try to covertly sand off all the 'offensive' rough edges without ever saying it's doing so.