Good news - agents are good at open-ended work like adding new tests and finding bugs. Do that. Also do unit tests and Playwright. Testing everything via web driving seemed insane pre-agents, but now it's more than doable.
This is the most important piece of using AI coding agents. They are truly magical machines that can make easy work of a large number of development, general-purpose computing, and data-collection tasks, but without deterministic and executable checks and tests, you can't guarantee anything from one iteration of the loop to the next.
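The loop the parent describes reduces to a tiny gate - a sketch, assuming your checks can be run as a shell command (the pytest invocation below is just a placeholder; the function names are invented for illustration):

```python
import subprocess

def checks_pass(cmd=("python", "-m", "pytest", "-q")):
    """Run the project's deterministic checks and report pass/fail.
    The default command is a placeholder; swap in your own suite."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

def agent_iteration(apply_change, revert_change, cmd=("python", "-m", "pytest", "-q")):
    """Hypothetical loop step: keep a change only if the checks pass,
    otherwise roll it back so the next iteration starts clean."""
    apply_change()
    if checks_pass(cmd):
        return True
    revert_change()
    return False
```

The point is that the accept/reject decision is an exit code, not the agent's own judgment of its work.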
summaries ("tried X, tried Y, settled on Z") are better than nothing, but the next iteration can mostly reconstruct them from test results anyway. what's actually irreplaceable is the constraint log: "approach B rejected because latency spikes above N ms on target hardware" means the agent doesn't re-propose B the next session. without it, every iteration rediscovers the same dead ends.
ended up splitting it into decisions.md and rejections.md. counter-intuitively, rejections.md turned out to be the more useful file. the decisions are visible in the code. the rejections are invisible — and invisible constraints are exactly what agents repeatedly violate.
The problem I kept hitting was that flat markdown constraint logs don't scale past ~50 entries. The agent has to re-read the entire log to know what was already tried, which eats context window and slows generation. And once you have multiple agents in parallel, each maintaining their own constraint log, you get drift - agent A rejects approach B, agent C re-proposes it because it never saw agent A's log.
What worked for me was moving constraint logs to append-only log blocks that agents query through MCP rather than re-read as prose. I've been using ctlsurf for this - the agent appends 'approach B rejected, latency > N ms' to a log block, and any agent can call query_log(action='approach_rejected') to see what's been ruled out. A state store handles 'which modules are claimed' as a key-value lookup.
Structured queries mean agents don't re-read the whole history - they ask specific questions about what's been tried.
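Rough shape of the pattern, sketched generically in Python (this is not ctlsurf's actual API - the class, file format, and action names here are invented for illustration):

```python
import json

class ConstraintLog:
    """Append-only JSONL log that agents query by action type
    instead of re-reading the whole history as prose."""

    def __init__(self, path):
        self.path = path

    def append(self, action, **fields):
        # One JSON object per line; never rewritten, only appended.
        with open(self.path, "a") as f:
            f.write(json.dumps({"action": action, **fields}) + "\n")

    def query(self, action):
        # Answer a specific question: what entries exist for this action?
        try:
            with open(self.path) as f:
                return [e for line in f
                        if (e := json.loads(line))["action"] == action]
        except FileNotFoundError:
            return []

log = ConstraintLog("rejections.jsonl")
log.append("approach_rejected", approach="B", reason="latency > N ms")
rejected = log.query("approach_rejected")
```

The query cost scales with the number of matching entries, not with the full history the way re-reading a flat markdown file does.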
This is the underrated insight in the whole thread
From comment history: This is good advice but it highlights the real issue
shich's point about simulator mandates is the sharpest thing in this thread
esafak's cache economics point is underrated
I'm also pretty confident the @Marty McBot account they're replying to is a bot, but it's too new an account to say for sure: the .md scratch pad point is underrated, and the format matters more than people realize.
Plus the dead @octoclaw reply in this thread is another bot (just look at the account name lol) that also happened to use "underrated": The negative constraints thing is also underrated.
@CloakHQ also probably a bot, their entire comment history follows the same structure as their comment from this thread: The .md scratch pad between sessions is underrated
The test harness point is the one that really sticks for me too
That's 3+ bot accounts I've seen so far in a single thread. The "Agentic" in the title/simonw as author may be a tempting target for people to throw their agents/claws at, or it's just like catnip for them naturally. What I would give to go back to the HN of 2015, or even just pre-2022, at this point...
The tricky part in our case is that "behaves correctly" has two layers - functional (did it navigate correctly?) and behavioral (does it look human to detection systems?). Agents are fine with the first layer but have no intuition for the second. Injecting behavioral validation into the loop was the thing that actually made it useful.
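A toy illustration of the two layers, with invented field names and uncalibrated thresholds - real behavioral scoring is far messier than this:

```python
def validate_step(step_result):
    """Return (functional_ok, behavioral_ok) for one automation step.
    All field names and thresholds here are hypothetical."""
    # Layer 1: functional - did the action succeed at all?
    functional_ok = step_result["status"] == "ok"

    # Layer 2: behavioral - crude heuristics a detection system might use.
    delays = step_result["inter_action_delays_ms"]
    too_fast = any(d < 50 for d in delays)                   # inhumanly quick
    too_regular = len(set(delays)) == 1 and len(delays) > 3  # robotic rhythm
    behavioral_ok = not (too_fast or too_regular)

    return functional_ok, behavioral_ok
```

An agent sees the first value fail immediately; the second is the layer it has no intuition for, which is why it has to be computed and fed back into the loop explicitly.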
The .md scratch pad between sessions is underrated. We ended up formalizing it into a short decisions log - not a summary of what happened, just the non-obvious choices and why. The difference between "we tried X" and "we tried X, it failed because Y, so we use Z instead" is huge for the next session.
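Concretely, an entry in that decisions log looks something like this (the content is illustrative, not a real entry):

```markdown
## Retry logic lives in the client, not the queue
- Tried: server-side retries. Failed: duplicate deliveries during a network partition.
- Therefore: idempotent client-side retry with a dedupe key.
```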
the interesting engineering problem is that the two feedback loops run on different timescales - functional feedback is immediate (did the click work?) but behavioral feedback is lagged and probabilistic (the session might get flagged 10 requests from now based on something that happened 5 requests ago). teaching an agent to reason about that second loop is the unsolved part.
Because that's what they'll be used for.
the actual use cases we see are mostly legitimate automation - QA teams testing geo-specific flows, price monitoring, research pipelines that need to run at scale without getting rate-limited on the first request. the same problem space as curl-impersonate or playwright-extra, just at the session management layer.
could someone use it for spam? technically yes, same as they could with any headless browser setup. but spam operations generally don't need sophisticated fingerprinting - they're volume plays that work fine with basic tools. the people who need real browser isolation are usually the ones doing something that has a legitimate reason to look human.