I read the post, and the plain-text accounting analogy is a great fit. The “unit test for knowledge” idea (a checker like bean-check, but for schema and links) feels like the missing feedback loop in most AI-assisted note workflows, and using git history as the trust layer makes the whole thing much more auditable.
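To make the checker idea concrete, here's roughly the shape I picture: a bean-check-style pass that fails on unknown entity types or dangling links. This is only a sketch; the notes/ layout, the type: header, and the [[wiki-link]] syntax are my assumptions, not details from the post.

    import pathlib, re, sys

    ALLOWED_TYPES = {"person", "project", "meeting", "decision"}  # hypothetical closed entity set
    LINK = re.compile(r"\[\[([^\]]+)\]\]")

    notes = {p.stem: p.read_text() for p in pathlib.Path("notes").glob("*.md")}
    errors = []
    for name, body in notes.items():
        # schema check: first line must declare a known type, e.g. "type: project"
        first = body.splitlines()[0] if body else ""
        if not first.startswith("type: ") or first[len("type: "):] not in ALLOWED_TYPES:
            errors.append(f"{name}: missing or unknown type")
        # link check: every [[target]] must resolve to an existing note
        for target in LINK.findall(body):
            if target not in notes:
                errors.append(f"{name}: broken link -> {target}")

    print("\n".join(errors) or "ok")
    sys.exit(1 if errors else 0)  # nonzero exit, so it can gate a commit hook or CI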

I also like the idempotency/provenance angle (unique IDs and links, checkpointing) and the “commit processing” workflow: that's a concrete example of turning ongoing work into structured, queryable knowledge without a lot of manual ceremony.
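On the idempotency side, I assume the mechanics are something like: derive a stable ID per commit, skip anything already ingested, and checkpoint the set so re-runs are no-ops. Again a sketch; the file names and note format here are hypothetical.

    import json, pathlib, subprocess

    CHECKPOINT = pathlib.Path(".kb-checkpoint.json")
    seen = set(json.loads(CHECKPOINT.read_text())) if CHECKPOINT.exists() else set()

    # one note per commit; the short hash doubles as a stable, unique ID
    log = subprocess.run(["git", "log", "--format=%H %s"],
                         capture_output=True, text=True, check=True).stdout
    pathlib.Path("notes").mkdir(exist_ok=True)
    for line in log.splitlines():
        sha, _, subject = line.partition(" ")
        note_id = sha[:12]
        if note_id in seen:
            continue  # already processed: re-running changes nothing
        pathlib.Path("notes", f"commit-{note_id}.md").write_text(
            f"type: commit\n\n{subject}\n")
        seen.add(note_id)

    CHECKPOINT.write_text(json.dumps(sorted(seen)))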

Curious which improved output quality more in practice: schema validation catching broken links and types, or constraining the entity set so the agent can't invent structure? Also, do you find yourself actually using the query language day-to-day, or is retrieval mostly agent-driven?

I'm exploring a similar closed loop (context → retrieval → suggestion → human review → write-back) and summarized my angle in my HN profile/bio if you want to compare approaches.
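For comparison, the loop I mean has roughly this shape. Every function body below is a stand-in for the real step, so read it as runnable pseudocode rather than my actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        target_note: str
        text: str

    def gather_context(task: str) -> str:
        return task                      # stand-in: open files, recent commits, etc.

    def retrieve(ctx: str) -> list[str]:
        return []                        # stand-in: query the note store

    def propose(task: str, hits: list[str]) -> Suggestion:
        return Suggestion("inbox", f"note for: {task}")   # stand-in: agent call

    def write_back(s: Suggestion) -> None:
        print(f"writing {s.text!r} -> {s.target_note}")   # stand-in: git commit

    def review_loop(task: str) -> None:
        s = propose(task, retrieve(gather_context(task)))
        # the human gate: nothing is written back without explicit approval
        if input(f"apply to {s.target_note}? [y/N] ").strip().lower() == "y":
            write_back(s)

    review_loop("summarize today's standup")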
