Graph edges carry scope. Alice ceo_of Acme and Andy ceo_of Amazon are two edges with different src/dst — the conflict scanner looks for (src, rel_type) → ≥2 dsts, so Garman/Jassy wouldn't false-flag if the edges are actually modeled. The gap: most agents just write raw sentences and never call relate().
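For concreteness, a minimal sketch of that (src, rel_type) → ≥2 dsts scan — `scan_conflicts` and the triple format are my illustration, not yantrikdb's actual API:

```python
# Sketch of the conflict scan described above: group edges by
# (src, rel_type) and flag any key with two or more distinct dsts.
# Names here are illustrative, not yantrikdb's real interface.
from collections import defaultdict

def scan_conflicts(edges):
    """edges: iterable of (src, rel_type, dst) triples."""
    by_key = defaultdict(set)
    for src, rel_type, dst in edges:
        by_key[(src, rel_type)].add(dst)
    return {k: dsts for k, dsts in by_key.items() if len(dsts) >= 2}

edges = [
    ("Alice", "ceo_of", "Acme"),
    ("Andy",  "ceo_of", "Amazon"),  # different src: never flagged
    ("Acme",  "ceo",    "Alice"),
    ("Acme",  "ceo",    "Bob"),     # same (src, rel_type), 2 dsts: flagged
]
flagged = scan_conflicts(edges)     # only the ("Acme", "ceo") pair
```

Note the Garman/Jassy case falls out for free: distinct src keys can't collide, so the scan stays quiet as long as the agent writes edges instead of prose.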
Temporal decay handles "previous vs current" weakly: half_life × importance attenuates old memories, but that's fading, not logical supersession — the DB only knows time-of-writing, not time-of-validity.
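To make the limitation concrete, here's the standard exponential half-life form that kind of attenuation usually takes (an assumption about the shape, not yantrikdb's exact formula):

```python
# Assumed half-life decay: score halves every half_life_seconds.
# This keys off time-of-writing only, so an old-but-still-true fact
# fades exactly as fast as a superseded one. That's the gap.

def decayed_score(importance, age_seconds, half_life_seconds):
    return importance * 0.5 ** (age_seconds / half_life_seconds)

# A memory written two half-lives ago keeps a quarter of its weight,
# regardless of whether it's stale or still valid.
score = decayed_score(1.0, age_seconds=14 * 86400, half_life_seconds=7 * 86400)
```

"Jassy was CEO until 2021" and "Jassy is CEO" written on the same day decay identically, which is exactly why fade can't substitute for supersession.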
Namespaces segregate scope, but only when the agent actually uses them. Again, leans on the agent.
Honest result from a bench I ran today on this same HN thread: seeded 6 genuine contradictions into 59 memories, think() flagged 60. ~54 are noise or ambiguous in exactly the ways you listed. Filed as issue #3.
Design stance: contradictions are surfaced, not resolved. yantrikdb_conflicts returns a review queue; the agent has conversation context, the DB doesn't. "These two may be in tension" not "these are contradictory." That doesn't fix your point — it admits the DB can't make that call alone. Co-CEOs, subsidiaries, temporal supersession need typed-relations + time-of-validity schema work. That's v0.6, not v0.5.
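A sketch of what "surfaced, not resolved" means as a contract — the field names are hypothetical, not yantrikdb_conflicts' actual schema:

```python
# Hypothetical review-queue entry. The DB only ever says
# "these two may be in tension"; it never sets a verdict itself.
from dataclasses import dataclass

@dataclass
class TensionCandidate:
    memory_a: str
    memory_b: str
    reason: str                    # e.g. "same (src, rel_type), 2 dsts"
    status: str = "needs_review"   # verdict belongs to the agent

def triage(candidates, judge):
    """judge is the agent's call, made with conversation context
    the DB doesn't have (co-CEOs, subsidiaries, supersession)."""
    return [(c, judge(c)) for c in candidates]

c = TensionCandidate("m17", "m42", "same (src, rel_type), 2 dsts")
```

The point of the shape: the only honest output at the DB layer is a queue of candidates plus a reason string; anything stronger needs the typed-relations + time-of-validity work mentioned for v0.6.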
Top-quality AI slop. I hate this.
To the author: project aside, it's not a good look to let an LLM drive your HN profile.
This is like 95% of the memory systems I see posted here. Someone comes up with an arbitrary configuration of tools that sounds like it'll solve the problem, then completely ignores how the system actually works.
In most cases, they're getting these systems to work because of some other prompt they've written that would probably work better with a normal file system.
No LLM for this post. Promise.