I really want to try this approach. I'm curious because this has not been my experience at all. I created https://github.com/flippyhead/ai-brain mostly for myself and a few friends to use. But so far, telling the AI (via CLAUDE.md) to look for relevant memories and to think about when and how to save them has worked very well. It can create structures based on priorities it decides on, and notes for the future, that feel very different from what you'd get if it were just trying to summarize everything.
I use Claude Code hooks to prompt and store memories. It's taken a lot of iterations, mostly on the definition of "significant" events being stored in memory. Indeed, it works very well now, but I'm hesitant to start from scratch on some guy's tool. I think demos are going to need reviews from here on out. Vibe-coded projects look too legit, but it's a waste of time to test the 100 that come out each day.
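A minimal sketch of what the storing side of such a hook could look like. The `SIGNIFICANT_MARKERS` keywords and the `store_memories` helper are illustrative stand-ins for the commenter's tuned definition of "significant", not their actual criteria:

```python
from pathlib import Path

# Hypothetical significance filter -- these keywords are illustrative
# placeholders, not the commenter's actual tuned criteria.
SIGNIFICANT_MARKERS = ("decided", "prefers", "always", "never", "remember")

def is_significant(line: str) -> bool:
    """Crude heuristic for whether a transcript line is worth storing."""
    lowered = line.lower()
    return any(marker in lowered for marker in SIGNIFICANT_MARKERS)

def store_memories(transcript_lines: list[str], memory_file: Path) -> int:
    """Append significant lines to a plain-text memory file; return how many were stored."""
    stored = 0
    with memory_file.open("a", encoding="utf-8") as f:
        for line in transcript_lines:
            if is_significant(line):
                f.write(line.strip() + "\n")
                stored += 1
    return stored
```

A hook entry in Claude Code's settings would then invoke a script like this with the session transcript; the exact hook events and payload fields vary by version, so check the hooks documentation rather than taking this wiring as given.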
I hear you. I've been slowly building up my own tool (linked above) and keep feeling like someone is soon going to release something that a lot of people will agree should be an independent standard. I'm reluctant to host it with someone else, so it needs to be open source. But then again, what I've got is working well for me.
The biggest issue for me is recalling during the conversation, not jotting information down. I've solved this by including a tag that nudges the agent to recall something,

i.e.: "$recall words"

It works, but it's clunky.
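A toy version of that nudge, assuming a plain-text memory store; the in-memory list and keyword matching here are hypothetical stand-ins for whatever backend the commenter actually uses:

```python
import re

# Matches the commenter's "$recall words" nudge tag and captures the keywords.
RECALL_TAG = re.compile(r"\$recall\s+(.+)", re.IGNORECASE)

def check_recall(message: str, memories: list[str]) -> list[str]:
    """If the message contains a $recall tag, return memories matching its keywords."""
    m = RECALL_TAG.search(message)
    if not m:
        return []
    keywords = m.group(1).lower().split()
    return [mem for mem in memories if any(k in mem.lower() for k in keywords)]
```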

Oh my pi does it. And it does it really well.
That's an interesting concept. So it's like, if you're an agent chatting with a user, you have an army of assistants who overhear the conversation and record important facts, or search relevant facts in some database and decide on the fly when to interrupt you with "this memory X looks relevant". It sounds easy enough if tokens were free, but doing it efficiently is an interesting problem.
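One cheap way to keep that efficient is to gate the expensive step behind a crude relevance score, so the "assistants" interrupt only when a stored fact clearly overlaps the live conversation. The word-overlap scoring below is a toy stand-in for what a real system would do with embeddings or an LLM, and all names are illustrative:

```python
def relevance(fact: str, recent_turns: list[str]) -> float:
    """Cheap proxy for relevance: fraction of the fact's words seen in recent turns."""
    fact_words = set(fact.lower().split())
    recent_words = set(" ".join(recent_turns).lower().split())
    if not fact_words:
        return 0.0
    return len(fact_words & recent_words) / len(fact_words)

def maybe_interrupt(facts: list[str], recent_turns: list[str],
                    threshold: float = 0.3) -> list[str]:
    """Return only the stored facts relevant enough to surface to the main agent."""
    return [f for f in facts if relevance(f, recent_turns) >= threshold]
```

The point of the threshold is token economy: most turns trigger nothing, so the observing agent stays silent and cheap almost all of the time.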
I agree. A silent agent doing agentic things asynchronously is what would be helpful, not something that requires modifying the main prompt.
Yeah. The other advantage is that a summary-based memory also just… "pieces together" things that a "store"/"remember" memory wouldn't, because they're things the main agent would not think to store, i.e. small, disconnected things across conversations that alone would not end up in memory because they're insignificant. But when there's an agent looking at multiple conversations at once, it can actually reason about this stuff and piece it together.
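The simplest form of that cross-conversation piecing-together is promoting facts that recur across conversations even though each mention alone looked too insignificant to store. A real version would use an LLM or embeddings to recognize paraphrases; the exact-string matching below is a deliberately naive stand-in:

```python
from collections import Counter

def cross_conversation_facts(conversations: list[list[str]],
                             min_conversations: int = 2) -> list[str]:
    """Keep facts that appear in at least `min_conversations` separate
    conversations, even if no single mention seemed worth storing."""
    seen = Counter()
    for convo in conversations:
        for fact in set(convo):  # count each fact once per conversation
            seen[fact] += 1
    return [fact for fact, n in seen.items() if n >= min_conversations]
```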