Would you see this as something turn-key, where a central database is hosted and secured for your group?
Or would you require something more DIY, like a local network storage device?
And similarly, would you be open to having the summaries generated by a frontier model? Or would that also need to be something you host locally?
Thank you for the feedback and interest.
But maybe it starts local with an app like yours anyway. I do a lot of solo hacking that I don't want to share with the team, too. Then there could be some way to push up subsets of the data.
What I've found using this contextify-query CLI to talk to my projects' CLI AI history is substantial detail and context representing the journey of a feature (or lack thereof).
In high-velocity agentic coding, git practices seem to be almost cast aside by many. I say that because Claude Code's esc-esc file-reversion behavior doesn't presume "responsible" use of git at all!
What I find interesting is that neither Anthropic nor OpenAI has seized on this; it's somewhat meta to the mainline work of interpreting requests correctly. That said, insights into what you've done and why can save a ton of unnecessary implementation cycles (and wasted tokens to boot).
Any thoughts on the above?
If you're open to giving the app a try and enabling updates on the DMG, the query service + CC skill should drop here in a few days. It's pretty dope.
Alternatively, for update notifications you can watch the public repo where I'm publishing DMG releases: https://github.com/PeterPym/contextify/releases
Anyhow, this is really cool feedback and I appreciate the exchange here. Thank you. If you have any further thoughts you want to share, I'll keep an eye on this thread, or I can be reached at rob@contextify.sh