That makes total sense. Latency kills the workflow, and “advertised as AI” carries baggage even if the underlying technique is harmless.

One nuance: the way I’m thinking about this isn’t “you type a query and wait for an LLM”. It’s more like local indexing/pre-computation so retrieval is instant, and any heavier processing happens ahead of time (or during a scheduled review) so it never blocks you. Then you can consume it as a pull-based view or a tiny daily digest—no interruptions, no spinning cursor.
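To make that concrete, here's a minimal sketch of the split I mean (all names hypothetical; the point is just where the work happens, not the toy tokenizer):

    from collections import defaultdict

    def build_index(docs):
        # All the heavy lifting happens here, ahead of time (cron job,
        # post-save hook, overnight batch) -- never on the query path.
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for token in text.lower().split():
                index[token].add(doc_id)
        return {tok: sorted(ids) for tok, ids in index.items()}

    def lookup(index, term):
        # The query path is a plain dict lookup: instant, deterministic, offline.
        return index.get(term.lower(), [])

    docs = {"note-1": "Postgres backup runbook", "note-2": "Kubernetes deploy notes"}
    idx = build_index(docs)
    lookup(idx, "backup")  # -> ["note-1"], with zero latency at query time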

If you had to pick one, would you prefer a deterministic synonyms/alias layer (instant, fully controllable), or a local semantic index that improves recall but still feels “tool-like” rather than “AI”?
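By the first option I mean something as plain as a user-editable alias table consulted before the exact lookup (again hypothetical, reusing lookup/idx from the sketch above):

    ALIASES = {  # user-owned mapping, diffable, no model involved
        "k8s": "kubernetes",
        "pg": "postgres",
    }

    def expand(term):
        # Deterministic: the same input always resolves the same way,
        # and you can inspect the whole mapping at a glance.
        return ALIASES.get(term.lower(), term.lower())

    lookup(idx, expand("k8s"))  # -> ["note-2"], via the alias

The second option would swap expand() for a nearest-neighbor search over locally pre-computed embeddings: better recall, still fully offline, but the mapping is no longer something you can read or diff.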

I’m exploring a similar local-first, low-noise approach (more context in my HN profile/bio if you’re curious).
