In theory I have nothing against some sort of tiny local LLM that indexes them and makes the search a little more powerful, but to be honest, if the feature is advertised as "AI", I doubt I'll care enough to look into the details.

I'd rather take a dumb "synonyms" plugin that I have complete control over and that renders results "instantly" than invoke any sort of LLM where I have to wait more than 3 seconds for a result.
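
Something like this toy sketch is all I'm picturing (every name in it is made up): one alias table I edit by hand, expanded at query time with a plain dict lookup, so there is nothing to wait on.

```python
# Minimal sketch of a deterministic synonyms/alias layer, in the spirit of the
# "dumb synonyms plugin" above. ALIASES, expand_query and search_notes are all
# hypothetical names; the point is that query expansion is a hand-maintained
# table lookup, so results stay instant and fully under the user's control.

ALIASES = {
    "todo": {"task", "action item"},
    "meeting": {"standup", "sync"},
}

def expand_query(term: str) -> set[str]:
    """Return the term plus its configured aliases (no model involved)."""
    return {term} | ALIASES.get(term.lower(), set())

def search_notes(notes: dict[str, str], term: str) -> list[str]:
    """Case-insensitive substring match over every expanded term."""
    terms = {t.lower() for t in expand_query(term)}
    return [
        name for name, text in notes.items()
        if any(t in text.lower() for t in terms)
    ]

if __name__ == "__main__":
    notes = {
        "monday.md": "Standup notes: review the action item list.",
        "ideas.md": "Random ideas, nothing actionable yet.",
    }
    print(search_notes(notes, "meeting"))  # -> ['monday.md']
```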

reply
That makes total sense. Latency kills the workflow, and “advertised as AI” carries baggage even if the underlying technique is harmless.

One nuance: the way I’m thinking about this isn’t “you type a query and wait for an LLM”. It’s more like local indexing/pre-computation so retrieval is instant, and any heavier processing happens ahead of time (or during a scheduled review) so it never blocks you. Then you can consume it as a pull-based view or a tiny daily digest—no interruptions, no spinning cursor.
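
To make that concrete, here is a toy version of the split I have in mind (all the names are made up, and a real version might precompute embeddings instead of a plain inverted index): the expensive step runs on a schedule, and the query path is just a file read plus a dict lookup.

```python
# Rough sketch of "heavy work ahead of time, instant retrieval", not any
# particular product. NOTES_DIR, INDEX_PATH, build_index and search are
# hypothetical names. A scheduler (cron, launchd, ...) rebuilds the index;
# the query path only reads the precomputed file, so it never waits on a model.

import json
import re
from pathlib import Path

NOTES_DIR = Path("notes")        # hypothetical notes folder
INDEX_PATH = Path("index.json")  # precomputed inverted index

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index() -> None:
    """Heavy step: run out of band (e.g. nightly), never at query time."""
    index: dict[str, list[str]] = {}
    for note in sorted(NOTES_DIR.glob("*.md")):
        for token in set(tokenize(note.read_text(encoding="utf-8"))):
            index.setdefault(token, []).append(note.name)
    INDEX_PATH.write_text(json.dumps(index), encoding="utf-8")

def search(term: str) -> list[str]:
    """Instant step: a dict lookup against the precomputed index."""
    index = json.loads(INDEX_PATH.read_text(encoding="utf-8"))
    return index.get(term.lower(), [])

if __name__ == "__main__":
    NOTES_DIR.mkdir(exist_ok=True)  # keep the sketch runnable as-is
    build_index()                   # normally triggered by a scheduler
    print(search("latency"))
```

The exact enrichment step doesn't matter much: swap build_index for whatever heavier, model-driven processing you like, and as long as it runs out of band, the query path stays a plain lookup.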

If you could pick one: would you prefer a deterministic synonyms/alias layer (instant, fully controllable), or a local semantic index that improves recall but still feels “tool-like” rather than “AI”?

I’m exploring a similar local-first, low-noise approach (more context in my HN profile/bio if you’re curious).

reply