Do you mean "reset" as in reopening? How can I get those lists?

I would like AI to tell me "this note looks like an incomprehensible brain dump and needs review before you forget today's meetings"

reply
> Do you mean "reset" as in reopening?

Yes. I have two vaults (one work-oriented, one completely personal) and frequently switch between them. Whenever I do, I use a homepage plugin that always opens the same "root" note. You could vibe-code such a plugin yourself within minutes if you prefer; that's literally all it does. Or you can pin that note to the sidebar and skip plugins entirely, up to you really.
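
If you'd rather roll it yourself, the whole plugin is basically this (a sketch, not the actual plugin I use; `Home.md` stands in for whatever your root note is called):

```ts
import { Plugin, TFile } from "obsidian";

// Sketch of the "homepage" behavior: once the workspace layout
// is ready, open the same root note in the active leaf.
export default class RootNotePlugin extends Plugin {
  async onload() {
    this.app.workspace.onLayoutReady(async () => {
      const file = this.app.vault.getAbstractFileByPath("Home.md");
      if (file instanceof TFile) {
        await this.app.workspace.getLeaf().openFile(file);
      }
    });
  }
}
```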

> How can I get those lists?

You need to be able to embed queries into your notes. Either you use Bases (a first-party plugin) or Dataview (a third-party one). The latter is a little more ironed out as of now, so I keep using it (though I'll probably migrate eventually). For the first two lists, you create queries that simply look at a file's creation/modification time. For the third, Obsidian lets you "star" a note, so you query for that.
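
The embedded queries can stay tiny. With Dataview, a "recently modified" list is roughly this (the `LIMIT 10` is arbitrary; swap `file.mtime` for `file.ctime` to get recently created instead):

```dataview
LIST
SORT file.mtime DESC
LIMIT 10
```

And the starred list is just a filter on Dataview's `file.starred` field:

```dataview
LIST
WHERE file.starred
```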

reply
That’s a great breakdown, thank you. The “root note as a homepage” plus three lists feels like the simplest re-entry surface.

Quick question: do you keep those lists purely time-based (recently updated/created), or do you also include any “active project” signal (e.g. notes linked from a project hub / kanban) so the homepage reflects what you’re actually working on rather than what was last touched?
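
In Dataview terms I’m imagining something like this hypothetical query, with “Project Hub” standing in for whatever your hub note is called:

```dataview
LIST
FROM outgoing([[Project Hub]])
SORT file.mtime DESC
```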

reply
Appreciate the clear boundary.

If we reframe it as non-generative assistance (pure local indexing + better retrieval, no writing), would that still be a “no”, or is the hard line specifically about model processing?

reply
In theory I have nothing against some sort of tiny local LLM that indexes them and makes search a little more powerful, but to be honest, if the feature is advertised as "AI", I doubt I'll care enough to look into the details.

I'd rather take a dumb "synonyms" plugin that I have complete control over and that renders results "instantly" than invoke any sort of LLM where I have to wait more than 3 seconds for a result.
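
Something as dumb as this would do (a sketch, with a made-up alias table; the point is it's deterministic, instant, and entirely mine to edit):

```ts
// Hypothetical alias table; could live in plain JSON the user edits directly.
const synonyms: Record<string, string[]> = {
  meeting: ["standup", "1:1", "sync"],
  todo: ["task", "action item"],
};

// Expand a query term into itself plus its aliases:
// deterministic, no model, no waiting.
function expandQuery(term: string): string[] {
  const key = term.toLowerCase();
  return [key, ...(synonyms[key] ?? [])];
}

// expandQuery("meeting") -> ["meeting", "standup", "1:1", "sync"]
```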

reply
That makes total sense. Latency kills the workflow, and “advertised as AI” carries baggage even if the underlying technique is harmless.

One nuance: the way I’m thinking about this isn’t “you type a query and wait for an LLM”. It’s more like local indexing/pre-computation so retrieval is instant, and any heavier processing happens ahead of time (or during a scheduled review) so it never blocks you. Then you can consume it as a pull-based view or a tiny daily digest—no interruptions, no spinning cursor.
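
As a sketch of the shape I mean (not a real implementation): all the work happens when the index is built, and the query path is a plain map lookup, so it feels as instant as a synonyms plugin:

```ts
type NotePath = string;

// Build ahead of time (e.g. on vault load or during a scheduled review),
// so nothing heavy ever runs while you're searching.
function buildIndex(notes: Map<NotePath, string>): Map<string, Set<NotePath>> {
  const index = new Map<string, Set<NotePath>>();
  for (const [path, text] of notes) {
    for (const term of text.toLowerCase().split(/\W+/)) {
      if (!term) continue;
      if (!index.has(term)) index.set(term, new Set());
      index.get(term)!.add(path);
    }
  }
  return index;
}

// Query time: a single lookup, no model call, no spinner.
function lookup(index: Map<string, Set<NotePath>>, term: string): NotePath[] {
  return [...(index.get(term.toLowerCase()) ?? [])];
}
```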

If you could pick one: would you prefer a deterministic synonyms/alias layer (instant, fully controllable), or a local semantic index that improves recall but still feels “tool-like” rather than “AI”?

I’m exploring a similar local-first, low-noise approach (more context in my HN profile/bio if you’re curious).

reply