A note is not an intention. It commits to memory, not to action. I really don't care about having a whole searchable, tagged database; I hardly ever look at those notes again.
At work I have topic-based Markdown notes. Sometimes I collect information about a topic for a few weeks or months, and eventually turn it into a proper guide (making guides is my job).
I also LOVE paper notebooks, because they become a beautiful timeline of sketches, to-do lists, thoughts and plans. When I finish a notebook, I scan it, then throw it in a drawer.
I also use Obsidian daily notes to journal, mostly because it's easier to open an app than to write in a notebook. I don't do anything special with those notes, unless I'm trying to "debug" something happening in my life.
[0] https://strangestloop.io/essays/things-that-arent-doing-the-...
It seems you're more accustomed to treating stored information as memories rather than as a knowledge base. That's perfectly fine. Personally, though, I believe the progress humans (and now AI) have made to reach this stage stems largely from accumulated knowledge, combined with the evolution driven by new challenges.
It's only useful in the context of understanding myself.
“Knowledge base” can imply objective truth and completeness, which creates pressure. For a lot of us, what we store is really a snapshot of attention, curiosity, anxiety, and identity at a moment in time — more like a personal log than a database.
One framing that helps me is: the archive isn’t “truth”, it’s “evidence of what I cared about”, and it’s only useful when it reduces friction for a real moment (re-entry, reflection, or a concrete next step). Otherwise it’s just noise.
Do you find it more useful as a mirror (patterns about yourself), or as a tool (helping you make decisions / take action)?
In other cases these notes are a complement to my photos. They show a different aspect of my life at a given moment. My photos don't show that on September 4, 2015, I had a massive crush on someone. I have built a timeline that combines my journals, photos, sketches, geolocation, Google searches and other things. It reveals a far more nuanced picture of me at a given time.
I also have more technical notes. Those are a bit of a "collection of facts". It's a bit like putting all the parts on the table, and slowly organising them into a coherent structure. This is how I approach bigger topics before I understand them fully. Then my notes act as a sort of medium-term memory. When I finish a project, I usually have a bunch of leftover notes and todos that I don't intend to ever finish. That's why I say that notes are not an obligation.
The “notes aren’t an obligation” line also resonates — treating them as medium-term memory rather than a forever archive removes a lot of pressure.
When you finish a project and you have leftover notes/todos you don’t intend to finish, do you actively prune/close them (mark done/obsolete), or do you just let them fade and trust that what matters will resurface naturally?
I also recommend reading this comment:
I even got to a point where I made an "anti-memory system" - an MCP tool that just calls a model without context from the full conversation or past conversations, to get a fresh perspective. And I instruct the host model to reveal only part of the information we discussed, explaining that creativity is sparked when LLMs see not too much and not too little - like a sexy dress.
When it comes to stimulating AI creativity, it may indeed be better to impose fewer constraints. However, in most scenarios, problems are likely still solved through simple information aggregation, refinement, analysis, and planning, right?
I hate the concept of a “second brain” but this isn’t necessarily true. Those guides you make at work seemingly precede the action of someone else, right? And even if the notes you write for yourself aren’t needed afterwards, don’t they contribute to “doing the thing”?
1. most recently updated notes
2. most recently created notes
3. notes I've added to my favourites
On top of a search bar that doesn't suck, this is pretty much all I ever need.
As for "AI", it's never going anywhere near my notes. It's supposed to be my second brain filled with content I've bothered to write down for myself. "AI" doesn't write it, "AI" doesn't process it, and I will never be convinced to change that. I use "AI" extensively, but this is a hard line that I will never cross.
I would like AI to tell me "this note looks like an incomprehensible brain dump and needs review before you forget today's meetings"
Yes, I have two vaults (one work-oriented, one completely personal) and frequently switch between them. Whenever I do so, I use a homepage plugin that always opens the same "root" note. You can vibe-code this plugin within minutes if you prefer, it's literally all that it does. Or you can have that note pinned to the sidebar and skip the plugins entirely, up to you really.
> How can I get those lists?
You need to be able to embed queries into your notes. Either you use Bases (first-party plugin) or Dataview (third-party plugin). Dataview is a little more ironed out as of now, so I keep using that (but will probably migrate in the future). For the first two lists you create queries that simply look at a file's creation/modification time. For the third one, Obsidian gives you an option to "star" a note, so you query for that.
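Outside Obsidian, the first two lists are just a sort on file timestamps. A minimal Python sketch of the same idea (assuming a vault is simply a folder of `.md` files; note that on Linux `st_ctime` is inode-change time, so "created" is only an approximation there):

```python
from pathlib import Path

def recent_notes(vault: str, by: str = "modified", limit: int = 10) -> list[str]:
    """Return the most recently modified/created notes in a vault folder."""
    key = {
        "modified": lambda p: p.stat().st_mtime,
        "created": lambda p: p.stat().st_ctime,  # approximate on Linux
    }[by]
    notes = sorted(Path(vault).rglob("*.md"), key=key, reverse=True)
    return [str(p.relative_to(vault)) for p in notes[:limit]]
```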
Quick question: do you keep those lists purely time-based (recently updated/created), or do you also include any “active project” signal (e.g. notes linked from a project hub / kanban) so the homepage reflects what you’re actually working on rather than what was last touched?
If we reframe it as non-generative assistance (pure local indexing + better retrieval, no writing), would that still be a “no”, or is the hard line specifically about model processing?
I'd rather take a dumb "synonyms" plugin that I have complete control over and that renders results "instantly" than invoke any sort of LLM where I have to wait more than 3 seconds for a result.
One nuance: the way I’m thinking about this isn’t “you type a query and wait for an LLM”. It’s more like local indexing/pre-computation so retrieval is instant, and any heavier processing happens ahead of time (or during a scheduled review) so it never blocks you. Then you can consume it as a pull-based view or a tiny daily digest—no interruptions, no spinning cursor.
If you could pick one: would you prefer a deterministic synonyms/alias layer (instant, fully controllable), or a local semantic index that improves recall but still feels “tool-like” rather than “AI”?
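To make the first option concrete, a deterministic alias layer can be very small: query words are expanded through a hand-maintained table before a plain substring search, so results are instant and fully predictable. A sketch (the table entries and note contents below are made up):

```python
# Hand-maintained alias table: you control every expansion, nothing is inferred.
SYNONYMS = {
    "js": {"javascript", "ecmascript"},
    "pkm": {"knowledge base", "second brain"},
}

def expand(query: str) -> set[str]:
    """Expand each query word through the alias table (deterministic, instant)."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms |= SYNONYMS.get(word, set())
    return terms

def search(notes: dict[str, str], query: str) -> list[str]:
    """Return note names whose text contains any expanded term."""
    terms = expand(query)
    return [name for name, text in notes.items()
            if any(t in text.lower() for t in terms)]
```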
I’m exploring a similar local-first, low-noise approach (more context in my HN profile/bio if you’re curious).
It is basically a server/data store plus client agents; currently the agent is only for Linux end-user devices. The agent records evdev events (keystrokes/mouse movement), the currently active window, clipboard history, shell commands issued, and browsing behaviour. It runs as its own user, and different functionality is compartmentalized into separate processes. Data is encrypted at rest. I'm still looking into how best to handle sensitive data in memory at runtime.
It stores these events in a persistent queue on the clients and one-way syncs it to the server. If a client is offline for a bit it syncs it when it comes back online. As such, I am also trying to minimize storage used.
The idea is that rather than manually linking everything (which gets overwhelming, e.g. with Obsidian), locality of reference seems more useful as a baseline. In this data set, links by time are valued the most. In the future I'd also like to add a screenshot/video feature using hashes and perceptual hashes, or an RDP-like approach, to store as little data as possible.
For now I'm mostly in the architecting phase, but I do have an early working version. Really looking for suggestions architecture-wise too. So far I came up with my own binary format to save events on the clients, but I'm unsure if it's the right way to go. There are many challenges to think about, such as changing hardware configuration (a display plugged in), protecting against statistical analysis (e.g. keystroke bursts), deleting data across sources if required, and making sure the system can run smoothly for a decade.
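On the custom binary format question: one common mitigation for schema evolution is a small self-describing frame per event. A hedged sketch (the field sizes and event-type byte are illustrative, not a recommendation): a format-version byte up front lets old readers skip or reject records they don't understand, and a length prefix lets a reader stop cleanly at a truncated tail after a crash.

```python
import io
import struct

# Frame: 1-byte format version, 1-byte event type,
# 8-byte timestamp (ns), 4-byte payload length, then the payload bytes.
HEADER = struct.Struct("<BBQI")
VERSION = 1

def write_event(buf, etype: int, ts_ns: int, payload: bytes) -> None:
    buf.write(HEADER.pack(VERSION, etype, ts_ns, len(payload)))
    buf.write(payload)

def read_events(buf):
    """Yield (version, etype, ts_ns, payload); stop cleanly on a truncated tail."""
    while True:
        head = buf.read(HEADER.size)
        if len(head) < HEADER.size:
            return
        ver, etype, ts_ns, n = HEADER.unpack(head)
        payload = buf.read(n)
        if len(payload) < n:
            return  # partial write at the end of the file
        yield ver, etype, ts_ns, payload
```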
Actually, I'm not an expert in this area, but I feel the challenge may not lie in data collection itself, but rather in ensuring the data remains secure, usable, and easy to maintain over many years.
A custom binary format can work, but it could be a long-term maintenance commitment (schema evolution, tooling, corruption recovery).
I collect interesting links/pages/stuff by emailing myself notes about them. I never actually _do_ anything with these notes, but from time to time I open the "Notes To Self" folder and skim through them. Anything that seems worthless I delete, anything that seems obvious I delete, the rest just sit there.
And that's more useful than you'd think - by reviewing them semi-regularly, you're indirectly memorizing their contents and refreshing their presence in your short-term memory. And that to me is the benefit - not "copy this cool thing", but "feed my mind cool ideas until it has digested them and incorporated them into the gestalt".
So for me, an AI that suggests stuff would be annoying. An AI that could take some vague search terms plus my history and pull old information out of notes that don't necessarily contain the keywords I enter might be useful. For example, I may remember I happened across a design for the DSP algorithms in guitar pedals, but the URL or note may not even mention DSP, so something that could turn a search for "guitar pedal DSP" into a link for an audio processing web page I visited would be useful. The AI would probably have to scan all the web pages I visit to store enough context to be useful for a search like that. Doing this for 20 years or more might run into some scalability/cost issues.
Context, long-term memory, and storage issues are indeed challenges currently facing AI and large language models (LLMs), and they're not unique to the scenarios we're discussing. However, with technological advancements, I believe these issues will eventually be resolved.
Based on your requirements, it seems you'd need to grant the AI fairly broad access and privacy permissions. That said, given what you've already stored, the fuzzy retrieval you describe should be feasible.
It's impressive that you can recall exactly where to find your notes! That said, active retrieval still requires you to remember what you're looking for. I think your fuzzy retrieval concept shares some similarities with what I'm aiming for—where the AI offers suggestions based on your intended actions, even when you haven't consciously decided what to search for. Wouldn't you welcome that kind of assistance?
Google used to have one IIRC when they still sold the local Google search appliance and it was glorious.
The key is using it to solve problems you actually have, rather than problems you want to have.
I was losing track of people's contact details --> I made an addressbook in obsidian.
I wanted to track my exercise to find out how much I was running each week --> make graphs
And so on. Your obsidian should get a bit messy before you try to impose order on it. Use it to solve a problem badly (Just writing down how far I run in a daily diary note) then improve (Writing a query to turn all of those notes into a graph).
Personally I don't use any AI with my knowledge base. Good searching tools and a little bit of organization are the most useful thing for me.
Personally, I think keeping lots of notes/links is a kind of digital hoarding. Just like real hoarding, it's an emotional problem not an organizational problem. If you can work out what emotional need hoarding links is fulfilling for you then you're on the way to working out how to get that emotional need fulfilled by something else.
One thing I’m trying to learn is whether the “fix” is actually less intake / better filters (so you don’t hoard), versus better retrieval/action tooling after the fact.
For you personally: what has helped more — changing the capture habit (rules/quotas/digests), or having a ruthless review/delete loop? And what triggers you to save in the first place (fear of forgetting, future usefulness, perfectionism, etc.)?
On the other hand, for work that I do day to day, I do take notes and those are a different type. Those are tied to actions I'm taking and I'll sprinkle them with actual to-do lists that I check off in the notes. I'll link ideas that are related and document things, but for my own projects, I don't try to make it too formal or strict. The notes aren't the goal, they're sort of a scratchpad for day to day operations.
I'm also very curious what makes a retrieval moment “great” and how often it happens. If someone could help you increase the likelihood of it happening, would you find that valuable?
1. I actually saved it in the one place I should save links
2. The thing that I remember seeing actually fits with my current train of thought or understanding
3. I tagged the link meaningfully.
I have tried a lot of twin brain, second mind, etc etc. but I find myself doing the same thing no matter what layer is on top of the link fetching.
When you tried the twin brain/second mind approaches, what specifically failed for you? Was it capture overhead, inconsistent tagging, not knowing where to put things, or simply that nothing resurfaced at the right moment without you searching?
Also, what did “tagged meaningfully” look like in your system — topic tags, project tags, or “why I saved this” tags?
I’m exploring an approach centered on “active targets/projects as the context signal” to improve resurfacing without more organization work (more context in my HN profile/bio if you want to compare).
One of the things I keep is a list of what I did during the day, for when the standup call comes. Sometimes I forget to mention a lot of stuff.
Then a few weeks ago I built the MVP of a note-taking app specifically for this purpose: what did I do, what am I doing next, am I blocked by something?
No backend, data is stored in browser local storage, quick to load, a weekly summary and data export. No tracking whatsoever.
Not ready yet, which is why I haven't done a Show HN yet, but it has been useful to me even in its current state.
It lives at https://tinyleaps.app
If you're open to helping, check my bio for more.
Focus is supposed to mean you have a clear idea of who you are and what you need to work on, and also what you don't.
So I've taken to follow a (bespoke) process where I identify what my own personal principles are, and what priorities and efforts they imply. Then, of all the "oh I could/should do this" potential tasks that occur to me, I have an out: if it doesn't align with my own personal focus, then I can delete it.
One idea I’m exploring with *Concerns* is making that constraint explicit: when you set “active goals/projects”, you can only keep a *small fixed number* (e.g. 3–5). Anything else becomes “not active”, so the system won’t surface it or turn it into tasks.
Curious: what’s your number—3, 5, or 10—and what rule do you use to decide what gets to be “active”?
I also have similar thoughts on turning writing into action and re-entrance, would be interested to hear your thoughts:
https://blog.sao.dev/2025-threads/
This has proven to work well for me, but I’m chafing with git and agentic coding abstractions and looking for a unifying concept. Agent of empires doesn’t feel quite right, but is in the right direction.
One thought: it also gives you a natural closed-loop signal — Log + DoD changes are the feedback, not vague “AI memory”. A tool could surface notes as *options* only when a thread is active, then write back only via diffs to `Next steps` / `Log` (human-approved), keeping it deterministic.
For the git/agentic abstraction itch: what do you want as the single source of truth — the thread file, issues, or git events?
Something that did work well recently, was creating a node script to gather all text under a given wiki link and copy to a doc with some formatting modifications, and then feed the document to an LLM for consolidation and a summary of everything I have recorded for a given subject.
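A Python equivalent of that gather step might look like this (a sketch; it assumes a folder of `.md` files and standard `[[wikilink]]` syntax, including `[[Topic|alias]]` forms, and produces one document to paste into an LLM):

```python
import re
from pathlib import Path

def gather(vault: str, topic: str) -> str:
    """Collect every paragraph mentioning [[topic]] across the vault,
    prefixed with its source file, ready to feed to an LLM for a summary."""
    link = re.compile(r"\[\[" + re.escape(topic) + r"(\|[^\]]*)?\]\]", re.IGNORECASE)
    chunks = []
    for path in sorted(Path(vault).rglob("*.md")):
        for para in path.read_text(encoding="utf-8").split("\n\n"):
            if link.search(para):
                chunks.append(f"## from {path.name}\n{para.strip()}")
    return "\n\n".join(chunks)
```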
If you were to take this one step further, would you want the output to be:
1. a consolidated brief you can re-enter later, or
2. a small set of next-actions / open questions extracted from the brief?
This “export + consolidate” pattern is very close to what I’m exploring (details in my HN bio/profile if you’re curious).
A simple example is YouTube. I save videos to watch later because I am not in the right headspace at the time. Then I avoid them completely. I think part of the resistance is that I know watching them properly will demand attention and probably lead to follow-up work, and I am rarely in a mode where I want that interruption.
I have thought about the whole “second brain” idea, but I worry it would just become a dumping ground. Nothing would really resurface when it actually matters. I would mostly be relying on myself to remember that I once made a note about something when I happen to be working on a related problem.
Lately I have been thinking about the idea of a passive, radio style feed that summarises the information I have collected and plays it back to me, so I can at least consume each item once.
You see those TV shows about people who hoard. They cannot throw things away because they might be important one day. This feels uncomfortably similar.
Maybe the real problem is not how we store information. Maybe it is that we aren't filtering hard enough on what is actually worth keeping in the first place.
I think you’re pointing at two separate problems that get tangled:
1. Re-entry: how to resurface the right item when you’re actually in the right mode
2. Filtering: deciding what’s worth keeping so the backlog doesn’t become guilt
The “radio-style passive feed” is interesting because it changes the contract: you’re not promising yourself you’ll do deep work, you’re just letting the system replay what you captured at a low cognitive cost. If it worked, it could also become a filter: only the stuff that still feels valuable on playback deserves a second pass.
One question: if you had a “listen/read later” mode, would you prefer it to be time-based (10 minutes a day) or context-based (only when you mark yourself as in a “curious/exploration” headspace)?
Details in my HN profile/bio if you want to compare this to the “active projects + pull-based resurfacing” angle I’m validating.
That headspace is not always there. A good example is when I sit down to watch something on a streaming service and end up browsing for ages instead of committing to anything. In theory, that would be a perfect moment to actively review things I have saved, but in practice I am not convinced my neurodiverse brain would reliably cooperate.
So to your question, I think I would lean much more toward a context-based mode than a time-based one. A fixed daily slot would quickly turn into another obligation. A lightweight “I am in curiosity mode right now” switch feels closer to how my brain actually works, especially if the radio-style playback keeps the cost of re-entry low.
If I were to design around your constraints, it would look like:
* a manual toggle for “curiosity mode”
* a queue that plays 1–3 small “snack” insights (not full summaries)
* and a single “save this to revisit” action that you can do in 1 second, so you don’t lose it while driving
One question: when you hear something interesting in that mode, what’s the most natural next step for you later—open the original link/video, add it to an “active project/topic”, or capture a single note like “try X / look up Y”? (More context on the direction I’m validating is in my HN profile/bio if you want to compare.)
- Workflowy is great for taking notes in meetings, allowing ad-hoc moving things around. It's also great for reference material (what was that long SQL query I use?). But yes, it's also a graveyard.
- AirTable worked somewhat to keep moving projects forward, without growing unbounded. But only when there is a workflow. That looks like: dump tasks into rows, then create the steps as views of those tasks with different filters. So tasks essentially move systematically from uncategorized, no time estimate, no schedule, to getting tagged with all of that, and then I can narrow it down to see just what’s on the agenda for today’s date. I also have it show the sum of estimated time per date, because I inevitably end up scheduling 30 hours of tasks for a day, so that helps keep me honest on what’s achievable. I did the same thing in Workflowy with custom JavaScript but AirTable seemed more effective for this. Tasks also get linked to project buckets, and I basically then just try to keep every bucket moving forward (don’t let any active bucket get starved).
- I could throw all of this into an LLM and have it tell me what I should be working on, remind me about what I'm forgetting, and so on. But I'm basically not interested, because I'd have to give it additional context beyond what I'm interested in or allowed to share. Like, I'll ask a generic question for advice from an LLM, but if an LLM is going to remind me to "call Robert about Project Mayhem", then it needs to know about Robert and Project Mayhem.
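The per-date estimate sum described above is easy to reproduce anywhere. A minimal sketch of the "keep me honest" check (the 6-hour daily budget is an arbitrary example):

```python
from collections import defaultdict

def overbooked_days(tasks, budget_hours=6.0):
    """tasks: iterable of (date, estimate_hours) pairs. Return {date: total_hours}
    for days whose scheduled total exceeds the daily budget."""
    totals = defaultdict(float)
    for date, hours in tasks:
        totals[date] += hours
    return {d: h for d, h in totals.items() if h > budget_hours}
```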
On the LLM point, I agree with your hesitation. For anything that touches real people/projects, the default needs to be privacy-first: either local-only, or scoped so the model never needs sensitive identifiers. One approach I’m exploring is separating “private entities” from “public knowledge”: let the system operate on generic project states and action types, and only you see the real names. Another is: no pushy assistant at all—just a pull-based daily view that helps you move buckets forward with the workflow you already trust.
If you had to pick one improvement that doesn’t require sharing sensitive context, would you want:
1. better workflow scaffolding (turn uncategorized into scheduled + estimated + bucketed faster), or
2. a way to attach lightweight “done/decision logs” back to tasks so the graveyard stops growing?
Details in my HN profile/bio if you’re curious what I’m validating.
Two things I’m curious about:
1. When you say you “mainly slip up when I write something in the wrong document” — is that mostly a friction/UX issue (too many similar places), or a missing “active project” surface that tells you where you are right now?
2. In grad school mode, what changes fastest for you: the set of active projects, or the kinds of inputs (papers/notes/emails/reading list) you’re trying to connect?
I’m exploring a goal-first workflow where you keep a small number of active targets/projects and let that drive re-entry and resurfacing (details in my HN profile/bio if you’re curious).
My understanding is that Obsidian is pretty similar? The point of my PKM isn't to turn my notes into shipped things. The point of my PKM is that when I do want to work on something, I don't have to repeat all my old mistakes to get back to where I was before, or reinvent all my own wheels.
And yes, Logseq/Obsidian-style wikilinks are really good at building that personal context graph. The thing I’m trying to validate isn’t “everyone should convert notes into tasks”, it’s whether there’s a subset of people who also want help with re-entry when they do decide to work on something: resurfacing the few most relevant past notes/links/emails/posts for the current project, in a way that stays lightweight and doesn’t require changing their PKM.
For your workflow, what’s the ideal re-entry experience when you pick up a topic again:
1. a “brief” that consolidates what you previously learned (with links back), or
2. just fast navigation/recall via links and search (no consolidation), or
3. something else entirely?
Details in my HN profile/bio if you’re curious what I’m validating.
Out of curiosity: do you find Logseq’s block hierarchy alone is enough for re-entry, or do you still rely heavily on consistent wikilink naming/tags to avoid the “I swear I linked this but used a different term” problem?
Details in my HN profile/bio if you want the angle I’m exploring around minimizing organization overhead while improving re-entry.
Finally extracted the data for these hashtags and fed it to an LLM to organize. I'm happy with the result https://xenodium.com/film-tv-bookmarks-chaos-resolved
Curious: what part created the most friction before you automated it — capturing, tagging consistency, or resurfacing when you actually want to watch something?
Also, after the LLM organized it, did you find yourself maintaining that structure, or do you plan to re-run organization periodically?
I "hoard" ideas and articles because it's a good way for me to offload them from my brain
As a designer, I absolutely DO scroll through my swipe files once in a while to get inspiration; sometimes I'll also go through saved github repos to borrow an implementation
E.g. that's how I ended up using a lot of libraries like Immer, Svelte, ended up loving Observable / d3js, etc.
Idk about all y'all, but notes are absolutely useful for me.
Concerns isn’t trying to turn everything into more notes. The premise is: most useful context already exists in lots of places (articles, emails, posts, links, repos), and forcing all of it into a single note system is extra work. I’m exploring a connector approach where those sources can stay where they are, and the system only surfaces/reconnects the right items when they’re relevant to an active project — without you having to rewrite them as notes.
Curious: for your workflow, what’s the most valuable “resurface moment” — when starting a new project, when you’re stuck mid-way, or during review/polish?
I organize notes by tags, folders, and links from a tree of "map of content" notes, all documented as rules for the AI. New notes land in an "Inbox" folder, and from time to time I run a special script that checks the inbox, formats the notes, tags them, and puts each in the most appropriate place. "git diff" to check the results and fix mistakes; reset if it went wrong.
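A stripped-down sketch of what such an inbox-routing rule set could look like (the patterns and folders are made up; the point is that a small, explicit rule table stays auditable via `git diff`):

```python
import re

# Hypothetical rule table: first matching pattern wins.
RULES = [
    (r"\bmeeting\b", ("#meeting", "Work/Meetings")),
    (r"\brecipe\b", ("#cooking", "Personal/Recipes")),
]

def route(note_text: str):
    """Return (tag, destination folder); unmatched notes stay in the inbox."""
    for pattern, (tag, folder) in RULES:
        if re.search(pattern, note_text, re.IGNORECASE):
            return tag, folder
    return None, "Inbox"
```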
Because the notes are organized by a limited number of well-defined rules, they became easy for AI to search and navigate. Claude Code easily finds requested notes, working as an advanced search engine, and they became a starting point for "deep research": find relevant notes, follow links, detect gaps, search the internet. Repeat until the required confidence level is reached.
The most advanced workflow so far is the combination of TRIZ (Theory of Inventive Problem Solving) + First Principles Framework. The former generates ideas and hypotheses; the latter validates them and converges on the final answer.
Also +1 on “limited rules” being more important than fancy models — once the structure is predictable, Claude becomes a genuinely useful retrieval and research assistant.
Out of curiosity, where does the loop close for you today? After “deep research” produces a conclusion, do you write back into a MOC / decision log / project note, or does it mostly live in chat + commits? I’m exploring a similar loop (context → retrieval → suggestion → human review → write-back) with the same constraint: keep it auditable and reversible (diff-friendly). Details are in my HN profile/bio if you want to compare approaches.
My main problem with bookmarks/notes is that I forget about them. I don't need a bookmark-keeping service, I need one that brings them forward when I look for something, based on context too. Something that also makes a plain-text, searchable snapshot of the page.
It's an opinionated app so it might not fit everyone's needs but that's my dream productivity app: LLM*(notes+tasks+rss+flashcards+routines). So basically an all in one app with LLM actions and workflows. no subscription, optional cloud service (can be self hosted too).
Here's a very early landing page: https://getmetis.app
Do you think the better fix is stronger filtering at capture time (keep less), or a lightweight resurfacing habit (e.g. a weekly 10-minute review / 1–2 items per day digest) so more of it gets a fair second look?
I’m exploring this exact “offload vs resurfacing” problem (more context in my HN profile/bio if you’re curious).
Out of curiosity, what’s the bigger win for you: full-text search itself, or the tagging/metadata layer that helps narrow results when your memory is fuzzy? And do you mostly search by keywords, or by “context” (project/topic you’re working on)?
I’m validating a similar retrieval-first angle (summarized in my HN profile/bio if you want to compare notes).
I don't manually tag any entries - the automatic AI tags just add extra keywords I can search for that are not included in the original article text. So I mostly search by keywords, yes. Not sure what the difference is between "keywords" and "topic you're working on".
See also https://mymind.com, which takes the AI tagging even further. Potentially similar to what you're building (although, again, your landing page contains a lot of AI generated metaphors and nothing that explains what your product actually does)
This idea stems from my own pain points, and I genuinely hope that while solving my own issues, it might also address broader needs.
Regarding your response: It's interesting that AI tagging primarily aids by adding extra searchable keywords. However, I'd prefer broader content and semantic search/matching capabilities without relying solely on tags—though tagging remains a viable implementation approach. Thanks for the mymind reference—I'll explore it.
PS: Did my comment come across as AI-written because I used translation software?
Are you using an LLM-based translation tool? I perceived your comment as AI mostly based on the first paragraph:
> That makes sense — treating it as a personal search engine is a real, high-ROI use case. Full-text search covers the “I remember the idea but not where I saw it” problem really well.
This is very much an LLM-style "That's a great idea!" type response. I usually don't even notice when something is LLM generated, but this part really stood out even to me.
The mymind you recommended has made significant strides toward tackling “information overload” and “organization fatigue.” However, I feel it remains fundamentally a storage solution—reducing the effort of organizing and facilitating retrieval—but doesn't directly align with my target.
It also reminds me of another product, youmind (https://youmind.com/), though it's primarily geared toward creation rather than PKM. Perhaps I could pay to try its advanced AI features.
Not sophisticated, but it moves me forward.
Stuff that is relevant to things I am currently busy with is recent, like the last couple of weeks. Stuff that I don't remember touching in those weeks gets deleted.
For the non-note stuff, do you have a “recently touched” equivalent, or do you rely on different rules (e.g. archiving/search for email, starred threads for chat, etc.)?
[0] https://thalo.rejot.dev/blog/plain-text-knowledge-management
I also like the idempotency/provenance angle (unique IDs/links, checkpointing) and the “commit processing” workflow — that’s a concrete example of turning ongoing work into structured, queryable knowledge without a ton of manual ceremony.
Curious: in practice, what improved your quality the most — schema validation catching broken links/types, or constraining the entity set so the agent can’t invent structure? Also, do you find yourself actually using the query language day-to-day, or is it mostly agent-driven retrieval?
I’m exploring a similar closed-loop (context → retrieval → suggestion → human review → write-back) and summarized my angle in my HN profile/bio if you want to compare approaches.
At work my 2nd brain is Confluence. Most info goes into shared spaces.
Often search doesn't find what I need easily, so searching becomes a context switch (a sub-mission).
Rovo can be helpful though. Where I work has a good culture of documenting things, which helps.
2. What best represents “active project context” for you today?
Jira task status 80%; plus Slack save-for-laters 15%; then a Confluence todo list 5%.
Which one would you actually allow a tool to read?
All of them
3. What’s your hard “no” for an AI that suggests actions from your notes/links? (pick 1–2)
Activation energy to get a 3rd party AI approved in my org for compliance is enormous. Plus we dogfood our own.
It won't happen until you become the next Cursor or Lovable, and even then maybe not. (We can't run CC lol!)
On the compliance point: totally fair. To clarify, I’m not assuming a company-wide deployment — I’m primarily thinking about a personal tool/workflow where you control what it can read (and for many people that means local-only or only non-sensitive sources). Your environment is a good reminder that “enterprise-ready” integrations are a different game.
If you could improve your personal workflow, what would save you the most time: pulling the right Confluence page when a Jira task is active, extracting a short “what’s the current state + next step” from scattered Slack threads, or something else?
More context on what I’m validating is in my HN profile/bio if you’re curious.
So with compliance even connecting a tool I download to an approved LLM is difficult. I need to get approval. If the tool is just a tool and doesn't use AI (and thus send out private data) it is easier. I think that is a problem they should solve i.e. give a safe LLM endpoint and let me choose my tools but alas.
I think what saves me time is difficult to say. Well-organized docs, OR an AI that can do that at AGI levels of intelligence. Fuzzy isn't helpful (I already have lots of fuzzy options). I need bulletproof correct info.
The pain isn't in the clicks to find info it is in understanding what I am reading and if it is relevant.
Something like this can be somewhat useful (not saying I would pay though!)
I would like to have 1000 or so vetted docs (can manually or AI vet). E.g. public API doc > internal API doc > internal RFC > some guy's internal note they made public.
Take RFC-and-higher links and surface the ones I need for the project. Chuck them in the Jira ticket.
That would be handy. But it isn't my biggest problem. So not sure how that squares up. With AI I can build this internally in a bespoke way (this is the general disruption AI has on any SaaS idea lol!) so not sure what sauce you would need.
The other AI problem is you are fighting the bitter lesson. By October CC might do this as a one sentence one shot.
Just to clarify my scope: I’m starting with a personal, individual workflow (toC) where you control the sources end-to-end — local files, bookmarks, email, personal docs, etc. I’m not assuming company integrations, approval flows, or “drop into Jira” as the primary surface (those are a different product/compliance game).
That said, your “vetted docs + provenance + surface into the place you already work” framing is still useful in the personal setting too: a small trusted set of sources, always show citations/snippets, and a low-friction output surface (e.g. a task/project note you already use).
If you were applying the same idea personally, what would your “output surface” be — a todo app, calendar, a project doc, or just a weekly review note?
When I say “notes/links/docs”, I mean scattered personal inputs in general: emails, chats, bookmarks, posts, documents, repos, meeting notes, etc. The problem I’m validating is re-entry: surfacing the right context when something becomes relevant, without forcing everything to be rewritten as “notes” or adding another inbox.
If you’ve dealt with this, which source is the worst “graveyard” for you (email/chat/bookmarks/posts/docs/repos...), and do you prefer better recall/search, a pull-based digest, or a review ritual that extracts 1–3 concrete next actions?
If someone could make an AI tool that takes all of my bookmarks and surfaces one or two insights from them to me per day, I.E., "hey you bookmarked the wikipedia page for this movie director, did you know one of his movies was just added to netflix?" or "Hey, you bookmarked the Kotlin website, want to try making a Kotlin project? Here are some app recommendations based on your other bookmarks..."
The “1–2 insights per day” idea is exactly the shape that feels sane to me: pull-based, low volume, and designed to create momentum rather than dump more content. I also like your examples because they’re not just summaries — they’re context + a suggested next step.
If you were to try something like this, which would you want it optimized for:
1. novelty (interesting facts / “did you know”)
2. action (a small project/task you can do in 15–30 minutes)
3. relevance to what you’re doing this week
Details in my HN profile/bio if you want the direction I’m validating.
Unfortunately I am a "write to remember" type person
On my phone I use Color Notes (Android), and 70% of my screen is a note widget. This is always in my face, so stuff I want to get done is at the top. Then I write on top of it, kind of pushing the stack downwards.
I have a Twilio number I text to remind myself of stuff in the future by minutes/hours/days.
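The SMS-reminder trick is mostly a parsing problem once the inbound webhook fires. A minimal sketch of just the parsing step, assuming a made-up `<amount><unit> <message>` text format; neither the format nor the Twilio wiring is from the comment:

```python
import re
from datetime import timedelta

# Map short unit letters to timedelta keyword arguments.
UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def parse_reminder(text: str):
    """Parse texts like '2h call dentist' into (delay, message).

    Returns None when the text doesn't match the assumed format,
    so the webhook handler can reply with a usage hint instead.
    """
    match = re.match(r"^\s*(\d+)([mhd])\s+(.+)$", text)
    if not match:
        return None
    amount, unit, message = match.groups()
    delay = timedelta(**{UNITS[unit]: int(amount)})
    return delay, message.strip()
```

A real version would schedule an outbound message `delay` in the future; the parsing is the only part that needs to be bulletproof.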
I've written a lot of random note-taking apps whether it's desktop, web, chrome extension... at some point I would like to unify them/central data storage
The “git docs dir / pkm fragment” idea is exactly the kind of wedge that feels realistic to me: a small, scoped corpus with clear boundaries, where an LLM can be useful as a collaborator (RAG, summarizing, filling gaps) without you committing your whole life to a system.
If you were to try a small fragment, what would you pick as the smallest useful scope: a single project docs folder, meeting notes for one team, or a personal “decisions log”?
There's a cost to recording what you're working on, so usually the only people who track it in a fine grained way are those that need exact numbers for billing. It's not worth the time otherwise.
There are hints to what people are working on. Connecting to a database means SQL may happen, but maybe not.
It's a big issue with personal assistant ideas in general. It's very difficult to get any real context on things. Even data that seems firm like calendar appointments, isn't in practice. Look at people's calendars, and you'll see them triple booked.
One direction I’m exploring is to stop treating context as a single ground-truth stream and instead use cheap, probabilistic hints: a small “active projects” list (explicit but constrained), plus weak signals (recent files, open tabs, issue activity), and then ask for confirmation only when confidence is low. Calendars are a great example of why “seems firm” isn’t actually firm.
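The "cheap, probabilistic hints" idea can be sketched as a tiny scoring pass; the signal names, weights, and confidence threshold below are all illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class Hint:
    project: str
    signal: str    # e.g. "recent_file", "open_tab", "issue_activity"
    weight: float  # how much we trust this signal, 0..1

def rank_projects(hints, threshold=0.8):
    """Sum weak signals per project and pick the best guess.

    Instead of silently trusting a weak guess, flag it for a
    lightweight confirmation prompt when the top score is low.
    """
    scores = {}
    for h in hints:
        scores[h.project] = scores.get(h.project, 0.0) + h.weight
    if not scores:
        return None, True
    best = max(scores, key=scores.get)
    needs_confirmation = scores[best] < threshold
    return best, needs_confirmation
```

This keeps "context" as a ranked guess plus a confidence flag, which is exactly the trade-off in the (a)/(b) question: raise the threshold for fewer, higher-precision suggestions, or lower it and lean on confirmations.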
If you were designing this, which would you rather tolerate: (a) fewer suggestions but higher precision, or (b) more coverage with lightweight confirmation prompts?
And yes — this is exactly the problem I’m outlining in my HN bio/profile if you want the longer version.
The ability to describe a workflow (or a production pipeline, or whatever you want to call it; let's say workflow) is very important in these kinds of automation systems.
You could generalise workflows such that the user is prompted to define and enforce their own flows of work, as a matter of UI/UX interaction, and see if you don't start collecting a lot of successfully executed projects...
One direction I’m exploring is a constrained “active projects” list (only a few at a time) plus lightweight workflow templates, so the system can map incoming info to a specific stage/next-step rather than spraying suggestions.
If you had to pick, what’s the minimum workflow schema that’s still useful: stages (e.g. research/draft/review), clear Definition of Done, or explicit next-action ownership? (If helpful, I wrote up the idea in my HN profile/bio.)
To answer your questions:
1. Retrieval. 90% of my notes never get touched a second time, and I can't remember them at the right time.
2. On my head + a simple task list I made.
3. Hallucinations and pricing.
Also on hallucinations: would you trust suggestions more if each proposed action came with a quoted “evidence snippet” + source link?
If you are open to it, I put some context + a short survey in my profile if you want to take a look.
> 1. Where does your “second brain” break down the most?
First and foremost, remembering to write notes or to review them. What do I want? Is there a timeline that requires things to be done before it becomes invalid or increases liability?
Secondly, remembering to do actions instead of sitting down and doing something that gives some dopamine. I decide I want to work on a project on my computer. So I go sit down in front of the computer and... I've already forgotten the project, now it's time to play a game or read Hacker News instead.
Lastly -- it's the things that I "don't know". Let's say I want to build a robotic lawnmower. There are plenty of robotic lawnmowers already but I want to build my own. I know where to find the source code (or I can make my own). I don't know where to find the tools, where to source components, or who to ask for help assembling heavy things; I don't know how to assess risks (what happens if this thing catches fire on my lawn while I'm in the kitchen? what happens if it drives into the street and hits a car? or worse, what happens if it drives into the neighbor's kids?).
> 2. What best represents “active project context” for you today?
In my head, mostly. Documents in random places like ~/Documents/<project-name>, or a todo.md in the project root. Hard to remember what <project-name> is for or when I last did anything of value for it though.
> 3.What’s your hard “no” for an AI that
If the AI does not run 100% on my machine, then it's not getting anything important. That means no notes, no personal projects that have business value. Business value includes comments or ideas to improve other people's products! I've seen too many times my comments end up turning into someone else's pay-me project and I see none of the rewards. Speaking of which, here I am giving you valuable information for free.
After that, it's pricing. If I spend $20 on a weekend project, that's fine. If I have to spend $20 for every task, then it'll be yet another project that is only ever half-finished.
As far as failure mode -- missing suggestions is rather unavoidable, isn't it? So occasional wrong suggestions would be fun as long as they're clearly shown. Something like "I'm not sure, but there might be a solution with..." would be ideal; it lets me explore knowing that it might be a wrong direction.
Re pricing: one-time purchase or BYO model makes a lot of sense for this kind of personal workflow tool.
More context on the direction I’m validating is in my HN profile/bio if you want to compare notes.
Though I don't really have a system for storing them effectively as of yet, and as someone with a strong preference for open source on my critical workflows, I never got on the Obsidian train myself. Current experiment is Silverbullet.md, because I do very much like raw Markdown and file-based notes, but that's different from having a meaningfully fleshed-out setup haha
The open-source + file-based constraint is a strong signal. One direction I’m exploring is to keep everything as raw sources (links/files) and build a local index over them, so you can pull a “what do I already have on X?” brief on demand, without needing a fully curated Obsidian-style setup.
For Silverbullet specifically: what would “meaningfully useful” look like for you as a first step?
1. better recall (fuzzy/semantic search over saved links + notes)
2. periodic resurfacing (a digest of “you saved this months ago, might matter now”)
3. extracting a lightweight summary + a few key takeaways per link
Details in my HN profile/bio if you want the longer context.
Also to clarify: I’m not focused on Obsidian specifically. “Notes” here includes anything you stash for later—notes.txt files, links, emails, chat snippets, tickets, bookmarks, random scratchpad windows. The thing I’m exploring is whether there’s demand for making that scattered reference material easier to resurface when it matters, without forcing a heavier system.
If all you want is slightly better search, what would “better” mean for you?
1. fuzzy/semantic search (find it without the exact keyword)
2. ranking by project/context (show what’s relevant to the folder you’re in)
3. cross-format search (txt + markdown + links + email/chat)
4. fast local-only indexing with zero setup
Details in my HN profile/bio if you’re curious what I’m validating, but your “grep + promote when needed” workflow is exactly the kind of counterexample I’m trying to understand.
3 would be the hardest but most useful thing. The problem is that it's scattered around different computers and networks that don't talk to each other. We could have a file in SharePoint on one system referencing a file on an SMB share on a completely different network. It's a big pain and very difficult to work with, but it's not something I expect software running on my computer with access to a subset of the information to be able to solve.
And I hear you on cross-network fragmentation — in a lot of real environments the hardest part isn’t search quality, it’s that data lives on different machines, different networks, and you only have partial visibility at any given time.
If you had to pick, would you rather have:
1. instant local indexing over whatever is reachable right now (even if incomplete), or
2. a lightweight distributed approach that can index in-place on each machine/network and only share metadata/results across boundaries?
I’m exploring this “latency + partial visibility” constraint as a first-class requirement (more context in my HN profile/bio if you want to compare notes).
The tension I’m trying to understand is that in a lot of real setups the “corpus” isn’t voluntarily curated — it’s fragmented across machines/networks/tools, and the opportunity cost of “move everything into one place” is exactly why people fall back to grep and ad-hoc search.
Do you think the right answer is always “accept the constraint and curate harder”, or is there a middle ground where you can keep sources where they are but still get reliable re-entry (even if it’s incomplete/partial)?
I’m collecting constraints like this as the core design input (more context in my HN profile/bio if you want to compare notes).
Could you tell me what the main and specific challenges or difficulties were during your implementation process?
If you could fix one limitation of the current Obsidian/PKM plugin approach, what would it be (UX, reliability, on-device, or integration with tasks)?
I’m thinking the safest/lowest-friction version would treat those TextEdit drafts as an ephemeral “inbox”: index them locally, never rename/move/touch the files/windows, and only generate summaries or “possible action items” when you explicitly ask. If it ever became noisy, it should default to doing less, not more.
Out of curiosity: would you want this as a one-command “summarize all open drafts” tool, or something you run against a selected subset (last 7 days / containing a keyword)? Also, which matters more to you: preserving privacy (strictly local) or preserving the exact TextEdit workflow (no export step)?
Details in my HN profile/bio if you want the longer idea.
Out of curiosity: what makes something worthy of going into your tickler file versus just letting it go? Is it time-bound commitments, high leverage ideas, or anything with an external dependency?
Do I struggle to turn them into actions? No.
Do I struggle to keep them organized for later reference? All the time.
Do I use Obsidian? No.
I actually use Joplin, which I switched to after deciding I needed to dump Evernote. And before then (and somewhat simultaneously), I used a pile of disorganized text files (sometimes shared via Dropbox).
If the goal is later reference (not task generation), the most useful thing to validate for me is: what “organized” means in practice for you. Is the biggest failure mode:
1. you can’t find it because you don’t remember the right keywords, or
2. you remember it exists but it’s scattered across too many places/notebooks, or
3. you find it but it’s missing the surrounding context?
Details in my HN profile/bio if you want to see the angle I’m exploring (it’s not only Obsidian-specific).
- keep an “inbox / snippets” note (or a single folder) where every orphan snippet goes
- give each snippet a short, searchable handle (one line title)
- add 1–2 lightweight links: related topics, and optionally “why it matters / when I’d need this”
Then when you’re in a top-level doc, you can embed/query “snippets linked to this topic” instead of trying to decide the perfect location for each one.
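The "snippets linked to this topic" query above can be sketched with nothing more than string handling, assuming a hypothetical inbox format of `# title` headings plus a `topics:` line per snippet:

```python
def snippets_for_topic(inbox_text: str, topic: str):
    """Split an inbox note on '# ' headings and keep snippets whose
    'topics:' line mentions the given topic (case-insensitive)."""
    results = []
    # Prefix a newline so a snippet at the very top also splits cleanly.
    for chunk in ("\n" + inbox_text).split("\n# ")[1:]:
        lines = chunk.splitlines()
        title = lines[0].strip()  # the one-line searchable handle
        topics = []
        for line in lines[1:]:
            if line.lower().startswith("topics:"):
                topics = [t.strip().lower() for t in line.split(":", 1)[1].split(",")]
        if topic.lower() in topics:
            results.append(title)
    return results
```

The point of the sketch: because each snippet carries its own handle and topic links, a top-level doc can pull in "everything tagged X" instead of each snippet needing a perfect permanent home.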
In your case, are those “things to remember” mostly time-bound (follow up, renew, schedule), or more like evergreen reference (commands, ideas, reminders)?
I just get filtered digests now. I needed less input, not better retrieval.
- task project (Todoist/Things/Reminders)
- issues/boards (GitHub/Linear/Jira)
- a doc/wiki page (Notion/Docs)
- calendar
- "in my head"
Which one would you actually allow a tool to read?
None. Unless self-hosted and open source.

Working with other people gives you good habits against hoarding because you have a sense of the audience and what might be useful to them.
We also support the kanban plugin so that works well to track and share what we're working on.
Kanban as a shared representation of “active work” also feels like the cleanest project-context signal: it’s explicit, lightweight, and already part of how the team coordinates.
Curious: in your experience with relay.md, what actually changes behavior the most?
1. social accountability (others will see messy notes)
2. having a shared kanban/project board
3. conventions/templates for how notes get promoted from “rough” to “reference”
Details in my HN profile/bio if you want more context on the “active projects as constraints” angle I’m exploring.
My cofounder actually has a bunch of skills with claude code that surface context into our daily notes (from our meeting notes, transcripts, crm, gmail, etc), but it's sort of on him to show that it is useful... so while he is still "hoarding" outside of the shared context it is with an eye toward delivering actual value inside of it.
Feels pretty different from the fauxductivity traps of solo second brain stuff.
And your cofounder’s setup is interesting because it’s not “PKM for PKM’s sake”, it’s context injection tied to an actual delivery surface (daily notes). That feels like the right wedge: the system earns its keep only if it helps someone ship something this week, not just accumulate.
Curious: what’s the single best signal that his context surfacing is “working”? Fewer missed follow-ups, faster re-entry into threads, or just less time spent searching across Gmail/CRM/transcripts?
Out of curiosity, when you look at the journal later, what do you want it to do for you most: help you remember context, help you spot patterns, or help you extract a few actionable follow-ups back into the Kanban?
Details in my HN profile/bio if you’re curious what I’m exploring around that handoff.
I also like the fact that I can just scroll down a long list of entries when I'm not sure about the exact words I used to note something down, or when I just remember the rough timeframe something happened.
1. On-demand recall & retrieval is the core pain: people capture a lot but can’t reliably resurface the right note/link at the right time; they want stronger search (fuzzy/semantic), snapshots/context, and “pull-based” recall when needed.
2. Privacy/local-first is a hard requirement for many: “no cloud, no third-party access,” ideally open-source and self-hostable; any AI must run fully on-device to be trusted.
3. Low-friction matters more than perfect organization: users prefer systems that don’t force structure or add maintenance overhead—messy-first, iterate only when a real problem appears.
4. Avoid interruption by default: many dislike proactive “AI suggestions”; they want controlled resurfacing (opt-in prompts), not constant nudges.
5. Different goals coexist: for many, notes are for memory/inspiration/reference (not turning into tasks), while others want action workflows—tools should respect both modes.
6. Cost and scalability must be predictable: long-term indexing (years of notes/history) can get expensive; pricing needs to be transparent and not “per task,” and context signals across tools are often noisy/unreliable.
One simple fix I’ve seen work is adding an explicit “done marker” ritual: when you implement something, append a one-line outcome at the top of the note (Implemented on YYYY-MM-DD + where), or move the item into a “Done/Archive” section so old notes don’t masquerade as open loops.
Do you keep notes alongside tasks (so completion is tracked), or are they mostly free-form notes without a clear “done” state?
I guess I don't really need a TODO list; what I care about more are the details. I want to capture my nuanced reasoning about a problem in case I don't address it for a while, to save myself from making a bad decision in the future (when the relevant info won't be so fresh in my mind).
I like to scope out a problem and identify some possible solutions as soon as I encounter it, which is the optimal time. I find it helps to decouple the work of 1) understanding the problem, 2) coming up with possible solutions, and 3) choosing a solution and implementing it.
One nuance I’m exploring is making that weekly “fix notes” time the primary UX: during the review, help you pick the few items worth distilling, link them to a small set of active projects, and extract 1–2 concrete next steps. Outside that window, stay quiet so it doesn’t become another inbox.
What cadence has actually stuck for you in practice: a short daily pass, or a deeper weekly review?
Two small things that often help without changing your whole setup:
- Keep a single “Meeting Inbox” note/page where all meetings land first (even if messy), and only promote items out when they turn into actions.
- Add a 2-minute “end of meeting” ritual: write 1 decision, 1 next step, 1 owner/date at the top of the note.
Curious: what’s the main reason you don’t go back—time, forgetting the note exists, or the notes don’t feel actionable when you do open them?
The question I’m curious about is what happens after the summary: do you want it to end as “good to know”, or do you sometimes want it to turn into something concrete (a bookmark tagged to an active topic, a short brief, or a next action)?
If you’re open to one more detail: how do you consume the summaries today — a daily digest you pull when you have time, or do you generate them only when you’re searching for something?
My only nuance is: “sparks ideas” is one slice. A lot of what we stash isn’t inspirational at all — it’s obligations, decisions, constraints, receipts, meeting notes, “I’ll need this later” references. So the goal (for what I’m exploring) isn’t only to spark, but to make the right thing reappear at the right moment: sometimes that’s an idea, sometimes it’s a decision you already made, sometimes it’s the one constraint that prevents rework.
Curious: in your own system, what kind of captured info most often pays off later — inspiration/ideas, or concrete reference (decisions, specs, links, context)?
One thing I’m exploring is treating chats/projects as an inbox that only becomes useful when you force a tiny extraction step: turn each chat into either (a) one concrete next action, or (b) one reusable artifact (template/checklist/snippet), and ignore the rest until you explicitly pull it.
What’s the biggest blocker for you: picking the single next step, or remembering which chat had the good idea when the moment comes? Details in my HN profile/bio if you want the angle I’m validating.
Hope this helps you.
Student + “retrieval” + “projects in my head” + “no interruptions” is a very clear profile. My understanding is that the right shape is a push notification with a “daily priority digest”, rather than a pull-based task warehouse.
What’s the one source of truth you’d be willing to use as the input for “today’s important work” — your calendar, a task list, or your school LMS/deadlines?
Details in my HN profile/bio if you want to see the idea I’m validating.
What I’m exploring is the step after summarization: take the summary and explicitly link it to an active goal/project, then force a small decision:
1. ignore it
2. save as reference for Project X
3. extract 1 concrete next action (with a reason and a link back)
If you don’t use Obsidian, what would make this actually work for you: a daily “priority digest” that you pull on your own time, or a lightweight way to attach summaries to your current projects (calendar/tasks) so they resurface later?
Details in my HN profile/bio if you’re curious.
The flow I’m exploring is: you define a small number of active targets (e.g. “ship feature X”, “prepare for interview Y”). Then when you save/read something, the system searches your existing library (notes/links/email/posts/etc.) against that target and suggests a few candidate next steps or plans that are specifically useful for that target. You pick one (or dismiss them), so it’s more “menu of options” than “AI tells you what to do”.
Example 1 (technical): target = “build a small Kotlin app”. From a Kotlin article + your saved repos, it might suggest: “start with template A”, “try library B for state management”, or “do a 30-min spike to validate architecture C”.
Example 2 (research/learning): target = “write a short brief on topic Z”. From your saved posts, it might propose: “3 key claims + 2 counterpoints”, plus a short outline you can accept/edit.
So “action” = a target-linked next step or plan proposal, chosen by you — not turning every summary into a task.
And you’re right that many people don’t collect enough personally — that’s why I’m also considering a hybrid where your own saves provide personalization, but a shared/managed collection (or public sources) fills the gaps.
In your case, would you find this useful if the output was “one good plan/next step per target” even when your personal saves are sparse, or do you prefer it to be entirely web-driven unless you opt in?
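The "menu of options" matching step described in this flow could start as crude as token overlap before any embeddings are involved; the target phrasing and item shapes here are illustrative, not the actual product design:

```python
def tokens(text: str):
    """Lowercase word set; a stand-in for real semantic matching."""
    return set(text.lower().split())

def suggest_for_target(target: str, saved_items: list[str], k: int = 3):
    """Rank saved items by crude relevance to one active target and
    return the top k as candidate context for a next-step menu."""
    scored = []
    for item in saved_items:
        overlap = len(tokens(target) & tokens(item))
        if overlap:
            scored.append((overlap, item))
    # Highest overlap first; break ties alphabetically for stability.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [item for _, item in scored[:k]]
```

The design choice this encodes: the system only ranks and proposes, and the human picks from (or dismisses) the shortlist, so sparse personal saves just mean a shorter menu rather than a wrong answer.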
But it could just be that I don't collect that kind of notes or something. Maybe this would work for somebody else.
That being said, when I first switched to Obsidian from Evernote I noticed that there is a giant community of users who use Obsidian to obsess over the perfect Obsidian setup. They don’t have any tasks to add because the only thing they “do” is micromanage Obsidian, as a hobby, to share with other hobbyists. I bet if you’re looking for an AI grift to create, this would be the group to target.
That variety is exactly why I’m trying to map the different “profiles” before building anything. Details in my HN profile/bio if you’re curious what buckets are emerging.
do you know if such a project already exists?
“semantic search via embeddings” plugins (Obsidian Semantic Search is one example)
local-first assistants via Ollama (e.g. obsidian-local-gpt)
full “RAG over your vault” projects (I’ve seen ObsidianRAG-style setups)
My takeaway so far is similar to what others said here: reliability + noise + UX are the hard parts, not just “having embeddings”. What would “good enough” look like for you: speed, recall quality, or tight integration into daily notes?
I built my own AI agent for that.
I do use it, but I have no idea if it's a habit that will stick.
What made you actually keep using it so far: low friction, reliable time saved, or tight integration into something you already check daily?
I wrote up the core loop I’m exploring in my HN profile/bio — would love to hear how your agent differs (especially: where it pulls context from, how it outputs actions, and how it avoids becoming noisy).
What I’m exploring is designing the tool around that ritual: make the review session the first-class UX, and keep everything else quiet. For example, during a scheduled review it can help you: identify the few notes worth acting on, extract 1–3 next actions, and link them to a small set of active projects—then get out of the way.
In your experience, what cadence actually sticks: daily 10 minutes, or a deeper weekly review?
Details in my HN profile/bio if you’re curious how I’m thinking about “ritual-first” design.
I encourage new tool development, I’m more calling attention to Tool optimizers who are continuously migrating task systems and obsessing over “productivity”.
A daily pen and paper journal with weekly check in would suffice.
And +1 on not rewarding tool-churn. The goal isn’t a more elaborate system, it’s a simple ritual that reliably produces real output. If a pen-and-paper journal plus a weekly check-in works, that’s already the whole game.
What does your weekly check look like in practice: are you mainly pruning (delete/ignore), distilling (rewrite what matters), or committing (pick 1–3 actions for the next week)?
1. Search & execution. The problem is that if your goal is to accumulate knowledge or take notes, and you then want to use that knowledge to do something, you have to search for it yourself, and you have to think of the text in the first place. The search may not find anything with a similar meaning to your query, or similar events, and as a result you are unable to put it into action.
2. Calendar.
3. Migration cost.
Good luck.
YAML/JSON as a local knowledge store is a nice wedge too: it keeps things portable and makes “AI suggestions” easier to reason about.
Re the waitlist: the landing page is up now (https://concerns.vercel.app), and you can leave an email there. I’ll make sure you get an update when there’s something testable.
One quick follow-up: if you could get only one capability first, would you pick
a) semantic recall (find related knowledge without exact keywords), or
b) a daily “top 3” plan generated from calendar + your knowledge store (pulled on demand, no interruptions)?
Details are in my HN profile/bio if you want more context.
Of those two, I'd choose A.
I'm looking forward to hearing more.
A few Ruby scripts help a bit, automatically cleaning them up, keeping track of their status and whatnot, but at the end of the day they are just text files really. I would not want to make this more complicated than that. My brain is kind of the real decider of what the main priority is.
Totally agree the final decider is the human. The question I’m validating is: would it help if the system could surface “inputs you didn’t think to search for” at the moment you’re deciding what to do next, without making the workflow heavier?
For example: you open your todo.txt, and it quietly shows a small “related context” sidebar like:
* “You bookmarked X related to this task 2 months ago”
* “This old note mentions the same constraint”
* “You already solved something similar in file Y”
No auto-writing, no new system, just better recall.
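The sidebar described above reduces to a read-only relevance lookup. A minimal local sketch, assuming notes are already loaded into a `{path: text}` mapping (a real version would read files from disk and use fuzzy/semantic matching instead of word overlap):

```python
# Trivial stopword list so filler words don't dominate the overlap.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "for"}

def related_context(task: str, notes: dict[str, str], limit: int = 3):
    """Return note paths sharing the most non-trivial words with a task line."""
    task_words = {w for w in task.lower().split() if w not in STOPWORDS}
    scored = []
    for path, text in notes.items():
        note_words = {w for w in text.lower().split() if w not in STOPWORDS}
        score = len(task_words & note_words)
        if score:
            scored.append((score, path))
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [path for _, path in scored[:limit]]
```

Because it only ever returns paths, it can't touch the todo.txt workflow itself; worst case it shows nothing, which keeps the "optional, local-only" promise cheap to honor.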
If that existed as an optional, local-only command, would you ever use it? Or would even that feel like unwanted complexity? (Details in my HN profile/bio if you’re curious what I’m exploring.)
A lightweight trick that doesn’t require a whole new system: When you act on something, add a single closure line at the top of the note: Done: <what happened> — <date> — <where to find it>
It turns the note from “still open?” into “closed loop” in 5 seconds, and future-you stops re-processing it.
If you had to pick one, which is more ADHD-painful for you: too many open loops, or losing track of where the finished thing ended up?
If you could improve just one thing without abandoning grep, would it be:
1. fuzzy/semantic search when you don’t remember the exact term, or
2. better resurfacing (a lightweight way to recall relevant notes when a project becomes active)?
All my notes for a project are usually under one or two org files.
An idea I've had though, which I hope somebody steals, is for a search engine that follows up all the links in my notes (and one or two degrees of separation from there), and allows me to end up in those places again next time I search for something.
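The core of that idea is just a breadth-first expansion over the link graph. A minimal sketch, assuming you've already extracted a `{note: [link, ...]}` map from your org files (the `expand_hits` name and that input shape are my inventions): start from the notes your search matched, then follow outgoing links one or two degrees out.

```python
from collections import deque

def expand_hits(hits, links, degrees=2):
    """Follow outgoing links from initial search hits up to `degrees`
    steps away, returning everything reachable within that radius."""
    seen = set(hits)
    frontier = deque((h, 0) for h in hits)
    while frontier:
        node, depth = frontier.popleft()
        if depth == degrees:
            continue  # don't expand past the requested radius
        for dest in links.get(node, []):
            if dest not in seen:
                seen.add(dest)
                frontier.append((dest, depth + 1))
    return seen
```

The expensive part of the real version is fetching and indexing the linked pages themselves, but the crawl frontier is exactly this loop.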
Two quick questions:
1. Are your links mostly local files, web URLs, or a mix?
2. When you say “end up in those places again”, do you want it to save a trail/session automatically (like a breadcrumb graph), or just learn “these links co-occur with this query” over time?
I’m exploring a similar “context neighborhood” retrieval loop (more context in my HN profile/bio if you want to compare notes).
Out of curiosity, what’s the category that still slips through that approach (if any): recurring obligations, long-horizon projects, or “someday maybe” research?
Or is there anything that has never been deleted over the long term?
First off, it doesn't seem to matter whether you maintain a Zettelkasten or an org-mode system or follow GTD. There have been some very productive people that have used these systems, and there are people we still talk about 2,500 years after their death who definitely didn't use these systems.
I know a girl who knows pretty much nothing about personal knowledge management systems and puts todos as notes in her Apple Notes app. She's a baker. Recently she used some random LLM app generation platform to launch an app wherein you take pictures of your nails and can then put random designs on them to see what your nails would look like with that design. Last I heard, she had several thousand downloads already, in just a week or so.
I know a guy worth multiple tens of millions because he thought Bitcoin was going to be the global currency by 2016. He's otherwise unremarkable and spends his time going from regional Burn to regional Burn, getting high, and playing videogames.
I'm not sure why we use personal knowledge management systems or try to optimize our lives, I guess it probably doesn't matter. For me I'm also not sure; maybe to be as actualized as possible? Maximize the "potential" of my life? Get rich? Get famous? Get powerful (read: some combination of rich and famous)? To what end? Because I admire the changes Newton and Feynman and Torvalds wrought in the world? Did they even use these systems? Am I smart enough to have anywhere near their level of impact? Can I make up the difference with a highly tuned external brain, like Manfred Macx?
Well, it doesn't seem to matter. Stephen King has a rigidly disciplined writing schedule and is an incredibly prolific author. George RR Martin is so much the opposite he once asked King on stage for advice on being a more productive author. Both are world famous authors, both have had multiple tv shows made from their works. The only thing consistent between them is they both have some kind of output into the world, and that output happens to be really good. Now let me introduce you to some highly successful authors who write atrociously. After that I'll introduce you to some writers you've never heard of, who write a huge volume of good work but just haven't "broken through."
Cynically, it seems like we have about as much say in the outcome of our lives as a dice roll, and all the decisions we make can at most trigger a second or third roll that could end up anywhere, and whether it's a better or worse roll is uncorrelated with whether the decision was a "good" or "bad" one as we typically measure these things (I know a recovered drug addict that found success in life through a combination of using his past as inspiration to fuel not wasting any more of his life, and leveraging the connections he made when making his way through the legal / recovery system. The decision to try heroin is directly correlated with his now enviably successful life).
The interesting thing is, you can experience this for yourself, cheaply and relatively quickly. Make a play at a successful YouTube channel. Film a couple hbomberguy-style deep dives into any topic that interests you, and join the hordes of video essayists clamoring for algo attention, hoarding a couple hundred to a thousand views per. After 5 years you might get a viral hit that completely turns your channel around, or not.
I guess unsurprisingly, life is an unsolved problem. I just wish all the little experiments I try had at least a measurably positive outcome over time. It seems not to matter outside of making me feel good.
Edit: to OP, I guess you're looking at different management systems, so here are some explanations of how mine works. It's a sort of emotion + knowledge + network management system:
https://blog.calebjay.com/posts/in-defense-of-pen-and-paper/...
https://blog.calebjay.com/posts/my-new-life-stack/#organizat...
One thing I did take away from your posts is the distinction between the medium you think in (pen/paper) and an “authoritative knowledge system” that’s the source of truth for your life/work. That framing resonates: capture can be messy and human, but “what is true / what is next” needs a home you trust.
That’s pretty close to what I’m poking at with Concerns: not “optimize PKM”, but “reduce the gap between what you already know and what you actually do next”. The constraint I’m leaning on is forcing a small number of active goals/projects, so the system can’t pretend everything matters — it has to help you commit.
Curious: in your setup, what’s the smallest digital thing you’d accept as the authoritative source of truth (calendar? a single task list? a project index)? And what’s the trigger that turns a note into an action for you?
The atom of unassailable truth in my system is the calendar event, especially if it's in the calendar managed by my PA, or one of the automatic calendars fed by cal.com. I trust it without question, since it takes a fairly concrete, verifiable event to get something on the calendar e.g. adding my next therapy appointment at the end of my current one.
> And what’s the trigger that turns a note into an action for you?
My notes are all actions inasmuch as they must all be digested into my digital note system, todos, or calendar events. If you mean my fledgling Zettelkasten, I don't make actions out of those. I suppose I also have my random list of project ideas and blog ideas that I grab things off when I finally have free time and some energy but no idea already in my head of what to do.
Question for you: why do both you and several self help gurus I read use the format of "what's the smallest xyz that abc?" Is that some framework someone came up with? Not an attack, I just don't know how to ask that without sounding rude, sorry.
On your last question: the “smallest xyz that abc” phrasing isn’t a special framework or guru thing. It’s just a way to force a constraint so the answer becomes practical instead of aspirational. Did I use it here?
Capture: notion and twitter have been best, obsidian and regular markdown have been worst.
Notion is good because of how they support a calendar view where you can put documents in a day's cell, and then see a list view that's just a stack of those notes. I keep a daily diary or youarehere type doc, where I'll have checklists and notes on small things that don't merit changes to a dedicated page. There's arguably a "retrieval" breakdown in that I don't really go back through these to update them or collate them into bigger pages.
Twitter is good because it's low friction and I can just go off, which is fun, and because they have decent search, so I can quote-tweet a related thing and sort of thread the graph together. If you're talking about BASB you're probably familiar with this corner of Twitter. visakanv etc. This method works well if you use it enough to be able to recall your other notes. I think there's something special about the Twitter format here too: it discourages whole-page thoughts in favor of sequential pithy bits, which I think are easier to both link and recall.
Execution: I would like a chat frontend (signal/SMS/etc) where I can just talk to my projects, ask the status of things, get suggestions, etc. Push based, rather than pull based, execution.
Active project context: I've dropped todoist-like things since they're limited in what they can express, and notion/markdown can do todolists etc. I tend to have lists in markdown style that live in two places: my daily diary/todo docs, and the actual projects themselves. This is messy and it would be lovely if notion or similar had the concept of a "todo block" and could collate all of them into a single view where I could understand association, prune and dedupe, etc. Even better if there's an agent that does or suggests cleanup whenever a new block enters.
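That "todo block" collation doesn't exist in Notion as described, but for plain markdown it's a small script. A hypothetical sketch (the `collate_todos` name and the `{doc_name: text}` input shape are assumptions for illustration): pull every `- [ ]` / `- [x]` line out of all docs into one flat view that keeps track of where each item came from, as a starting point for pruning and deduping.

```python
import re

# Matches GitHub-flavored task-list items: "- [ ] task" or "* [x] task".
TODO = re.compile(r"^\s*[-*] \[( |x)\] (.+)$")

def collate_todos(docs):
    """Given {doc_name: markdown_text}, collect every checklist item
    into one flat list, remembering its source document."""
    items = []
    for name, text in docs.items():
        for line in text.splitlines():
            m = TODO.match(line)
            if m:
                items.append({
                    "doc": name,
                    "done": m.group(1) == "x",
                    "task": m.group(2).strip(),
                })
    return items
```

The agent-assisted cleanup (suggesting merges when a new block arrives) would sit on top of exactly this kind of flat view.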
Larger projects will get docs of their own, lots of sprawl and notes etc, and then some formalization around a spec or something. I move these to an archive folder when I'm done with the notes and the final document is fleshed out, but I'd love an agent review that makes sure I'm not leaving things on the cutting board, and that I've handled all the todos etc in my notes pages.
I don't use bidirectional linking/tagging enough, but I really should, since I want to be able to coin keywords for particular concepts inline, and then be able to access their overview and see everything that mentions them in a graphlike way.
Calendar is definitely a much used component day to day. For planning, etc. But it's not a source of truth. Everything on a calendar should just be a proxy/link to a more robust doc.
Hard nos: My take on privacy policies for things like this is "show me your incentives and I'll show you your outcomes". That is to say, any company that can survive an attempt to profit from data fuckery will do so. Your data retention policy should include technically unambiguous red lines that are not to be crossed, and define specific per-user monetary payout in the event that a breach occurs, to include clauses that cause user payout to occur before eg preferred stockholders get liquidation preference and drain the possible payout pool. Routine third party audits of how user data is handled/retained/distributed etc. I recognize that this is a bit unhinged, but that's what signaling credibility looks like. A company says "we won't sell your data" and I say "or what" and there's hemming and hawing because nothing will happen to them. If the answer is "this company dies on the spot and our investors get completely fucked", now we can talk.
I think AI service pricing applies here: generally, if it seems neat I could be in for $20 easy, and if it's genuinely game changing, $200/mo is completely reasonable to ask.
re Migration cost: I expect to be able to get 100% of my data in a reasonable non-proprietary format. If that's some blend of markdown, json, sqlite, whatever, fine.
But the bottom line for me, where does my second brain break down the most? It doesn't talk back to me. I want it to understand what I've got going on, and my idiosyncrasies. I want to present it with new information and have it be like "oh, this relates to X" or, periodically, to pop up with something like "I'm noticing this correlation / related idea in areas X, Y, Z... does that resonate? Is there something here?" Again, push vs pull. My second brain should be a proactive chatbot. "Noise" is so often thought about in terms of frequency, but it's really about insight quality. If my response to 80% of push notis is "damn, good call" then you can send one every 5 minutes.
I also hear no mention of one's personal life. I don't really make the distinction. It's all in there. I should be able to bitch to this chatbot about my manager, have it know about that background, and riff with me to navigate hard convos. I should be able to talk to it about side projects I have going on, and let it thread those into my calendar. Etc. Notion is already an adequate second brain for work. Nobody has yet built an adequate second brain for the home. My house, my relationship(s), my side projects, my own diarying and self reflection... these are the contents of my brain that matter.
Email in bio if you want to talk. I'm a design technologist and happy to riff / give feedback.
This push-vs-pull framing, and the view that "noise" is really about insight quality, are precisely the core constraints I want to center my design on.
Two follow-up questions:
1. If the connector adopts a chat-first mode (similar to Signal/SMS), could that generate excessive noise? Since human input often carries emotion and subjective bias, my original intent was for the AI to serve as an emotionless, relatively neutral bridge.
2. Regarding trust mechanisms: before implementing stricter governance measures (audits/penalties), should we establish foundational safeguards through local-first storage + explicit export (md/json/sqlite)?
If you're open to deeper discussion and would like to explore this further, I've put additional information and an optional feedback form in my HN profile.