upvote
Did you call it '/handoff' or did Claude name it that? The reason I'm asking is that I've noticed a pattern of Claude subtly influencing me. For example, the first time I heard the word 'gate' was from Claude, and a week later I was hearing it everywhere, including on Hacker News. I didn't use the word 'handoff', but Claude creates handoff files also [0]. I was thinking about this all day, because Claude didn't just use the word 'gate': it created an entire system around it, including handoffs, that I'm starting to see everywhere. This might mean Claude is very quietly leading and influencing us in a direction.

[0] https://github.com/search?q=repo%3Aadam-s%2Fintercept%20hand...

reply
I was reading through the Claude docs and it was talking about common patterns to preserve context across sessions. One pattern was a "handoff file", which they explained like "have claude save a summary of the current session into a handoff file, start a new session, then tell it to read the file."

That sounded like a nice idea, so I made it effortless beyond typing /handoff.

The generated docs turned out to be really handy for me personally, so I kept using it, and committed them into my project as they're generated.

reply
Oh, so the word 'gate' is probably in the documentation also!

I see. So this isn't as scary. Claude is helping me understand how to use it properly.

reply
If this was more than just a gut reaction [0], I have a tough time seeing what swings this topic between scary and not scary for you.

Unless you're a true and invested believer in souls, free will, and other spiritualistic nonsense (or have a vested political affiliation in pretending so), it should be tautological that everything you read and experience biases you. LLM output is then no different.

If you are a believer, then either nothing ever biased you, or LLMs are special in some way, or everything else is. Which just doesn't make sense to me.

[0] It's jarring to observe the boundaries of one's agency, sure, but LLMs are really nothing special in this way. For example, I somewhat frequently catch myself using words and phrases I saw earlier during the day elsewhere, even if I did not process them consciously.

reply
I have noticed similar phenomena with Claude, where its vocabulary subtly shifts how I think/frame/write about things or points me to subtle gaps in my own understanding. And I also usually come around to understand that it's often not arbitrary. But I do think some confirmation bias is at play: when it tries to shift me into the wrong directions repeatedly, I learn how to make it stop doing that.

It definitely adds a layer of cognitive load, in wrangling/shepherding/accommodating/accepting the unpredictable personalities and stochastic behaviors of the agents. They have strong default behaviors for certain small tasks, and where humans would eventually habituate to prescribed procedures/requirements, the LLMs never really internalize my preferences. In that way, they are more like contractors than employees.

reply
Why would it be scary? Claude is just parroting other human knowledge. It has no goal or agency.
reply
You can’t verify that there is no influence by the makers of Claude.
reply
I would certainly expect everyone to assume that influence rather than not.
reply
By that logic, nothing computers do is scary.
reply
Yes I think that is their argument.
reply
deleted
reply
Computers don't do anything.
reply
What's their value then?
reply
Just like with absolutely any other tool, their value is in what it enables humans using them to accomplish.

E.g., a hammer doesn't do anything, and neither does a lawnmower. It would be silly to argue (just because these tools are static objects doing nothing in the absence of direct human involvement) that those tools don't have a very clear value.

reply
Seems equally silly to me to suggest that hammers and lawnmowers don't do anything, but I mean here we are.

When people use other people like tools, i.e. use them to enable themselves to accomplish something, do those people cease to do things as well? Or is that not a terminology you recognize as sensible maybe?

I appreciate that for some people the verb "do" is evidently human(?) exclusive, I just struggle to wrap my head around why. Or is this an animate vs. inanimate thing, so animals operating tools also do things in your view?

How do you phrase things like "this API consumes that kind of data" in your day to day?

reply
> Seems equally silly to me to suggest that hammers and lawnmowers don't do anything, but I mean here we are.

To be clear, I am not the person you were originally replying to. I personally don't care much for the terminology semantics of whether we should say "hammers do things" (with the opponents claiming it to be incorrect, since hammers cannot do anything on their own). I am more than happy to use whichever of the two terms the majority agrees upon to be the most sensible, as long as everyone agrees on the actual meaning of it.

> I appreciate that for some people the verb "do" is evidently human(?) exclusive, I just struggle to wrap my head around why. Or is this an animate vs. inanimate thing, so animals operating tools also do things in your view?

To me, it isn't human-exclusive. I just thought that in the context of this specific comment thread, the user you originally replied to used it as a human-exclusive term, so I tried explaining in my reply how they (most likely) used it. For me, I just use whichever term that I feel makes the most sense to use in the context, and then clarify the exact details (in case I suspect the audience to have a number of people who might use the term differently).

> How do you phrase things like "this API consumes that kind of data" in your day to day?

I would use it the exact way you phrased it, "this API consumes that kind of data", because I don't think anyone in the audience would be confused or unclear about what that actually means (depends on the context ofc). Imo it wouldn't be wrong to say "this API receives that kind of data as input" either, but it feels too verbose and awkward to actually use.

reply
I'm not sure how to respond then, because having a preferred position on this is kind of essential to continue. It's the contended point. Can an LLM do things? I think they can, they think they cannot. They think computers cannot do anything in general outright.

To me, what's essential for any "doing" to happen is an entity, a causative relationship, and an occurrence. So a lawnmower can absolutely mow the lawn, but also the wind can shape a canyon.

In a reference frame where a lawnmower cannot mow independently because humans designed it or operate it, humans cannot do anything independently either. Which is something I absolutely do agree with by the way, but then either everything is one big entity, or this is not a salient approach to segmenting entities. Which is then something I also agree with.

And so I consider the lawnmower its own entity, the person operating or designing it their own entity, and just evaluate the process accordingly. The person operating the lawnmower has a lot of control over where the lawnmower goes and whether it is on, the lawnmower has a lot of control over the shape of the grass, and the designer of the lawnmower has a lot of control over what shapes the lawnmower can hope to create.

Clearly they then have more logic applied, where they segment humans (or tools) in a more special way. I wanted to probe into that further, because the only such labeling I can think of is spiritualistic and anthropocentric. I don't find such a model reasonable or interesting, but maybe they have some other rationale that I might. Especially so because, to me, claiming that a given entity "does things" is not assigning it a soul, a free will, or some other spiritualistic quality, since I don't even recognize those as existing (and thus take great issue with the unspoken assumption that I do, or that people like me do).

The next best thing I can maybe think of is to consider the size of the given entity's internal state, and its entropy with relation to the occurred causative action and its environment. This is because that's quite literally how one entity would be independent of another, while being very selective about a given action. But then LLMs, just like humans, got plenty of this, much unlike a hammer or a lawnmower. So that doesn't really fit their segmentation either. LLMs have a lot less of it, but still hopelessly more than any virtual or physical tool ever conceived prior. The closest anything comes (very non-coincidentally) are vector and graph databases, but then those only respond to very specific, grammar-abiding queries, not arbitrary series of symbols.

reply
Computers perform computations. They do what programmers instruct them to do by their nature.
reply
Agreed, just like hammers get nails hammered into a wooden board. They do what the human operator manually guides them to do by their nature.

I am not disagreeing with you in the slightest, I feel like this is just a linguistic semantics thing. And I, personally, don't care how people use those words, as long as we are on the same page about the actual meaning of what was said. And, in this case, I feel like we are fully on the same page.

reply
FWIW I have worked with people using the word "gate" for years.

For example, "let's gate the new logic behind a feature flag".
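In code, that kind of "gate" usually looks something like this (a minimal Python sketch; the flag store and the `new_checkout` name are made up for illustration):

```python
# A feature-flag "gate": the new logic only runs when the flag is on.
# FLAGS and "new_checkout" are hypothetical; real setups typically pull
# flags from config or a flag service rather than a module-level dict.
FLAGS = {"new_checkout": False}

def new_checkout_flow(cart):
    return f"new flow for {len(cart)} items"

def legacy_checkout_flow(cart):
    return f"legacy flow for {len(cart)} items"

def checkout(cart):
    if FLAGS["new_checkout"]:   # the gate
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

Flipping the flag switches which path runs, without touching the call sites.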

reply
Claude has trained me on the use of the word 'invariant'. I never used it before, but it makes sense as a term for a rule the system guarantees. I would have used 'validation' for application-side rules or 'constraint' for db rules, but 'invariant' is a nice generic substitute.
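The distinction might look like this in a hypothetical Python sketch (the `Account` example is mine, purely for illustration):

```python
# 'Validation' rejects bad input at the boundary; an 'invariant' is a
# rule the system guarantees to hold after every operation.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:                 # validation: reject bad input
            raise ValueError("amount must be positive")
        self.balance += amount
        assert self.balance >= 0        # invariant: balance never negative

    def withdraw(self, amount):
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid withdrawal")
        self.balance -= amount
        assert self.balance >= 0        # same invariant, every operation
```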
reply
I've started saying "gate" and "bound(ed)" and "handoff" a lot (and even "seam" and "key off" sometimes) since Codex keeps using the terms. They're useful, no doubt, but AI definitely seems to prefer using them.
reply
I've actually been doing this for a year. I call it /checkpoint instead and it does something like:

* update our architecture.md and other key md files in folders affected by updates and learnings in this session
* update claude.md with changes in workflows/tooling/conventions (not project summaries)
* commit

It's been pretty good so far. Nothing fancy. Recently I also asked to keep memories within the repo itself instead of in ~/.claude.

The only downside is that it's slow, but it keeps enough to pass the baton. Maybe "handoff" would have been a better name!

reply
Did the same, although I'm considering a pipeline where sessions are periodically translated to .md with most tool outputs and other junk stripped, and using that as a source to query against for context. I'm testing out a semi-continuous ingestion of it into my RAG/knowledge DB.
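The stripping step could be sketched like this in Python. Note the field names (`type`, `message`, `content`) are assumptions about the session jsonl format; the real schema may differ:

```python
import json
from pathlib import Path

# Keep only user/assistant text turns from a session .jsonl and emit
# markdown, dropping tool calls, tool results, and other junk.
# The record schema here is an assumption, not a documented format.
def session_to_md(jsonl_path, md_path):
    chunks = []
    for raw in Path(jsonl_path).read_text().splitlines():
        try:
            rec = json.loads(raw)
        except json.JSONDecodeError:
            continue
        if rec.get("type") not in ("user", "assistant"):
            continue  # skip non-conversation records
        content = rec.get("message", {}).get("content", "")
        if isinstance(content, list):  # content may be a list of blocks
            content = "\n".join(b.get("text", "") for b in content
                                if b.get("type") == "text")
        if content.strip():
            chunks.append(f"**{rec['type']}**:\n\n{content}\n")
    Path(md_path).write_text("\n".join(chunks))
```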
reply
Is this available online? I'd love documentation of my prompts.
reply
I’ll post it here, one minute.

Ok, here you go: https://gist.github.com/shawwn/56d9f2e3f8f662825c977e6e5d0bf...

Installation steps:

- In your project, download https://gist.github.com/shawwn/56d9f2e3f8f662825c977e6e5d0bf... into .claude/commands/handoff.md

- In your project's CLAUDE.md file, put "Read `docs/agents/handoff/*.md` for context."

Usage:

- Whenever you've finished a feature, done a coherent "thing", or otherwise want to document all the stuff that's in your current session, type /handoff. It'll generate a file named e.g. docs/agents/handoff/2026-03-30-001-whatever-you-did.md. It'll ask you if you like the name, and you can say "yes" or "yes, and make sure you go into detail about X" or whatever else you want the handoff to specifically include info about.

- Optionally, type "/rename 2026-03-23-001-whatever-you-did" into claude, followed by "/exit" and then "claude" to re-open a fresh session. (You can resume the previous session with "claude 2026-03-23-001-whatever-you-did". On the other hand, I've never actually needed to resume a previous session, so you could just ignore this step entirely; just /exit then type claude.)

Here's an example so you can see why I like the system. I was working on a little blockchain visualizer. At the end of the session I typed /handoff, and this was the result:

- docs/agents/handoff/2026-03-24-001-brownie-viz-graph-interactivity.md: https://gist.github.com/shawwn/29ed856d020a0131830aec6b3bc29...

The filename convention stuff was just personal preference. You can tell it to store the docs however you want to. I just like date-prefixed names because it gives a nice history of what I've done. https://github.com/user-attachments/assets/5a79b929-49ee-461...

Try to do a /handoff before your conversation gets compacted, not after. The whole point is to be a permanent record of key decisions from your session. Claude's compaction theoretically preserves all of these details, so /handoff will still work after a compaction, but it might not be as detailed as it otherwise would have been.

reply
I already do this manually each time I finish some work/investigation (I literally just say

"write a summary handoff md in ./planning for a fresh convo"

and it's generally good enough), but maybe a skill like you've done would save some typing, hmm

My ./planning directory is getting pretty big, though!

reply
Thanks! The last link is broken, though, or maybe you didn't mean to include it? Also, if you've never actually resumed a session, do you use these docs at some other time? Do you reference them when working on a related feature, or just keep them for keepsake to track what you've done and why?
reply
Thank you. It was just a screenshot of my handoff directory. I originally tried to upload to imgur but got attacked by ads, then uploaded to github via “new issue” pasting. I thought such screenshots were stable, but looks like GitHub prunes those now.

It wasn’t anything important. I appreciate you pointing that out though.

I just keep old sessions for keepsake. No reason really. I thought maybe I’d want them for some reason but never did.

The docs are the important part. It helps me (and future sessions) understand old decisions.

reply
Oh wow, thank you so much!!!!!
reply
I've got something similar, but I call them threads. I work with a number of different contexts and my context discipline is bad, so I needed a way to hand off work that was planned in one context but needs to be executed from another. I wanted a little bit of order to the chaos, so my threads skill will add and search issues created in my local Forgejo repo. It gives me a convenient way to explicitly save session state to be picked up later.

I've got a separate script which parses the jsonl files that claude creates for sessions and indexes them in a local database for longer term searchability. A number of times I've found myself needing some detail I knew existed in some conversation history, but CC is pretty bad and slow at searching through the flat files for relevant content. This makes that process much faster and more consistent. Again, this is due to my lack of discipline with contexts. I'll be working with my recipe planner context and have a random idea that I just iterate with right there. Later I'll never remember that idea started from the recipe context. With this setup I don't have to.
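A sketch of that indexing step, assuming SQLite built with the FTS5 extension (common in stock Python builds) and the same hypothetical jsonl field names (`type`, `message`, `content`):

```python
import json
import sqlite3
from pathlib import Path

# Index session .jsonl files into a SQLite full-text-search table so
# old conversations can be searched quickly. The record schema is an
# assumption; adjust to whatever your session files actually contain.
def index_sessions(session_dir, db_path):
    db = sqlite3.connect(db_path)
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS turns "
               "USING fts5(session, role, text)")
    for path in Path(session_dir).glob("*.jsonl"):
        for raw in path.read_text().splitlines():
            try:
                rec = json.loads(raw)
            except json.JSONDecodeError:
                continue
            text = str(rec.get("message", {}).get("content", ""))
            if rec.get("type") in ("user", "assistant") and text.strip():
                db.execute("INSERT INTO turns VALUES (?, ?, ?)",
                           (path.stem, rec["type"], text))
    db.commit()
    return db

# Later, e.g.:
#   db.execute("SELECT session, text FROM turns WHERE turns MATCH ?",
#              ("recipe",)).fetchall()
```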

reply
Wouldn't the next phase of this be automatic handoffs executed with hooks?

Your system is great and I do similar, my problem is I have a bunch of sessions and forget to 'handoff'.

The clawbots handle this automatically with journals to save knowledge/memory.

reply
When I work on a task, I have a task/{name}.md that I write a running log to. Is this not a common workflow?
reply
I think Cursor does something similar under the hood.
reply