The harness is the part that makes the API calls, interacts with the user, executes the function calls, and keeps track of the conversation history.
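The core of that loop is small. Here's a minimal sketch of a harness, with the model call stubbed out; in a real harness `call_model` would be an API request to an LLM, and the message shapes and tool names here are illustrative assumptions, not any real API.

```python
# Minimal agent-harness sketch. All names (call_model, TOOLS, run_turn)
# are hypothetical; the model is a stub so the example is self-contained.

def get_time(_args):
    return "12:00"

TOOLS = {"get_time": get_time}

def call_model(messages):
    # Stub: a real harness would send `messages` to the model API.
    # Here we pretend the model asks for a tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_time", "args": {}}}
    return {"content": "The time is 12:00."}

def run_turn(messages, user_input):
    messages.append({"role": "user", "content": user_input})
    while True:
        reply = call_model(messages)
        tool_call = reply.get("tool_call")
        if tool_call is None:
            messages.append({"role": "assistant", "content": reply["content"]})
            return reply["content"]
        # Execute the requested function and feed the result back.
        result = TOOLS[tool_call["name"]](tool_call["args"])
        messages.append({"role": "tool", "content": result})

history = []
print(run_turn(history, "What time is it?"))
```

The loop is the whole trick: call the model, run whatever function it asked for, append the result to the history, repeat until it produces a plain answer.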
You can also use the LLM to summarize the conversation into a single shorter message, which gives you compaction. And instead of statically defining which functions are available to the LLM, you can run an MCP server, which lets the LLM auto-discover which functions it can call and what they do.
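Compaction can be sketched in a few lines. This is an assumed scheme, not how any particular tool does it: once the history passes a threshold, replace the older messages with one summary message and keep the most recent exchange verbatim. The `summarize` stub stands in for what would really be another model call.

```python
# Hypothetical compaction sketch; threshold and message shapes are assumptions.

MAX_MESSAGES = 4

def summarize(messages):
    # Stub: a real harness would ask the LLM itself to write the summary.
    return f"Summary of {len(messages)} earlier messages."

def compact(messages):
    if len(messages) <= MAX_MESSAGES:
        return messages
    # Fold everything but the latest exchange into one summary message.
    summary = {"role": "system", "content": summarize(messages[:-2])}
    return [summary] + messages[-2:]

history = [{"role": "user", "content": str(i)} for i in range(6)]
print(len(compact(history)))
```

Six messages compact down to three: one summary plus the last two, so the context window stays bounded while recent turns remain intact.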
That’s the whole magic of something like Claude Code. The rest is details.
For me, it embodies a level of autonomy. I define that as an AI model with the potential to affect something external to itself through its output, where that something includes its own future behavior.