Doesn’t need to be a tool call.

As a human coder you don’t summon intellisense. It just pops up in your visual field as extra input: contextual cues.

You could feed the intellisense state directly into the context the LLM receives.
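A minimal sketch of that idea: instead of exposing completions as a tool the model must call, splice the editor's current completion state into the prompt on every iteration. The names `build_prompt`, `get_completions_at_cursor`, and `toy_completions` are hypothetical stand-ins, not a real language-server API.

```python
# Hypothetical sketch: inject intellisense state into the context as a
# passive cue rather than a tool call. All names here are made up.

def build_prompt(base_prompt, code, cursor, get_completions_at_cursor):
    """Assemble the context the model sees, with completion state baked in."""
    completions = get_completions_at_cursor(code, cursor)
    cue = "Completions available at cursor: " + ", ".join(completions)
    return f"{base_prompt}\n\n{code}\n\n[{cue}]"

def toy_completions(code, cursor):
    # A real provider would query the language server at this position.
    return ["append", "extend", "insert"]

prompt = build_prompt("Continue the code.", "items.", 6, toy_completions)
```

The model never asks for completions; they simply appear in its "visual field" alongside the code, the same way they do for a human.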

reply
Not really, because the LLM loop has no way to receive live updates from the agent mid-generation. It would have to be integrated all the way down the stack.
reply
LLMs can have whatever abilities we build for them. Starting their context with a static prompt and feeding it back in on every iteration of the token-prediction loop is a choice; we don’t have to keep doing that if other options are available.
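A toy sketch of that alternative: rather than replaying one static prompt, rebuild the context from live editor state on every step of the prediction loop. Here `generate`, `model_step`, and `editor_state` are hypothetical stand-ins for the real loop, model, and editor.

```python
# Hypothetical sketch: dynamic context assembly per prediction step,
# instead of a static prompt replayed every iteration.

def generate(model_step, editor_state, max_steps=3):
    tokens = []
    for _ in range(max_steps):
        # Fresh context each step: whatever the editor reports *right now*.
        context = f"state={editor_state()} so_far={''.join(tokens)}"
        tokens.append(model_step(context))
    return tokens

# Toy stand-ins: the "editor" changes between steps; the "model" records
# the contexts it saw and emits a fixed token.
seen = []
def model_step(ctx):
    seen.append(ctx)
    return "x"

states = iter(["idle", "typing", "saved"])
out = generate(model_step, lambda: next(states))
```

Each step sees a different context, so a change in editor state between tokens reaches the model without any tool call.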