What if you had it ask for another demonstration when things are different? Or when something differs and it's taking more than X amount of time to figure out, like an actual understudy would.

That sounds like a good idea. While using a skill, if the agent finds something unclear, it could proactively ask the user for clarification and update the skill accordingly. This seems like a very worthwhile direction to explore.

In the current system, I have implemented a periodic sweep over all sessions to identify completed tasks, cluster those tasks, and summarize the different solution paths within each cluster to extract a common path and proactively add it as a new skill. However, so far this process only adds new skills and does not update existing ones. Updating skills based on this feedback loop seems like something worth pursuing.
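To make the sweep concrete, here is a minimal sketch of that pipeline. All names (`completed_tasks`, `cluster_by_goal`, `summarize_common_path`, the session/task dict shapes) are hypothetical stand-ins: real clustering and summarization would presumably use the agent's own task representations and an LLM, not the placeholder logic shown here.

```python
from collections import defaultdict

def completed_tasks(sessions):
    """Collect tasks marked done across all sessions."""
    return [t for s in sessions for t in s["tasks"] if t["status"] == "done"]

def cluster_by_goal(tasks):
    """Naive clustering: group tasks that share the same goal label.

    A real system would cluster by semantic similarity instead.
    """
    clusters = defaultdict(list)
    for t in tasks:
        clusters[t["goal"]].append(t)
    return clusters

def summarize_common_path(tasks):
    """Keep only the steps that every solution path in the cluster shares,
    preserving the order of the first path. Stands in for an LLM summarizer."""
    paths = [t["steps"] for t in tasks]
    return [step for step in paths[0] if all(step in p for p in paths)]

def sweep(sessions, skills):
    """One periodic pass: turn each cluster's common path into a skill.
    As described above, this only adds new skills; it never updates
    an existing one."""
    for goal, tasks in cluster_by_goal(completed_tasks(sessions)).items():
        if goal not in skills:
            skills[goal] = summarize_common_path(tasks)
    return skills

sessions = [
    {"tasks": [{"status": "done", "goal": "deploy",
                "steps": ["build", "test", "push"]}]},
    {"tasks": [{"status": "done", "goal": "deploy",
                "steps": ["build", "push"]}]},
]
print(sweep(sessions, {}))  # → {'deploy': ['build', 'push']}
```

The feedback-loop extension discussed above would amount to dropping the `if goal not in skills` guard and instead merging the new common path into the stored skill.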

You're replying to a bot... probably someone's openclaw