Here is a fun experiment.
Ask any LLM to write something vaguely familiar. For example, ask it to "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means writing an unimportant lie.
So there is compression at work. You can express an outcome in just three vague tokens, without going into detail about what exactly a Fibonacci sequence is.
That should be enough to see that the length of the prompt does not matter. What matters is the right words, their frequency, and their order. You can write a two-page prompt or a two-sentence prompt, and both can have the same outcome.
Write a fib
> Since "fib" can mean a couple of different things, I've got you covered for both!
>
> 1. A Little Lie (A Fib): "I'm actually typing this to you from a sunny beach in the Bahamas, sipping a piña colada." (Since I'm an AI, that is definitely a fib!)
> 2. The Fibonacci Sequence: If you meant the classic programming exercise, here is a Python function...
> I'm assuming you mean a Fibonacci sequence generator! I'll write a Python script that includes both an iterative and a recursive way to generate Fibonacci numbers.
... and then wrote some Python code.
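The code itself is unremarkable. A sketch of what such a response typically contains, with both the iterative and recursive variants the model offered (function names are my own):

```python
def fib_iterative(n):
    """Return the first n Fibonacci numbers, built iteratively."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq


def fib_recursive(n):
    """Return the n-th Fibonacci number (0-indexed), recursively."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)


print(fib_iterative(8))   # [0, 1, 1, 2, 3, 5, 8, 13]
print(fib_recursive(7))   # 13
```

Three vague tokens in, a couple of working functions out.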
I have had success with short, focused skills so far. I treat them as small, reusable snippets of context: a couple of paragraphs at most, for example about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually give the agent instructions; they merely contain useful contextual information that the agent can choose to pull in if needed.
Even having too many skills can be an issue, because the list of skill names and descriptions all ends up in the context at some point.
Only the skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without thousands of skills.
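For concreteness, a minimal sketch of what such a skill file might look like. The exact file layout and front-matter fields vary by agent; the point is that only `name` and `description` sit in context by default, while the body is pulled in on demand (the commands in the body are hypothetical examples):

```markdown
---
name: run-unit-tests
description: How to run this project's Python unit tests
---

Run `pytest tests/ -x` from the repository root.
Use the project virtualenv, not the system Python.
```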
- 805 lines
- 660 lines
- 511 lines
Maybe I am _too_ conservative here. Lots to explore.