upvote
The reason they are long is that these skills are produced mostly by Claude Code and Opus, and no sensible human will read these files, let alone build a mental model around them. There are just layers of assumptions that this works, when in reality it doesn't, and it is wasteful.

Here is a fun experiment.

Ask any LLM to write something vaguely familiar. For example, ask it to "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them will respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means to write an unimportant lie.

So there is compression. You can express an outcome in just three vague tokens without going into the details of what exactly a Fibonacci sequence is.
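For illustration, the kind of code those three tokens reliably elicit looks something like this (a minimal Python sketch; the function name and shape are assumptions, not anything a particular model actually produced):

```python
def fib(n: int) -> list[int]:
    """Return the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The point stands: none of that structure was spelled out in the prompt; it was all recovered from three tokens.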

That should be enough to understand that the length of the prompt does not matter. What matters is the right words, their frequency, and their order. You can write a two-page prompt or a two-sentence prompt, and both can have the same outcome.

reply
I just tried it with Gemini Pro. I think this answer is about as good as you can expect for such an ambiguous question.

Write a fib

Since "fib" can mean a couple of different things, I've got you covered for both!

1. A Little Lie (A Fib) "I'm actually typing this to you from a sunny beach in the Bahamas, sipping a piña colada." (Since I'm an AI, that is definitely a fib!)

2. The Fibonacci Sequence If you meant the classic programming exercise, here is a Python function...

reply
deleted
reply
I stand to be corrected. Though I tried again just now and this is what Gemini Pro produced:

> I'm assuming you mean a Fibonacci sequence generator! I'll write a Python script that includes both an iterative and a recursive way to generate Fibonacci numbers.

... and then wrote some Python code.

reply
[flagged]
reply
I quickly skimmed and it looks like at least a few of them are intended to be more like system prompts for a tightly scoped sub-agent than a skill as such. I agree, I wouldn't want to use a lot of these in a longer-running work session.

I have been successful with short and focused skills so far. I treat them as reusable snippets of context, but small ones. For example, a couple of paragraphs at most about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually give the agent instructions; they merely contain useful contextual information that the agent can choose to pull in if needed.

Even having too many skills can be an issue because the list of skill names and their descriptions all end up in the context at some point.

reply
I have written zero skills, so I'm not sure how normal it is. I counted the words in a couple of them and they seem to be in the 2k range. So 5 skills would be around 10k. Even at a small LLM context of 128k, that's still around 10%. And for a 1M context window like the big ones, it barely registers.
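The back-of-the-envelope math, assuming roughly one token per word (which understates real token counts somewhat), works out like this:

```python
# Rough context-budget estimate; the word counts and the one-token-per-word
# assumption come from the comment above, not from measurement.
words_per_skill = 2_000
skills = 5
total = words_per_skill * skills  # 10,000 tokens, give or take

for window in (128_000, 1_000_000):
    print(f"{total / window:.1%} of a {window:,}-token window")
# → 7.8% of a 128,000-token window
# → 1.0% of a 1,000,000-token window
```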
reply
> it would only take a couple of these to really fill the context alot.

Only the skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without thousands of skills.

reply
I reviewed the line counts of my own project skill files, and the top 3 I have are:

    805 lines
    660 lines
    511 lines
Maybe I am _too_ conservative here. Lots to explore.
reply
No, you aren't.
reply