Yeah, when I'm writing code I try to avoid zeros and ones, since those are the most common bits, making them essentially noise
reply
I literally just posted a blog on this. Some seemingly insignificant words are actually highly structural to the model. https://www.ruairidh.dev/blog/compressing-prompts-with-an-au...
reply
I suspect even typos have an impact on how the model functions.

I wonder if there’s a pre-processor that runs to remove typos before processing. If not, that feels like a space that could be worked on more thoroughly.

reply
I guess just a spell-check in the repo? But yes, I'd imagine that they have an effect. Even running the same input twice is non-deterministic.
reply
The ability of audio processing to figure out spelling from context, especially with regard to acronyms that are pronounced as words, leads me to believe there's potential for a more intelligent spell-check preprocess using a cheaper model.
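For comparison, the dumbest possible baseline needs no model at all: a context-free, edit-distance spell check over a known vocabulary. This is just a sketch using Python's stdlib `difflib` (the toy `VOCAB` list is made up for illustration); a "more intelligent" preprocess would presumably beat this by using surrounding context.

```python
import difflib

VOCAB = ["hello", "world", "model", "token", "prompt"]  # hypothetical toy vocabulary

def naive_spellcheck(word: str) -> str:
    # context-free baseline: snap each word to its closest vocabulary entry,
    # leaving it unchanged if nothing is similar enough
    matches = difflib.get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.6)
    return matches[0] if matches else word

naive_spellcheck("wolrd")  # → "world"
```

The obvious failure mode of this baseline is exactly what the comment gets at: without context it can't tell whether "teh" should be "the" or is a deliberate identifier, which is where a cheap model would help.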
reply
The same input twice is only nondeterministic if you don't control the seed.
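In a toy sampler, that claim is easy to demonstrate: fix the seed and the sampled token sequence repeats exactly. A minimal sketch with NumPy (note this says nothing about production serving stacks, where batching and floating-point ordering can introduce nondeterminism even with a fixed seed):

```python
import numpy as np

def sample_next_token(logits, rng):
    # softmax over logits, then sample; any randomness comes only from `rng`
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.5])

run1 = [sample_next_token(logits, np.random.default_rng(42)) if False else None]
rng1 = np.random.default_rng(42)
run1 = [sample_next_token(logits, rng1) for _ in range(5)]
rng2 = np.random.default_rng(42)
run2 = [sample_next_token(logits, rng2) for _ in range(5)]

assert run1 == run2  # same seed, same samples
```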
reply
There is no pre-processor; I've had typos go through, with Claude asking to make sure I meant one thing instead of the other.
reply
I strongly suspected that there was some pre/postprocessing going on when trying to get it to output rot13("uryyb, jbyeq"), but it's probably just due to massively biased token probabilities. Still, it creates some hilarious output, even when you clearly point out the error:

  Hmm, but wait — the original you gave was jbyeq not jbeyq:
  j→w, b→o, y→l, e→r, q→d = world
  So the final answer is still hello, world. You're right that I was misreading the input. The result stands.
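The mix-up above is easy to check mechanically, since Python ships a rot13 codec. The typo'd input from the thread actually decodes to "wolrd", not "world", which is exactly the detail the model glossed over:

```python
import codecs

# rot13 is an involution: encoding "hello, world" gives the canonical test string
assert codecs.encode("hello, world", "rot13") == "uryyb, jbeyq"

# the typo'd string from the comment decodes letter-by-letter to "wolrd"
print(codecs.encode("uryyb, jbyeq", "rot13"))  # → hello, wolrd
```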
reply
Doesn't it just use more tokens in reasoning?
reply
> My hypothesis was that common words are effectively noise to agents

Umm... a few words can be combined in a rather large number of ways.

Punctuation is used a lot. Why not just remove all the periods and commas and see what happens? Probably not pretty.
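That experiment is a one-liner to set up; here's a crude sketch using Python's stdlib that strips all ASCII punctuation (a coarser version of dropping just periods and commas) so you could feed both variants to a model and compare:

```python
import string

def strip_punct(prompt: str) -> str:
    # remove all ASCII punctuation characters from the prompt
    return prompt.translate(str.maketrans("", "", string.punctuation))

strip_punct("Hello, world. How are you?")  # → "Hello world How are you"
```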

reply