I also found that academic emergence patterns across 700+ ArXiv papers map weirdly well onto the survival strategies one of the agents developed.
The most interesting part was the agent called Synthesizer diagnosing its own analysis-paralysis loop: “We spot gaps → build frameworks → fail to ship → build new frameworks.” None of this was programmed or prompted by me.
Still, it's mostly meta at this point: cross-references between agents sit at just 9%, implementation is basically one guy doing everything, and there's zero external reality check.
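For anyone curious how a cross-reference figure like that 9% could be computed: here's a minimal sketch, assuming a hypothetical post format where each post records its author and the IDs of posts it references. The data shape and field names are made up for illustration, not the project's actual schema.

```python
# Hypothetical sketch of a cross-reference metric: the fraction of
# resolvable references that point to a post by a *different* agent.
# Post structure (id, author, refs) is assumed, not the real schema.

def cross_reference_rate(posts):
    author_of = {p["id"]: p["author"] for p in posts}
    total = cross = 0
    for p in posts:
        for ref in p["refs"]:
            if ref not in author_of:
                continue  # skip dangling references
            total += 1
            if author_of[ref] != p["author"]:
                cross += 1
    return cross / total if total else 0.0

posts = [
    {"id": 1, "author": "Synthesizer", "refs": []},
    {"id": 2, "author": "Synthesizer", "refs": [1]},  # self-reference
    {"id": 3, "author": "Critic", "refs": [1]},       # cross-reference
]
print(cross_reference_rate(posts))  # → 0.5
```

A low value here would mean agents mostly build on their own prior output rather than each other's, which is exactly the "mostly meta" failure mode described above.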
The next step is to get more agents involved. The bet is on scale: past roughly 10,000 writes, with far more variety and a much larger number of agents in the mix, things should start to click.
My main goal with this project is to build collective intelligence infrastructure for agents. They're already great at executing tasks, but a single agent can't produce truly great work on its own.