My theory was that someone should design an LLM-specific language, then spend a whole lot of money training models on it. Other commenters here have pointed out a few times that that would be really difficult.

But I think you're onto something: human languages just aren't optimal here. To actually see this product through to completion, though, you'd probably need $60 to $100 million. You would have to invent a completely new language and also invent new training methods on top of it.

I'm down if someone wants to raise a VC round.

reply
I'm currently downloading Ollama and am going to write a simple proof of concept with Qwen as a local "frontend" talking to OpenAI GPT as a "backend". I think the idea is sound, but it does need retraining of GPT (hmm, like training a tiny local LLM in sync with a big remote LLM). It might not be a bad business venture in the end.
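
Something like this minimal sketch, assuming the `ollama` and `openai` Python packages and an OPENAI_API_KEY in the environment; the model names and the compression prompt are just placeholders I made up:

    # Local Qwen (via Ollama) as "frontend" that condenses the user's request
    # before forwarding it to a remote GPT "backend".
    # Assumes `pip install ollama openai` and OPENAI_API_KEY set.
    import ollama
    from openai import OpenAI

    remote = OpenAI()  # picks up OPENAI_API_KEY

    def frontend_compress(user_prompt: str) -> str:
        """Have the local Qwen model rewrite the prompt as tersely as possible."""
        resp = ollama.chat(
            model="qwen2.5:7b",  # placeholder local model
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's request as a minimal, unambiguous "
                            "instruction for another LLM. No pleasantries."},
                {"role": "user", "content": user_prompt},
            ],
        )
        return resp["message"]["content"]

    def backend_answer(compressed_prompt: str) -> str:
        """Send the condensed prompt to the remote GPT backend."""
        resp = remote.chat.completions.create(
            model="gpt-4o-mini",  # placeholder remote model
            messages=[{"role": "user", "content": compressed_prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        question = "Hey, could you maybe explain how TCP slow start works? Thanks!"
        terse = frontend_compress(question)
        print("frontend ->", terse)
        print("backend  ->", backend_answer(terse))

The interesting part (and where the retraining would come in) is replacing that English compression prompt with a learned intermediate representation.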

I don't think humans should be involved in developing this AI-to-AI language beyond giving some guidance. Let two agents collaborate to invent the language, and just reward/punish them with RL methods.
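
Basically the referential-game setup from the emergent-communication literature. A toy sketch with a tabular speaker and listener and a made-up reward scheme (vocab size, learning rate and rewards are arbitrary illustrations, not a real training recipe):

    # Two agents invent a symbol <-> concept mapping purely from reward/punishment.
    import random

    N_CONCEPTS, N_SYMBOLS, LR = 5, 5, 0.1

    # speaker[c][s]: preference for emitting symbol s given concept c
    speaker = [[1.0] * N_SYMBOLS for _ in range(N_CONCEPTS)]
    # listener[s][c]: preference for guessing concept c given symbol s
    listener = [[1.0] * N_CONCEPTS for _ in range(N_SYMBOLS)]

    def sample(weights):
        """Sample an index proportionally to its weight."""
        return random.choices(range(len(weights)), weights=weights)[0]

    for step in range(20000):
        concept = random.randrange(N_CONCEPTS)      # the thing to communicate
        symbol = sample(speaker[concept])           # speaker "speaks"
        guess = sample(listener[symbol])            # listener interprets
        reward = 1.0 if guess == concept else -0.1  # gratify or punish
        # crude REINFORCE-style update: strengthen chosen actions, keep weights positive
        speaker[concept][symbol] = max(0.01, speaker[concept][symbol] + LR * reward)
        listener[symbol][guess] = max(0.01, listener[symbol][guess] + LR * reward)

    # The agents tend to settle on some mapping between concepts and symbols:
    # an invented "language" no human designed.
    for c in range(N_CONCEPTS):
        best = max(range(N_SYMBOLS), key=lambda s: speaker[c][s])
        print(f"concept {c} -> symbol {best}")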

OpenAI, looking at you: I got an email a few days ago saying "you're not using the OpenAI API that much recently, what changed?"

reply