I'm currently downloading Ollama and plan to write a simple proof of concept with Qwen as the local "frontend", talking to OpenAI's GPT as the "backend". I think the idea is sound, but it would indeed need retraining of GPT (hmm, something like training a tiny local LLM in sync with a big remote LLM). It might not be a bad business venture in the end.
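The relay loop itself is simple either way. Here is a minimal sketch of what I have in mind; the two agent functions are stubs (in practice the frontend would call a local Qwen model through Ollama's chat API and the backend would call the OpenAI chat completions API):

```python
# Sketch of the local-frontend / remote-backend relay.
# `frontend` and `backend` are stand-ins for real API calls:
# e.g. ollama.chat(...) locally, openai chat completions remotely.

def relay(frontend, backend, user_prompt, turns=3):
    """Pass a message back and forth between two chat agents,
    recording the full transcript as (speaker, text) pairs."""
    transcript = [("user", user_prompt)]
    message = user_prompt
    for _ in range(turns):
        message = frontend(message)            # local model encodes/compresses
        transcript.append(("frontend", message))
        message = backend(message)             # remote model produces the answer
        transcript.append(("backend", message))
    return transcript

# Stub agents so the loop runs without any network access:
frontend = lambda m: f"<compressed:{len(m)} chars>"
backend = lambda m: f"reply to {m}"

log = relay(frontend, backend, "Explain transformers", turns=1)
```

Swapping the stubs for real clients is the whole proof of concept; the loop doesn't change.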

I don't think humans should be involved in developing this AI-to-AI language beyond giving some guidance: let the two agents collaborate to invent the language themselves, and just reward or punish them with RL methods.
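The reward signal could be as simple as "did the task succeed, and how short was the inter-agent message". A toy sketch, where `task_success` and `msg_len` are hypothetical signals you'd measure per episode:

```python
# Toy reward shaping for two agents inventing a compressed protocol:
# reward correct answers, with a bonus for brevity of the invented language.

def reward(task_success: bool, msg_len: int, max_len: int = 512) -> float:
    base = 1.0 if task_success else -1.0
    # Linearly reward shorter inter-agent messages, floored at zero.
    brevity_bonus = max(0.0, 1.0 - msg_len / max_len)
    return base + 0.5 * brevity_bonus

r = reward(True, 128)   # success with a fairly short message
```

The weighting here is arbitrary; the point is just that the pressure toward a terse shared code comes from the reward, not from human language design.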

OpenAI, looking at you: I got an email a few days ago saying "you're not using the OpenAI API that much recently, what changed?"
