That is not a problem. Language is messy; you don't need 100% accuracy to learn. The problem is that LLM errors are fundamentally different from human errors, and you won't even be able to recognize them.
Your interlocutors can work around human errors, because those tend to follow the same patterns within a given language. But LLM errors will throw them off completely.
If OpenAI put resources into language learning, they could build a great product. But third-party devs relying on someone else's tech hasn't proven to be a good strategy.