> nor is there ever going to be a 100% freedom-from-error

That is not a problem. Language is messy; you don't need 100% accuracy to learn. The problem is that LLM errors are fundamentally different from human errors, and you won't even know how to recognize them.

Your interlocutors can work around human errors, because those errors tend to follow the same patterns within a given language. But LLM errors will throw them off.

The trend I've seen in these AI tech companies is that they launch their MVP on top of base models (or, in this case, by fine-tuning GPT-4). This gives them enough traction for a seed round, but 2+ years later they don't have the talent to improve the product beyond that.

If OpenAI put resources into language learning, they could build a great product. But third-party devs building on someone else's tech hasn't proven to be a good strategy.
