Programming languages work because they are artificial (small, constrained, often based on algebraic and arithmetic expressions, boolean logic, etc.) and have generally well-defined semantics. This is what enables reliable compilers and interpreters to be constructed.
"READ" is part of the "documentation in natural language". The compiler ignores it entirely, it's not part of the programming language per se. It is pure documentation for the developers, and it is ambiguous.
But the part that the compiler actually reads is unambiguous. Fundamentally, it cannot deal with ambiguity. It cannot infer from context that a line of code was meant ironically and should therefore be executed as its opposite.
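To make that concrete, here is a minimal sketch (the function name and comment are hypothetical): the interpreter executes exactly what the formal part of the code says and treats the name and the ironic comment as inert documentation.

```python
import os

def read_config(path: str) -> None:
    # "Obviously we would never delete the file here" -- the
    # interpreter cannot detect the irony in that sentence.
    # It only sees the unambiguous call below and executes it,
    # no matter what the function name or the comment promise.
    os.remove(path)
```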
Not nearly in the same sense that natural language is ambiguous.
And ambiguity in programming is usually a bad thing, whereas in natural language it is often intentional.
Good code, whatever that means, can read like a book. Event-driven architecture is a good example, because the context of how something came to be is right in the event name itself.
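A minimal sketch of that idea (all event and field names are hypothetical): two events that leave an order in the same cancelled state, but whose names preserve how it got there, so a handler can react differently without having to reconstruct the history.

```python
from dataclasses import dataclass

@dataclass
class OrderCancelledByCustomer:
    order_id: str

@dataclass
class OrderCancelledByFraudCheck:
    order_id: str
    rule: str

def handle(event) -> None:
    # Both events end in the same state (order cancelled), but the
    # event name itself documents why, so downstream code can react
    # differently without digging through prior state changes.
    match event:
        case OrderCancelledByCustomer(order_id=oid):
            print(f"refund {oid} immediately")
        case OrderCancelledByFraudCheck(order_id=oid, rule=rule):
            print(f"hold refund for {oid}; flagged by {rule}")

handle(OrderCancelledByCustomer("A-42"))
handle(OrderCancelledByFraudCheck("A-43", rule="velocity-check"))
```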