If I ask the same model the same question, I should deterministically get the same answer.
Now, if we phrase the same question slightly differently, we would expect a slightly different answer.
You wouldn't get that from an LLM, though: a tiny change in the starting point produces a massive change in the output. It's a chaotic system.
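The sensitivity described above can be sketched with a toy example. This is not a real model, just an illustration under one assumption: that greedy decoding picks the highest-logit token, so a tiny logit perturbation can flip which token wins, and each flipped token feeds back into the next step.

```python
import numpy as np

# Toy sketch: greedy decoding picks the argmax over token logits.
# Token 0 barely beats token 1 in the original logits.
logits = np.array([2.000, 1.999, 0.5])

# A tiny change in the "starting point" nudges token 1 ahead.
perturbed = logits + np.array([0.0, 0.002, 0.0])

print(np.argmax(logits))     # 0: token 0 wins
print(np.argmax(perturbed))  # 1: token 1 wins instead
```

In an autoregressive model the divergence compounds: once a different token is emitted, every subsequent step conditions on a different prefix.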
“Language ambiguity with determinism”? Sure, I can juxtapose the terms, but if the phrase is semantically inconsistent, then what we mean by it is not a deterministic, definitive thing. You’re chasing your tail on this ‘goal’.
Determinism: if a model is given the exact same request/prompt twice, its two responses will also be identical, whether or not that consistent response qualifies as correct.
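That definition suggests a simple test, sketched here with a hypothetical `generate` function standing in for a real model call (the name and behavior are assumptions, not any particular API):

```python
# Hypothetical sketch of the determinism test described above:
# same prompt twice -> byte-identical responses, regardless of
# whether the response is actually correct.
def generate(prompt: str) -> str:
    # Stand-in for a real model call; assumed deterministic here.
    return "response to: " + prompt

def looks_deterministic(prompt: str) -> bool:
    # Compare two independent calls with the identical prompt.
    return generate(prompt) == generate(prompt)

print(looks_deterministic("What is 2+2?"))  # True for this stand-in
```

Note the check says nothing about correctness: a model that always answers "5" to "What is 2+2?" passes it.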
The two concepts are very different.
(Ambiguous vs. precise prompt) x (Deterministic vs. Non-deterministic model) = 4 different scenarios.
A model itself can be non-deterministic without being ambiguous. If you know exactly how it functions and why it is non-deterministic (batch-sensitive, for instance), that is not an ambiguous model: its operation is completely characterized. But it is still non-deterministic.
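Batch sensitivity is a well-characterized source of non-determinism, and it ultimately comes down to floating-point arithmetic. A minimal sketch: floating-point addition is not associative, so summing the same numbers in a different order (as different batch shapes and kernel schedules can do) yields different results.

```python
# Floating-point addition is not associative: the same four numbers
# summed in two different orders give two different answers.
vals = [1e16, 1.0, -1e16, 1.0]

# Left-to-right: 1e16 + 1.0 rounds back to 1e16 (the 1.0 is lost),
# then cancelling -1e16 leaves 0.0, and the final +1.0 gives 1.0.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Reordered: the big terms cancel exactly first, so both 1.0s survive.
reordered = (vals[0] + vals[2]) + (vals[1] + vals[3])

print(left_to_right)  # 1.0
print(reordered)      # 2.0
```

Fully characterized, fully explainable, and still non-deterministic in practice once the summation order depends on how requests happen to be batched.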
An ambiguous model would simply be a model whose operation is not characterized: a black-box model, for instance. A black-box model can be deterministic and yet ambiguous.