You need non-linearity in self-attention because it encodes feature/embedding similarities or correlations (e.g., self-attention can be viewed as kernel smoothing) and multiplicative interactions; it has nothing to do with determinism or nondeterminism. Also, LLMs are not really nondeterministic in any serious way: what nondeterminism you observe comes from implementation tweaks and optimizations (e.g., non-deterministic floating-point reduction order in batched GPU kernels) that are not at all core to the architecture.
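
To make the kernel-smoothing point concrete, here is a minimal single-head sketch in NumPy (toy shapes and weight names are my own, not from any particular implementation): the softmax is the non-linearity, and the output is a kernel-weighted average of the value vectors.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention viewed as kernel smoothing.

    Each output row is a weighted average of the value vectors, with
    weights given by a softmax kernel over query/key similarity. The
    softmax is the non-linearity: drop it and the whole layer collapses
    into a single linear map of X.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # multiplicative interaction between tokens
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1: a smoothing kernel
    return weights @ V                              # kernel-weighted average of values

# toy usage: 5 tokens, 8-dim embeddings
rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```

Note this whole computation is a pure function of its inputs, fully deterministic; the non-linearity and any runtime nondeterminism are entirely separate issues.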