Yes, that's brought up in the first part of the article. She goes on to discuss how performance differs depending on the language being used and how that affects the safety guards; apparently some language models do quite a bit worse in some languages. (The models tested aren't the latest ones.)
> What the author seems to be saying is that the system prompt can be used to instill bias in LLMs.

That's, like, the whole point of system prompts. "Bias" is how they do what they do.
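To make that concrete, here's a minimal sketch (assuming the OpenAI Python client; the model name, question, and persona strings are mine, not from the article): the same user question, steered in opposite directions purely by the system prompt.

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTION = "Should I rewrite my service in Rust?"

    # Two different "biases" baked in via the system prompt.
    for persona in (
        "You are a cautious engineer who favors incremental change.",
        "You are an enthusiast who recommends rewrites whenever possible.",
    ):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": QUESTION},
            ],
        )
        print(persona, "->", resp.choices[0].message.content[:120])

Run it and the two answers will lean in predictably different directions. That steering is the feature, not a side effect.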
