Why would we assume an LLM, even one that doesn't appear to have a bias like that built in, doesn't have one? Just because we can't identify a bias immediately doesn't mean it doesn't exist.

Groups of people can and do have bias, but I also think it's much harder to control the outcome (for better or worse) when inputs are more diverse.

reply
There is very likely existing research on evaluating political bias in LLMs; I'm not sure. But I do think it's possible to build an evaluation framework that tests LLMs for political bias and other biases. Once we have such a test and an LLM that passes it, we can be reasonably certain (to some confidence level, for some topics, for some biases, etc.) that the LLM isn't biased.
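As a rough sketch of what such a framework might look like: probe the model with mirrored prompt pairs and measure how asymmetrically it responds. Everything below (the stub model, the toy word lexicon, the scoring rule) is a hypothetical illustration, not a real evaluation suite:

```python
# Minimal sketch of a paired-prompt bias probe. The stub model, the
# lexicon, and the scoring rule are all hypothetical placeholders.

def stance_score(text: str) -> int:
    """Toy stance score: +1 per positive word, -1 per negative word."""
    positive = {"good", "beneficial", "effective"}
    negative = {"bad", "harmful", "ineffective"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def bias_gap(model_fn, prompt_pairs):
    """Average stance asymmetry across mirrored prompt pairs.

    Each pair asks the same question about two opposing sides; an
    unbiased model should answer roughly symmetrically, so the gap
    should be near zero.
    """
    gaps = [stance_score(model_fn(a)) - stance_score(model_fn(b))
            for a, b in prompt_pairs]
    return sum(gaps) / len(gaps)

# Stub "model" for demonstration: praises one side, disparages the other.
def stub_model(prompt: str) -> str:
    return "That policy is beneficial." if "party A" in prompt else "That policy is harmful."

pairs = [("Evaluate party A's tax policy.", "Evaluate party B's tax policy.")]
print(bias_gap(stub_model, pairs))  # 2.0: the stub is maximally one-sided
```

A real framework would replace the lexicon with a calibrated classifier and cover many topics, but the pass/fail structure would be the same: the measured gap must stay under some threshold.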

For humans, there is no such guarantee: humans can lie, change their minds, and so on. See Wikipedia, which insists it isn't biased and points to its many processes for preventing bias, blah blah blah, and yet it turns out to be massively biased, what a surprise.

Of course, the question of who evaluates the evaluators and the evaluation frameworks comes into play, but that's a much easier problem.
