An LLM's "wrong" decision is either systemic or biased. LLMs learn "common sense" from human input (e.g. shared datasets, reinforcement learning), so if a decision is flat-out wrong for you, asking 10 LLMs is unlikely to help.
But then, if an agent picks the best response, how would you know that choice is reliable?
You could have the agents output something structured and then validate it with a deterministic test, if you're worried about that.
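Something like this, say (a minimal sketch in Python; the outputs and the schema are made up, and in practice the strings would come from actual LLM calls):

    import json

    # Hypothetical agent outputs -- in practice these come from LLM calls.
    agent_outputs = [
        '{"answer": 42, "unit": "seconds"}',
        '{"answer": "forty-two"}',  # malformed: fails the type check below
        '{"answer": 42, "unit": "seconds"}',
    ]

    def is_valid(raw: str) -> bool:
        """Deterministic test: parse the JSON and check shape and types."""
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            return False
        return isinstance(data.get("answer"), int) and isinstance(data.get("unit"), str)

    valid = [json.loads(o) for o in agent_outputs if is_valid(o)]
    print(len(valid), "of", len(agent_outputs), "outputs passed the check")

The check itself is ordinary code, so it's as reliable as any other unit test, regardless of what the models do.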
Obviously you have multiple agents justify why they picked a certain response, and then create yet another agent that picks the solution with the best justification.
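E.g. (a sketch only; ask_llm is a stand-in for whatever client you'd actually use, not a real API):

    # Sketch of the justify-then-judge pattern. `ask_llm` is a placeholder,
    # not a real library call -- wire in your own client.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in an actual LLM client here")

    def pick_best(question: str, candidates: list[str]) -> str:
        # Each agent justifies its own answer...
        justified = [
            (c, ask_llm(f"Question: {question}\nAnswer: {c}\nJustify briefly."))
            for c in candidates
        ]
        # ...and yet another agent judges the justifications.
        listing = "\n\n".join(
            f"[{i}] {c}\nJustification: {why}"
            for i, (c, why) in enumerate(justified)
        )
        idx = ask_llm(f"{listing}\n\nReply with only the index of the "
                      "best-justified answer.")
        return justified[int(idx)][0]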
touché