An LLM's "wrong" decision is either systemic or biased. They learn "common sense" from human input (e.g. shared datasets, reinforcement learning from human feedback). If a decision is flat-out wrong for you, asking 10 LLMs is unlikely to help.
Obviously: you have multiple agents each justify why they picked a certain response, then a final judge agent picks the solution with the best justification.
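A minimal sketch of that justify-then-judge pattern. The agent functions here are offline stubs standing in for real LLM API calls, and the judge's scoring heuristic (longest justification) is a placeholder for what would normally be another LLM call that compares the arguments:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    answer: str
    justification: str

def ask_agents(question: str, agents) -> list[Proposal]:
    # Each agent returns its answer plus the reasoning behind it.
    return [agent(question) for agent in agents]

def judge(proposals: list[Proposal]) -> Proposal:
    # Placeholder judge: in practice this would be another LLM prompted
    # to compare the justifications; here we just take the longest one
    # as a stand-in scoring heuristic.
    return max(proposals, key=lambda p: len(p.justification))

# Stub agents standing in for real model calls.
agents = [
    lambda q: Proposal("42", "Short answer, short reasoning."),
    lambda q: Proposal("41", "A longer chain of reasoning that the judge heuristic will prefer."),
]

best = judge(ask_agents("What is the answer?", agents))
print(best.answer)
```

The point of the pattern is that the judge never sees who proposed what, only the arguments, so in principle it rewards reasoning quality rather than popularity.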