It's a shame people love to use hostile language (something I'm also sometimes guilty of), but I think redsocksfan45's misconception is worth addressing. The comment is, however, (rightfully) dead. I'll address it anyway.

Model performance consistency is important not because you want inference determinism (which you can actually get by setting temperature to zero and using a fixed seed). The `another axis of non-determinism` can be illustrated by the question "if I move from OpenRouter to Bedrock, will gpt-5.5 perform the same?" The answer is no, or at least not necessarily.
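As an aside, the temperature-zero-plus-seed setup looks roughly like this. A minimal sketch against an OpenAI-style chat completions API: the model name "gpt-5.5" is just the placeholder from this thread, and even with a pinned `seed` most backends only promise best-effort determinism, since kernels or hardware can change between calls.

```python
def deterministic_params(model: str, prompt: str, seed: int = 42) -> dict:
    """Build chat-completion kwargs that pin the sampling knobs.

    temperature=0 makes decoding greedy, and a fixed seed asks the
    backend to reproduce any remaining tie-breaks. This is still
    best-effort determinism, not a hard guarantee.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # greedy decoding
        "seed": seed,      # best-effort reproducibility
        "top_p": 1,        # disable nucleus truncation
    }

params = deterministic_params("gpt-5.5", "Summarize RFC 2119 in one line.")
# Then pass it to any OpenAI-compatible client, e.g.:
# client.chat.completions.create(**params)
```

The point of the thread stands either way: pinning these knobs controls *sampling* non-determinism on one endpoint, but says nothing about how the same model behaves on a different provider's serving stack.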

This is important because workflows that used to work on one platform might degrade or break outright on another, even with the same model, which you have to account for when deciding which provider to use.
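One way to account for that is a small cross-provider smoke test: run the same fixed prompt set against each candidate endpoint and compare pass rates before switching. A hedged sketch of the scoring half (the prompts, checks, and provider outputs below are made up for illustration; the responses would come from any OpenAI-compatible client pointed at each provider):

```python
def pass_rate(responses: dict, checks: dict) -> float:
    """Fraction of prompts whose response contains the expected substring.

    `responses` maps prompt -> model output collected from one provider;
    `checks` maps prompt -> required substring. Substring matching is a
    deliberately crude stand-in for a real eval harness.
    """
    hits = sum(1 for p, want in checks.items() if want in responses.get(p, ""))
    return hits / len(checks)

# Hypothetical outputs from two providers serving the *same* model:
checks = {"2+2?": "4", "Capital of France?": "Paris"}
provider_a = {"2+2?": "The answer is 4.", "Capital of France?": "Paris."}
provider_b = {"2+2?": "The answer is 4.", "Capital of France?": "I cannot say."}

print(pass_rate(provider_a, checks))  # 1.0
print(pass_rate(provider_b, checks))  # 0.5
```

Crude as it is, a gap like the one above is exactly the "same model, different behavior" signal worth catching before migrating a workflow.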

reply
Anyone who has used gpt-x via openai vs microsoft has experienced this very clearly.
reply
Which one is better?
reply
For OpenAI, going direct has always been better, except maybe in the early-2023 era when the OpenAI Platform wasn't yet all that stable or reliable.

For Anthropic, it can vary by model and over time. For Opus 4.7, Bedrock is the clear winner in TPS by a wide margin: https://artificialanalysis.ai/models/claude-opus-4-7/provide...

reply
That Artificial Analysis page has some great references for this, thanks for sharing.
reply
As a rule of thumb, inference offered by the model labs themselves is closer to the "true implementation" than third-party offerings. They have other problems, though.
reply