Unfortunately, as with most of the AI providers, it's wherever they've been able to find available power and capacity. They have contracts with all of the large cloud vendors, and the lack of capacity is a significant enough issue that locality isn't really part of the equation.
The only thing they're particular about locality for is the infrastructure they use for training runs, where they need lots of interconnected capacity with low-latency links.
Inference is wherever, whenever. You could be having your requests processed halfway around the world, or right next door, from one minute to the next.
Wow, any source for this? It would explain why responses vary between feeling really responsive and really delayed.