In other words, it's not worth it.
Consider the cost of launching 100K servers when a single H200 server needs 20 m^2 of radiator, and a GB200 rack needs 250 m^2!
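Those per-server and per-rack figures can be sanity-checked against the Stefan–Boltzmann law. Here is a rough sketch; the panel temperature (~300 K), emissivity (0.9), and heat loads (~10 kW for an 8-GPU H200 server, ~120 kW for a GB200 NVL72 rack) are my own illustrative assumptions, not vendor specs:

```python
# Stefan-Boltzmann back-of-envelope: radiating area needed to reject
# server heat in orbit. All constants are illustrative assumptions.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9      # assumed panel emissivity
PANEL_TEMP_K = 300.0  # assumed radiator surface temperature

def radiating_area_m2(heat_watts: float) -> float:
    """Radiating surface needed to shed `heat_watts` by thermal radiation
    alone (ignores absorbed sunlight and Earthshine, which only make it worse)."""
    flux = EMISSIVITY * SIGMA * PANEL_TEMP_K ** 4   # ~413 W per m^2
    return heat_watts / flux

# Assumed heat loads: ~10 kW per H200 server, ~120 kW per GB200 rack.
print(f"H200 server: ~{radiating_area_m2(10_000):.0f} m^2 of radiating surface")
print(f"GB200 rack:  ~{radiating_area_m2(120_000):.0f} m^2 of radiating surface")
```

This lands in the same ballpark as the figures above, which is why the launch-mass problem doesn't go away no matter how you shuffle the assumptions.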
Ok, but those numbers are for a single server or a single rack. What about a standard cluster size of, say, 50k GPUs?
With optimal, idealized efficiencies, you would need roughly 64,000 m^2 of radiator to cool your space data center. That's about 9 American football fields of double-sided radiator panels, for a single data center. And realistically there would be inefficiencies and wastage, so it could end up more like 20 football fields of cooling.
How's that going to work?