I agree they should have also listed the compressed size of the table instead of only the CSV size. But the compressed dataset is probably not smaller than 1/10 of the CSV size. If that's the case, they're transferring ~8GB in 4.6 s on a 2GB/s (15Gbps) connection, which seems pretty close to the max.
The size of the dataset should be under 3GB in parquet from what I understand. [0]
So it did 3 GB * 8 / 4.94 s ≈ 4.86 Gbps, which is underwhelming network performance.
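For reference, here's the arithmetic behind both throughput figures in this thread (a sketch; the sizes and timings are the estimates quoted above, not measured values):

```python
def throughput_gbps(size_gb: float, seconds: float) -> float:
    """Convert decimal gigabytes transferred over a duration into gigabits/s."""
    return size_gb * 8 / seconds

# Parent comment's estimate: ~8 GB in 4.6 s (assumed compressed size)
print(f"{throughput_gbps(8, 4.6):.1f} Gbps")   # ~13.9, near a 15 Gbps link

# ~3 GB parquet in 4.94 s
print(f"{throughput_gbps(3, 4.94):.2f} Gbps")  # ~4.86, well under link speed
```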
It's still not possible to draw any conclusions, since we don't know specifically how they encode the data or how they run the query.
I just mean this write-up is useless from an engineering perspective, and what it says about HTTP doesn't make sense.
[0] - https://clickhouse.com/docs/getting-started/example-datasets...