It could be reasonable to let computers trigger a data throughput test: the peripheral would state "I support up to 40Gbps of receiving/sending", and then send a simple pattern that can be generated on the fly. But a lot of devices can't receive/send that 80Gbps of combined traffic for long enough to perform a decent test - the storage, RAM, buffers, etc. get depleted or act as bottlenecks.
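To be concrete about "a simple pattern that can be generated on the fly", here's one possible shape for it (a counter pattern I made up, not anything a spec defines): neither end has to store the test data, both can regenerate any chunk from its byte offset, so storage and RAM never enter the picture.

```python
import struct

def pattern_chunk(offset, size):
    """Generate `size` bytes of a deterministic test pattern starting at byte
    `offset`: 8-byte block i holds the little-endian value i. Assumes offset
    and size are multiples of 8. Nothing needs to be buffered or stored."""
    first = offset // 8
    return b"".join(struct.pack("<Q", first + i) for i in range(size // 8))

def verify_chunk(offset, data):
    """Check a received chunk by regenerating the expected bytes."""
    return data == pattern_chunk(offset, len(data))

# example: a 1 MiB chunk at offset 0 round-trips correctly
chunk = pattern_chunk(0, 1 << 20)
assert verify_chunk(0, chunk)
```

Obviously a real implementation would live in silicon or firmware, not Python; the point is just that the test data costs nothing to produce or check.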
If you know enough to accurately interpret the measurements you'd get from that, you know enough to write your own program that tries to push 80Gbps from one computer to another and uses DMA to process it in real time without ever hitting storage (something a lot of peripherals likely don't have the CPU to do on their end).
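For the sake of illustration, a minimal sketch of that kind of roll-your-own test (plain TCP sockets, so nowhere near 80Gbps and not actual DMA; the port number and chunk size are arbitrary choices of mine). The key property is the same: one reusable buffer on each side, nothing ever touches disk.

```python
import socket, sys, time

PORT = 5201               # arbitrary
CHUNK = 4 * 1024 * 1024   # 4 MiB buffer, reused so RAM/storage never fill up
DURATION = 10             # seconds of sending

def receiver():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    buf = bytearray(CHUNK)
    total, start = 0, time.time()
    while True:
        n = conn.recv_into(buf)   # bytes are counted and dropped, never written out
        if n == 0:
            break
        total += n
    elapsed = time.time() - start
    print(f"received {total / 1e9:.2f} GB in {elapsed:.1f}s "
          f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def sender(host):
    payload = bytes(CHUNK)        # contents don't matter here, only the byte count
    s = socket.create_connection((host, PORT))
    end = time.time() + DURATION
    while time.time() < end:
        s.sendall(payload)
    s.close()

if __name__ == "__main__":
    sender(sys.argv[2]) if sys.argv[1] == "send" else receiver()
```

Run `python tput.py recv` on one machine and `python tput.py send <host>` on the other; tools like iperf3 do the same thing better, but even this much already demands you understand what the number does and doesn't tell you.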
If you don't know enough to write those test applications, you probably don't know enough to interpret the results of a built-in test function and the measurements would confuse and frustrate a lot of well-meaning, nerdy, but under-educated consumers who make assumptions about why they're not actually getting the rated speed.
Idk, my opinion doesn't go one way or the other here. Perhaps I myself don't quite know enough to be a good judge of that concept.
This is because the cross-sectional area of conductor needed would make for an inflexible cable, and even then the connector, rated for it or not, could never handle a sustained 240W in the real world.
Fires. Fires everywhere... this is why no 240W chip exists.
src: electrician
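Some back-of-the-envelope numbers on that point: 240W EPR is 48V at 5A, and the heat dumped into the cable's power pair goes as I²R. The conductor sizes below are my own guesses for illustration, not spec values, and connector/contact resistance (ignored here) only makes it worse.

```python
RHO_CU = 1.68e-8   # copper resistivity, ohm*m, room temperature

def cable_heat_w(power_w, volts, area_mm2, length_m):
    """Rough I^2*R heat in a cable's power pair, assuming a single
    VBUS + GND round trip (2x the cable length)."""
    amps = power_w / volts
    r = RHO_CU * (2 * length_m) / (area_mm2 * 1e-6)
    return amps ** 2 * r

# 240 W at 48 V is 5 A; assumed conductor sizes, 1 m cable
for label, area in [("~20 AWG (0.52 mm^2)", 0.52), ("~24 AWG (0.20 mm^2)", 0.20)]:
    print(label, f"-> {cable_heat_w(240, 48, area, 1.0):.1f} W of heat")
```

That works out to roughly 1.6W of heat with the thicker copper and over 4W with the thinner, all of it trapped inside an insulated jacket, which is why the copper has to be thick and the cable ends up stiff.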
If you think you have known-good devices, all an end user cares about is whether the cable is the bottleneck. If I have a MacBook and a good NVMe enclosure, I want to know whether my cable is fast enough, rather than have it quietly fall back to USB 3.2 speeds or worse.
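A crude way to sanity-check that today, assuming a large file already sitting on the enclosure (the path below is made up): time a sequential read and see roughly what the link delivers. Caveats: the OS page cache can inflate the number if the file was read recently, and the drive or enclosure itself may be the limit rather than the cable.

```python
import os, sys, time

# hypothetical path to a big file on the external NVMe enclosure
PATH = sys.argv[1] if len(sys.argv) > 1 else "/Volumes/External/bigfile.bin"
CHUNK = 8 * 1024 * 1024   # 8 MiB reads

fd = os.open(PATH, os.O_RDONLY)
total, start = 0, time.time()
while True:
    data = os.read(fd, CHUNK)
    if not data:
        break
    total += len(data)
os.close(fd)

elapsed = time.time() - start
print(f"read {total / 1e9:.1f} GB at {total * 8 / elapsed / 1e9:.1f} Gbit/s")
# very rough read: single-digit Gbit/s smells like a USB 3.x fallback,
# well into the tens of Gbit/s means the fast link actually negotiated
```

It's indirect, which is exactly the complaint: you end up inferring the cable's behavior from whatever happens to be plugged into it.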