Consumer card chips have smaller die areas than datacenter card chips, and that has held for a few generations now. They can't possibly be the same chips; they are physically different sizes. The lowest-end consumer dies are less than 1/4 the area of the datacenter dies, and even the highest-end consumer dies are only around 80% of that area. That implies some nontrivial differentiation is going on at the silicon level.
Secondly, you are not paying for the die area anyway. Whether a chip was made specifically for that exact GPU model or was binned after possibly defective areas got fused off, you are paying for the end product. If that product meets the expected performance, it is doing its job. This is not a subsidy (at least, not in that direction); the die is just one small part of what makes a usable GPU card, and excess die area left dark isn't even pure waste, since it helps with heat dissipation.
The fact that nVidia excludes decent FP64 from all of its prosumer offerings (*) can still be called "artificial" insofar as it was done deliberately for market segmentation, but it's not some trivial trick. They really are just not putting it into the silicon, and it shows up directly in FP64 throughput (see the sketch below). By now this has been the case for more generations than not.
* = The Quadro line of "professional" workstation cards is nowadays just consumer cards with ECC RAM and special drivers
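To make the FP64 point concrete, here is a minimal CUDA sketch (not taken from anything above; the kernel name, grid size, and iteration count are arbitrary choices) that times the same FMA-heavy loop in FP32 and in FP64 and prints the throughput ratio. On GeForce-class silicon the FP64 rate typically comes out around 1/32 to 1/64 of FP32, while datacenter parts sit near 1/2, which is exactly the silicon-level gap being described.

```cuda
// Rough FP64-vs-FP32 throughput probe. Illustrative only: sizes and
// iteration counts are arbitrary; the interesting output is the ratio.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_loop(T *out, int iters) {
    T a = (T)threadIdx.x * (T)0.001 + (T)1.0;
    T b = (T)1.000001;
    T c = (T)0.000001;
    for (int i = 0; i < iters; ++i) {
        a = a * b + c;  // one FMA per iteration
    }
    // Store the result so the compiler can't eliminate the loop.
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

template <typename T>
float time_kernel(int blocks, int threads, int iters) {
    T *d_out;
    cudaMalloc((void **)&d_out, (size_t)blocks * threads * sizeof(T));
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    fma_loop<T><<<blocks, threads>>>(d_out, iters);  // warm-up launch
    cudaEventRecord(start);
    fma_loop<T><<<blocks, threads>>>(d_out, iters);  // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return ms;
}

int main() {
    const int blocks = 1024, threads = 256, iters = 1 << 18;
    float ms32 = time_kernel<float>(blocks, threads, iters);
    float ms64 = time_kernel<double>(blocks, threads, iters);
    // Same FMA count in both runs, so the time ratio approximates the
    // chip's FP64:FP32 throughput ratio.
    printf("FP32: %.2f ms, FP64: %.2f ms, FP64 throughput ~ 1/%.1f of FP32\n",
           ms32, ms64, ms64 / ms32);
    return 0;
}
```

Compile with nvcc and run it on a GeForce card and on a datacenter card; the absolute times don't matter much, but the ratio makes the "it's just not in the silicon" point visible.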