No matter what scale you pick, your number line is going to look like this: https://anniecherkaev.com/images/floating_point_density.jpg Zoom in or out by 2x and not a single pixel of the graph will change, just the labels.
Whether your biggest value is 0.005 or 7000000, most of your range has 24 (or 53) bits of precision. 99% of values are either too small to matter or outside your range. Changing your scale shifts values between the "too small" and "too big" categories, but the number of useful values stays roughly the same.
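A quick way to see this (a sketch using Python's `math.ulp`, which returns the spacing from a value to the next representable double): the absolute spacing grows with magnitude, but the relative spacing stays near 2^-52 at every scale.

```python
import math

# The gap between adjacent doubles grows with magnitude,
# but the *relative* gap stays near 2**-52 at every scale.
for x in (0.005, 1.0, 7000000.0):
    rel = math.ulp(x) / x
    print(f"{x:>12}: ulp = {math.ulp(x):.3e}, relative = {rel:.3e}")
    # Within a factor of 2 of 2**-52 everywhere (binade rounding).
    assert 2**-53 <= rel <= 2**-51
```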
In those old FP formats, the product of the smallest nonzero normalized FP number and the biggest finite normalized FP number was approximately equal to 1.
However, in the IEEE standard for FP arithmetic it was decided that overflows are more dangerous than underflows, so the range of numbers greater than 1 was extended by shrinking the range of numbers smaller than 1.
With IEEE FP numbers, the product of the smallest and biggest nonzero finite numbers is no longer approximately 1; it is approximately 4.
So there are more numbers greater than 1 than smaller than 1. For IEEE FP numbers, there are approximately as many numbers smaller than 2 as there are numbers greater than 2.
An extra complication appears when the underflow exception is masked. Then there is an additional set of numbers smaller than 1, the denormalized (subnormal) numbers. There are not enough of them to compensate for the additional numbers greater than 1, but with them the midpoint is no longer at 2 but somewhere between 1 and 2, close to 1.5.
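The "approximately 4" claim is easy to check for doubles (a sketch in Python; `sys.float_info` exposes the same values as C's `DBL_MIN` and `DBL_MAX`):

```python
import sys

# Smallest positive *normal* double and largest finite double.
smallest_normal = sys.float_info.min   # 2**-1022
largest = sys.float_info.max           # (2 - 2**-52) * 2**1023

# Exponents cancel almost exactly: 2**-1022 * 2**1023 = 2,
# times the significand just under 2 gives a product just under 4.
product = smallest_normal * largest
print(product)  # just under 4
assert abs(product - 4.0) < 1e-14
```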
This is just wrong? The largest Float64 is 1.7976931348623157e308 and the smallest is 5.0e-324. They multiply to ~9e-16.
That's a subnormal [1]. The smallest normal double is 2.22507e-308:
DBL_MIN = 2.22507e-308
DBL_TRUE_MIN = 4.94066e-324
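Both constants are reachable from Python, for anyone who wants to check (a sketch; `math.ulp(0.0)` returns the smallest positive subnormal, i.e. the spacing just above zero):

```python
import math
import sys

# DBL_MIN: smallest positive *normal* double.
print(sys.float_info.min)    # 2.2250738585072014e-308

# DBL_TRUE_MIN: smallest positive *subnormal* double.
true_min = math.ulp(0.0)
print(true_min)              # 5e-324

# The subnormal times the largest double lands near 1e-15, not near 4:
print(true_min * sys.float_info.max)
```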
[1] https://en.wikipedia.org/wiki/Subnormal_number

However, for angles the relative error is completely irrelevant. For angles only the absolute error matters.
For angles the optimum representation is as fixed-point numbers, not as floating-point numbers.
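A sketch of the fixed-point idea (the 16-bit "binary angle" storing a fraction of a full turn is a hypothetical illustration, not from the comment above):

```python
TURN_BITS = 16  # hypothetical: angle stored as a 16-bit fraction of a turn
STEPS = 1 << TURN_BITS

def to_bam(degrees: float) -> int:
    """Quantize an angle to a 16-bit binary angular measurement (BAM)."""
    return round(degrees / 360.0 * STEPS) % STEPS

def from_bam(bam: int) -> float:
    return bam * 360.0 / STEPS

# Absolute error is uniform: at most half a step (~0.0027 deg) everywhere,
# whether the angle is tiny or close to 360. Wrap-around comes for free.
step = 360.0 / STEPS
for angle in (0.001, 45.0, 359.999):
    err = abs(from_bam(to_bam(angle)) - angle) % 360.0
    assert min(err, 360.0 - err) <= step / 2
```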
Even though the first number is smaller than the second one, they actually represent the same angle once you consider that they are in different units. So there's no precision advantage (absolute or relative) to converting degrees to radians.
Note that I'm not saying anything about fixed vs floating point, only responding to an earlier comment that radians give more precision in floating point representation.
Changing the unit gives the illusion of changing absolute error, but doesn't actually change the absolute error.
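One way to see this (a sketch using `math.ulp`; the factor-of-two slack comes from where binade boundaries happen to fall relative to 360 and 2*pi):

```python
import math

# Spacing of representable values near a full turn, in each unit.
ulp_deg = math.ulp(360.0)        # spacing near 360 degrees
ulp_rad = math.ulp(2 * math.pi)  # spacing near 2*pi radians

# Express the degree spacing in radians so both are in the same unit.
ulp_deg_in_rad = ulp_deg * math.pi / 180

# The absolute errors agree to within a factor of 2 (binade rounding);
# switching units does not buy any real precision.
ratio = ulp_deg_in_rad / ulp_rad
assert 0.5 <= ratio <= 2.0
```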
Re: teaching floats; when I was working with students, we touched on floats slightly, but mostly just to reinforce the idea that they aren't always exact. I think, realistically, it can be hard. You don't want to put an "intro to numerical analysis" class into the first couple of lectures of your "intro to programming" class, where you introduce the data types.
Then, if you are going to do a numerical analysis or scientific computing class… I dunno, that bit of information could end up being a piece of trivia that's easily forgotten, right?