Yeah, I'm not sure how widespread the knowledge is that floating point trades precision for magnitude. It's obvious if you know the implementation, but I'm not sure most folks do.
reply
I remember having to convince a few coworkers that the number of distinct floating-point values between 0.0 and 1.0 is the same as the number of values between 1.0 and infinity. They must not be teaching this properly anymore. Are there no longer courses that explain the basics of floating-point representation?
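You can check the claim by counting bit patterns. Here is a quick sketch (Python, assuming IEEE-754 doubles): consecutive positive doubles have consecutive bit patterns when reinterpreted as unsigned integers, so subtracting patterns counts the values in between.

```python
import struct

def bits(x: float) -> int:
    # Reinterpret a double's IEEE-754 bit pattern as an unsigned integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

one = bits(1.0)             # 0x3FF0000000000000
inf = bits(float("inf"))    # 0x7FF0000000000000

below = one - 1        # positive doubles strictly between 0.0 and 1.0
above = inf - one - 1  # finite doubles strictly greater than 1.0

print(below)  # 4607182418800017407
print(above)  # 4611686018427387903
```

The two counts agree to within about 0.1% — nearly equal, though not exactly.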

I was arguing that we could squeeze a tiny bit more precision out of our angle types by storing angles in radians (range: -π to π) instead of degrees (range: -180 to 180) because when storing as degrees, we were wasting a ton of floating point precision on angles between -1° and 1°.

reply
That doesn't work. The only real difference between those two scales is in the values located between -.0000000001 and .0000000001. And that's grossly underestimating the number of 0s.

No matter what scale you pick, your number line is going to look like this: https://anniecherkaev.com/images/floating_point_density.jpg Do a 2x zoom in or out and not a single pixel of the graph will change, just the labels.

Whether your biggest value is 0.005 or 7000000, most of your range has 24 (or 53) bits of precision. 99% of values are either too small to matter or outside your range. Changing your scale shifts values between the "too small" and "too big" categories, but the number of useful values stays roughly the same.
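One way to see the constant relative spacing is Python's `math.ulp`, which gives the gap between a double and the next one up: the gap grows with magnitude, but the gap divided by the value stays near 2^-52 everywhere.

```python
import math

# The gap to the next representable double ("ulp") grows with the
# magnitude of x, so the *relative* spacing is roughly constant.
for x in (0.005, 1.0, 7000000.0):
    print(f"{x:>10}  ulp={math.ulp(x):.3e}  relative={math.ulp(x) / x:.3e}")
```

The relative spacing lands between 2^-53 and 2^-52 (about 1.1e-16 to 2.2e-16) in every case, regardless of scale.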

reply
What you say was exactly true only for most of the floating-point formats used before 1980.

In those old FP formats, the product of the smallest normalized non-zero FP number and the biggest normalized finite FP number was approximately equal to 1.

However, in the IEEE standard for FP arithmetic it was decided that overflows are more dangerous than underflows, so the range of numbers greater than 1 was increased by diminishing the range of numbers smaller than 1.

With IEEE FP numbers, the product of the smallest and biggest non-zero, finite numbers is no longer approximately 1, but approximately 4.

So there are more numbers greater than 1 than smaller than 1. For IEEE FP numbers, there are approximately as many numbers smaller than 2 as there are numbers greater than 2.

An extra complication appears when the underflow exception is masked. Then there is an additional set of numbers smaller than 1, the denormalized numbers. There are not enough of them to compensate for the additional numbers bigger than 1, but with them the midpoint is no longer at 2 but somewhere between 1 and 2, close to 1.5.
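Both claims are easy to verify on an IEEE-754 machine; a quick Python check (using `sys.float_info` for the normal range, and bit-pattern counting as before):

```python
import struct
import sys

def bits(x: float) -> int:
    # Bit pattern of a double as an unsigned integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Smallest normal times largest finite: ~4, not ~1.
print(sys.float_info.min * sys.float_info.max)  # 3.9999999999999996

# Count finite positive doubles below and above 2.0 (subnormals included).
two, inf = bits(2.0), bits(float("inf"))
print(two - 1)        # below 2.0: 4611686018427387903
print(inf - two - 1)  # above 2.0: 4607182418800017407
```

The counts on either side of 2 differ by only about 0.1%, matching the claim that 2 is (approximately) the midpoint.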

reply
> With IEEE FP numbers, the product of the smallest and biggest non-null non-infinite numbers is no longer approximately 1, but it is approximately 4.

This is just wrong? The largest Float64 is 1.7976931348623157e308 and the smallest is 5.0e-324. They multiply to ~9e-16.

reply
>the smallest is 5.0e-324

That's a subnormal [1]. The smallest normal double is 2.22507e-308:

  DBL_MIN          = 2.22507e-308
  DBL_TRUE_MIN     = 4.94066e-324
[1] https://en.wikipedia.org/wiki/Subnormal_number
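The two products can be reconciled in a couple of lines (Python; `sys.float_info.min` is the smallest normal double):

```python
import sys

largest = sys.float_info.max         # ~1.7976931348623157e308
print(5e-324 * largest)              # smallest subnormal: product ~9e-16
print(sys.float_info.min * largest)  # smallest normal: product ~4
```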
reply
Wait, this doesn't make sense. Yes, you'd get a smaller absolute error in radians, but that doesn't really help because they're different units. The relative error is the same in degrees and radians; that's the whole point of an exponential representation. All you're doing is adding a fixed offset to the exponent, which doesn't give you any more precision when converting to radians.
reply
Having a constant relative error is indeed the reason for using floating-point numbers.

However, for angles the relative error is completely irrelevant. For angles only the absolute error matters.

For angles the optimum representation is as fixed-point numbers, not as floating-point numbers.

reply
With -π to π radians you get absolute error of approximately 4e-16 radians. With -180 to 180 degrees you get absolute error of approximately 2e-14 degrees.

Even though the first number is smaller than the second, they actually represent the same angle once you account for the different units. So there's no precision advantage (absolute or relative) to converting degrees to radians.
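For concreteness, here is the worst-case spacing (one ulp) near the top of each range in Python. Converting the degree spacing into radians shows it's essentially the same angle, to within a small constant factor:

```python
import math

ulp_rad = math.ulp(math.pi)   # spacing near pi radians:  ~4.4e-16 rad
ulp_deg = math.ulp(180.0)     # spacing near 180 degrees: ~2.8e-14 deg
print(ulp_rad)
print(math.radians(ulp_deg))  # ~5.0e-16 rad -- essentially the same angle
```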

Note that I'm not saying anything about fixed vs floating point, only responding to an earlier comment that radians give more precision in floating point representation.

reply
The absolute error accounting for units is what matters.

Changing the unit gives the illusion of changing absolute error, but doesn't actually change the absolute error.

reply
Yep, it was a long time ago, but I think that's exactly what we ended up with eventually: an int type with unit 2π/(int range). I believe we used unsigned because signed int overflow is undefined behavior.
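A sketch of that scheme in Python (a 32-bit "binary angle"; the names here are mine, not the original code's): the unsigned range [0, 2**32) maps to one full turn, so addition wraps around for free via masking.

```python
import math

BITS = 32
MASK = (1 << BITS) - 1  # emulates unsigned wraparound

def from_radians(theta: float) -> int:
    # Map an angle to a 32-bit "binary angle" with unit 2*pi / 2**32.
    return round(theta / math.tau * (1 << BITS)) & MASK

def to_radians(a: int) -> float:
    return (a & MASK) / (1 << BITS) * math.tau

quarter = from_radians(math.pi / 2)
half = from_radians(math.pi)
# pi/2 + pi + pi wraps back around to pi/2:
print(to_radians((quarter + half + half) & MASK))  # 1.5707963267948966
```

A nice side effect is that the absolute error is uniform across the whole circle, unlike a float representation.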
reply
Wouldn’t you want to use “turns” for that sort of thing?

Re: teaching floats; when I was working with students, we touch on floats slightly, but mostly just to reinforce the idea that they aren’t always exact. I think, realistically, it can be hard. You don’t want to put an “intro to numerical analysis” class into the first couple lectures of your “intro to programming” class, where you introduce the data-types.

Then, if you are going to do a sort of numerical analysis or scientific computing class… I dunno, that bit of information could end up being a bit of trivia or easily forgotten, right?

reply
Some languages even use different definitions of epsilon! (dotnet...)
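For example, Python (like C's DBL_EPSILON) defines epsilon as the gap between 1.0 and the next representable double, while .NET's Double.Epsilon is the smallest positive subnormal — a completely different quantity:

```python
import sys

print(sys.float_info.epsilon)  # 2.220446049250313e-16, the gap above 1.0
print(5e-324)                  # .NET's Double.Epsilon: smallest subnormal
```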
reply