upvote
A (perhaps initially) counterintuitive part of the above, stated more explicitly: the doubling/halving also means numbers between 0 and 1 actually have _more_ precision than the epsilon would suggest.
reply
Considerably more in many cases. The point of floating point is to have as many distinct values in the range 2-4 as in the range 1-2, as between 1/2 and 1, 1/4 and 1/2, 1/8 and 1/4, etc. The smallest representable difference between consecutive floating-point numbers down around the size of 1/64 is on the order of epsilon/64.

Multiplying epsilon by the largest number you are dealing with is a strategy that makes using epsilons at least somewhat logical.
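That strategy could be sketched like this (`approx_equal` is an illustrative name, not from any particular library; the default tolerance of machine epsilon is an assumption and would usually be loosened to allow for accumulated rounding error):

```python
import sys

def approx_equal(a: float, b: float,
                 tol: float = sys.float_info.epsilon) -> bool:
    # Scale the fixed epsilon by the largest magnitude involved, so the
    # tolerance tracks the actual spacing of floats at that scale
    # instead of being a fixed absolute cutoff.
    return abs(a - b) <= tol * max(abs(a), abs(b))
```

With the tolerance scaled this way, `approx_equal(0.1 + 0.2, 0.3)` holds even though `0.1 + 0.2 == 0.3` is false, while genuinely different values still compare unequal.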

reply
The term I've seen a lot is https://en.wikipedia.org/wiki/Unit_in_the_last_place

So I'd probably rewrite that code to first find the ULP of the larger of the absolute values of a and b, and then assert that their difference is less than or equal to that.

Edit: Or maybe the smaller of the two absolute values; I haven't totally thought through the consequences. It might not matter, because the ULPs will only differ when the numbers are significantly far apart, and then it doesn't matter which one you pick. Perhaps you can just always pick the first number and use its ULP.
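In Python 3.9+, `math.ulp` makes the check described above direct; a minimal sketch (`within_one_ulp` is a hypothetical helper name, and the one-ULP bound is just the specific tolerance from the comment):

```python
import math

def within_one_ulp(a: float, b: float) -> bool:
    # Find the ULP of the larger of the absolute values, then assert
    # that the difference is no bigger than that spacing.
    return abs(a - b) <= math.ulp(max(abs(a), abs(b)))
```

`math.nextafter(x, y)` pairs well with this for stepping to adjacent floats when you want a multi-ULP tolerance instead.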

reply
This is what was done to a raytracer I used. People kept making large-scale scenes with intricate details: think of a detailed ring placed on a table in a room with a huge field in view through the window. For a while you could override the fixed epsilon based on scene scale, but for such high-dynamic-range scenes a fixed epsilon just didn't cut it.

IIRC it would compute the "dynamic" epsilon value essentially by adding one to the mantissa (treated as an integer) to get the next representable float, then subtracting the initial value from that to get the dynamic epsilon.
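A sketch of that bit-increment trick (`dynamic_epsilon` is a hypothetical name; this assumes a positive, finite input and ignores sign, infinity, and NaN edge cases, which a real implementation would have to handle):

```python
import struct

def dynamic_epsilon(x: float) -> float:
    # Reinterpret the double's bits as an integer, add one to step to
    # the next representable float, then subtract the original value
    # to get the local spacing at this scale.
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    next_x = struct.unpack("<d", struct.pack("<Q", bits + 1))[0]
    return next_x - x
```

At `x = 1.0` this recovers machine epsilon exactly, and it doubles with each power of two, which is the scale-tracking behavior the raytracer needed.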

Definitely use library functions if you got 'em though.

reply
i find the best way to remember it is "it's not the epsilon you think it is."

epsilons are fine in the case that you actually want to put a static error bound on an equality comparison. numpy's relative errors are better for floats at arbitrary scales (https://numpy.org/doc/stable/reference/generated/numpy.isclo...).
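numpy's check combines a relative tolerance (scaled by the second argument's magnitude) with a small absolute tolerance for values near zero, roughly `|a - b| <= atol + rtol * |b|`:

```python
import numpy as np

# rtol defaults to 1e-5 and atol to 1e-8, so the effective tolerance
# grows with the magnitude of b rather than staying fixed.
print(np.isclose(1e10, 1.00001e10))  # within the relative tolerance
print(np.isclose(1e10, 1.001e10))    # off by ~0.1%, well outside it
```

The `atol` term is what keeps comparisons against values near zero from demanding impossible relative precision.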

edit: ahh i forgot all about ulps. that is what people often confuse ieee eps with. also, good background material in the necronomicon (https://en.wikipedia.org/wiki/Numerical_Recipes).

reply
It would be very useful to be able to compare the significands directly, then. I realize there is a boundary issue when a significand is very close to 0x00..000 or 0xFFF..FFF.

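Comparing raw bit patterns can be made to work across those boundaries, because IEEE 754 doubles of the same sign are ordered the same way as their bit patterns; remapping negatives onto a single monotonic integer line handles the rest. A sketch (`ulp_distance` is a hypothetical helper; NaNs are not handled):

```python
import struct

def ulp_distance(a: float, b: float) -> int:
    # Map each double's bit pattern onto one monotonically ordered
    # integer line: positive floats keep their bits, negative floats
    # are flipped below zero so ordering is preserved across the sign
    # boundary (and -0.0 maps to the same point as +0.0).
    def ordered(x: float) -> int:
        bits = struct.unpack("<q", struct.pack("<d", x))[0]
        return bits if bits >= 0 else -(bits & 0x7FFFFFFFFFFFFFFF)
    return abs(ordered(a) - ordered(b))
```

Adjacent representable floats then always differ by exactly 1, even when an exponent boundary (the 0xFFF..FFF significand rollover) sits between them.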
reply