Is there any constant more misused in compsci than ieee epsilon? :)

It's defined as the difference between 1.0 and the smallest number larger than 1.0. More usefully, it's the spacing between adjacent representable float numbers in the range 1.0 to 2.0.

Because the spacing between floats doubles at every integer power of two, it's impossible for two distinct numbers greater than or equal to 2.0 to be epsilon apart. The spacing between 2.0 and the next larger number is 2*epsilon.

That means `abs(a - b) <= epsilon` is equivalent to `a == b` for any a or b greater than or equal to 2.0. And if you use `<` then the limit will be 1.0 instead.

Epsilon is the wrong tool for the job in 99.9% of cases.
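
A quick sanity check of the claims above, as a Rust sketch (`next_up` is stable since Rust 1.86):

```rust
fn main() {
    // Spacing in [1.0, 2.0) is exactly f64::EPSILON:
    assert_eq!(1.0f64.next_up() - 1.0, f64::EPSILON);

    // Spacing in [2.0, 4.0) is 2 * EPSILON, so no two distinct numbers
    // >= 2.0 can be merely EPSILON apart:
    assert_eq!(2.0f64.next_up() - 2.0, 2.0 * f64::EPSILON);

    // Adding EPSILON to 2.0 doesn't even produce a new number;
    // the sum rounds straight back to 2.0:
    assert_eq!(2.0f64 + f64::EPSILON, 2.0);
}
```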

reply
A (perhaps initially) counterintuitive consequence of the above, stated more explicitly: the doubling/halving also means numbers between 0 and 1 actually have _more_ precision than epsilon would suggest.
reply
Considerably more, in many cases. The point of floating point is to have as many distinct values in the range 2-4 as in the range 1-2, as between 1/2 and 1, between 1/4 and 1/2, between 1/8 and 1/4, and so on. The smallest representable difference between consecutive floating point numbers down around the size of 1/64 is on the order of epsilon/64.

Multiplying epsilon by the largest number you are dealing with is a strategy that makes using epsilons at least somewhat logical.
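
Both points in Rust (a sketch using `next_up`, stable since 1.86; the closure is an illustration of the scaling idea, not a one-size-fits-all bound):

```rust
fn main() {
    let x = 1.0f64 / 64.0;
    // Down at 1/64, the gap between adjacent doubles is EPSILON / 64:
    assert_eq!(x.next_up() - x, f64::EPSILON / 64.0);

    // Scale-aware tolerance per the suggestion above: epsilon times the
    // larger magnitude tracks the local spacing at any scale.
    let approx_eq = |a: f64, b: f64| (a - b).abs() <= f64::EPSILON * a.abs().max(b.abs());
    assert!(approx_eq(x, x.next_up()));  // adjacent doubles pass
    assert!(!approx_eq(x, x + 0.001));   // genuinely different values fail
}
```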

reply
The term I've seen a lot is https://en.wikipedia.org/wiki/Unit_in_the_last_place

So I'd probably rewrite that code to first find the ulp of the larger of the abs of a and b and then assert that their difference is less than or equal to that.

Edit: Or maybe the smaller of the abs of the two, I haven't totally thought through the consequences. It might not matter, because the ulps will only differ when the numbers are significantly apart and then it doesn't matter which one you pick. Perhaps you can just always pick the first number and get its ULP.
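
A sketch of that in Rust (`ulp` here is a hypothetical helper built on `next_up`, stable since 1.86; it ignores infinities and NaN):

```rust
// One unit in the last place at the magnitude of x (hypothetical helper;
// not in std). Assumes x is finite and non-NaN.
fn ulp(x: f64) -> f64 {
    let m = x.abs();
    m.next_up() - m
}

// Close if the difference is within one ULP of the larger magnitude.
fn approx_eq_ulp(a: f64, b: f64) -> bool {
    (a - b).abs() <= ulp(a.abs().max(b.abs()))
}

fn main() {
    assert_eq!(ulp(1.5), f64::EPSILON);       // spacing in [1, 2)
    assert_eq!(ulp(2.0), 2.0 * f64::EPSILON); // spacing in [2, 4)
    assert!(approx_eq_ulp(1.0, 1.0 + f64::EPSILON));
    assert!(!approx_eq_ulp(1.0, 1.1));
}
```

Taking the smaller of the two magnitudes instead would just make the check slightly stricter; as noted, it only differs when the numbers are far apart anyway.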

reply
This is what was done to a raytracer I used. People kept making large-scale scenes with intricate details, think detailed ring placed on table in a room with a huge field in view through the window. For a while one could override the fixed epsilon based on scene scale, but for such high dynamic range scenes a fixed epsilon just didn't cut it.

IIRC it would compute the "dynamic" epsilon value essentially by adding one to the mantissa (treated as an integer) to get the next possible float. Then subtract from that the initial value to get the dynamic epsilon value.

Definitely use library functions if you got 'em though.
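
The bit-increment trick described is roughly this (a sketch; valid only for positive, finite, non-NaN inputs, since negative floats order the other way through their bit patterns and `f64::MAX` would step to infinity):

```rust
// "Dynamic epsilon": distance from x to the next representable double,
// obtained by incrementing the bit pattern as an integer.
fn dynamic_epsilon(x: f64) -> f64 {
    debug_assert!(x.is_finite() && x > 0.0);
    f64::from_bits(x.to_bits() + 1) - x
}

fn main() {
    assert_eq!(dynamic_epsilon(1.0), f64::EPSILON);
    assert_eq!(dynamic_epsilon(2.0), 2.0 * f64::EPSILON);
    // The usable tolerance grows with the scene's coordinate magnitudes:
    assert!(dynamic_epsilon(1e6) > dynamic_epsilon(1.0));
}
```

In Rust, `next_up` (stable since 1.86) is the library function that does this while also handling signs and edge cases.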

reply
i find the best way to remember it is "it's not the epsilon you think it is."

epsilons are fine in the case that you actually want to put a static error bound on an equality comparison. numpy's relative errors are better for floats at arbitrary scales (https://numpy.org/doc/stable/reference/generated/numpy.isclo...).

edit: ahh i forgot all about ulps. that is what people often confuse ieee eps with. also, good background material in the necronomicon (https://en.wikipedia.org/wiki/Numerical_Recipes).

reply
It would be very useful to be able to compare the significands directly, then. I realize there is a boundary issue when a significand is very close to 0x00..000 or 0xFFF..FFF
reply
Everyone has already made several comments on the incorrect use of EPSILON here, but there's one more thing I want to add that hasn't yet been mentioned:

EPSILON (i.e. 1 ulp for numbers in the range [1, 2)) is a lousy choice of tolerance. Every operation whose result is in the range [1, 2) has an absolute rounding error of up to ½ ulp. Doing just a few operations in a row can make the accumulated error larger than your tolerance, simply because of the inherent inaccuracy of floating-point operations. Randomly generate a few doubles in the range [1, 10], then compute the sum of the list in several different random orders, and your assertion should fail. I'd guess you haven't run into this issue because either very few people are using this particular assertion, or the people who do happen to be testing cases where the result is fully deterministic.

If you look at professional solvers for numerical algorithms, one of the things you'll notice is that not only is the (relative!) tolerance tunable, but there's actually several different tolerance values. The HiGHS linear solver for example uses 5 different tolerance values for its simplex algorithm. Furthermore, the default values for these tolerances tend to be in the region of 10^-6 - 10^-10... about the square root of f64::EPSILON. There's a basic rule of thumb in numerical analysis that you need your internal working precision to be roughly twice the number of digits as your output precision.

reply
Your last point is essential for numerical analysis, indeed. There is this "surprising" effect where increasing the precision of the input ends up decreasing that of the output (roughly speaking). So "I shall just use a very small discretization" can be harmful.
reply
Your assertion code here doesn't make a ton of sense. The epsilon of choice here is the distance between 1 and the next number up, and it's completely separated from the scale of the numbers in question. 1e-50 will compare equal to 2e-50, for example.

I would suggest that "equals" actually is for "exactly equals" as in (a == b). In many pieces of floating point code this is the correct thing to test. Then also add a function for "within range of" so your users can specify an epsilon of interest, using the formula (abs(a - b) < eps). You may also want to support multidimensional quantities by allowing the user to specify a distance metric. You probably also want a relative version of the comparison in addition to an absolute version.

Auto-computing epsilons for an equality check is really hard and depends on the usage, as well as the numerics of the code that is upstream and downstream of the comparison. I don't see how you would do it in an assertion library.

reply
Ignoring the misuse of epsilon, I'd also say that you'd be helping your users more by not providing a general `assert_f64_eq` macro, but rather force the user to decide the error model. Add a required "precision" parameter as an enum with different modes:

    // Precise matching:
    assert_f64_eq!(a, 0.1, Steps(2))
    // passes if a is within two representable steps of 0.1, i.e. between
    // 0.1f64.next_down().next_down() and 0.1f64.next_up().next_up()

    // Number of digits (after the decimal point) that must match:
    assert_f64_eq!(a, 0.1, Digits(5))

    // Relative error:
    assert_f64_eq!(a, 0.1, Rel(0.5))
reply
You generally want both relative and absolute tolerances. Relative handles scale, absolute handles values near zero (raw EPSILON isn’t a universal threshold per IEEE 754).

The usual pattern is abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol) to avoid both large-value and near-zero pitfalls.
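
That pattern as a function (a sketch; the tolerance values in `main` are illustrative, not recommendations):

```rust
// Combined check: a relative tolerance that scales with the inputs, plus
// an absolute floor so values near zero don't face an impossibly tight bar.
fn is_close(a: f64, b: f64, rel_tol: f64, abs_tol: f64) -> bool {
    (a - b).abs() <= f64::max(rel_tol * a.abs().max(b.abs()), abs_tol)
}

fn main() {
    // The relative part handles large magnitudes:
    assert!(is_close(1e10, 1e10 + 1.0, 1e-9, 0.0));
    // The absolute part rescues comparisons against zero:
    assert!(is_close(1e-12, 0.0, 1e-9, 1e-9));
    // Without the absolute floor, the same comparison fails:
    assert!(!is_close(1e-12, 0.0, 1e-9, 0.0));
}
```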

reply
See the implementation of Python's math.isclose

https://github.com/python/cpython/blob/d61fcf834d197f0113a6a...

reply
It depends on the use case, but do you consider NaN to be equal to NaN? For an assert macro, I would expect so. Also, your code behaves differently for very large and very small numbers, e.g. 1.0000001 vs 1.0000002 compared with 1e-100 vs 1.0000002e-100.

For my own soft floating-point math library, I expect the value to be off by some percentage, not just off by epsilon. And so I have my own almostSame method [1], which accounts for that and is quite a bit more complex. Actually, multiple such methods. But well, that's just my own use case.

[1] https://github.com/thomasmueller/bau-lang/blob/main/src/test...

reply
Machine eps provides the maximum rounding error for a single op. Let's say I write:

  let y = 2.0;
  let x = y.sqrt();
Now is `x` actually the square root of 2? Of course not - because the digit expansion of sqrt(2) doesn't terminate, the only way to precisely represent it is with symbolics. So what do we actually have? `x` was either rounded up or down to a number that does have an exact FP representation. So, `x` / sqrt(2) is in `[1 - eps, 1 + eps]`. The eps tells you, on a relative scale, the maximum distance to an adjacent FP number for any real number. (Full disclosure, IDK how this interacts with weird stuff like denormals).

Note that in general we can only guarantee hitting this relative error for single ops. More elaborate computations may develop worse error as things compound. But it gets even worse. This error says nothing about errors that don't occur in the machine. For example, say I have a test that takes some experimental data, runs my whiz-bang algorithm, and checks if the result is close to elementary charge of an electron. Now I can't just worry about machine error but also a zillion different kinds of experimental error.

There are also cases where we want to enforce a contract on a number so we stay within acceptable domains. Author alluded to this. For example - if I compute some `x` s.t. I'm later going to take `acos(x)`, `x` had better be between `[-1, 1]`. `x >= -1 - EPS && x <= 1 + EPS` wouldn't be right because it would include two numbers, -1 - EPS and 1 + EPS, that are outside the acceptable domain.

- "I want to relax exact equality because my computation has errors" -> Make `assert_rel_tol` and `assert_abs_tol`.

- "I want to enforce determinism" -> exact equality.

- "I want to enforce a domain" -> exact comparison

Your code here is using eps for controlling absolute error, which is already not great since eps is about relative error. Unfortunately your assertion degenerates to `a == b` for large numbers but is extremely loose for small numbers.
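
The single-op bound from the sqrt example is easy to watch in action (a sketch; the 4 * EPSILON slack on the last line is a loose hand-picked bound, not a derived one):

```rust
fn main() {
    let y = 2.0f64;
    let x = y.sqrt();
    // x is sqrt(2) rounded to the nearest double, so squaring it does
    // not give back exactly 2.0...
    assert!(x * x != 2.0);
    // ...but the compounded error of two correctly rounded ops is still
    // only a few ulps of relative error:
    assert!((x * x - 2.0).abs() <= 4.0 * f64::EPSILON);
}
```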

reply
You should use two tolerances: absolute and relative. See for example numpy.allclose()

https://numpy.org/doc/stable/reference/generated/numpy.allcl...

reply
Apart from what others have commented, IMO an “assertables” crate should not invent new predicates of its own, especially for domains (like math) that are orthogonal to assertability.
reply
Hyb error [1] might be what you want.

[1] https://arxiv.org/html/2403.07492v2

reply
You should allow the user to supply the epsilon value, because the precision needed for the assertion will depend on the use case.
reply
EQ should be exactly equal, I think. Although we often (incorrectly) model floats as a real plus some non-deterministic error, there are cases where you can expect an exact bit pattern, and that’s what EQ is for (the obvious example is, you could be writing a library and accept a scaling factor from the user—scaling factors of 1 or 0 allow you to optimize).

You probably also want an isclose and probably want to push most users toward using that.

reply
Well, could you please describe a scenario where you think this assertion would be useful?
reply
deleted
reply
I suggest

    if a.abs() + b.abs() >= (a - b).abs() * 2f64.powi(48)

It remains accurate for both small and big numbers. 48 is slightly less than the 52 fraction bits of an f64, which leaves a few bits of headroom for accumulated error.
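
As a predicate with a couple of checks (a sketch of the suggestion above):

```rust
// Close when the relative difference is at most about 2^-48, i.e. the
// operands agree in all but the last few bits of the 52-bit fraction.
fn roughly_eq(a: f64, b: f64) -> bool {
    a.abs() + b.abs() >= (a - b).abs() * 2f64.powi(48)
}

fn main() {
    assert!(roughly_eq(0.0, 0.0));                // exact matches pass
    assert!(roughly_eq(1.0, 1.0 + f64::EPSILON)); // adjacent doubles pass
    assert!(!roughly_eq(1.0, 1.001));             // 0.1% apart fails
    assert!(!roughly_eq(f64::NAN, f64::NAN));     // NaN never compares close
}
```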

reply
You probably don't need the (in)accuracy.

Fix your precision so it matches.

You only need so many significant digits.

reply
You want equality?

`a.to_bits() == b.to_bits()`

Alternatively, use `partial_cmp` and fall back to bit equality if it returns None.
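
For reference, bit equality flips both of the usual IEEE `==` surprises (though note that NaNs with different payloads still compare unequal bitwise):

```rust
fn main() {
    let nan = f64::NAN;
    assert!(nan != nan);                     // IEEE ==: NaN is never equal to itself
    assert!(nan.to_bits() == nan.to_bits()); // bit equality: equal
    assert!(0.0f64 == -0.0f64);              // IEEE ==: the two zeros are equal
    assert!(0.0f64.to_bits() != (-0.0f64).to_bits()); // bit equality: distinct
}
```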

reply
The author says in the article that for tests (and an assertion is a test) it's OK to use epsilon.
reply
I think the key thing you may want is an ε that scales with the actual local floating-point increment.

C++ implements this https://en.cppreference.com/cpp/numeric/math/nextafter

Rust does not https://rust-lang.github.io/rfcs/3173-float-next-up-down.htm... but people have in various places.

reply
Um, what? You've linked an RFC for Rust but the cppreference article for C++. So yeah, the Rust RFC documents a proposed change and the C++ reference documents an implemented feature, but you could equally link the C++ proposal document and the Rust library docs to make the opposite point if you wanted.

Rust's https://doc.rust-lang.org/std/primitive.f32.html#method.next... https://doc.rust-lang.org/std/primitive.f32.html#method.next... of course exist; they're even actually constant expressions (the C++ functions are constexpr since 2023, but of course you're not promised they actually work as constant expressions, because C++ is a stupid language and "constexpr" means almost nothing).

You can also rely on the fact (not promised in C++) that these are actually the IEEE floats and so have all the resulting properties: you can (entirely in safe Rust) just ask for the integers with the same bit pattern and compare the integers, and because of how IEEE 754 is designed, that tells you how far apart, in some proportional sense, the two values are.

On an actual CPU manufactured this century that's almost free because the type system evaporates during compilation -- for example f32::to_bits is literally zero CPU instructions.
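
That bit-pattern comparison sketched out (a standard trick, not any particular library's API; finite, non-NaN inputs assumed):

```rust
// Map a double to an integer whose ordering matches the float ordering.
// Sign-magnitude bit patterns become two's-complement ordering, and
// +0.0 and -0.0 both map to 0.
fn ordered_bits(x: f64) -> i64 {
    let b = x.to_bits() as i64;
    if b < 0 { i64::MIN - b } else { b }
}

// How many representable doubles lie between a and b.
fn ulp_distance(a: f64, b: f64) -> u128 {
    (ordered_bits(a) as i128 - ordered_bits(b) as i128).unsigned_abs()
}

fn main() {
    assert_eq!(ulp_distance(1.0, 1.0 + f64::EPSILON), 1); // adjacent doubles
    assert_eq!(ulp_distance(0.0, -0.0), 0);               // zeros coincide
    assert!(ulp_distance(1.0, 1.1) > 1_000_000);          // far apart in ulps
}
```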

reply
Oh, my research was wrong; I was going by this line from the RFC doc...

>Currently it is not possible to answer the question ‘which floating point value comes after x’ in Rust without intimate knowledge of the IEEE 754 standard.

So never mind on it not being present in Rust; I guess I was finding old documentation.

reply
Yeah, the RFC is explaining what they proposed in 2021. In 2022 that work landed in "nightly" Rust, which means you could see it in the documentation (unless you've turned off seeing unstable features entirely) but to actually use it in software you need the nightly compiler mode and a feature flag in your source #![feature(float_next_up_down)].

By 2025 every remaining question about edge cases or real-world experience was resolved, and in April 2025 the finished feature was stabilized in release 1.86, so it has just worked in stable Rust for about a year.

For future reference you can follow separate links from a Rust RFC document to see whether the project took this RFC (anybody can write one, not everything gets accepted) and then also how far along the implementation work is. Can I use this in nightly? Maybe there's an outstanding question I can help answer. Or, maybe it's writing a stabilization report and this is my last chance to say "Hey, I am an expert on this and your API is a bit wrong".

reply
The use of epsilon is correct here. It's exactly what I was taught in comp sci over 20 years ago. You can call its use here an "epsilon-delta".
reply