Discussion
mizmar: There is another way to compare floats for rough equality that I haven't seen explored much anywhere: bit-cast to integer, strip a few least significant bits, and then compare for equality. This is agnostic to magnitude, unlike an epsilon, which has to be tuned to the range of values you expect in order to get a meaningful result.
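Roughly like this (a quick sketch; eq_stripped is just a name for illustration, and k would be tuned per use case):

    fn eq_stripped(a: f64, b: f64, k: u32) -> bool {
        // zero out the k least significant bits of each value's raw
        // IEEE 754 representation, then compare the remaining bits exactly
        let mask = !((1u64 << k) - 1);
        (a.to_bits() & mask) == (b.to_bits() & mask)
    }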
twic: This doesn't work. For any number of significant bits, there are pairs of numbers one machine epsilon apart which will truncate to different values.
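For example (a sketch): 1.0 and the double immediately below it are one ULP apart, but their bit patterns differ in the exponent field, so no amount of mantissa stripping makes them compare equal:

    fn main() {
        let a: f64 = 1.0;                        // bits 0x3FF0000000000000
        let b = f64::from_bits(a.to_bits() - 1); // next double below 1.0
        for k in 0..52 {
            let mask = !((1u64 << k) - 1);
            // strip k low bits from each; the two never become equal
            assert_ne!(a.to_bits() & mask, b.to_bits() & mask);
        }
    }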
andyjohnson0: > strip a few least significant bits

I'm unconvinced. Doesn't this just replace the need to choose a suitable epsilon with the need to choose the right number of bits to strip?
4pkjai: I do this to see if text in a PDF is exactly where it is in some other PDF. For my use case it works pretty well.
jph: I have this floating-point problem at scale and will donate $100 to anyone here who can improve my testing code. I'm the author of the Rust assertables crate. It provides floating-point assert macros much as described in the article. https://github.com/SixArm/assertables-rust-crate/blob/main/s...

The Rust code in the assert_f64_eq macro is:

    if (a >= b && a - b < f64::EPSILON) || (a <= b && b - a < f64::EPSILON)

If there's a way to make it more accurate and/or specific and/or faster, that's great. See the same directory for corresponding macros for less than, greater than, etc.
SideQuark: Completely worked out at least 20 years ago: https://www.lomont.org/papers/2005/CompareFloat.pdf
demorro: I guess I'm confused. I thought epsilon was the smallest possible value to account for accuracy drift across the range of a floating-point representation, not just "1e-4".

Done some reading. Thanks to the article for waking me up to this fact at least. I didn't realize that the epsilon provided by languages tends to be one that only works around 1.0, and if you want to use epsilons globally (which the article would say is generally a bad idea) you need to be more dynamic as your ranges, and potential errors, increase.
lifthrasiir: Hyb error [1] might be what you want.

[1] https://arxiv.org/html/2403.07492v2
lukax: You generally want both relative and absolute tolerances. Relative handles scale, absolute handles values near zero (raw EPSILON isn’t a universal threshold per IEEE 754).The usual pattern is abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol) to avoid both large-value and near-zero pitfalls.
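Sketched out (names are mine; this mirrors the general shape of Python's math.isclose):

    fn is_close(a: f64, b: f64, rel_tol: f64, abs_tol: f64) -> bool {
        if a == b {
            return true; // fast path; also handles equal infinities
        }
        if !a.is_finite() || !b.is_finite() {
            return false; // remaining INF/NaN cases are never "close"
        }
        // relative tolerance scales with the larger magnitude;
        // absolute tolerance covers comparisons against values near zero
        (a - b).abs() <= f64::max(rel_tol * f64::max(a.abs(), b.abs()), abs_tol)
    }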
lukax: See the implementation of Python's math.isclose: https://github.com/python/cpython/blob/d61fcf834d197f0113a6a...
AshamedCaptain: One of the goals of comparing floating point with an epsilon is precisely so that you can apply these kinds of accuracy-increasing (or decreasing) changes to the operations and still get similar results. Anything else is basically a nightmare for whoever has to maintain the code in the future.

Also, good luck with e.g. checking if points are aligned to a grid or the like without introducing a concept of epsilon _somewhere_.
pclmulqdq: Your assertion code here doesn't make a ton of sense. The epsilon you've chosen is the distance between 1 and the next number up, and it's completely unrelated to the scale of the numbers in question. 1e-50 will compare equal to 2e-50, for example.

I would suggest that "equals" actually mean "exactly equals", as in (a == b). In many pieces of floating-point code this is the correct thing to test. Then also add a function for "within range of" so your users can specify an epsilon of interest, using the formula (abs(a - b) < eps). You may also want to support multidimensional quantities by allowing the user to specify a distance metric. You probably also want a relative version of the comparison in addition to an absolute version.

Auto-computing epsilons for an equality check is really hard and depends on the usage, as well as the numerics of the code that is upstream and downstream of the comparison. I don't see how you would do it in an assertion library.
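A rough sketch of that split (hypothetical names, not the crate's actual API):

    fn f64_exactly_equals(a: f64, b: f64) -> bool {
        a == b // exact IEEE equality; NaN never passes, -0.0 == 0.0 does
    }

    fn f64_within(a: f64, b: f64, eps: f64) -> bool {
        (a - b).abs() < eps // absolute tolerance chosen by the caller
    }

    fn f64_within_relative(a: f64, b: f64, rel: f64) -> bool {
        (a - b).abs() < rel * f64::max(a.abs(), b.abs()) // scales with magnitude
    }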
hmry: Is there any constant more misused in compsci than IEEE epsilon? It's the difference between 1.0 and the smallest number larger than 1.0.

Because floats get less precise at every integer power of two, it's impossible for two distinct numbers greater than 2.0 to be epsilon apart. The spacing between 2.0 and the next larger number is 2*epsilon.

Epsilon is the wrong tool for the job in 99.9% of cases.
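You can see it directly (a quick check):

    fn main() {
        assert!(1.0 + f64::EPSILON > 1.0);       // EPSILON is the gap just above 1.0
        assert_eq!(2.0 + f64::EPSILON, 2.0);     // only half the gap at 2.0, so the sum rounds back to 2.0
        assert!(2.0 + 2.0 * f64::EPSILON > 2.0); // the actual next double after 2.0
    }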
StilesCrisis: Rather than stripping bits, you can just compare whether the bit-cast numbers are less than N apart (choose an appropriate N that works for your data; a good starting point is 4).

This breaks down across the positive/negative boundary, but honestly, that's probably a good property. -0.00001 is not all that similar to +0.00001 despite being close on the number line.

It also requires that the inputs are finite (no INF/NAN), unless you are okay saying that FLT_MAX is roughly equal to infinity.
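Something along these lines (a sketch; nearly_equal_ulps is a made-up name):

    fn nearly_equal_ulps(a: f64, b: f64, max_ulps: u64) -> bool {
        // assumes finite inputs; bit patterns of same-sign doubles are
        // ordered, so their integer difference is the distance in ULPs
        let (ia, ib) = (a.to_bits(), b.to_bits());
        if (ia >> 63) != (ib >> 63) {
            return a == b; // opposite signs: only +0.0 vs -0.0 count as equal
        }
        ia.abs_diff(ib) <= max_ulps
    }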
rpdillon: Yeah, I'm not sure how widespread the knowledge is that floating point trades precision for magnitude. It's obvious if you know the implementation, but I'm not sure most folks do.
thomasmg: Well, it depends on the use case, but do you consider NaN to be equal to NaN? For an assert macro, I would expect so. Also, your code works differently for very large and very small numbers, e.g. 1.0000001, 1.0000002 vs 1e-100, 1.0000002e-100.

For my own soft-floating-point math library, I expect the value to be off by some percentage, not just off by epsilon. And so I have my own almostSame method [1] which accounts for that and is quite a bit more complex. Actually multiple such methods. But well, that's just my own use case.

[1] https://github.com/thomasmueller/bau-lang/blob/main/src/test...
fn-mote: Note for the skeptic: this cites Knuth, Volume II, writes out the IEEE edge cases, and optimizes.
fouronnes3: You should use two tolerances: absolute and relative. See for example numpy.allclose(): https://numpy.org/doc/stable/reference/generated/numpy.allcl...
vouwfietsman: This explanation is relatively reductive when it comes to its criticism of computational geometry.

The thing with computational geometry is that it's usually someone else's geometry, i.e. you have no control over its quality or intention. In other words, whether two points or planes or lines actually align, or align within 1e-4, is no longer really mathematically interesting, because it's all about the intention of the user: does the user think these planes overlap?

This is why most geometry kernels (see Open CASCADE) sport things like "fuzzy boolean operations" [0] that lean into epsilons. These epsilons mask the error-prone supply chain of the meshes that arrive in your program by allowing some tolerance.

Finally, the remark "There are many ways of solving this problem" is also overly reductive. Everyone reading here should understand that this topic is being actively researched right now in 2026, so there are currently no blessed solutions, otherwise the research would not be needed. Even more so, to some extent this problem is fundamentally unsolvable depending on what you mean by "solvable": because your input is inexact, not all geometric operations are topologically valid, hence an "exact", let alone "correct along some dimension", result cannot be achieved for all (combinations of) inputs.

[0] https://dev.opencascade.org/content/fuzzy-boolean-operations
ethan_smith: This is essentially ULP (units in the last place) comparison, and it's a solid approach. One gotcha: IEEE 754 floats have separate representations for +0 and -0, so values straddling zero (like 1e-45 and -1e-45) will look maximally far apart as integers even though they're nearly equal. You need to handle the sign bit specially.
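The usual fix (a sketch; names are mine, not from any particular library) is to remap the sign-magnitude bit pattern onto a single monotonic integer scale before taking the difference, so ordering behaves correctly across the +0/-0 boundary:

    fn ordered_bits(x: f64) -> i64 {
        // negative floats have the sign bit set; remap them so the
        // integer ordering matches the numeric ordering through zero
        let bits = x.to_bits() as i64;
        if bits < 0 { i64::MIN - bits } else { bits }
    }

    fn ulp_distance(a: f64, b: f64) -> u64 {
        ordered_bits(a).abs_diff(ordered_bits(b))
    }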
amelius: Think about this. It's silly to use floating point numbers to represent geometry, because it gives coordinates closer to the origin more precision and in most cases the origin is just an arbitrary point.