29 comments

  • tomhow 20 hours ago

    What Every Computer Scientist Should Know About Floating-Point Arithmetic (1991) - https://news.ycombinator.com/item?id=23665529 - June 2020 (85 comments)

    What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=3808168 - April 2012 (3 comments)

    What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1982332 - Dec 2010 (14 comments)

    What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1746797 - Oct 2010 (2 comments)

    Weekend project: What Every Programmer Should Know About FP Arithmetic - https://news.ycombinator.com/item?id=1257610 - April 2010 (9 comments)

    What every computer scientist should know about floating-point arithmetic - https://news.ycombinator.com/item?id=687604 - July 2009 (2 comments)

  • randusername 12 hours ago

    For anyone turned off by this document and its proofs, I recommend Numerical Methods for Scientists and Engineers (Hamming). Still a math text, but more approachable.

    The five key ideas from that book, enumerated by the author:

    (1) the purpose of computing is insight, not numbers

    (2) study families and relationships of methods, not individual algorithms

    (3) roundoff error

    (4) truncation error

    (5) instability

  • lifthrasiir 21 hours ago

    (1991). This article is also available in HTML: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...

  • jbarrow 10 hours ago

    Shared this because I was having fun thinking through floating point numbers the other day.

    I worked through what fp6 (e3m2) would look like, doing manual additions and multiplications, showing cases where the operations are non-associative, etc. and then I wanted something more rigorous to read.

    For anyone interested in floating point numbers, I highly recommend working through fp6 as an activity! Felt like I truly came away with a much deeper understanding of floats. Anything less than fp6 felt too simple/constrained, and anything more than fp6 felt like too much to write out by hand. For fp6 you can enumerate all 64 possible values on a small sheet of paper.

    For anyone not (yet) interested in floating point numbers, I’d still recommend giving it a shot.
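
    A quick way to double-check such a hand-written table is to enumerate the format in code. A minimal sketch, assuming an IEEE-754-style e3m2 layout (bias 3, exponent 0 for subnormals, top exponent reserved for inf/NaN) — the exercise may well have chosen a different encoding:

```python
# Sketch of a 6-bit "e3m2" float: 1 sign, 3 exponent, 2 mantissa bits.
# Assumes an IEEE-754-style encoding (bias 3, exponent 0 for subnormals,
# exponent 7 reserved for inf/NaN) -- other fp6 variants exist.
def fp6_value(bits):
    sign = -1.0 if (bits >> 5) & 1 else 1.0
    exp = (bits >> 2) & 0b111
    mant = bits & 0b11
    if exp == 0:                       # subnormal: no implicit leading 1
        return sign * (mant / 4) * 2.0 ** (1 - 3)
    if exp == 7:                       # reserved top exponent
        return sign * float("inf") if mant == 0 else float("nan")
    return sign * (1 + mant / 4) * 2.0 ** (exp - 3)

values = [fp6_value(b) for b in range(64)]   # the whole format fits on a page
```

    With exponent 7 reserved, 0.0625 is the smallest positive value and 14.0 the largest finite one.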

  • emil-lp 16 hours ago

        > 0.1 + 0.1 + 0.1 == 0.3
        
        False
    
    I always tell my students that if they (might) have a float, and are using the `==` operator, they're doing something wrong.

    • materialpoint 6 hours ago

      Well, there are many legitimate cases for using the equality operator. Insisting someone is doing something wrong is itself downright wrong, and you shouldn't be teaching floating-point numbers that way. A few use cases: floating-point values differing from a default or initial value and carrying meaning, e.g. 0 or 1 translating to omitting entire operations; or measuring the tiniest possible variation, where relative tolerances are not what you want. Not exhaustive. If you use == with fp, it only means you should have thought about it thoroughly.

    • zokier 14 hours ago

      That has more to do with decimal <-> binary conversion than arithmetic/comparison. Using hex literals makes it clearer

           0x1.999999999999ap-4 ("0.1")
          +0x1.999999999999ap-4 ("0.1")
          ---------------------
          =0x3.3333333333334p-4 ("0.2")
          +0x1.999999999999ap-4 ("0.1")
          ---------------------
    =0x4.cccccccccccd0p-4 ("0.30000000000000004")
          !=0x4.cccccccccccccp-4 ("0.3")
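
      Python exposes the same view via float.hex() (note it normalizes to a 0x1.… leading digit rather than aligning the exponents as above):

```python
# Hex view of the same computation; doubling 0.1 is exact, the third
# addition is where the rounding shows up.
print((0.1).hex())              # 0x1.999999999999ap-4
print((0.1 + 0.1).hex())        # 0x1.999999999999ap-3, exactly double
print((0.1 + 0.1 + 0.1).hex())  # 0x1.3333333333334p-2
print((0.3).hex())              # 0x1.3333333333333p-2, differs in the last digit
```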
      • jacquesm 14 hours ago

        Absolutely nobody will think this is 'clearer'. This is a leaky abstraction, and personally I think the OP is right: == in combination with floating-point constants should be limited to '0', and that's it.

        • pdpi 10 hours ago

          We all know that 1/3 + 1/3 + 1/3 = 1, but 0.33 + 0.33 + 0.33 = 0.99. We're sufficiently used to decimal to know that 1/3 doesn't have a finite decimal representation. Decimal 1/10 doesn't have a finite binary representation, for the exact same reason that 1/3 doesn't have one in decimal — 3 is co-prime with 10, and 5 is co-prime with 2.

          The only leaky abstraction here is our bias towards decimal. (Fun fact: "base 10" is meaningless, because every base calls itself base 10)
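
          One way to see that bias concretely: the double nearest to 0.1 is itself an exact rational, just not 1/10. In Python:

```python
from fractions import Fraction

# The stored value is an exact dyadic rational -- just not 1/10:
print(Fraction(0.1))                     # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))  # False
```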

          • 1718627440 9 hours ago

            > Fun fact: "base 10" is meaningless, because every base calls itself base 10

            Maybe we should name the bases by the largest digit they have, so that we are using base 9 most of the time.

      • brandmeyer 11 hours ago

        Repeating the exercise with something that is exactly representable in floating point like 1/8 instead of 1/10 highlights the difference.
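
        For instance, in Python:

```python
assert 0.125 + 0.125 + 0.125 == 0.375   # 1/8 is dyadic: every sum is exact
assert 0.1 + 0.1 + 0.1 != 0.3           # 1/10 is not: rounding creeps in
```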

    • SideQuark 5 hours ago

      There’s plenty of cases where ‘==‘ is correct. If you understand how floating point numbers work at the same depth you understand integers, then you may know the result of each side and know there’s zero error.

      Anything doing "approximately close" comparisons is much slower and prone to even more subtle bugs (often trading obvious immediate bugs for ones that are much harder to find and fix).

      For example, I routinely make unit tests with inputs designed so answers are perfectly representable, so tests do bit exact compares, to ensure algorithms work as designed.

      I’d rather teach students there’s subtlety here with some tradeoffs.
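
      A minimal sketch of that testing style, with a hypothetical mean() as the function under test (not from any real codebase):

```python
# The inputs are chosen as dyadic rationals so every intermediate sum is
# exact in binary64, which makes a bit-exact compare legitimate.
def mean(xs):
    return sum(xs) / len(xs)

def test_mean_bit_exact():
    xs = [0.5, 1.5, 2.5, 3.5]   # exactly representable; sum is exactly 8.0
    assert mean(xs) == 2.0      # bit-exact: no epsilon tolerance needed

test_mean_bit_exact()
```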

    • magicalhippo 15 hours ago

      I also like how a / b can result in infinity even if both a and b are strictly non-zero[1]. So be careful rewriting floating-point expressions.

      [1]: https://www.cs.uaf.edu/2011/fall/cs301/lecture/11_09_weird_f... (division result matrix)

      • StilesCrisis 13 hours ago

        Anything that overflows the max float turns into infinity. You can multiply very large numbers, or divide large numbers into small ones.
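
        Both cases are easy to reproduce, e.g. in Python:

```python
import math
import sys

big = sys.float_info.max            # largest finite double, ~1.8e308
assert math.isinf(big * 2)          # multiplication overflows to +inf
assert math.isinf(1e300 / 1e-300)   # both operands finite and non-zero
```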

        • magicalhippo 12 hours ago

          Sure, though division might be a tad more surprising, since most don't do it on an everyday basis. The specific case we had was when a colleague had rewritten

            (a / b) * (c / d) * (e / f)
          
          to

            (a * c * e) / (b * d * f)
          
          as a performance optimization. Each division in the original came out roughly one due to how the variables were computed, but the latter form was sometimes unstable because the products could produce denormalized numbers.
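
          A toy reproduction of that hazard (hypothetical magnitudes, not the original code):

```python
# Each quotient in the first form is a harmless 1e-100, but the rewritten
# numerator underflows to zero before the final division ever happens.
a = c = e = 1e-200
b = d = f = 1e-100

original  = (a / b) * (c / d) * (e / f)   # ~1e-300, still a normal number
rewritten = (a * c * e) / (b * d * f)     # numerator underflows to 0.0
```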

    • kccqzy 9 hours ago

      I have a relaxed rule for myself: if I’m using the == operator on floats, I must write a comment explaining why. I use == for maybe once a year.

    • jmalicki 14 hours ago

      .125 + .375 == .5

      You should be using == for floats when they're actually equal. 0.1 just isn't an actual number.

      • emil-lp 12 hours ago

        Are you saying that my students should memorize which numbers are actual floats and which are not?

            > 1.25 * 0.1
            
            0.1250000000000000069388939039

        • SideQuark 5 hours ago

          If they were taught what was representable and why they’d learn it quickly. And those that forget details later know to chase it down again if they need it. Making it voodoo hides that it’s learnable, deterministic, and useful to understand.

        • stephencanon 9 hours ago

          Your students should be able to figure out if a computation is exact or not, because they should understand binary representation of numbers.

        • dehrmann 9 hours ago

          Tell them that they can only store integer powers of 2 and their sums exactly. 2^0 == 1. 2^-2 == .25. Then say it's the same with base 10: 10^-1 == 0.1 is exact there, but 1/9 isn't a finite sum of powers of 10, so you can't have an exact representation.

        • kccqzy 9 hours ago

          They shouldn’t “memorize” this per se, but it should take them only a few seconds to work out in their head.

      • Sharlin 14 hours ago

        > 0.1 just isn't an actual number.

        A finitist computer scientist only accepts those numbers as real that can be expressed exactly in finite base-two floating point?

        • SideQuark 5 hours ago

          Yes. A computer scientist should know how numbers are represented and not expect non-representable numbers in that format to be representable.

          0.1 is just as non-representable in floating point as pi is, or as 100^100 is in a 32-bit integer.

          Terminating dyadic rationals (up to limits based on float size) are the representable values.
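
          That set is easy to test for mechanically; a small sketch using exact rational arithmetic:

```python
from fractions import Fraction

def exactly_representable(literal):
    """True if the decimal literal converts to binary64 without rounding."""
    exact = Fraction(literal)            # exact rational value of the literal
    return Fraction(float(exact)) == exact

assert exactly_representable("0.125")    # dyadic: 1/8
assert exactly_representable("0.5")
assert not exactly_representable("0.1")  # 1/10 is not dyadic
```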

        • 1718627440 9 hours ago

          That's essentially what you already do for integer arithmetic.

    • fransje26 14 hours ago

      I would argue that

          double m_D{}; [...]
      
          if (m_D == 0) somethingNeedsInstantiation();
      
      can avoid having to carry around, set and check some extra m_HasValueBeenSet booleans.

      Of course, it might not be something you want to overload beginner programmers with.

  • atoav 18 hours ago

    One thing that really did it for me was programming something where you would normally use floats (audio/DSP) on a platform where floats were abysmally slow. This forced me to explore fixed-point options, which in turn forced me to explore how they differ from floats.
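
    A minimal sketch of the fixed-point idea, using the common Q15 format (illustrative only, not any particular platform; real DSP code also handles rounding and saturation):

```python
# Q15 fixed point: values are 16-bit integers scaled by 2**15, so a product
# carries a 2**30 scale and needs a rescale shift afterwards.
Q = 15

def to_q15(x):
    return round(x * (1 << Q))

def q15_mul(a, b):
    return (a * b) >> Q        # drop the extra 2**15 scale factor

half = to_q15(0.5)             # 16384
quarter = q15_mul(half, half)  # 8192, i.e. to_q15(0.25)
```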

    • jacquesm 14 hours ago

      Fixed point gave rise to the old programmer's meme 'if you need floating point, you don't understand your problem'. It's of course partially in jest, but there's a grain of truth in it as well.

    • KeplerBoy 17 hours ago

      Also heavily used in FPGA based DSP.