# Interesting quirk with fractions and rounding

20 messages

## Interesting quirk with fractions and rounding

Hello, R friends

My student unearthed this quirk that might interest you. I wondered if this might be a bug in the R interpreter. If not a bug, it certainly stands as a good example of the dangers of floating point numbers in computing. What do you think?

    > 100*(23/40)
    [1] 57.5
    > (100*23)/40
    [1] 57.5
    > round(100*(23/40))
    [1] 57
    > round((100*23)/40)
    [1] 58

The result of the two rounds should be the same, I think. Clearly some digital number devil is at work. I *guess* that when you put in whole numbers and group them like this (100*23), the interpreter does integer math, but if you group (23/40), you force a fractional division and a floating point number. The results from the first 2 calculations are not actually 57.5, they just appear that way.

Before you close the books, look at this:

    > aa <- 100*(23/40)
    > bb <- (100*23)/40
    > all.equal(aa,bb)
    [1] TRUE
    > round(aa)
    [1] 57
    > round(bb)
    [1] 58

I'm putting this one in my collection of "difficult to understand" numerical calculations. If you have seen this before, I'm sorry to waste your time.

pj
--
Paul E. Johnson   http://pj.freefaculty.org
Director, Center for Research Methods and Data Analysis
http://crmda.ku.edu
To write to me directly, please address me at pauljohn at ku.edu.

______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

## Re: Interesting quirk with fractions and rounding

This is FAQ 7.31. It is not a bug; it is the unavoidable problem of accurately representing floating point numbers with a finite number of bits of precision. Look at the following:

    > a <- 100*(23/40)
    > b <- (100*23)/40
    > print(a, digits=20)
    [1] 57.499999999999993
    > print(b, digits=20)
    [1] 57.5

Your example with all.equal evaluates TRUE because all.equal uses a 'fuzz factor'. From the all.equal man page: "all.equal(x, y) is a utility to compare R objects x and y testing 'near equality'."

Hope this is helpful,

Dan

Daniel Nordlund, PhD
Research and Data Analysis Division
Services & Enterprise Support Administration
Washington State Department of Social and Health Services
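A side observation worth adding here (my note, not part of Dan's reply): even for the value that *is* stored exactly as 57.5, the result 58 also depends on R's rounding rule. round() follows the IEC 60559 "round half to even" convention rather than always rounding halves up, which a quick session confirms:

```r
# round() in R follows the IEC 60559 "go to the even digit" rule for
# exact halves, rather than always rounding half up.
round(0.5)    # 0  (0 is even)
round(1.5)    # 2
round(2.5)    # 2  (2 is even)
round(56.5)   # 56
round(57.5)   # 58 (58 is even) -- this is the exactly-stored (100*23)/40
round(100 * (23/40))   # 57 -- stored just below 57.5, so it rounds down
```

Here 57.5 happens to round up only because 58 is the nearest even integer; 56.5 would have gone down to 56.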


## Re: Interesting quirk with fractions and rounding

Use all.equal(aa, bb, tolerance=0) to check for exact equality:

    > aa <- 100*(23/40)
    > bb <- (100*23)/40
    > all.equal(aa,bb)
    [1] TRUE
    > all.equal(aa,bb,tolerance=0)
    [1] "Mean relative difference: 1.235726e-16"
    > aa < bb
    [1] TRUE

The numbers there are rounded to 52 binary digits (16+ decimal digits) for storage, and rounding is not a linear or associative operation. Think of doing arithmetic by hand where you store all numbers, including intermediate results, with only 2 significant decimal digits:

    (3 * 1) / 3 -> 3 / 3 -> 1
    3 * (1/3) -> 3 * 0.33 -> 0.99

Bill Dunlap
TIBCO Software
wdunlap tibco.com
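Bill's non-associativity point is easy to verify directly in R with a standard example (a sketch of mine, not from his post): each floating-point operation rounds its result to 53 significant bits, so the grouping of operations changes the answer.

```r
# Floating-point addition is not associative: each operation rounds its
# result, and the rounding depends on how the terms are grouped.
a <- (0.1 + 0.2) + 0.3
b <- 0.1 + (0.2 + 0.3)
a == b                 # FALSE: the two groupings round differently
a - b                  # about one ulp (roughly 1.1e-16)
sprintf("%.17f", c(a, b))   # the stored values differ in the last places
```

Exactly as with 100*(23/40) versus (100*23)/40, both expressions are algebraically 0.6 but only one of them lands on the double closest to 0.6.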

## Re: Interesting quirk with fractions and rounding

I've been following this thread with interest. A nice collection of things to watch out for, if you don't want the small arithmetic errors due to finite-length digital representations of fractions to cause trouble!

However, as well as these small discrepancies, major malfunctions can also result.

Back on Dec 22, 2013, I posted a Christmas Greetings message to R-help:

  Season's Greetings (and great news ... )!

which starts:

  Greetings All!
  With the Festive Season fast approaching, I bring you joy
  with the news (which you will surely wish to celebrate)
  that R cannot do arithmetic!

  Usually, this is manifest in a trivial way when users report
  puzzlement that, for instance,

    sqrt(pi)^2 == pi
    # [1] FALSE

  which is the result of a (generally trivial) rounding or
  truncation error:

    sqrt(pi)^2 - pi
    # [1] -4.440892e-16

  But for some very simple calculations R goes off its head.

And the example given is:

  Consider a sequence generated by the recurrence relation

    x[n+1] = 2*x[n]       if 0 <= x[n] <= 1/2
    x[n+1] = 2*(1 - x[n]) if 1/2 < x[n] <= 1

  (for 0 <= x[n] <= 1).

  This has equilibrium points (x[n+1] = x[n]) at x[n] = 0
  and at x[n] = 2/3:

    2/3 -> 2*(1 - 2/3) = 2/3

  It also has periodic points, e.g.

    2/5 -> 4/5 -> 2/5 (period 2)
    2/9 -> 4/9 -> 8/9 -> 2/9 (period 3)

  The recurrence relation can be implemented as the R function

    nextx <- function(x){
      if( (0<=x)&(x<=1/2) ) {x <- 2*x} else {x <- 2*(1 - x)}
    }

  Now have a look at what happens when we start at the equilibrium
  point x = 2/3:

    N <- 1 ; x <- 2/3
    while(x > 0){
      cat(sprintf("%i: %.9f\n",N,x))
      x <- nextx(x) ; N <- N+1
    }
    cat(sprintf("%i: %.9f\n",N,x))

For a while [run it and see!], this looks as though it's doing what the arithmetic would lead us to expect: the first 24 results will all be printed as 0.666666667, which looks fine as 2/3 to 9 places.

But then the "little errors" start to creep in:

  N=25: 0.666666666
  N=28: 0.666666672
  N=46: 0.667968750
  N=47: 0.664062500
  N=48: 0.671875000
  N=49: 0.656250000
  N=50: 0.687500000
  N=51: 0.625000000
  N=52: 0.750000000
  N=53: 0.500000000
  N=54: 1.000000000
  N=55: 0.000000000

What is happening is that, each time R multiplies by 2, the binary representation is shifted up by one and a zero bit is introduced at the bottom end. At N=53, the first binary bit of 'x' is 1, and all the rest are 0, so now 'x' is exactly 0.5 = 1/2, hence the final two are also exact results; 53 is the .Machine$double.digits = 53 binary places.

So this normally "almost" trivial feature can, for such a simple calculation, lead to chaos or catastrophe (in the literal technical sense).

For more detail, including an extension of the above, look at the original posting in the R-help archives for Dec 22, 2013:

  From: (Ted Harding) <[hidden email]>
  Subject: [R] Season's Greetings (and great news ... )!
  Date: Sun, 22 Dec 2013 09:59:16 -0000 (GMT)

(Apologies, but I couldn't track down the URL for this posting in the R-help archives; there were a few follow-ups.)

I gave this as an example to show that the results of the "little" arithmetic errors (such as have recently been discussed from many aspects) can, in certain contexts, destroy a computation. So be careful to consider what can happen in the particular context you are working with.

There are ways to dodge the issue -- such as using the R interface to the 'bc' calculator, which computes arithmetic expressions in a way which is quite different from the fixed-finite-length binary representation and algorithms used, not only by R, but also by many other numerical computation software suites.

Best wishes to all,
Ted.

-------------------------------------------------
E-Mail: (Ted Harding) <[hidden email]>
Date: 21-Apr-2017  Time: 22:03:15
This message was sent by XFMail
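Ted mentions dodging the issue via the 'bc' interface. Another route, sketched below (my addition, not from his post), is to iterate the tent map on exact rationals, tracking only the integer numerator: integer arithmetic on doubles is exact as long as the values stay well below 2^53.

```r
# Iterate Ted's tent map on exact rationals x = p/q by updating only the
# integer numerator p; integer-valued arithmetic in doubles is exact here.
tent_exact <- function(p, q) {
  if (2 * p <= q) 2 * p else 2 * (q - p)   # new numerator over the same q
}

p <- 2; q <- 3            # start exactly at x = 2/3
for (n in 1:100) p <- tent_exact(p, q)
p / q                     # still exactly 2/3: the equilibrium is preserved
```

With the numerator kept as an integer there is nothing to round, so the equilibrium at 2/3 survives indefinitely instead of collapsing to 0 at step 55.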

## Re: Interesting quirk with fractions and rounding

On 04/21/2017 02:03 PM, (Ted Harding) wrote:
> So this normally "almost" trivial feature can, for such a simple
> calculation, lead to chaos or catastrophe (in the literal technical
> sense).

The only surprise is to end up at the other equilibrium point (0), not that the iterations didn't stabilize around or converge to equilibrium point 2/3. That's because:

   1) You didn't start *exactly* at equilibrium point 2/3.
   2) The iterating process is not supposed to converge but to diverge
      from the equilibrium point. That is its mathematical nature and has
      nothing to do with rounding errors.

So if you don't start exactly at 2/3, the iterations will take you farther away, even on a hypothetical computer that uses an exact representation of any arbitrary real number (i.e. no rounding errors). With such an unstable algorithm, all bets are off: you can end up at the other equilibrium point, or not end up anywhere (chaos), or hit a periodic point and go in cycles from there.

But hey, shouldn't people examine the mathematical properties of their iterative numerical algorithms before implementing them? Like convergence, stability, etc.

H.

--
Hervé Pagès
Program in Computational Biology
Division of Public Health Sciences
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N, M1-B514
P.O. Box 19024
Seattle, WA 98109-1024
E-mail: [hidden email]
Phone: (206) 667-5791
Fax:   (206) 667-1319
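Hervé's instability point can be checked numerically (a small sketch of my own): near the fixed point 2/3 the tent map has slope of magnitude 2, so any perturbation, whether deliberate or rounding-induced, doubles at every step.

```r
# Near x = 2/3 the tent map doubles the distance to the fixed point at
# every step (|slope| = 2): the divergence is mathematical, not numerical.
nextx <- function(x) if (x <= 1/2) 2 * x else 2 * (1 - x)

x <- 2/3 + 1e-9           # a deliberate perturbation, far above rounding error
for (n in 1:5) {
  x <- nextx(x)
  cat(sprintf("step %d: error %.3e\n", n, abs(x - 2/3)))
}
# errors: ~2e-09, 4e-09, 8e-09, 1.6e-08, 3.2e-08 -- doubling each step
```

Ted's run is the same phenomenon with the perturbation supplied by the representation error of 2/3 itself (about 2^-54), which is why his iteration takes roughly 53 doublings to blow up.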

## Re: Interesting quirk with fractions and rounding / using == for floating point

For over 4 decades I've had to put up with people changing my codes because I use equalities of floating point numbers in tests for convergence. (Note that tests of convergence are a subset of tests for termination -- I'll be happy to explain that if requested.) Then I get "your program isn't working" and find that it is NOT my program, but a crippled version thereof.

But I don't use equality tests (i.e., ==) blindly. My tests are of the form

    if ( (x_new + offset) == (x_old + offset) ) { # we cannot get more progress

Now it is possible to imagine some weird cases where this can fail, so an additional test is needed on, say, maximum iterations to avoid trouble. But I've not seen such cases (or perhaps never noticed one, though I run with very large maximum counters).

The test works by having offset as some modest value. For single precision I use 10 or 16, for double around 100. When x_new and x_old are near zero, the bit pattern is dominated by offset and we get convergence. When x_new is big, then it dominates.

Why do I do this? The reason is that in the early 1970s there were many, many different floating point arithmetics with all sorts of choices of radix and length of mantissa. (Are "radix" and "mantissa" taught to computer science students any more?) If my programs were ported between machines there was a good chance any tolerances would be wrong. And users -- for some reason at the time engineers in particular -- would say "I only need 2 digits, I'll use 1e-3 as my tolerance". And "my" program would then not work. Sigh. For some reason, the nicely scaled offset did not attract the attention of the compulsive fiddlers.

So equality in floating point is not always "wrong", though it should be used with some attention to what is going on.

Apologies to those (e.g., Peter D.) who have heard this all before. I suspect there are many to whom it is new.

John Nash

On 2017-04-23 12:52 AM, Paul Johnson wrote:
> 1 Don't use == for floating point numbers.
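The offset idea can be sketched as follows (my illustration, not John Nash's actual code; the Newton iteration and the function names are mine): declare convergence when adding a modest offset makes x_new and x_old indistinguishable in floating point.

```r
# Scale-adapted equality test: with offset ~ 100 (his suggestion for double
# precision), differences below the rounding granularity of (x + offset)
# vanish, so the test adapts itself to the magnitude of x.
converged <- function(x_new, x_old, offset = 100) {
  (x_new + offset) == (x_old + offset)
}

# Illustration: Newton's iteration for sqrt(2), with the maximum-iteration
# guard he recommends as a backstop.
x_old <- 1
for (i in 1:50) {
  x_new <- (x_old + 2 / x_old) / 2
  if (converged(x_new, x_old)) break
  x_old <- x_new
}
x_new        # agrees with sqrt(2) to machine precision
```

No tolerance constant appears anywhere, which is exactly the property that kept the test portable across the arithmetics of the 1970s.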

## Re: Interesting quirk with fractions and rounding / using == for floating point

> On 23 Apr 2017, at 14:49, J C Nash <[hidden email]> wrote:
>
> So equality in floating point is not always "wrong", though it should be used
> with some attention to what is going on.
>
> Apologies to those (e.g., Peter D.) who have heard this all before. I suspect
> there are many to whom it is new.

Peter D. still insists on never trusting exact equality, though. There was at least one case in the R sources where age-old code got itself into a condition where a residual term that provably should decrease on every iteration oscillated between two values of 1-2 ulp in magnitude without ever reaching 0.

The main thing is that you cannot trust optimising compilers these days. There is, e.g., no guarantee that a compiler will not transform

    (x_new + offset) == (x_old + offset)

to

    (x_new + offset) - (x_old + offset) == 0

to

    (x_new - x_old) + (offset - offset) == 0

to... well, you get the point.

-pd

--
Peter Dalgaard, Professor
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: [hidden email]  Priv: [hidden email]

## Re: Interesting quirk with fractions and rounding / using == for floating point

Yes. I should have mentioned "optimizing" compilers, and I can agree with "never trusting exact equality", though I consider conscious use of equality tests useful. Optimizing compilers have bitten me once or twice. Unfortunately, a lot of floating-point work requires attention to detail.

In the situation of testing for convergence, the alternatives to the test I propose require quite a lot of code, with variants for different levels of precision, e.g., single, double, quad. There's no universal answer and we do have to look "under the hood".

A particularly nasty instance (now fortunately long gone) was the Tektronix 4051 graphics station, where the comparisons automatically included a "fuzz". There was a FUZZ command to set this. Sometimes the "good old days" weren't! Today's equivalent is when there is an upstream change to an "optimization" that changes the manner of computation, as in Peter's examples. If we specify x <- y * (a / b), then we should not get x <- (y * a) / b.

A slightly related case concerns eigenvectors / singular vectors when there are degenerate values (i.e., two or more equal eigenvalues). The vectors are then determined only to lie in a (hyper)plane. A large computer contracting firm spent two weeks of high-priced but non-numerical help trying to find the "error" in either an IBM or Univac program, because they gave very different eigenvectors. And in my own field of function minimization / nonlinear least squares, it is quite common to have multiple minima or a plateau.

Does anyone know if R has a test suite to check some of these situations? If not, I'll be happy to participate in generating some. They would not need to be run very often, and could be useful as a didactic tool as well as checking for differences in platforms.

JN
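The eigenvector anecdote is easy to reproduce in miniature (a sketch of mine, not from the thread): with a repeated eigenvalue, eigenvectors are determined only up to an orthonormal basis of the eigenspace, so two correct programs can return very different vectors. The robust check compares the defining property A v = lambda v, not the vectors themselves.

```r
# Eigenvectors of a repeated eigenvalue are only determined up to an
# arbitrary rotation within the eigenspace.
A <- diag(c(2, 2, 5))      # eigenvalue 2 has multiplicity 2
e <- eigen(A)              # values come back sorted: 5, 2, 2

# Rotate the two eigenvectors of the repeated eigenvalue (columns 2 and 3)
# by an arbitrary angle; the result is an equally valid eigenbasis.
th <- 0.3
R2 <- rbind(c(cos(th), -sin(th)),
            c(sin(th),  cos(th)))
V  <- e$vectors
V[, 2:3] <- V[, 2:3] %*% R2

# Both the original and the rotated basis satisfy A V = V diag(values),
# so "different eigenvectors" need not mean either program is wrong:
max(abs(A %*% V - V %*% diag(e$values)))
```

This is the IBM/Univac situation in a nutshell: both machines could have been right while agreeing on nothing but the eigenvalues and the eigenspaces.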