# Interesting quirk with fractions and rounding

20 messages

## Interesting quirk with fractions and rounding

Hello, R friends,

My student unearthed this quirk that might interest you. I wondered if this might be a bug in the R interpreter. If it is not a bug, it certainly stands as a good example of the dangers of floating point numbers in computing. What do you think?

    > 100*(23/40)
    [1] 57.5
    > (100*23)/40
    [1] 57.5
    > round(100*(23/40))
    [1] 57
    > round((100*23)/40)
    [1] 58

The results of the two round() calls should be the same, I think. Clearly some digital number devil is at work. I *guess* that when you put in whole numbers and group them like this (100*23), the interpreter does integer math, but if you group (23/40), you force a fractional division and a floating point number. The results from the first two calculations are not actually 57.5; they just appear that way.

Before you close the books, look at this:

    > aa <- 100*(23/40)
    > bb <- (100*23)/40
    > all.equal(aa, bb)
    [1] TRUE
    > round(aa)
    [1] 57
    > round(bb)
    [1] 58

I'm putting this one in my collection of "difficult to understand" numerical calculations. If you have seen this before, I'm sorry to waste your time.

pj
--
Paul E. Johnson   http://pj.freefaculty.org
Director, Center for Research Methods and Data Analysis
http://crmda.ku.edu
To write to me directly, please address me at pauljohn at ku.edu.

______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

## Re: Interesting quirk with fractions and rounding

This is FAQ 7.31. It is not a bug; it is the unavoidable problem of accurately representing floating point numbers with a finite number of bits of precision. Look at the following:

    > a <- 100*(23/40)
    > b <- (100*23)/40
    > print(a, digits=20)
    [1] 57.499999999999993
    > print(b, digits=20)
    [1] 57.5

Your example with all.equal() evaluates TRUE because all.equal() uses a 'fuzz factor'. From the all.equal man page: "all.equal(x, y) is a utility to compare R objects x and y testing 'near equality'."

Hope this is helpful,

Dan

Daniel Nordlund, PhD
Research and Data Analysis Division
Services & Enterprise Support Administration
Washington State Department of Social and Health Services
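The same behavior can be reproduced in any language that uses IEEE-754 double precision, not just R. As a minimal sketch (this is an illustration, not code from the thread), here is the identical computation in Python, whose round() also rounds halves to the even neighbor:

```python
# 23/40 has no exact binary representation, so the division-first
# grouping is inexact; 2300/40 = 57.5 IS exactly representable.
a = 100 * (23 / 40)   # the inexact quotient is computed first
b = (100 * 23) / 40   # exact integer product, then an exact 57.5

print(repr(a))        # slightly below 57.5
print(repr(b))        # exactly 57.5
print(round(a))       # 57
print(round(b))       # 58 (57.5 rounds to the even neighbor, as in R)
```

The point of the sketch is that the two groupings produce two *different* doubles, and round() faithfully reports that difference.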

## Re: Interesting quirk with fractions and rounding

I might add that things that *look* like integers in R are not really integers, unless you explicitly label them as such:

    > str(20)
     num 20
    > str(20.5)
     num 20.5
    > str(20L)
     int 20

I think that Python 2 will do integer arithmetic on things that look like integers:

    $ python2
    >>> 30 / 20
    1

But that behavior has changed in Python 3:

    $ python3
    >>> 30 / 20
    1.5

-- Mike

## Re: Interesting quirk with fractions and rounding

In reply to this post by PaulJohnson32gmail

Use all.equal(aa, bb, tolerance=0) to check for exact equality:

    > aa <- 100*(23/40)
    > bb <- (100*23)/40
    > all.equal(aa, bb)
    [1] TRUE
    > all.equal(aa, bb, tolerance=0)
    [1] "Mean relative difference: 1.235726e-16"
    > aa < bb
    [1] TRUE

The numbers there are rounded to 53 binary digits (16+ decimal digits) for storage, and rounding is not a linear or associative operation. Think of doing arithmetic by hand where you store all numbers, including intermediate results, with only 2 significant decimal digits:

    (3 * 1) / 3 -> 3 / 3 -> 1
    3 * (1/3) -> 3 * 0.33 -> 0.99

Bill Dunlap
TIBCO Software
wdunlap tibco.com
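Bill's two-significant-digit thought experiment can be played out literally with Python's decimal module, which lets you set the working precision (a sketch added for illustration, not part of his post):

```python
from decimal import Decimal, getcontext

# Keep only 2 significant decimal digits at every step, exactly
# as in the hand-arithmetic analogy above.
getcontext().prec = 2

grouped_late = (Decimal(3) * Decimal(1)) / Decimal(3)   # 3/3 -> 1
grouped_early = Decimal(3) * (Decimal(1) / Decimal(3))  # 3 * 0.33 -> 0.99

print(grouped_late)   # 1
print(grouped_early)  # 0.99
```

Exactly the same non-associativity appears in binary doubles; the only difference is that there the precision is 53 bits instead of 2 decimal digits.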

## Re: Interesting quirk with fractions and rounding

In reply to this post by Michael Hannon-2

Also note that we see the same thing in Ruby:

    irb(main):001:0> 100*(23/40)
    => 0
    irb(main):002:0> 100.0*(23.0/40.0)
    => 57.49999999999999
    irb(main):003:0> (100.0*23.0)/40.0
    => 57.5

and in C:

    hpages@latitude:~$ cat test.c
    #include <stdio.h>
    int main(void) {
      printf("%.15f\n", 100.0 * (23.0 / 40.0));
      printf("%.15f\n", (100.0 * 23.0) / 40.0);
      return 0;
    }
    hpages@latitude:~$ gcc test.c
    hpages@latitude:~$ ./a.out
    57.499999999999993
    57.500000000000000

These rounding errors are intrinsic to how computers do floating point arithmetic.

H.

--
Hervé Pagès
Program in Computational Biology
Division of Public Health Sciences
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N, M1-B514
P.O. Box 19024
Seattle, WA 98109-1024
E-mail: [hidden email]
Phone: (206) 667-5791
Fax: (206) 667-1319
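To see exactly which number the machine is actually working with, one option (an illustrative sketch, not from the thread) is to convert the double to Python's Decimal, which displays the stored binary value with no decimal rounding:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary double. 23/40 is stored
# slightly BELOW the true value 0.575, which is why multiplying
# it by 100 lands below 57.5.
x = 23 / 40
print(Decimal(x))                     # 0.574999999999999...
print(x.hex())                        # the same value in exact hex form
print(Decimal(x) < Decimal("0.575"))  # True
```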
## Re: Interesting quirk with fractions and rounding

I've been following this thread with interest. A nice collection of things to watch out for, if you don't want the small arithmetic errors due to finite-length digital representations of fractions to cause trouble!

However, as well as these small discrepancies, major malfunctions can also result.

Back on Dec 22, 2013, I posted a Christmas greetings message to R-help:

  Season's Greetings (and great news ... )!

which starts:

  Greetings All!
  With the Festive Season fast approaching, I bring you joy
  with the news (which you will surely wish to celebrate)
  that R cannot do arithmetic!

  Usually, this is manifest in a trivial way when users report
  puzzlement that, for instance,

    sqrt(pi)^2 == pi
    # [1] FALSE

  which is the result of a (generally trivial) rounding or
  truncation error:

    sqrt(pi)^2 - pi
    # [1] -4.440892e-16

  But for some very simple calculations R goes off its head.

And the example given is:

  Consider a sequence generated by the recurrence relation

    x[n+1] = 2*x[n]        if 0 <= x[n] <= 1/2
    x[n+1] = 2*(1 - x[n])   if 1/2 < x[n] <= 1

  (for 0 <= x[n] <= 1).

  This has equilibrium points (x[n+1] = x[n]) at x[n] = 0
  and at x[n] = 2/3:

    2/3 -> 2*(1 - 2/3) = 2/3

  It also has periodic points, e.g.

    2/5 -> 4/5 -> 2/5           (period 2)
    2/9 -> 4/9 -> 8/9 -> 2/9    (period 3)

  The recurrence relation can be implemented as the R function

    nextx <- function(x){
      if( (0 <= x) & (x <= 1/2) ) {x <- 2*x} else {x <- 2*(1 - x)}
    }

  Now have a look at what happens when we start at the equilibrium
  point x = 2/3:

    N <- 1 ; x <- 2/3
    while(x > 0){
      cat(sprintf("%i: %.9f\n", N, x))
      x <- nextx(x) ; N <- N + 1
    }
    cat(sprintf("%i: %.9f\n", N, x))

For a while [run it and see!], this looks as though it's doing what the arithmetic would lead us to expect: the first 24 results will all be printed as 0.666666667, which looks fine as 2/3 to 9 places.

But then the "little errors" start to creep in:

  N=25: 0.666666666
  N=28: 0.666666672
  N=46: 0.667968750
  N=47: 0.664062500
  N=48: 0.671875000
  N=49: 0.656250000
  N=50: 0.687500000
  N=51: 0.625000000
  N=52: 0.750000000
  N=53: 0.500000000
  N=54: 1.000000000
  N=55: 0.000000000

What is happening is that, each time R multiplies by 2, the binary representation is shifted up by one and a zero bit is introduced at the bottom end. At N=53, the first binary bit of 'x' is 1 and all the rest are 0, so now 'x' is exactly 0.5 = 1/2, hence the final two are also exact results; 53 is the .Machine$double.digits = 53 binary places.

So this normally "almost" trivial feature can, for such a simple calculation, lead to chaos or catastrophe (in the literal technical sense).

For more detail, including an extension of the above, look at the original posting in the R-help archives for Dec 22, 2013:

  From: (Ted Harding) <[hidden email]>
  Subject: [R] Season's Greetings (and great news ... )!
  Date: Sun, 22 Dec 2013 09:59:16 -0000 (GMT)

(Apologies, but I couldn't track down the URL for this posting in the R-help archives; there were a few follow-ups.)

I gave this as an example to show that the "little" arithmetic errors (such as have recently been discussed from many aspects) can, in certain contexts, destroy a computation. So be careful to consider what can happen in the particular context you are working with.

There are ways to dodge the issue -- such as using the R interface to the 'bc' calculator, which computes arithmetic expressions in a way quite different from the fixed-finite-length binary representation and algorithms used not only by R but also by many other numerical computation software suites.

Best wishes to all,
Ted.

-------------------------------------------------
E-Mail: (Ted Harding) <[hidden email]>
Date: 21-Apr-2017  Time: 22:03:15
This message was sent by XFMail
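Ted's R recurrence is easy to replay in any IEEE-754 language. Here is a transliteration to Python (added for illustration, not from his post) showing the same collapse to 0:

```python
# Tent-map recurrence from Ted's post. All arithmetic in the loop is
# actually EXACT: 2*x is a binary shift, and 1-x is exact for x in
# [1/2, 1] (Sterbenz's lemma). The only rounding is in the initial
# 2/3, and each step just shifts that error upward until it takes over.
def nextx(x):
    return 2 * x if x <= 0.5 else 2 * (1 - x)

x, n = 2 / 3, 1
while x > 0 and n < 100:   # n < 100 guards against an infinite loop
    x = nextx(x)
    n += 1

print(n, x)   # hits exactly 0.0 after roughly 53 steps (the mantissa width)
```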

## Re: Interesting quirk with fractions and rounding

On 04/21/2017 02:03 PM, (Ted Harding) wrote:
> So this normally "almost" trivial feature can, for such a simple
> calculation, lead to chaos or catastrophe (in the literal technical
> sense).

The only surprise is to end up at the other equilibrium point (0), not that the iterations didn't stabilize around or converge to the equilibrium point 2/3. That's because:

   1) You didn't start *exactly* at the equilibrium point 2/3.
   2) The iterating process is not supposed to converge but to diverge
      from the equilibrium point. That is its mathematical nature and has
      nothing to do with rounding errors.

So if you don't start exactly at 2/3, the iterations will take you further apart, even on a hypothetical computer that uses an exact representation of any arbitrary real number (i.e. no rounding errors). With such an unstable algorithm, all bets are off: you can end up at the other equilibrium point, or not end up anywhere (chaos), or hit a periodic point and go in cycles from there.

But hey, shouldn't people examine the mathematical properties of their iterative numerical algorithms before implementing them? Like convergence, stability, etc.

H.

--
Hervé Pagès
Program in Computational Biology
Division of Public Health Sciences
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N, M1-B514
P.O. Box 19024
Seattle, WA 98109-1024
E-mail: [hidden email]
Phone: (206) 667-5791
Fax: (206) 667-1319
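Hervé's point about divergence being intrinsic to the map can be checked with exact rational arithmetic, where no rounding occurs at all (a sketch added for illustration, using Python's fractions module):

```python
from fractions import Fraction

# Exact rational arithmetic: start a tiny eps away from the
# equilibrium point 2/3 and watch the distance double on every
# step. The divergence is a property of the map, not of rounding.
def nextx(x):
    return 2 * x if x <= Fraction(1, 2) else 2 * (1 - x)

eps = Fraction(1, 2**60)
x = Fraction(2, 3) + eps
for n in range(5):
    x = nextx(x)
    print(n + 1, abs(x - Fraction(2, 3)))   # eps*2, eps*4, eps*8, ...
```

With doubles the same doubling happens to the initial representation error, which is why Ted's run falls apart after about 53 steps.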

## Re: Interesting quirk with fractions and rounding

 In reply to this post by JRG On Apr 21, 2017 12:01 PM, "JRG" <[hidden email]> wrote: A good part of the problem in the specific case you initially presented is that some non-integer numbers have an exact representation in the binary floating point arithmetic being used.  Basically, if the fractional part is of the form 1/2^k for some integer k > 0, there is an exact representation in the binary floating point scheme. > options(digits=20) > (100*23)/40 [1] 57.5 > 100*(23/40) [1] 57.499999999999992895 So the two operations give a slightly different result because the fractional part of the division of 100*23 by 40 is 0.5.  So the first operations gives, exactly, 57.5 while the second operation does not because 23/40 has no exact representation. Thanks for answering. This case seemed fun because it was not a contrived example.  We found this one by comparing masses of report tables from 2 separate programs. It happened 1 time in about 10,000 calculations. Guidelines for R coders, though, would be welcome. So far, all I am sure of is 1 Don't use == for floating point numbers. Your 1/2^k point helps me understand why == does seem to work correctly sometimes. I wonder if we should be suspicious of >=. Imagine the horror if a= w/x > b=y/z in fractions, but digitally a < b. Blech. Can that happen? But, change the example's divisor from 40 to 30 [the fractional part from 1/2 to 2/3]: > (100*23)/30 [1] 76.666666666666671404 > 100*(23/30) [1] 76.666666666666671404 Now the two operations give the same answer to the full precision available.  So, it isn't "generally true true in R that (100*x)/y is more accurate than 100*(x/y), if x > y." The good news here is that round() gives same answer in both cases:) I am looking for a case where the first method is less accurate than the second. I expect that multiplying integers before dividing is never less accurate. Sometimes it is more accurate. ` Following your 1/2^k insight, you see why multiplying first is helpful in some cases. 
Question is will situation get worse. But Bert is right. I have to read more books. I studied Golub and van Loan and came away with healthy fear of matrix inversion. But when you look at user contributed regression packages, what do you find? Matrix inversion and lots of X'X. Paul Johnson University of Kansask The key (in your example) is a property of the way that floating point arithmetic is implemented. ---JRG On 04/21/2017 08:19 AM, Paul Johnson wrote: > We all agree it is a problem with digital computing, not unique to R. I > don't think that is the right place to stop. > > What to do? The round example arose in a real funded project where 2 R > programs differed in results and cause was  that one person got 57 and > another got 58. The explanation was found, but its less clear how to > prevent similar in future. Guidelines, anyone? > > So far, these are my guidelines. > > 1. Insert L on numbers to signal that you really mean INTEGER. In R, > forgetting the L in a single number will usually promote whole calculation > to floats. > 2. S3 variables are called 'numeric' if they are integer or double storage. > So avoid "is.numeric" and prefer "is.double". > 3. == is a total fail on floats > 4. Run print with digits=20 so we can see the less rounded number. Perhaps > start sessions with "options(digits=20)" > 5. all.equal does what it promises, but one must be cautious. > > Are there math habits we should follow? > > For example, Is it generally true in R that (100*x)/y is more accurate than > 100*(x/y), if x > y?   (If that is generally true, couldn't the R > interpreter do it for the user?) > > I've seen this problem before. In later editions of the game theory program > Gambit, extraordinary effort was taken to keep values symbolically as > integers as long as possible. Avoid division until the last steps. Same in > Swarm simulations. Gary Polhill wrote an essay about the Ghost in the > Machine along those lines, showing accidents from trusting floats. 
> I wonder now if all uses of > or < with numeric variables are suspect.
>
> Oh well. If everybody posts their advice, I will write a summary.
>
> Paul Johnson
> University of Kansas
>
> On Apr 21, 2017 12:02 AM, "PIKAL Petr" <[hidden email]> wrote:
>
>> Hi
>>
>> The problem is that people using Excel or probably other such spreadsheets
>> do not encounter this behaviour, as Excel silently rounds all your
>> calculations and makes approximate comparisons without telling you it does so.
>> Therefore most people usually do not have any knowledge of floating point
>> number representation.
>>
>> Cheers
>> Petr
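Guidelines 1, 2, and 5 from the list above can be illustrated in a few lines of R (an illustrative sketch, not code from the thread):

```r
x <- 100L                 # the L suffix marks an integer literal
typeof(x)                 # "integer"
typeof(100)               # "double": a plain literal is stored as a double
is.numeric(x)             # TRUE for both integer and double storage...
is.double(x)              # ...FALSE here, so is.double() is the sharper test

# Guideline 5: all.equal() compares with a tolerance, unlike ==
aa <- 100 * (23/40)
bb <- (100 * 23) / 40
aa == bb                  # FALSE: the representations differ
isTRUE(all.equal(aa, bb)) # TRUE: equal within the default tolerance (~1.5e-8)
```

Wrapping all.equal() in isTRUE() matters in programmatic tests, since all.equal() returns a character description of the difference, not FALSE, when the values differ.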

## Re: Interesting quirk with fractions and rounding / using == for floating point

 For over 4 decades I've had to put up with people changing my codes because I use equalities of floating point numbers in tests for convergence. (Note that tests of convergence are a subset of tests for termination -- I'll be happy to explain that if requested.) Then I get "your program isn't working" and find that it is NOT my program, but a crippled version thereof.

But I don't use equality tests (i.e., ==) blindly. My tests are of the form

   if ( (x_new + offset) == (x_old + offset) ) { # we cannot get more progress

Now it is possible to imagine some weird cases where this can fail, so an additional test is needed on, say, maximum iterations to avoid trouble. But I've not seen such cases (or perhaps never noticed one, though I run with very large maximum counters).

The test works by having offset as some modest value. For single precision I use 10 or 16, for double around 100. When x_new and x_old are near zero, the bit pattern is dominated by offset and we get convergence. When x_new is big, then it dominates.

Why do I do this? The reason is that in the early 1970s there were many, many different floating point arithmetics with all sorts of choices of radix and length of mantissa. (Are "radix" and "mantissa" taught to computer science students any more?) If my programs were ported between machines there was a good chance any tolerances would be wrong. And users -- for some reason at the time engineers in particular -- would say "I only need 2 digits, I'll use 1e-3 as my tolerance". And "my" program would then not work. Sigh. For some reason, the nicely scaled offset did not attract the attention of the compulsive fiddlers.

So equality in floating point is not always "wrong", though it should be used with some attention to what is going on.

Apologies to those (e.g., Peter D.) who have heard this all before. I suspect there are many to whom it is new.

John Nash

On 2017-04-23 12:52 AM, Paul Johnson wrote:
>
> 1. Don't use == for floating point numbers.
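Nash's offset test is easy to sketch in R (hypothetical code, not his actual programs): the iteration stops when adding a modest offset to the old and new values makes them bitwise equal, i.e. when the step has fallen below the precision available at that magnitude, with no hand-chosen tolerance.

```r
# Newton's method for sqrt(2), terminating with the offset test.
# offset ~ 100 is the value Nash suggests for double precision.
offset <- 100
x_old <- 0
x_new <- 1
iter  <- 0
while ((x_new + offset) != (x_old + offset) && iter < 1000) {
  x_old <- x_new
  x_new <- 0.5 * (x_old + 2 / x_old)  # Newton update for x^2 = 2
  iter  <- iter + 1
}
x_new   # converges to sqrt(2) to full double precision
```

Note this works in interpreted R precisely because nothing reorders the additions; as the follow-up below about optimising compilers points out, compiled languages need more care.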

## Re: Interesting quirk with fractions and rounding / using == for floating point

 > On 23 Apr 2017, at 14:49 , J C Nash <[hidden email]> wrote:
>
> So equality in floating point is not always "wrong", though it should be used
> with some attention to what is going on.
>
> Apologies to those (e.g., Peter D.) who have heard this all before. I suspect
> there are many to whom it is new.

Peter D. still insists on never trusting exact equality, though. There was at least one case in the R sources where age-old code got itself into a condition where a residual term that provably should decrease on every iteration oscillated between two values 1-2 ulp apart in magnitude without ever reaching 0.

The main thing is that you cannot trust optimising compilers these days. There is, e.g., no guarantee that a compiler will not transform

(x_new + offset) == (x_old + offset)

to

(x_new + offset) - (x_old + offset) == 0

to

(x_new - x_old) + (offset - offset) == 0

to.... well, you get the point.

-pd

--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: [hidden email]  Priv: [hidden email]

## Re: Interesting quirk with fractions and rounding / using == for floating point

 Yes. I should have mentioned "optimizing" compilers, and I can agree with "never trusting exact equality", though I consider conscious use of equality tests useful. Optimizing compilers have bitten me once or twice. Unfortunately, a lot of floating-point work requires attention to detail. In the situation of testing for convergence, the alternatives to the test I propose require quite a lot of code, with variants for different levels of precision, e.g., single, double, quad. There's no universal answer, and we do have to look "under the hood".

A particularly nasty instance (now fortunately long gone) was the Tektronix 4051 graphics station, where the comparisons automatically included a "fuzz". There was a FUZZ command to set this. Sometimes the "good old days" weren't! Today's equivalent is when there is an upstream change to an "optimization" that changes the manner of computation, as in Peter's examples. If we specify x <- y * (a / b), then we should not get x <- (y * a) / b.

A slightly related case concerns eigenvectors / singular vectors when there are degenerate values (i.e., two or more equal eigenvalues). The vectors are then determined only to lie in a (hyper)plane. A large computer contracting firm spent two weeks of high-priced but non-numerical help trying to find the "error" in either an IBM or Univac program because they gave very different eigenvectors. And in my own field of function minimization / nonlinear least squares, it is quite common to have multiple minima or a plateau.

Does anyone know if R has a test suite to check some of these situations? If not, I'll be happy to participate in generating some. They would not need to be run very often, and could be useful as a didactic tool as well as checking for differences in platforms.
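The degenerate-eigenvalue point can be demonstrated in a few lines of R (an illustrative sketch, not from the thread): when an eigenvalue repeats, any orthonormal basis of its eigenspace is a legitimate answer, so two correct programs can report quite different eigenvectors.

```r
A <- diag(c(2, 2, 1))   # eigenvalue 2 is repeated; its eigenspace is the x-y plane
e <- eigen(A)
e$values                # 2 2 1
e$vectors[, 1:2]        # one valid basis for the eigenspace of 2...

# ...but any unit vector in the x-y plane is an equally valid eigenvector:
v <- c(1, 1, 0) / sqrt(2)
isTRUE(all.equal(as.vector(A %*% v), 2 * v))  # TRUE
```

Comparing the eigenvectors of two routines elementwise is therefore the wrong test; comparing the subspaces they span (or the reconstructed matrix) is the meaningful one.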
JN

On 2017-04-23 09:37 AM, peter dalgaard wrote:
>
>> On 23 Apr 2017, at 14:49 , J C Nash <[hidden email]> wrote:
>>
>> So equality in floating point is not always "wrong", though it should be used
>> with some attention to what is going on.
>>
>> Apologies to those (e.g., Peter D.) who have heard this all before. I suspect
>> there are many to whom it is new.
>
> Peter D. still insists on never trusting exact equality, though. There was at
> least one case in the R sources where age-old code got itself into a condition
> where a residual term that provably should decrease on every iteration
> oscillated between two values 1-2 ulp apart in magnitude without ever reaching
> 0. The main thing is that you cannot trust optimising compilers these days.
> There is, e.g., no guarantee that a compiler will not transform
>
> (x_new + offset) == (x_old + offset)
>
> to
>
> (x_new + offset) - (x_old + offset) == 0
>
> to
>
> (x_new - x_old) + (offset - offset) == 0
>
> to.... well, you get the point.
>
> -pd