
When is a zero not a zero?

DavidMSmith


Answer: when it's in floating point.

No, this isn't my entry for the "least funny joke ever" competition. It's the answer to a fairly common complaint of beginning R users, which goes something like this: "R has a bug! It's giving the wrong answer to a simple calculation!" (I paraphrase.) Let's see some examples of such "bugs":

"The square of the square root of two isn't two!"

> a <- sqrt(2)
> if(a*a != 2) print("R has a bug!")
[1] "R has a bug!"  # this shouldn't print, should it?

"Fractions which should be equal, aren't!"

> a <- (58/40 - 1)
> a
[1] 0.45
> b <- (18/40)
> b
[1] 0.45
> a==b
[1] FALSE  # shouldn't this be TRUE?

"The sum of the residuals isn't zero!"

> x <- 1:25 + rnorm(25)
> sum(x-mean(x))
[1] 1.509903e-14  # shouldn't this be zero?

"My while loop runs one iteration too many times!"

> j <- 0
> while (j < 1) j <- j + 0.1
> j
[1] 1.1  # shouldn't this end with j equal to 1?

What's going on?

The short answer is that R, like pretty much all other numerical software in existence, uses floating-point arithmetic to do its calculations. In each case above, R is doing the right thing, given the principles of floating point. To use a strained analogy, floating-point arithmetic is to the "real" arithmetic you learned in school as Newtonian physics is to Einstein's Theory of Relativity: most of the time it works just like you expect, but in extreme cases the results can be surprising. Unfortunately, while floating-point arithmetic is familiar to computer scientists, it's rarely taught in statistics classes.

The basic principle is this: computers don't store numbers (except smallish integers and some fractions) exactly. It's very similar to the way you can't write down 1/3 exactly in decimal: however many 3's you add to the end of 0.3333333, the number you write will be close to, but not quite, one third.

The principle is the same for floating-point numbers; the main difference is that the underlying representation is binary, not decimal. Although the command j <- 0.1 looks like you're assigning the value "one-tenth" to j, in fact it is stored as a number close to, but not exactly, one tenth. (On most systems it's actually stored as a value about 5.5 quintillionths greater than one tenth.) Most of the time you'll never notice, because an error on that scale is too small to print: the error cancels out in the conversion from decimal to binary and back again. This "error cancellation" happens much of the time. For example, if we multiply j by 10, everything looks fine:

> j <- 0.1
> j*10 - 1
[1] 0
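
You can see the stored value directly by asking R to print more digits than it normally does. (A quick check; the trailing digits here are from a typical IEEE-754 double-precision system and may differ slightly elsewhere.)

> print(0.1, digits=20)
[1] 0.10000000000000000555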

Sometimes, though, these errors accumulate:

> j+j+j+j+j+j+j+j+j+j-1  # ten j's
[1] -1.110223e-16

(One of the weird things about floating-point arithmetic is that it's not necessarily associative, so (a+b)+c isn't always equal to a+(b+c), nor is it always distributive, so (a+b)*c might not be the same as a*c+b*c.) A similar effect is evident in the "residuals" example above. Sometimes the errors can compound dramatically if you use the wrong algorithm for a calculation, especially where very large and very small numbers mix. For example, calculating a standard deviation with the naive "calculator algorithm" can give the wrong answer for large numbers with small variances. Thankfully, R's internal algorithms (including the one behind the sd function) are carefully coded to avoid such floating-point error accumulation. (Some other software tools haven't always been so careful.)
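
Both effects are easy to demonstrate. Here's a sketch: the first line shows non-associativity, and the rest compares the naive "calculator" variance formula against R's built-in var on made-up data with a true variance of exactly 1. (Output is from a typical IEEE-754 system and may differ elsewhere.)

> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
[1] FALSE  # addition isn't associative in floating point
> x <- 1e8 + c(1, 2, 3)  # large numbers; true variance is 1
> (sum(x^2) - length(x)*mean(x)^2) / (length(x) - 1)  # naive formula
[1] 0  # catastrophic cancellation wipes out the answer
> var(x)  # R's two-pass algorithm avoids the cancellation
[1] 1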

Here are some tips to help you avoid the most common floating-point pitfalls:

Don't test floating-point numbers for exact equality. If your code includes expressions like x==0 when x is a floating-point number, you're asking for trouble.
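
For example, arithmetic that "should" land exactly on zero usually doesn't. (A small sketch; the exact residue depends on the calculation and platform.)

> x <- 1 - 0.9 - 0.1
> x == 0
[1] FALSE  # x isn't exactly zero...
> x
[1] -2.775558e-17  # ...it's a tiny residue instead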

Use integer objects when working with whole numbers. If you know that x will only ever take integer values, give it an integer representation, like this: x <- as.integer(1). As long as you only ever add, subtract, and so on with other integers, it's safe to use the equality test, and expressions like x==0 are meaningful. (Bonus: you'll reduce memory usage, too.)
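
A minimal sketch of the integer approach (the L suffix is R's shorthand for an integer literal; note that mixing in a plain double such as 4 would silently convert x back to floating point):

> x <- as.integer(1)  # equivalently: x <- 1L
> x <- x + 4L  # arithmetic between integers stays exact
> x == 5L  # exact equality is safe for integers
[1] TRUE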
   
If you must test floating-point numbers for equality, use fuzzy matching. If "real" arithmetic tells you x should be one, and x is floating point, test whether x is in a range near one, not whether it's one exactly. Replace code that looks like this: x==1, with this: abs(x-1) < eps, where eps is a small number. How small eps should be depends on the values you expect x to take. You can also use all.equal(x, 1), which tests for equality within a small tolerance. A similar fix would help our "while loop" example above, but it's usually better to rewrite your code so that such a test isn't necessary.
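
Here's how that looks for the fraction example from earlier, along with one way to rewrite the while loop so the iteration count is controlled by an integer rather than a floating-point test. (A sketch; the 1e-9 tolerance is an arbitrary choice for illustration.)

> a <- 58/40 - 1; b <- 18/40
> abs(a - b) < 1e-9  # fuzzy test succeeds where a == b failed
[1] TRUE
> isTRUE(all.equal(a, b))  # all.equal compares within a small tolerance
[1] TRUE
> j <- 0
> for (i in 1:10) j <- j + 0.1  # exactly 10 iterations, by construction
> j
[1] 1  # j still isn't exactly 1 internally, but the loop count is right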

Use internal algorithms where possible. R's built-in functions are carefully written to avoid the accumulation of floating-point errors. Use functions like sd and scale instead of rolling your own variants.

Finally, it's always worth learning more about how floating-point arithmetic works. The Wikipedia article is a good start, and David Goldberg's article What Every Computer Scientist Should Know About Floating-Point Arithmetic has everything you ever wanted to know (and then some). And if you see other R users with floating-point woes, point them to the R FAQ entry Why doesn't R think these numbers are equal?