When is a zero not a zero?

DavidMSmith


Answer: when it's in floating point.

No, this isn't my entry for the "least funny joke ever" competition. It's the answer to a fairly common complaint of beginning R users, which goes something like this: "R has a bug! It's giving the wrong answer to a simple calculation!". (I paraphrase.)  Let's see some examples of such "bugs":

"The square of the square root of two isn't two!"

> a <- sqrt(2)
> if(a*a != 2) print("R has a bug!")
[1] "R has a bug!"  # this shouldn't print, should it?

"Fractions which should be equal, aren't!"

> a <- (58/40 - 1)
> a
[1] 0.45
> b <- (18/40)
> b
[1] 0.45
> a==b
[1] FALSE  # shouldn't this be TRUE?

"The sum of the residuals isn't zero!"

> x <- 1:25 + rnorm(25)
> sum(x-mean(x))
[1] 1.509903e-14  # shouldn't this be zero?

"My while loop runs one iteration too many times!"

> j <- 0
> while (j < 1) j<-j+0.1
> j
[1] 1.1  # shouldn't this end with j equal to 1?

What's going on?

The short answer is that R, like virtually all numerical software, uses floating point arithmetic to do its calculations. In each case above R is doing the right thing, given the principles of floating point. To use a strained analogy, floating point arithmetic is to the "real" arithmetic you learned in school as Newtonian physics is to Einstein's Theory of Relativity: most of the time it works just as you expect, but in extreme cases the results can be surprising. Unfortunately, while floating-point arithmetic is familiar to computer scientists, it's rarely taught in statistics classes.

The basic principle is this: computers don't store numbers (except smallish integers and some fractions) exactly. It's very similar to the way you can't write down 1/3 in decimal exactly: however many 3's you add to the end of .3333333, the number you write will be close to, but not quite, one third.

The principle is the same for floating point numbers: the main difference is that the underlying representation is binary, not decimal. Although the command j <- 0.1 looks like you're assigning the value "one-tenth" to j, in fact it is stored as a number close to, but not exactly, one tenth. (On most systems it's stored as a value about 5.5 quintillionths greater than one tenth.) Most of the time you'll never notice, because an error on that scale is too small to print; the error also cancels out in the conversion from decimal to binary and back again. This "error cancellation" happens much of the time; for example, if we multiply j by 10, everything looks fine:

> j <- 0.1
> j*10 - 1
[1] 0
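
The error is still there, though; R's default of printing just seven significant digits hides it. One way to expose the stored value is to ask for more digits (a quick sketch; the exact trailing digits shown are what IEEE 754 double precision typically gives):

> print(j, digits = 22)
[1] 0.1000000000000000055511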

Sometimes, though, these errors accumulate:

> j+j+j+j+j+j+j+j+j+j-1  # ten j's
[1] -1.110223e-16

(One of the weird things about floating-point arithmetic is that it's not necessarily associative, so (a+b)+c isn't always equal to a+(b+c), nor is it always distributive, so (a+b)*c might not be the same as a*c+b*c.) A similar effect is evident in the "residuals" example above. Sometimes the errors can multiply dramatically if you use the wrong algorithm for a calculation, especially where very large and very small numbers mix. For example, calculating standard deviations using the naive "calculator algorithm" can give the wrong answer for large numbers with small variances. Thankfully, R's internal algorithms (including the one behind the sd function) are carefully coded to avoid such floating-point error accumulations. (Some other software tools haven't always been so careful.)
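
To make the associativity point concrete, here's a minimal sketch (the particular constants are arbitrary; any values whose binary representations round differently will do):

> a <- 0.1; b <- 0.2; c <- 0.3
> (a + b) + c == a + (b + c)
[1] FALSE
> (a + b) + c - (a + (b + c))  # off by one unit in the last place
[1] 1.110223e-16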

Here are some tips to help you avoid some of the most common floating-point pitfalls:

Don't test floating point numbers for exact equality.  If your code includes expressions like x==0 when x is a floating-point number, you're asking for trouble.

Use integer objects when working with whole numbers. If you know that x will only ever take integer values, give it an integer representation, like this: x <- as.integer(1). As long as you only ever add, subtract etc. other integers to/from x, it's safe to use the equality test, and expressions like x==0 are meaningful. (Bonus: you'll reduce memory usage, too.)
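
For instance (a small sketch; the L suffix is R's notation for an integer literal, equivalent to calling as.integer):

> x <- 10L
> for (i in 1:10) x <- x - 1L  # integer arithmetic stays exact
> x == 0L                      # exact equality is safe for integers
[1] TRUE
> typeof(x)
[1] "integer"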
   
If you must test floating-point numbers, use fuzzy matching. If "real" arithmetic tells you x should be one, and x is floating point, test whether x is in a range near one, not whether it's one exactly. Replace code that looks like this: x==1, with this: abs(x-1) < eps, where eps is a small number. How small eps should be depends on the values you expect x to take. The function all.equal(x, 1) does this for you, testing for equality within a sensible default tolerance. A similar fix would help our "while loop" example above, but it's usually better to rewrite your code so that such a test isn't necessary.
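
Applied to the square-root example above, both forms of fuzzy test pass (note that when the values differ, all.equal returns a string describing the difference rather than FALSE, so wrap it in isTRUE inside conditions):

> a <- sqrt(2)
> abs(a*a - 2) < 1e-9          # explicit tolerance
[1] TRUE
> isTRUE(all.equal(a*a, 2))    # default tolerance, about 1.5e-8
[1] TRUE

And one way to rewrite the "while loop" example is to count with exact integers instead of accumulating 0.1's:

> j <- 0
> for (i in 1:10) j <- i/10    # i/10 is computed fresh each time; no accumulation
> j
[1] 1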

Use internal algorithms where possible. R's built-in functions are carefully written to avoid accumulation of floating-point errors.  Use functions like sd and scale instead of rolling your own variants.
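
As an illustration of why this matters (a sketch; the wrong answer you get from the naive version can vary by platform, but on typical IEEE 754 systems it is badly wrong), compare R's sd with the one-pass "calculator" formula on large numbers with a small variance. The correct standard deviation here is 1:

> x <- c(1e9, 1e9 + 1, 1e9 + 2)
> sd(x)                                      # R's careful algorithm
[1] 1
> n <- length(x)
> sqrt((sum(x^2) - n*mean(x)^2) / (n - 1))   # naive formula loses all precision
[1] 0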

Finally, it's always worth learning more about how floating-point arithmetic works.  The Wikipedia article is a good start, and David Goldberg's article What Every Computer Scientist Should Know About Floating-Point Arithmetic has everything you ever wanted to know (and then some). And if you see other R users with floating-point woes, point them to the R FAQ entry "Why doesn't R think these numbers are equal?"