
When is a zero not a zero?

DavidMSmith
8 Min Read


Answer: when it's in floating point.

No, this isn't my entry for the "least funny joke ever" competition. It's the answer to a fairly common complaint of beginning R users, which goes something like this: "R has a bug! It's giving the wrong answer to a simple calculation!" (I paraphrase.) Let's see some examples of such "bugs":

"The square of the square root of two isn't two!"

> a <- sqrt(2)
> if(a*a != 2) print("R has a bug!")
[1] "R has a bug!"  # this shouldn't print, should it?

"Fractions which should be equal, aren't!"

> a <- (58/40 - 1)
> a
[1] 0.45
> b <- (18/40)
> b
[1] 0.45
> a==b
[1] FALSE  # shouldn't this be TRUE?
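
The two values really are different, just by an amount far too small to show up in the seven significant digits R prints by default (typical IEEE 754 output):

> a - b
[1] -5.551115e-17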

"The sum of the residuals isn't zero!"

> x <- 1:25 + rnorm(25)
> sum(x-mean(x))
[1] 1.509903e-14  # shouldn't this be zero?

"My while loop runs one iteration too many times!"

> j <- 0
> while (j < 1) j <- j + 0.1
> j
[1] 1.1  # shouldn't this end with j equal to 1?
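
Printing j with extra digits shows what happened: after ten additions j was still just a hair short of 1, so the loop ran an eleventh time (output from a typical IEEE 754 system):

> print(j, digits=17)
[1] 1.0999999999999999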

What's going on?

The short answer is that R, like pretty much every other piece of numerical software in existence, uses floating-point arithmetic to do its calculations. In each case above R is doing the right thing, given the principles of floating point. To use a strained analogy, floating-point arithmetic is to the "real" arithmetic you learned in school as Newtonian physics is to Einstein's Theory of Relativity: most of the time it works just like you expect, but in extreme cases the results can be surprising. Unfortunately, while floating-point arithmetic is familiar to computer scientists, it's rarely taught in statistics classes.

The basic principle is this: computers don't store numbers (except smallish integers and some fractions) exactly. It's very similar to the way you can't write down 1/3 in decimal exactly: however many 3s you add to the end of 0.3333333, the number you write will be close to, but not quite, one third.

The principle is the same for floating-point numbers; the main difference is that the underlying representation is binary, not decimal. Although the command j <- 0.1 looks like you're assigning the value "one-tenth" to j, in fact it is stored as a number close to, but not exactly, one tenth. (On systems with standard IEEE 754 doubles, it's about 5.5 quintillionths more than that.) Most of the time you'll never notice, because an error on that scale is too small to print (actually, the error cancels out in the conversion from decimal to binary and back again). This "error cancellation" happens much of the time; for example, if we multiply j by 10, everything looks fine:

> j <- 0.1
> j*10 - 1
[1] 0
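
You can peek at what's actually stored by asking for more digits than R shows by default. On most systems (assuming standard IEEE 754 double precision) the result looks like this:

> sprintf("%.20f", j)
[1] "0.10000000000000000555"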

Sometimes, though, these errors accumulate:

> j+j+j+j+j+j+j+j+j+j-1  # ten j's
[1] -1.110223e-16

(One of the weird things about floating-point arithmetic is that it's not necessarily associative, so that (a+b)+c isn't always equal to a+(b+c), nor is it always distributive, so (a+b)*c might not be the same as a*c+b*c.) A similar effect is evident in the "residuals" example above. Sometimes the errors can multiply dramatically if you use the wrong algorithm for a calculation, especially where very large and very small numbers mix. For example, calculating standard deviations using the naive "calculator algorithm" can give the wrong answer for large numbers with small variances. Thankfully, R's internal algorithms (including that for the sd function) are carefully coded to avoid such floating-point error accumulations. (Some other software tools haven't always been so careful.)
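
Here's a small illustration (a sketch assuming IEEE 754 double precision). The true standard deviation of these three numbers is exactly 1, but the one-pass "calculator" formula cancels away nearly all of the information:

> x <- c(1e8, 1e8 + 1, 1e8 + 2)
> n <- length(x)
> sqrt((sum(x^2) - n*mean(x)^2)/(n - 1))  # naive one-pass formula
[1] 1.414214
> sd(x)  # R's carefully-coded two-pass algorithm
[1] 1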

Here are some tips to help you avoid some of the most common floating-point pitfalls:

Don't test floating point numbers for exact equality.  If your code includes expressions like x==0 when x is a floating-point number, you're asking for trouble.

Use integer objects when working with whole numbers. If you know that x will only ever take integer values, give it an integer representation, like this: x <- as.integer(1). As long as you only ever add, subtract etc. other integers to/from x, it's safe to use the equality test, and expressions like x==0 are meaningful. (Bonus: you'll reduce memory usage, too.)
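
For example (a minimal sketch; the L suffix creates an integer literal):

> n <- 0L
> for (i in 1:10) n <- n + 1L  # integer arithmetic is exact
> n == 10L  # safe: no rounding error can creep in
[1] TRUE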
   
If you must test floating-point numbers, use fuzzy matching. If "real" arithmetic tells you x should be one, and x is floating point, test whether x is in a range near one, not whether it's one exactly. Replace code that looks like this: x==1, with this: abs(x-1)<eps, where eps is a small number. How small eps should be depends on the values you expect x to take. You can also use the function all.equal(x, 1), which tests for near-equality with a sensible default tolerance. A similar solution would help our "while loop" example above, but it's usually better to rewrite your code so that such a test isn't necessary.
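
Applying this to the "fractions" example above (the choice of eps here is arbitrary, just small relative to the values being compared):

> eps <- 1e-9
> abs(a - b) < eps  # fuzzy test: equal for all practical purposes
[1] TRUE
> isTRUE(all.equal(a, b))  # all.equal returns TRUE or a descriptive string
[1] TRUE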

Use internal algorithms where possible. R's built-in functions are carefully written to avoid accumulation of floating-point errors. Use functions like sd and scale instead of rolling your own variants.
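
For instance, scale centers and scales a dataset in one carefully-coded step. Reusing the x from the sketch above:

> z <- as.vector(scale(x))  # scale returns a matrix; flatten to a vector
> c(mean(z), sd(z))
[1] 0 1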

Finally, it's always worth learning more about how floating-point arithmetic works. The Wikipedia article is a good start, and David Goldberg's article What Every Computer Scientist Should Know About Floating-Point Arithmetic has everything you ever wanted to know (and then some). And if you see other R users with floating-point woes, point them to the R FAQ entry "Why doesn't R think these numbers are equal?"