Your Statistical Result is Below .05: Now What?

KMcCormickBlog

When you get a statistical result, it is tempting to jump immediately to the conclusion that the finding “is statistically significant” or “is not statistically significant.” While that is literally true, since we use those words to describe p values below .05 and above .05, it does not mean that there are only two conclusions to draw about the finding. Have we ruled out the possible ways that our statistical result might be tricking us?

Things to think about if it is below .05

Real: You might have a Real Finding on your hands. Congrats. Consider the other possibilities first, but then start thinking about who needs to know about your finding.

Small Effect: Your finding is Real, but of no practical consequence. Did you definitively prove a result with an effect so small that there is no real-world application for what you have found? Did you prove that a drug lowers cholesterol at the .001 level, but only by an amount so small that no doctor or patient will care? Is your finding of a large enough magnitude to prompt action or to get attention?
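
If you want to put a number on “too small to matter,” an effect-size measure such as Cohen’s d tells you something the p value cannot. Here is a minimal sketch, not from the original post: the cholesterol means, standard deviation, and sample sizes are all invented, and it simply shows how a huge sample can push a negligible difference far below .05.

```python
# A minimal sketch with invented cholesterol data: a 1-point difference
# on a scale with SD = 30 is "significant" only because N is enormous.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
drug    = rng.normal(loc=199.0, scale=30.0, size=50_000)  # treated group
placebo = rng.normal(loc=200.0, scale=30.0, size=50_000)  # control group

t, p = stats.ttest_ind(drug, placebo)

# Cohen's d: a standardized effect size that does not grow with N
pooled_sd = np.sqrt((drug.var(ddof=1) + placebo.var(ddof=1)) / 2)
d = (placebo.mean() - drug.mean()) / pooled_sd

print(f"p value   = {p:.5f}")   # far below .05 with 50,000 per group
print(f"Cohen's d = {d:.3f}")   # around 0.03 -- negligible in practice
```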

Poor Sample: Your data is not representative of the population, and there is nothing you can do about that at this point. Are you sure you have a good sample? Did you start with a ‘Sampling Frame’ that accurately reflects the population? What was your response rate on this particular variable? Would the finding hold up if you had more complete data? Have you checked whether respondent/non-respondent status on this ‘significant’ variable is correlated with any other variable you have? Maybe you have a census, or you are Data Mining – are you sure you should be focused on p values at all?
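
One way to act on the non-response question is to cross-tabulate respondent status against the variables you do have complete data for. The sketch below is a hedged illustration; the ‘income’ and ‘region’ columns are hypothetical stand-ins for your own survey fields, and the data are simulated.

```python
# A hedged sketch of the non-response check; 'income' and 'region' are
# hypothetical column names and the data are simulated.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["urban", "rural"], size=1_000),
    "income": rng.normal(50_000, 15_000, size=1_000),
})
# Fabricate non-response that is worse in rural areas, for illustration
df.loc[(df["region"] == "rural") & (rng.random(1_000) < 0.3), "income"] = np.nan

# Is respondent/non-respondent status on 'income' related to 'region'?
status = np.where(df["income"].isna(), "non-respondent", "respondent")
table = pd.crosstab(status, df["region"])
chi2, p, dof, expected = chi2_contingency(table)

print(table)
print(f"chi-square p = {p:.4f}")  # a small p suggests the missingness is not random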

Rare Event: You have encountered that 5% thing. It is going to happen. The good news is that we know how often it is going to happen. If you are like everyone else, you are probably operating at 95% confidence, and each test, by definition, then has a 5% chance of coming in below .05 from random forces alone. So you have a dozen findings – which ones are real? Was choosing 95% Confidence a deliberate and thoughtful decision? Have you ensured that Type I error will be rare? If you have a modest sample size, did you choose a level of confidence that gave you enough Statistical Power (see below)? If you are doing lots of tests (perhaps Multiple Comparisons), did you take this into account, or did you use 95% confidence out of habit?
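
To see how quickly the 5% thing adds up, and what a Multiple Comparisons correction does about it, here is a small sketch. The twelve p values are invented, and Bonferroni is just one common adjustment among several.

```python
# A minimal sketch; the twelve p values are invented for illustration.
from statsmodels.stats.multitest import multipletests

# At 95% confidence, the chance of at least one false positive across
# a dozen independent tests is roughly 46%.
print(f"P(at least one false positive) = {1 - 0.95 ** 12:.2f}")

pvals = [0.003, 0.021, 0.048, 0.049, 0.11, 0.23,
         0.31, 0.44, 0.52, 0.61, 0.73, 0.90]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for raw, adj, keep in zip(pvals, p_adj, reject):
    print(f"raw p = {raw:.3f}   adjusted p = {adj:.3f}   significant: {keep}")
# Only the .003 result survives the Bonferroni adjustment here.
```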

Too Liberal: You have violated an assumption that has made your result Liberal, so your p value only appears to be below .05. For instance, did you use the usual Pearson Chi-Sq when the Continuity Correction would have been better? Maybe Pearson was .045, Likelihood Ratio was .049, and Continuity Correction was .051. Did you choose wisely? Did you use an Independent Samples T-Test when a non-parametric test would have been better? Having good Stats books around can help, because they will often tell you that a particular assumption violation tends to produce Liberal results. You could always consider a Monte Carlo simulation or an Exact Test and make this problem go away. (An interesting ponderable: are we within a generation of abandoning distributional assumptions altogether as ordinarily outfitted computers get more powerful?)
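
As an illustration of how the choice of test can straddle .05, the sketch below runs the Pearson Chi-Sq, Continuity Correction, Likelihood Ratio, and an Exact Test on the same 2x2 table; the counts are made up so the results land near the threshold.

```python
# A hedged sketch; the 2x2 counts are invented so the tests land near .05.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[14, 26],
                  [25, 19]])

_, p_pearson, _, _   = chi2_contingency(table, correction=False)  # Pearson Chi-Sq
_, p_corrected, _, _ = chi2_contingency(table, correction=True)   # Continuity Correction
_, p_lr, _, _ = chi2_contingency(table, correction=False,
                                 lambda_="log-likelihood")        # Likelihood Ratio
_, p_exact = fisher_exact(table)                                  # Exact Test

print(f"Pearson:               p = {p_pearson:.3f}")
print(f"Continuity Correction: p = {p_corrected:.3f}")
print(f"Likelihood Ratio:      p = {p_lr:.3f}")
print(f"Fisher Exact:          p = {p_exact:.3f}")
```

Note that scipy only applies the Yates Continuity Correction when the table has one degree of freedom, which is exactly the 2x2 situation described above.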

Things to think about if it is above .05

Negative Finding: You might have disproven your hypothesis. (Strictly speaking, you have ‘failed to reject’ your ‘Null Hypothesis’, but does anyone talk that way outside of a classroom?) Congrats might be in order. Consider the other possibilities and then start thinking about who needs to know about your negative finding. If it is the real thing, a negative finding can be valuable. Be careful, however, before you shout that the literature was wrong. Make sure it is a bona fide finding.

Power: You may simply have lacked enough data. Did you do a Power Analysis before you began? Was your sample size commensurate with your number of Independent Variables? Did you begin with a reasonable amount of data, but then attempt every interaction term under the sun? Did you thoughtlessly include effects like 5-way interactions without measuring the impact they had on your ability to detect true effects? If you aren’t sure what a Power Analysis is, it is best that you describe your negative results using phrases like “We failed to prove X”, not “We were able to prove that the claim of X, believed to be true for years, was disproved by our study (N=17)”. You can also Google Jacob Cohen’s wonderful “Things I Have Learned (So Far)” to learn more about Power Analysis. I mention it in my Resources section, and it has influenced my thinking for years. Its influence is certainly present in this post.
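
For the curious, a power analysis is only a few lines in most stats packages. The sketch below uses statsmodels with a conventional medium effect size (d = 0.5) and the usual 80% power target; neither number comes from this post, so substitute values that fit your own study.

```python
# A minimal sketch of a power analysis; the effect size (d = 0.5) and
# 80% power target are conventional defaults, not values from the post.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group to detect a medium effect at alpha = .05
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"needed per group: {n_needed:.0f}")        # roughly 64

# Power a study with only 17 per group actually had for that same effect
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=17)
print(f"power with 17 per group: {power:.2f}")    # roughly 0.3
```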

Poor Sample: Your data is not representative of the population. This one can get your p value to move, incorrectly, in either direction.

Too Conservative: You have violated an assumption that has made your result Conservative, so your p value only appears to be above .05. Did you use an adjusted test in an instance when no adjustment was needed? Did you use Scheffe for Multiple Comparisons, but aren’t quite sure how to justify that choice? Most assumption violations make our tests lean Liberal, coming in too low, but the opposite can occur.
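
A quick arithmetic sketch of the Conservative direction, with an invented planned comparison and family size:

```python
# A hedged illustration; the single planned comparison (p = .03) and the
# family of 10 tests are invented for this example.
planned_p = 0.03   # one comparison you planned in advance
k = 10             # family size assumed by an unnecessary adjustment

bonferroni_adjusted = min(1.0, planned_p * k)
print(bonferroni_adjusted)   # 0.30 -- a genuine effect now looks 'non-significant'

# Scheffe, which protects every possible contrast, is typically even more
# conservative than Tukey for simple pairwise comparisons.
```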

This list has served me well for a long time. It is always best to report your findings thoughtfully. Statistics, at first, seems like a system of Rule Following, but it is more subtle than that. It is about extracting meaning, and then persuading an audience with data. Without an audience, there would be no point, and that audience deserves to know how certain (or uncertain) we are.
