
Scientists misusing Statistics

DavidMSmith

In ScienceNews this month, there’s a controversial article exposing the fact that results claimed to be “statistically significant” in scientific articles aren’t always what they’re cracked up to be. The article, titled “Odds Are, It’s Wrong”, is interesting, but I take a bit of issue with the sub-headline, “Science fails to face the shortcomings of Statistics”. As it happens, the examples in the article are mostly cases of scientists behaving badly and abusing statistical techniques and results:

  • Authors abusing P-values to conflate statistical significance with practical significance. For example, a drug may uncritically be described as “significantly” reducing the risk of some outcome, but the actual scale of the statistically significant difference is so small that it has no real clinical implication. (The first sketch after this list illustrates the point.)
  • Not accounting for multiple comparisons biases. By definition, a test “significant at the 95% level” has a 5% chance of having occurred by random chance alone. Do enough tests, and you’ll find some that indeed indicate significant differences, but there will be some fluke events in that batch. There are so many studies, experiments and tests being done today (oftentimes, all in the same paper) that the “false discovery rate” may be higher than we think, especially given that most nonsignificant results go unreported. (The second sketch below simulates this.)
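
To make the first point concrete, here is a minimal sketch of my own (not from the article) showing how a practically negligible effect can still come out “statistically significant” once the sample is large enough. It assumes Python with NumPy and SciPy; the blood-pressure-like numbers and sample sizes are invented for illustration.

import numpy as np
from scipy import stats

# Hypothetical example: two very large trial arms whose true means differ
# by a clinically trivial 0.2 units.
rng = np.random.default_rng(0)
n = 200_000
control = rng.normal(loc=120.0, scale=15.0, size=n)
treated = rng.normal(loc=119.8, scale=15.0, size=n)

result = stats.ttest_ind(treated, control)
difference = control.mean() - treated.mean()

print(f"p-value = {result.pvalue:.2e}")             # tiny: "statistically significant"
print(f"mean difference = {difference:.2f} units")  # about 0.2: practically negligible

The p-value clears any conventional threshold, yet the estimated effect is the same trivial 0.2 units; reporting the former without the latter is exactly the kind of abuse the article describes.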

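The second point is just as easy to demonstrate by simulation. Here is another rough sketch (again my own, with invented test counts): a thousand tests are run on data where no real effect exists by construction, and roughly 5% of them still come out “significant at the 95% level”.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_per_group = 1_000, 50

false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the same distribution: the null hypothesis
    # is true for every single test.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    false_positives += stats.ttest_ind(a, b).pvalue < 0.05

print(f"{false_positives} of {n_tests} null tests came out 'significant'")  # roughly 50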

Statisticians, in general, are aware of these problems and have offered solutions: there’s a vast field of literature on multiple comparisons tests, reporting bias, and alternatives (such as Bayesian methods) to P-value tests. But more often than not, these “arcane” issues (which are actually part of any statistical training) go ignored in scientific journals. You don’t need to be a cynic to understand the motives of the authors for doing so — hey, a publication is a publication, right? — but the cooperation of the peer reviewers and editorial boards is disturbing.
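
For instance, one standard remedy from that literature is to adjust the p-values for the number of tests before declaring anything significant. Here is a brief sketch (my own, with an invented mix of real and null effects) using the Benjamini-Hochberg false discovery rate procedure from statsmodels:

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n_real, n_null, n = 100, 900, 50   # 100 tests with a real effect, 900 without

p_values = []
for i in range(n_real + n_null):
    shift = 0.8 if i < n_real else 0.0       # real effect only in the first 100 tests
    a = rng.normal(loc=shift, size=n)
    b = rng.normal(size=n)
    p_values.append(stats.ttest_ind(a, b).pvalue)

raw_hits = sum(p < 0.05 for p in p_values)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

print(f"'significant' before correction: {raw_hits}")        # real hits plus ~45 flukes
print(f"'significant' after BH correction: {reject.sum()}")  # flukes largely removed

None of this is exotic; it is exactly the kind of adjustment covered in basic statistical training.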

ScienceNews: Odds Are, It’s Wrong

Link to original post

TAGGED: data quality