The Mythbusters and Statistics

I love the TV show “Mythbusters.” If you’ve never seen the show, it’s great. (A new season starts this week on the Discovery Channel in the US.) Every week, the team describes one or more myths or urban legends (example: a penny dropped from the top of the Empire State Building could kill a pedestrian), and then they attempt to confirm or “bust” the myth with experimentation. (Wind tunnel experiments showed that the terminal velocity of a penny isn’t fast enough to seriously injure someone.)

One thing that has mildly bugged me about the show is the lack of replication in the experiments: typically, they only ever attempt to replicate the myth once or maybe twice, and rarely discuss the variability in their measurements. So I was pleased to see the issue come up in this New Scientist interview:

You often have sample sizes of one or two, but science is all about replication. How do you respond to that criticism?

JH: People simply wouldn’t watch it if we were just repeating things over and over again. We do them as compactly as we can to keep up the energy level and flow. We intend these shows to be thought-provoking, not definitive.

AS: I think the part of the scientific enterprise that we do illuminate is that it’s a messy, creative process that changes your whole understanding. We’ll spend half an episode finding that we’re asking the wrong question.

A fair response. I think in general they're aware of the variability and significance issues, but I agree that the more important contribution of the show is the experimental process: encouraging kids in particular to actually measure and compare things to answer questions. Issues like controls, replication, and significance are moot without data, after all. (Co-host Adam Savage talks more about the effect of the show on kids in this video interview from reason.com. Heartwarming stuff.)
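
To make the replication point concrete, here's a minimal sketch in Python, with entirely made-up numbers, of why it matters: simulate noisy measurements of some quantity (think of the penny's terminal velocity) and watch how the uncertainty in the estimated mean shrinks as the number of trials grows. With only one or two trials you can barely estimate the spread, let alone assess significance.

```python
# A minimal sketch (made-up numbers) of why replication matters:
# repeated noisy measurements narrow the uncertainty in the mean.
import random
import statistics

random.seed(42)

TRUE_VALUE = 11.0   # hypothetical "true" value being measured
NOISE_SD = 1.5      # hypothetical measurement noise

def run_experiment(n_trials):
    """Take n_trials noisy measurements and summarize them."""
    measurements = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_trials)]
    mean = statistics.mean(measurements)
    # With a single trial there is no way to estimate the spread at all.
    sd = statistics.stdev(measurements) if n_trials > 1 else float("nan")
    sem = sd / n_trials ** 0.5
    return mean, sd, sem

for n in (1, 2, 5, 30):
    mean, sd, sem = run_experiment(n)
    print(f"n={n:2d}  mean={mean:5.2f}  sd={sd:5.2f}  std. error={sem:5.2f}")
```

Nothing here comes from the show itself; the point is simply that the standard error only comes down with repeated measurements.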

I was also pleased to see that they seek the input of statisticians from time to time:

When you are testing your own reactions, might you bias your results because you have expectations about the outcome?

AS: That’s a good point and makes me think that we should demonstrate experimental bias on the show. It was an issue when we investigated “beer goggles”: whether drinking alcohol can make people seem more attractive. I spent a long time with a friend of mine who’s a statistician to try and remove as much of the bias as possible.

New Scientist: MythBusters: ‘Using your head is a lot of fun’ 
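
As an aside, here is a minimal sketch (purely illustrative, and not the show's actual protocol) of one generic way to blunt that kind of expectation bias: randomize the presentation order, blind whoever scores the results to which condition each item came from, and only unblind after all the ratings are recorded.

```python
# Purely illustrative sketch of a blinded rating design (not the show's
# actual protocol): hide the condition labels during rating and only
# unblind them afterwards, so expectations can't steer the scores.
import random

random.seed(0)

# Hypothetical items: each photo is to be rated under two conditions.
CONDITIONS = ("condition A", "condition B")
trials = [(f"photo_{i:02d}", cond) for i in range(10) for cond in CONDITIONS]

# Randomize presentation order so the rater can't infer the condition
# from the sequence, then strip the labels before showing the trials.
random.shuffle(trials)
blinded_order = [photo for photo, _cond in trials]

# Ratings are collected against the blinded order only (faked here).
ratings = [random.randint(1, 10) for _ in blinded_order]

# Unblind afterwards: re-attach the condition labels and compare.
scores = {cond: [] for cond in CONDITIONS}
for (photo, cond), rating in zip(trials, ratings):
    scores[cond].append(rating)

for cond, vals in scores.items():
    print(f"{cond}: mean rating {sum(vals) / len(vals):.2f} over {len(vals)} trials")
```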
