Found 4 comments on HN
misiti3780 · 2015-04-07 · Original thread
I just finished a pretty interesting book on this topic:

"The Cult Of Statistical Significance"

It basically goes through a bunch of examples, mostly in economics but also in medicine (Vioxx), where statistical significance has failed us and people have died for it. As someone who works with statistics for a living, I found the book interesting - but it was pretty depressing to learn that most scientists use t-tests and p-values simply because that is the status quo and the easiest way to get published. The authors suggest a few alternatives -- publishing the size of your coefficients and using a loss function. In the end, they make the point that statistical significance is different from economic significance, political significance, etc.

Deirdre McCloskey (an economist) has an entire book devoted to this[1]. Her article covers the main argument of the book. One important point she makes is that not all fields misuse p-values and statistical significance. In physics, significance is almost always used appropriately, while in the social sciences (including economics) statistical significance is often conflated with actual significance.
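The statistical-vs-practical distinction the commenters describe is easy to demonstrate: with enough data, even a negligible effect produces an arbitrarily small p-value. A minimal stdlib-only sketch (the z-test helper and the simulated 0.01 effect are my own illustration, not from the comments):

```python
import math
import random

def z_test_pvalue(sample, mu0=0.0):
    """Two-sided z-test p-value for the sample mean against mu0."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = (mean - mu0) / math.sqrt(var / n)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
# A practically negligible effect: true mean 0.01 instead of 0,
# on a scale where the standard deviation is 1.
big_sample = [random.gauss(0.01, 1.0) for _ in range(1_000_000)]
p = z_test_pvalue(big_sample)
# With a million observations this tiny effect is "highly significant",
# which is exactly why the effect size should be reported alongside p.
```

Reporting the coefficient (here, a mean shift of 0.01 standard deviations) tells the reader the effect is trivial, whereas the p-value alone suggests a major finding.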


RA_Fisher · 2015-02-02 · Original thread
I have a simple criterion for a summary judgement of the reliability of results:

a) Is the data made available? b) Is it a Bayesian analysis? c) Has a power study been offered?

As a statistician, I have a keen awareness of the ways that p-values can depart from the truth. You can see Optimizely's effort to cope with this, read about it in The Cult of Statistical Significance, or watch the Economist video that captures it solidly.

The key missing component is the bias towards positive results. Most scientists have only had two statistics classes. In those classes they learn a number of statistical tests, but much less about how things can go wrong. Classic "just enough to be dangerous."

In order to cope, I have a personal set of criteria for a quick first sort of papers. It's a personal heuristic for quality. I assume some degree of belief (Bayes, FTW!) that those who offer the full data set alongside their conclusions feel confident in their own analysis. Also, if they're using Bayesian methods, that they've had more than two stats classes. Finally, if they do choose frequentist methods, a power study tells me that they understand the important finite nature of data in the context of asymptotic models and assumptions.
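The power study criterion above can be sketched with a normal approximation: power is the probability of rejecting the null when a given true effect exists. A stdlib-only sketch (the helper names and the 0.2-sigma example are illustrative assumptions, not from the comment):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_quantile(p):
    """Inverse of phi via bisection (adequate for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power_z_test(effect, sigma, n, alpha=0.05):
    """Approximate power of a two-sided z-test to detect `effect`,
    ignoring the negligible contribution from the far tail."""
    z_alpha = z_quantile(1 - alpha / 2)
    shift = effect / (sigma / math.sqrt(n))
    return phi(shift - z_alpha)

# A 0.2-sigma effect with n = 50 is badly underpowered;
# reaching the conventional 80% power needs roughly n = 200.
low_power = power_z_test(0.2, 1.0, 50)
adequate_power = power_z_test(0.2, 1.0, 200)
```

A paper that reports this kind of calculation signals that the authors thought about whether their sample size could plausibly detect the effect they claim, rather than relying on asymptotic assumptions holding by default.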

I suspect other statisticians feel this way, because I've heard as much privately --- what do you think of my criteria?

RA_Fisher · 2014-07-03 · Original thread
The field receives a pretty scathing review in The Cult of Statistical Significance. The summaries offered there are pretty damning.
