My quick first-pass criteria for a paper: a) Is the data made available? b) Is it a Bayesian analysis? c) Has a power study been offered?
As a statistician, I have a keen awareness of the ways that p-values can depart from truth. You can see Optimizely's effort to cope (https://www.optimizely.com/statistics). You can read about it in The Cult of Statistical Significance (http://www.amazon.com/The-Cult-Statistical-Significance-Econ...). This Economist video captures it solidly (http://www.economist.com/blogs/graphicdetail/2013/10/daily-c...).
The key component missing from these discussions is the bias towards positive results. Most scientists have taken only two statistics classes. In those classes they learn a number of statistical tests, but far less about how things can go wrong. Classic "just enough to be dangerous."
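To make that concrete, here is a minimal simulation in Python (my own illustration, using numpy and scipy, not anything from the links above) of why a bias towards positive results is so corrosive: run enough experiments where the null is true, report only the "significant" ones, and the literature fills up with noise.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_experiments, n_per_group = 1000, 30

    false_positives = 0
    for _ in range(n_experiments):
        # Both groups come from the SAME distribution: any "effect" is noise.
        a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    # By construction, roughly 5% of null experiments come out "significant".
    # If only those get written up, the published record is pure noise.
    print(f"{false_positives} of {n_experiments} null experiments had p < 0.05")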
In order to cope, I have a personal set of criteria for a quick first sort of papers; it's a personal heuristic for quality. I assign some degree of belief (Bayes, FTW!) that authors who offer the full data set alongside their conclusions feel confident in their own analysis. Also, if they're using Bayesian methods, that they've had more than two stats classes. Finally, if they do choose frequentist methods, a power study tells me that they understand the finite nature of data in the context of asymptotic models and assumptions.
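For the power-study criterion, here is a sketch of the kind of analysis I mean, using statsmodels (the effect size, alpha, and power targets are illustrative choices, not recommendations):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # How many subjects per group to detect a "medium" effect
    # (Cohen's d = 0.5) at alpha = 0.05 with 80% power?
    n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"required n per group: {n_required:.1f}")

    # Conversely: with only 20 subjects per group, what power do we have?
    achieved = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
    print(f"power with n = 20 per group: {achieved:.2f}")

A paper that reports numbers like these tells me the authors thought about sample size before looking at the data.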
I suspect other statisticians feel this way, because I've heard as much privately. What do you think of my criteria?
"The Cult Of Statistical Significance":
http://www.amazon.com/Cult-Statistical-Significance-Economic...
It basically goes through a bunch of examples, mostly in economics but also in medicine (Vioxx), where statistical significance has failed us and people have died for it. As someone who works with statistics for a living, I found the book interesting, but it was pretty depressing to learn that most scientists use t-tests and p-values because that's the status quo and the easiest way to get published. The authors suggest a few alternatives: publishing the size of your coefficients and using a loss function. In the end, they make the point that statistical significance is different from economic significance, political significance, etc.
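A toy illustration of their central point (mine, not the authors'): with a large enough sample, a trivially small effect becomes statistically significant, which is exactly why reporting the size of the coefficient matters.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1_000_000

    # True difference between groups: 0.005 standard deviations,
    # economically negligible in most settings.
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treatment = rng.normal(loc=0.005, scale=1.0, size=n)

    t, p = stats.ttest_ind(control, treatment)
    effect = treatment.mean() - control.mean()

    print(f"p-value: {p:.4f}")           # typically well below 0.05
    print(f"effect size: {effect:.4f}")  # the number that actually matters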