Found in 4 comments on Hacker News
melling · 2022-07-02 · Original thread
That’s because all probabilities are conditional, and people don’t consider that.
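
A quick, made-up illustration of why ignoring the conditioning goes wrong (a minimal Python sketch with invented numbers; it isn't from the book or the course below):

    # Base-rate example: a "99% accurate" test, conditioned on a rare condition.
    # All numbers are invented for illustration.
    prior = 0.001           # P(condition) in the population
    sensitivity = 0.99      # P(positive | condition)
    false_positive = 0.01   # P(positive | no condition)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    p_condition_given_positive = sensitivity * prior / p_positive
    print(f"P(condition | positive) = {p_condition_given_positive:.1%}")  # ~9%, not 99%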

These two things helped me to understand probability better.

The Signal and the Noise:

https://www.amazon.com/Signal-Noise-Many-Predictions-Fail-bu...

The videos for Harvard Statistics 110:

https://projects.iq.harvard.edu/stat110/home

dmix · 2017-08-29 · Original thread
This article reminds me of Nate Silver's book [1], which takes a far more scientific and in-depth look at the failure and success of these kinds of predictive statistics.

The chapters on the many attempts (or many failed hopes) at predicting earthquakes were particularly interesting, including the many times the media bought into hyped-up new charlatans who claimed to have finally figured it out, only for their methods to collapse under basic statistical scrutiny.

It also has a useful soft introduction to Bayesian statistics and other key concepts from the field of prediction that I hope more journalists will read about, as this seems to be a very common theme in reporting.

Even this journalist couldn't help themselves with this line (combined with some scary-looking charts described in an alarmist tone farther down):

> Was there some miscalculation of how frequently these massive flooding events occur? Or, most alarmingly, is something else happening that suggests these catastrophic weather events are becoming much more common?

The failure to mention the effects of El Niño/La Niña seems like a big oversight in this article, especially when we're just coming out of a particularly strong one. Climate statistics are easy to get wrong, or to shape into any narrative, especially when the timeframe and location are viewed too narrowly.

[1] https://www.amazon.com/Signal-Noise-Many-Predictions-Fail-bu...

dmix · 2016-02-18 · Original thread
Nate Silver wrote an entire book on this subject called "The Signal and the Noise" [1]. Humans are so often taken in by people claiming to be able to make predictions by combining new data points; the more unusual, or unrelated to the subject matter, the better. Such claims make good headlines but (not surprisingly) almost always turn out to be heavily flawed in practice.

You can basically gauge how wrong a pundit or expert's predictions will be by how ideological they are in their analysis. The best indicator is when they use only one or two metrics as the basis for predicting an otherwise very complex scenario.

One example from the book is how a researcher became famous before the 2000 US presidential election by claiming to predict races with 90% accuracy [2]. He claimed that by measuring a) per-capita disposable income combined with b) the number of military casualties, you can determine whether the Democrat or the Republican gets elected, and said that historical data backed up his theory. He then proceeded to fail to predict that year's election and faded into obscurity.

Nate did his own historical analysis and demonstrated it was only about 60% accurate, not 90%. And that was only if you ignore third-party candidates, since the model assumes a two-party system.
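
The kind of check Silver did is straightforward to sketch: take the historical elections, apply the claimed rule, and count the hits. A minimal, hypothetical Python sketch (the records and the decision rule below are invented placeholders, not Hibbs's actual data or model):

    # Toy backtest of a two-factor election rule on invented historical records.
    elections = [
        # (income_growth_pct, casualties_per_million, incumbent_party_won)
        (3.1, 0.0, True),
        (0.4, 12.0, False),
        (-0.8, 0.0, False),
        (1.5, 30.0, False),
        (2.9, 0.2, True),
        (1.2, 2.0, False),   # a miss for the rule below
    ]

    def predict_incumbent_win(income_growth, casualties):
        # Toy rule in the spirit of a two-factor model: strong income growth
        # and few casualties predict an incumbent-party win.
        return income_growth > 1.0 and casualties < 5.0

    hits = sum(predict_incumbent_win(g, c) == won for g, c, won in elections)
    print(f"in-sample 'accuracy': {hits}/{len(elections)}")   # 5/6 here

The catch is that scoring a rule against the same history used to choose it will almost always look better than predicting the next, unseen election, which is exactly where the 90% claim fell apart.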

Plenty of other examples are provided in the book, which makes me highly suspicious of the value of the predictions made in this article.

The general idea is that we need to stop looking for simple one-off solutions to complex problems. Instead, we should adopt multi-factor approaches, which suffer from fewer biases and are better grounded in reality. Otherwise these predictions are just another form of anti-intellectualism.

[1] http://www.amazon.com/Signal-Noise-Many-Predictions-Fail--bu...

[2] The "Bread and Peace" model by Douglas Hibbs of the University of Gothenburg: http://query.nytimes.com/gst/fullpage.html?res=9803E5DD1F3DF...

tabeth · 2016-01-04 · Original thread
Nate Silver's The Signal and the Noise [1] is excellent.

[1] http://www.amazon.com/Signal-Noise-Many-Predictions-Fail--bu...
