Found 6 comments on HN
axplusb · 2017-02-02 · Original thread
There are actually two sides to what is referred to as causal inference: either (a) inferring a causal graph from the data, or (b) given a graph and data, measuring the causal effect of variables on one another.

The broad idea in (a) is to start with a fully connected graph and eliminate edges between nodes that test as independent, either marginally or conditionally on other nodes. This yields an undirected graph, which can then be oriented by several methods (identifying v-structures, comparing the residuals of regressing X on Y versus Y on X).
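That edge-elimination step can be sketched in a few lines. This is a toy version of the skeleton-discovery phase of a constraint-based algorithm (the idea behind the PC algorithm), assuming a black-box conditional-independence test `indep(x, y, given)` supplied by the caller; in practice that test would be a partial-correlation or G-test on actual data.

```python
# Toy skeleton discovery: start fully connected, then drop the edge (x, y)
# whenever x and y are independent given some conditioning set S drawn from
# the remaining variables. `indep` is an assumed caller-supplied test.
from itertools import combinations

def skeleton(variables, indep):
    """Return the set of undirected edges surviving independence pruning."""
    edges = {frozenset(e) for e in combinations(variables, 2)}
    for x, y in combinations(variables, 2):
        others = [v for v in variables if v not in (x, y)]
        removed = False
        # Try conditioning sets of increasing size, as the PC algorithm does.
        for size in range(len(others) + 1):
            for s in combinations(others, size):
                if indep(x, y, s):
                    edges.discard(frozenset((x, y)))
                    removed = True
                    break
            if removed:
                break
    return edges
```

For a chain X -> Y -> Z, an oracle test reports X independent of Z given Y, so the X-Z edge is pruned and only X-Y and Y-Z survive.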

The theory in (b) generalizes instrumental variables: it lays out the graphical configurations in which the causal effect of one variable on another is identifiable, and how to compute that effect from observational data.
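The simplest such configuration is back-door adjustment: if Z blocks all confounding paths between X and Y, then P(y | do(x)) = Σ_z P(y | x, z) P(z). Below is a minimal sketch for binary variables, assuming the full joint distribution is given as a dict mapping (z, x, y) to a probability; real estimators would of course work from samples, not the exact joint.

```python
# Back-door adjustment for a single binary confounder Z of X -> Y:
#   P(Y=y | do(X=x)) = sum_z P(Y=y | X=x, Z=z) * P(Z=z)
# `joint` maps (z, x, y) -> probability and is assumed to sum to 1.

def do_effect(joint, x, y):
    """Return P(Y=y | do(X=x)) by adjusting for Z."""
    total = 0.0
    for z in (0, 1):
        p_z = sum(joint[(z, xv, yv)] for xv in (0, 1) for yv in (0, 1))
        p_xz = sum(joint[(z, x, yv)] for yv in (0, 1))  # P(X=x, Z=z)
        if p_xz > 0:
            total += joint[(z, x, y)] / p_xz * p_z  # P(y|x,z) * P(z)
    return total
```

When Z influences both X and Y, this adjusted quantity differs from the naive conditional P(Y=y | X=x), which is exactly the confounding bias the formula removes.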

A great reference:

A nice introduction:

torustic · 2016-08-21 · Original thread
In retrospect, my other comment was stupidly obtuse. Both too technical (in the sense of specificity) and too unstructured (in the sense of presentation order). A more appropriate path from CS might be analogous (well, inverse if anything) to the path Robert Goldblatt has taken. It dips into nonstandard analysis, but not totally without reason. Some subset of the following, with nLab and Wikipedia supplementing as necessary:

0. Milewski's "Category Theory for Programmers"[0]

1. Goldblatt's "Topoi"[1]

2. McLarty's "The Uses and Abuses of the History of Topos Theory"[2] (this does not require [1], it just undoes some historical assumptions made in [1] and, like everything else by McLarty, is extraordinarily well-written)

3. Goldblatt's "Lectures on the Hyperreals"[3]

4. Nelson's "Radically Elementary Probability Theory"[4]

5. Tao's "Ultraproducts as a Bridge Between Discrete and Continuous Analysis"[5]

6. Some canonical machine learning text, like Murphy[6] or Bishop[7]

7. Koller/Friedman's "Probabilistic Graphical Models"[8]

8. Lawvere's "Taking Categories Seriously"[9]

From there you should see a variety of paths for mapping (things:Uncertainty) <-> (things:Structure). The Giry monad is just one of them, and would probably be understandable after reading Barr/Wells' "Toposes, Triples and Theories"[10].

The above list also assumes some comfort with integration. Particularly good books in line with this pedagogical path might be:

9. Any and all canonical intros to real analysis

10. Malliavin's "Integration and Probability"[11]

11. Segal/Kunze's "Integrals and Operators"[12]

Similarly, some normative focus on probability would be useful:

12. Jaynes' "Probability Theory"[13]

13. Pearl's "Causality"[14]

pc2g4d · 2015-04-16 · Original thread
And don't forget the later "Causality" (2nd edition):
cma · 2014-02-28 · Original thread
He won a Turing Award in 2011 for his work. Haven't read it yet, but this is supposedly where to start:

Jach · 2011-11-15 · Original thread
I don't think anyone here is arguing that smoking pot (or in general "doing drugs") raises IQ scores... Nor do I see anyone arguing that high IQ scores cause pot use, just that the presence of a high IQ seems to, if the study is to be believed, increase the probability of pot use. Correlations and probabilities are interesting in themselves, and can in fact be used to infer causation.
