http://www.iapsych.com/iqmr/koening2008.pdf
"Frey and Detterman (2004) showed that the SAT was correlated with measures of general intelligence .82 (.87 when corrected for nonlinearity)"
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144549/
"Indeed, research suggests that SAT scores load highly on the first principal factor of a factor analysis of cognitive measures; a finding that strongly suggests that the SAT is g loaded (Frey & Detterman, 2004)."
http://www.nytimes.com/roomfordebate/2011/12/04/why-should-s...
"Furthermore, the SAT is largely a measure of general intelligence. Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice."
http://faculty.psy.ohio-state.edu/peters/lab/pubs/publicatio...
"Numeracy’s effects can be examined when controlling for other proxies of general intelligence (e.g., SAT scores; Stanovich & West, 2008)."
As I have heard the issue discussed in the local "journal club" I participate in with professors and graduate students of psychology who focus on human behavioral genetics (including the genetics of IQ), one thing that makes the SAT a very good proxy of general intelligence is that its item content is disclosed: previously administered tests are released and can be used as practice tests. Because every test-taker can know in advance what kinds of items to expect, what chiefly separates one test-taker from another is answering all of the various items correctly, generally and consistently, which certainly takes cognitive strength.
Psychologist Keith R. Stanovich makes the interesting point that IQ scores and SAT scores correlate very strongly with some of what everyone regards as "smart" behavior (which psychologists by convention call "general intelligence"), while there are still other kinds of tests, with plainly indisputable right answers, that high-IQ people are able to muff. Thus Stanovich distinguishes "intelligence" (essentially, IQ) from "rationality" (making correct decisions that overcome human cognitive biases) as distinct aspects of human cognition. He has a whole book on the subject, What Intelligence Tests Miss, that is quite thought-provoking and informative.
http://www.amazon.com/What-Intelligence-Tests-Miss-Psycholog...
(Disclosure: I enjoy this kind of research discussion partly because I am acquainted with one large group of high-IQ young people and am interested in how such young people develop over the course of life.)
"Contrarian anecdotes like these are particularly common
http://news.ycombinator.com/item?id=4076643
http://news.ycombinator.com/item?id=4076066
in medical discussions, even in fairly rational communities like HN. I find this particularly insidious (though the commenters mean no harm), because it can ultimately sway readers from taking advantage of statistically backed evidence for or against medical cures. Most topics aren’t as serious as medicine, but the type of harm done is the same, only on a lesser scale."
The basic problem, as the interesting comments here illustrate, is that human thinking has biases that ratchet discussions in certain directions even when disagreement and debate are vigorous. The general issue of human cognitive biases was well discussed in Keith R. Stanovich's book What Intelligence Tests Miss: The Psychology of Rational Thought.
http://yalepress.yale.edu/yupbooks/book.asp?isbn=97803001646...
http://www.amazon.com/What-Intelligence-Tests-Miss-Psycholog...
The author is an experienced cognitive science researcher and the author of an earlier book, How to Think Straight about Psychology. He writes about aspects of human cognition that are not tapped by IQ tests. He is part of the mainstream of psychology in feeling comfortable with calling what is estimated by IQ tests "intelligence," but he rejects the idea that intelligence in that sense is the only important aspect of human cognition. Rather, Stanovich says, there are many aspects of human cognition, which can be summed up as "rationality," that explain why high-IQ people (he would say "intelligent people") do stupid things. Stanovich names a new concept, "dysrationalia," and explores the boundaries of that concept at the beginning of his book. His book shows a welcome convergence in the point of view of the best writers on IQ testing, as James R. Flynn's recent book What Is Intelligence? supports these conclusions from a different direction with different evidence.
Stanovich develops a theoretical framework, based on the latest cognitive science and illustrated by diagrams in his book, of the autonomous mind (rapid problem-solving modules with simple procedures, shaped by evolution or by practice), the algorithmic mind (roughly what IQ tests probe, characterized by fluid intelligence), and the reflective mind (habits of thinking and tools for rational cognition). He uses this framework to show how the cognition tapped by IQ tests ("intelligence") interacts with various cognitive errors to produce dysrationalia. In detailed chapters he describes several kinds of dysrationalia, referring to cases of human thinkers performing as cognitive misers (the default for all human beings) and posing many interesting problems that have been used in research to demonstrate cognitive errors.
For many kinds of errors in cognition, as Stanovich points out with multiple citations to peer-reviewed published research, the performance of high-IQ individuals is no better than that of low-IQ individuals. The default behavior of being a cognitive miser applies to everyone, as it is strongly selected for by evolution. In some cases an experimenter can prompt a test subject with effective strategies for minimizing cognitive errors, and in some of those cases prompted high-IQ individuals perform better than control groups. Stanovich concludes with dismay in a sentence he sets in bold print: "Intelligent people perform better only when you tell them what to do!"
Stanovich gives you, the reader, the chance to put your own cognition to the test. Many famous cognitive tests that have been presented to thousands of subjects in dozens of studies are included in the book. Read along, and try those tests on yourself. Stanovich comments that if the many cognitive tasks found in cognitive research were included in the item content of IQ tests, the rank-ordering of many test-takers would change: some persons now called intelligent would be called average, while some now called average would be called highly intelligent.
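To give one flavor of the genre, here is the famous bat-and-ball problem, a classic cognitive-miser item from this research literature (my own choice of example, not necessarily one reprinted in the book): a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; how much does the ball cost? The cognitive miser blurts out "10 cents," but one line of algebra shows why that cannot be right:

    # Let b be the price of the ball. Then (b + 1.00) + b = 1.10,
    # so 2b = 0.10 and b = 0.05: the ball costs 5 cents, not 10.
    ball = (1.10 - 1.00) / 2
    bat = ball + 1.00
    print(f"ball = ${ball:.2f}, bat = ${bat:.2f}, total = ${ball + bat:.2f}")

(If the ball cost 10 cents, the bat would cost $1.10 and the pair $1.20.)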
Stanovich then goes on to discuss the term "mindware," coined by David Perkins, and illustrates two kinds of mindware problems. Some people, indeed most people, have little knowledge of correct reasoning processes, a condition Stanovich calls having "mindware gaps," and thus make many errors of reasoning. And most people carry quite a lot of "contaminated mindware," ideas and beliefs that lead to repeated irrational behavior. High IQ does nothing to protect thinkers from contaminated mindware. Indeed, some forms of contaminated mindware appeal to high-IQ individuals precisely because of the complicated structure of the false belief system. He includes information about a survey of a high-IQ society that found widespread belief in pseudoscientific concepts among its members.
Near the end of the book, Stanovich revises his diagram of a cognitive model of the relationship between intelligence and rationality, and mentions the problem of serial associative cognition with focal bias, a form of thinking that requires fluid intelligence but that nonetheless is irrational. So there are some errors of cognition that are not helped at all by higher IQ.
In his last chapter, Stanovich raises the question of how different college admission procedures might be if they explicitly favored rationality rather than IQ proxies such as high SAT scores, and lists some of the social costs of widespread irrationality. He mentions some aspects of sound cognition that are learnable, and I encouraged my teenage son to read that section. He also makes the intriguing observation, "It is an interesting open question, for example, whether race and social class differences on measures of rationality would be found to be as large as those displayed on intelligence tests."
Applying these concepts to my observation of Hacker News discussions in the 1309 days since I joined the community, I notice that indeed most Hacker News participants (I don't claim to be an exception) enter into discussions supposing that their own comments are rational and based on sound evidence and logic. Discussions of medical treatment issues, the main concern of the submitted blog post, are highly emotional (many of us know of sad examples of close relatives who have suffered from long illnesses or who have died young despite heroic treatment), and thus personal anecdotes have strong saliency in such discussions. The process of rationally evaluating medical treatments is the subject of entire group blogs with daily posts
http://www.sciencebasedmedicine.org/index.php/about-science-...
and has huge implications for public policy. Not only is safe and effective medical treatment and prevention a matter of life and death, it is a matter of hundreds of billions of dollars of personal and tax-subsidized spending around the world, so it is important to get right.
The blog post author and submitter here, tylerhobbs, suggests disregarding an individual contrary anecdote, or a group of contrary anecdotes, as a response to a general statement about effective treatment or risk reduction established by a scientifically valid
http://norvig.com/experiment-design.html
study. With that suggestion I must agree. Even medical practitioners themselves have difficulty sticking to the evidence,
http://www.sciencebasedmedicine.org/index.php/how-do-you-fee...
and it doesn't advance the discussion here to bring up a few heart-wrenching personal stories if the weight of the evidence is contrary to the cognitive miser's easy conclusion from such stories.
That said, I see that the submitter here has developed an empirical understanding of what gets us going in a Hacker News discussion. Making a definite statement about what ought to be downvoted works much better for gaining comments and karma than asking an open-ended question about what should be upvoted, and I'm still curious about what kinds of comments most deserve to be upvoted. I'd welcome other people's advice on that issue: how to promote more rational thinking here, and how all of us can learn from one another about evaluating evidence for controversial claims.
http://www.tanyakhovanova.com/
I remembered that I had seen her blog post "Should You Date a Mathematician?"
http://blog.tanyakhovanova.com/?p=319
posted to Hacker News (and other sites I read) before. I'll read more of her purely mathematical blog posts over the next few days. I see one I can use right away in the local classes I teach to elementary-age learners.
On the substance of the post, I'm seeing several comments that equate "genius" with "person with a high IQ score." That was indeed the old-fashioned usage: Lewis Terman (1877 to 1956) labeled a person with a high IQ score a "genius" as he developed the Stanford-Binet IQ test. But as Terman gained more experience, especially with the subjects in his own longitudinal study of Americans identified in childhood by high IQ scores, he no longer equated high IQ with genius, and he became more aware of the shortcomings of IQ tests. Terman and his co-author Maud Merrill wrote in 1937,
"There are, however, certain characteristics of age scores with which the reader should be familiar. For one thing, it is necessary to bear in mind that the true mental age as we have used it refers to the mental age on a particular intelligence test. A subject's mental age in this sense may not coincide with the age score he would make in tests of musical ability, mechanical ability, social adjustment, etc. A subject has, strictly speaking, a number of mental ages; we are here concerned only with that which depends on the abilities tested by the new Stanford-Binet scales."
Terman, Lewis & Merrill, Maude (1937). Measuring Intelligence: A Guide to the Administration of the New Revised Stanford-Binet Tests of Intelligence. Boston: Houghton Mifflin. p. 25. That is why the later authors Kenneth Hopkins and Julian Stanley (founder of the Study of Exceptional Talent) suggested that is better to regard IQ tests as tests of "scholastic aptitude" rather than of intelligence. They wrote
"Most authorities feel that current intelligence tests are more aptly described as 'scholastic aptitude' tests because they are so highly related to academic performance, although current use suggests that the term intelligence test is going to be with us for some time. This reservation is based not on the opinion that intelligence tests do not reflect intelligence but on the belief that there are other kinds of intelligence that are not reflected in current tests; the term intelligence is too inclusive."
Hopkins, Kenneth D. & Stanley, Julian C. (1981). Educational and Psychological Measurement and Evaluation. Englewood Cliffs, NJ: Prentice Hall. p. 364.
So on the one hand there is the acknowledged issue among experts on IQ testing that IQ scores don't tell the whole story of a test subject's mental ability. A less well known issue is the degree to which the error of estimation in IQ scores increases as scores rise above the norming sample mean. Terman and Merrill wrote,
"The reader should not lose sight of the fact that a test with even a high reliability yields scores which have an appreciable probable error. The probable error in terms of mental age is of course larger with older than with young children because of the increasing spread of mental age as we go from younger to older groups. For this reason it has been customary to express the P.E. [probable error] of a Binet score in terms of I.Q., since the spread of Binet I.Q.'s is fairly constant from age to age. However, when our correlation arrays [between Form L and Form M] were plotted for separate age groups they were all discovered to be distinctly fan-shaped. Figure 3 is typical of the arrays at every age level.
"From Figure 3 [not shown here on HN, alas] it becomes clear that the probable error of an I.Q. score is not a constant amount, but a variable which increases as I.Q. increases. It has frequently been noted in the literature that gifted subjects show greater I.Q. fluctuation than do clinical cases with low I.Q.'s . . . . we now see that this trend is inherent in the I.Q. technique itself, and might have been predicted on logical grounds."
Terman, Lewis & Merrill, Maude (1937). Measuring Intelligence: A Guide to the Administration of the New Revised Stanford-Binet Tests of Intelligence. Boston: Houghton Mifflin. p. 44
Readers of this thread who would like to follow the current scientific literature on genius (as it is now defined by mainstream psychologists) may enjoy reading the works of Dean Keith Simonton,
http://www.amazon.com/Dean-Keith-Simonton/e/B001ITRL1I/
the world's leading researcher on genius and its development. Readers curious about what IQ tests miss may enjoy reading the book What Intelligence Tests Miss: The Psychology of Rational Thought
http://www.amazon.com/What-Intelligence-Tests-Miss-Psycholog...
by Keith R. Stanovich and some of Stanovich's other recent books.
Readers who would like to read a whole lot about current research on human intelligence and related issues can find curated reading suggestions at a Wikipedia user bibliography
http://en.wikipedia.org/wiki/User:WeijiBaikeBianji/Intellige...
occasionally used for the slow, painstaking process of updating the many Wikipedia articles on related subjects (most of which are plagued by edit-warring and badly in need of more editing).