Found in 4 comments on Hacker News
Crows also seem to be making analogies: http://www.scientificamerican.com/article/crows-understand-a...

Douglas Hofstadter says that thinking is all about making analogies, so that is all pretty remarkable.

https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinki...

jonnathanson · 2014-03-06 · Original thread
The whole point of analogy problems is to test reasoning skills. Specifically, logical skills, inferences, categorization, and so forth. The vocabulary test nested within the analogy test is incidental. It creates a bivariate challenge (vocabulary + categorization), which is not necessarily an invalid test. It's just not a pure test of analogies, and it's also duplicative of the vocabulary portion of the test.

In re the value of analogies, there are some computer scientists and philosophers who believe analogy-drawing is the irreducible core of higher cognition. I'm not sure I'd go that far, but Douglas Hofstadter has come close:

http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...

jrs99 · 2014-01-24 · Original thread
Writing can help you organize your thoughts, and that can lead to new epiphanies. Writing helps you explore, think, and find ideas. Some people really hate that, though. They don't like non-linear thinkers who use analogies and metaphors; they like to start with the proof. Other people enjoy surprising, sometimes ludicrous connections and analogies.

Some people also don't want Facebook to fail. If you go deep into their comment history, many of them have argued that Facebook cannot be the next MySpace.

http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...

"The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Styrofoam with steel.) a) The large ball b) The table"

And a very large Styrofoam ball can crash through a comparatively small table made of wood; it all depends on the relative definition of 'large', right?

I think the problem is that everybody can pick their favorite feature and define it as the key to 'general intelligence'. Some say anaphora, some say machine learning; I like Hofstadter, who says it is all about analogy. http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...

Also: the problem is that these statements can't be proven; it is all about opinions and dogmas. I think the argument is a question of power: whoever wins it controls huge DARPA funds, or whatever other grants are given out for this type of research. The rush toward well-defined problems (expert systems, big data) in AI might have something to do with this funding problem.

The 'Society of Mind' argument says that there are many agents that together somehow miraculously create intelligence. http://en.wikipedia.org/wiki/Society_of_Mind This argument sounds good, but it makes it hard to search for general patterns/universal explanations of intelligence.

On the one hand, researchers have to focus on some real, solvable problem; on the other hand, that makes it very hard to ask and answer general questions. I don't know if there is a solution to this dilemma.
