Regarding the value of analogies: some computer scientists and philosophers believe analogy-drawing is the irreducible core of higher cognition. I'm not sure I'd go that far, but Douglas Hofstadter has come close:
http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...
Some people also don't want Facebook to fail. If you go deep into their comment history, many of them have argued that Facebook cannot be the next Myspace.
And a very large styrofoam ball can crush a comparatively small wooden table; it all depends on the relative definition of 'large', right?
I think the problem is that everybody can pick their favorite feature and define it as the key to 'general intelligence'. Some say anaphora, some say machine learning; I like Hofstadter, who says it is all about analogy. http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...
Also: the problem is that these statements can't be proven; it is all opinions and dogmas. I think the argument is really a question of power: whoever wins it gains control over huge DARPA funds, or over whatever other grants support this type of research. The rush toward well-defined problems (expert systems, big data) in AI might have something to do with this funding pressure.
The 'Society of Mind' argument says that many simple agents together somehow create intelligence. http://en.wikipedia.org/wiki/Society_of_Mind This argument sounds plausible, but it makes it hard to search for general patterns or universal explanations of intelligence.
On the one hand, researchers have to focus on concrete, solvable problems; on the other hand, that makes it very hard to ask and answer general questions. I don't know whether there is a solution to this dilemma.
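To make the 'Society of Mind' point above concrete, here is a minimal toy sketch in Python. Everything in it (the Agent class, the dispatch function, the example agents) is invented for illustration; Minsky's book describes a theory, not an API.

```python
# Toy illustration of the 'Society of Mind' idea: many narrow agents,
# each competent at one small task, combined by a simple dispatcher.
# All names here are hypothetical, not from Minsky's book.

from typing import Callable, Optional

class Agent:
    """A narrow specialist: claims inputs it recognizes, ignores the rest."""
    def __init__(self, name: str, can_handle: Callable[[str], bool],
                 act: Callable[[str], str]):
        self.name = name
        self.can_handle = can_handle
        self.act = act

def dispatch(agents: list[Agent], task: str) -> Optional[str]:
    """The 'society': the first agent that recognizes the task handles it.
    No single agent is intelligent; the aggregate covers more ground."""
    for agent in agents:
        if agent.can_handle(task):
            return f"{agent.name}: {agent.act(task)}"
    return None  # no agent claims the task: the society has a blind spot

agents = [
    Agent("arithmetic",
          lambda t: t.replace(" ", "").replace("+", "").isdigit(),
          lambda t: str(sum(int(x) for x in t.split("+")))),
    Agent("echo", lambda t: t.startswith("say "), lambda t: t[4:]),
]

print(dispatch(agents, "2 + 3"))      # arithmetic: 5
print(dispatch(agents, "say hello"))  # echo: hello
print(dispatch(agents, "why?"))       # None: emergent coverage has gaps
```

The sketch also shows why this view frustrates the search for universal explanations: whatever 'intelligence' the system has lives in the wiring between agents, not in any single mechanism you could point to.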
Douglas Hofstadter argues that thinking is all about making analogies, which makes this all the more remarkable.
https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinki...