Found in 3 comments on Hacker News
mindcrime · 2019-08-05 · Original thread
> I'm curious how close the research community is to general AI

Nobody knows, because we don't know how to do it yet. There could be a "big breakthrough" tomorrow that more or less finishes it out, or it could take 100 years, or - worst case - Penrose turns out to be right and it's not possible at all.

> Also, are there useful books, courses or papers that go into general AI research?

Of course there are. See:

https://agi.mit.edu

https://agi.reddit.com

http://www.agi-society.org/

https://opencog.org/

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Artificial-General-Intelligence-Cogni...

https://www.amazon.com/Universal-Artificial-Intelligence-Alg...

https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0...

https://www.amazon.com/Intelligence-Understanding-Creation-I...

https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657...

https://www.amazon.com/Unified-Theories-Cognition-William-Le...

https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Ma...

https://www.amazon.com/Singularity-Near-Humans-Transcend-Bio...

https://www.amazon.com/Emotion-Machine-Commonsense-Artificia...

https://www.amazon.com/Physical-Universe-Oxford-Cognitive-Ar...

See also the work on various "Cognitive Architectures", including SOAR, ACT-R, CLARION, etc.:

https://en.wikipedia.org/wiki/Cognitive_architecture

"Neuvoevolution"

https://en.wikipedia.org/wiki/Neuroevolution

and "Biologically Inspired Computing"

https://en.wikipedia.org/wiki/Biologically_inspired_computin...
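
To make the neuroevolution pointer above concrete, here is a minimal sketch of the idea: encode a small neural network's weights as a genome and improve them with selection and mutation instead of gradient descent. Everything here (the XOR task, network size, population size, mutation scale) is an arbitrary choice for illustration, not taken from any particular system.

```python
# Minimal neuroevolution sketch (illustrative only): evolve the weights of a
# tiny fixed-topology network to solve XOR with a simple genetic algorithm.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
# Genome layout: input->hidden weights, hidden biases, hidden->output weights, output bias.
GENOME_SIZE = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1

def forward(genome, x):
    """Run the fixed two-layer network encoded by `genome` on input batch x."""
    w1 = genome[: 2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = genome[2 * N_HIDDEN: 3 * N_HIDDEN]
    w2 = genome[3 * N_HIDDEN: 4 * N_HIDDEN]
    b2 = genome[-1]
    h = np.tanh(x @ w1 + b1)                          # hidden activations
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))       # sigmoid output

def fitness(genome):
    """Higher is better: negative mean squared error on the XOR task."""
    return -np.mean((forward(genome, X) - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(50, GENOME_SIZE))    # random initial population
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]              # keep the 10 best genomes
    # Refill the population with mutated copies of the elite.
    children = elite[rng.integers(0, len(elite), size=40)]
    children = children + rng.normal(0.0, 0.3, size=children.shape)
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print("XOR predictions:", np.round(forward(best, X), 2))
```

Real neuroevolution systems such as NEAT also evolve the network topology, not just the weights, but the select-mutate loop above is the core of the approach.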

kobeya · 2017-01-09 · Original thread
> If you think that we have all the conceptual understanding to assemble a general artificial intelligence, then you should be able to give an outline of how it would work — beyond just lots of computing power and data, which doesn't differentiate the problem from the problem of object classification or natural language processing.

Sure:

https://www.amazon.com/Engineering-General-Intelligence-Part...

> Maybe what you mean to say is that once we have the computing power and data collection technology required, so that researchers can experiment, the unsolved conceptual problems will become easy to solve.

No, I mean that there already appears to be the basis of a solution. Actually, there are multiple candidate solutions being pursued by different groups. It's like asking a rocket engineer in 1955 how to build a rocket to the Moon, or a physicist in 1936 how to build an atomic bomb, or the Wright brothers in 1900 how to build an airplane. Sure, in every single one of these cases you wouldn't get an exact, definitive answer. The Wright brothers didn't even understand the aerodynamics of their own airplane, for example. But there were known avenues of inquiry, with very solid reasons to believe they would not be dead ends.

We're at a point now with AI/ML where machines can learn solutions to any solvable problem. It just needs humans to select the algorithms and guide the search through hyperparameter space. But there is active research on automating that meta level, which is yielding results. And both the reinforcement learning community and the older AGI community have working, tested designs for cognitive architectures that are truly general.
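
As a rough illustration of what "guiding the search through hyperparameter space" means, and of the simplest way to automate that meta level, here is a sketch of random search over hyperparameters against a validation set. The task (fitting a noisy sine curve), the model (polynomial ridge regression), and the parameter ranges are all made up for the example.

```python
# Illustrative sketch: automate hyperparameter selection with random search
# plus a held-out validation set. All choices below are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D regression task.
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.normal(size=200)
x_train, y_train = x[:150], y[:150]
x_val, y_val = x[150:], y[150:]

def fit_predict(degree, ridge, x_tr, y_tr, x_te):
    """Fit polynomial ridge regression in closed form, predict on x_te."""
    A = np.vander(x_tr, degree + 1)                 # polynomial design matrix
    w = np.linalg.solve(A.T @ A + ridge * np.eye(degree + 1), A.T @ y_tr)
    return np.vander(x_te, degree + 1) @ w

best = None
for trial in range(50):
    # The "guidance" step, automated: sample a configuration at random.
    degree = rng.integers(1, 12)
    ridge = 10.0 ** rng.uniform(-6, 1)
    preds = fit_predict(degree, ridge, x_train, y_train, x_val)
    val_mse = np.mean((preds - y_val) ** 2)
    if best is None or val_mse < best[0]:
        best = (val_mse, degree, ridge)

print("best validation MSE %.4f with degree=%d, ridge=%.2g" % best)
```

More serious AutoML work replaces the random sampling with Bayesian optimization or learned search policies, but the outer loop is the same: propose a configuration, train, score on held-out data, keep the best.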

I'm not claiming we're done. I'm just saying that we're basically at the level of a working Wright flyer -- a bunch of research projects individually exhibit intelligence in separate domains, and a couple of cognitive architectures for generalizing them have been shown to work on toy problems. There are no known unknowns that would cause these approaches to fail, so the reasonable expectation is that in the coming decades we will see the rise of useful AGI, just as a reasonable observer with all the facts in 1905 should have predicted consumer passenger air travel.

> But we don't have evidence of intelligence arising that way in the past. What we have is the human brain, an incredibly complex structure...

Yes, the human brain is difficult to understand. So is the flight of a bird. It's a good thing we don't need to replicate the mechanics of bird flight to build flying machines -- otherwise you and I would still be stuck with trains and boats for getting around.

I suggest looking not at a neuroscience textbook but at a psychology textbook. Ask yourself not whether you can replicate exactly what is going on in the brain, but whether you can implement a program to the same general functional description that the psychology textbook provides. That's a much easier task, and one well within the capabilities we have today.

