Found in 4 comments on Hacker News
mindcrime · 2024-10-12 · Original thread
That's a fair point. We certainly lack a measure of rigor in terms of defining intelligence and AGI - especially in the vernacular sense and among the lay public. But among people who work on this stuff, there are useful definitions that are widely used - if not universally accepted as "the" definition.

I would say that the material from Chapter 4 of Engineering General Intelligence - Volume 1[1] by Ben Goertzel reflects a pretty spirited and useful attempt to capture the important details, at least vis-a-vis the discussion at hand.

Excerpt:

Many attempts to characterize general intelligence have been made; Legg and Hutter [LH07a] review over 70! Our preferred abstract characterization of intelligence is: the capability of a system to choose actions maximizing its goal-achievement, based on its perceptions and memories, and making reasonably efficient use of its computational resources [Goe10b]. A general intelligence is then understood as one that can do this for a variety of complex goals in a variety of complex environments. However, apart from positing definitions, it is difficult to say anything nontrivial about general intelligence in general. Marcus Hutter [Hut05a] has demonstrated, using a characterization of general intelligence similar to the one above, that a very simple algorithm called AIXI can demonstrate arbitrarily high levels of general intelligence, if given sufficiently immense computational resources. This is interesting because it shows that (if we assume the universe can effectively be modeled as a computational system) general intelligence is basically a problem of computational efficiency. The particular structures and dynamics that characterize real-world general intelligences like humans arise because of the need to achieve reasonable levels of intelligence using modest space and time resources.
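For readers who want the formal version: the Legg-Hutter "universal intelligence" measure that this characterization echoes is usually written roughly as follows (a sketch of the idea from [LH07a]; notation varies by source):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here $E$ is a set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward agent $\pi$ achieves in $\mu$. Complex environments get exponentially smaller weight, so a high $\Upsilon$ requires doing well across a broad range of environments rather than excelling in one - which is exactly the "variety of complex goals in a variety of complex environments" point in the excerpt.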

[1]: https://www.amazon.com/Engineering-General-Intelligence-Part...

mindcrime · 2019-08-05 · Original thread
> I'm curious how close the research community is to general AI

Nobody knows, because we don't know how to do it yet. There could be a "big breakthrough" tomorrow that more or less finishes it out, or it could take 100 years, or - worst case - Penrose turns out to be right and it's not possible at all.

> Also, are there useful books, courses or papers that go into general AI research?

Of course there are. See:

https://agi.mit.edu

https://agi.reddit.com

http://www.agi-society.org/

https://opencog.org/

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Artificial-General-Intelligence-Cogni...

https://www.amazon.com/Universal-Artificial-Intelligence-Alg...

https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0...

https://www.amazon.com/Intelligence-Understanding-Creation-I...

https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657...

https://www.amazon.com/Unified-Theories-Cognition-William-Le...

https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Ma...

https://www.amazon.com/Singularity-Near-Humans-Transcend-Bio...

https://www.amazon.com/Emotion-Machine-Commonsense-Artificia...

https://www.amazon.com/Physical-Universe-Oxford-Cognitive-Ar...

See also the work on various "Cognitive Architectures", including SOAR, ACT-R, CLARION, etc.:

https://en.wikipedia.org/wiki/Cognitive_architecture

"Neuroevolution"

https://en.wikipedia.org/wiki/Neuroevolution

and "Biologically Inspired Computing"

https://en.wikipedia.org/wiki/Biologically_inspired_computin...

kobeya · 2017-01-09 · Original thread
> If you think that we have all the conceptual understanding to assemble a general artificial intelligence then you should be able to give an outline of how it would work — beyond just lots of computing power and data, that doesn't differentiate the problem from the problem of object classification or natural language processing.

Sure:

https://www.amazon.com/Engineering-General-Intelligence-Part...

> Maybe what you mean to say is that once we have the computing power and data collection technology required, so that researchers can experiment, the unsolved conceptual problems will become easy to solve.

No, I mean that there appears to be a basis of a solution already. Actually, multiple solutions being pursued by different groups. It's like asking a rocket engineer in 1955 how to build a rocketship to the Moon, or a physicist in 1936 how to build an atomic bomb, or the Wright brothers in 1900 how to build an airplane. Sure, in every single one of these cases you wouldn't get an exact, definitive answer. The Wright brothers didn't even understand the aerodynamics of their airplane, for example. But there were known avenues of inquiry for which there was very solid reason to believe that they would not be dead ends.

We're at a point now with AI/ML where solutions can be learned by machines for any solvable problem. It just needs some humans doing the selection of algorithms and guiding the search in hyperparameter space. But there is active research on automating that meta level which is yielding results. And both the reinforcement learning and the older AGI communities have working, tested designs for cognitive architectures that are truly general.
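As a concrete (and deliberately toy) illustration of what "humans guiding the search in hyperparameter space" means, and what automating that meta level looks like in its simplest form, here is a minimal random-search sketch. `toy_score` is a hypothetical stand-in for a real train-and-validate step, not anyone's actual system:

```python
import random

def random_search(train_and_score, space, n_trials=50, seed=0):
    """Sample hyperparameter configurations at random and keep the best.

    train_and_score: callable(config) -> validation score (higher is better)
    space: dict mapping each hyperparameter name to a list of candidate values
    """
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Draw one value per hyperparameter to form a candidate configuration.
        config = {name: rng.choice(values) for name, values in space.items()}
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Toy objective: pretend the "model" validates best at lr=0.1, depth=3.
def toy_score(cfg):
    return -abs(cfg["lr"] - 0.1) - abs(cfg["depth"] - 3)

space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [1, 2, 3, 4, 5]}
best, score = random_search(toy_score, space)
```

Replacing the human-chosen `space` and the random sampler with a learned proposal strategy (Bayesian optimization, population-based training, etc.) is one flavor of the meta-level automation mentioned above.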

I'm not claiming we're done. I'm just saying that we're basically at the level of a working Wright flyer -- a bunch of research projects individually exhibit intelligence in separate domains, and a couple of cognitive architectures for generalizing them which have been shown to work on toy problems. There are no known unknowns that would cause these approaches to fail, so the reasonable expectation is that in the coming decades we will see the rise of useful AGI. Just like a reasonable observer with all the facts in 1905 should have predicted consumer passenger air travel.

> But we don't have evidence of intelligence arising that way in the past. What we have is the human brain, an incredibly complex structure...

Yes the human brain is difficult to understand. So is the flight of a bird. It's a good thing that we don't need to replicate the mechanics of bird flight to build flying machines -- otherwise you and I would still be stuck to trains and boats for getting around.

I suggest looking not at a neuroscience textbook but at a psychology textbook. Ask yourself not whether you can replicate exactly the conditions going on in the brain, but rather ask if you can implement a program to the same general functional description as the psychology textbook provides. That's a much easier task, and one well within the capabilities we have today.
