Anyway, suffice it to say, AI and AGI didn't stop progressing, and Chomsky is no longer any sort of expert in those fields.
Even Norvig isn't up to speed on the most advanced approaches to AGI, but at least he's in the room with people who are. For example, he gave a talk at the recent Singularity Summit.
The Fifth Conference on Artificial General Intelligence is going to be in Oxford in December. http://agi-conference.org/2012/
Here is some information for anyone interested in ideas pertinent to AGI.
http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/06...
>OpenCog is a diverse assemblage of cognitive algorithms, each embodying their own innovations — but what makes the overall architecture powerful is its careful adherence to the principle of cognitive synergy.
>The human brain consists of a host of subsystems carrying out particular tasks — some more specialized, some more general in nature — and connected together in a manner enabling them to (usually) synergetically assist rather than work against each other.
http://wiki.opencog.org/w/Probabilistic_Logic_Networks
> PLN is a novel conceptual, mathematical and computational approach to uncertain inference. In order to carry out effective reasoning in real-world circumstances, AI software must robustly handle uncertainty. However, previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN is able to encompass within uncertain logic such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.
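To make the "uncertain logic" idea above concrete, here is a minimal sketch of PLN-style deduction on strength values: given the strengths of A→B and B→C plus the base probabilities of B and C, it estimates the strength of A→C under an independence assumption. The function name and the bare-strength interface are illustrative; the real PLN works with full (strength, confidence) truth values and many more rules.

```python
def pln_deduction(s_ab, s_bc, s_b, s_c):
    """PLN-style first-order deduction (independence assumption).

    s_ab: strength of A -> B
    s_bc: strength of B -> C
    s_b, s_c: term probabilities of B and C
    Returns an estimated strength for A -> C.
    """
    if s_b >= 1.0:
        return s_c  # degenerate case: B is certain, so P(C|A) ~ P(C)
    # Split on whether A leads through B or around it.
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)
```

If A always implies B and B always implies C, the rule returns 1.0; if A never implies B, the estimate falls back on how often C holds outside of B.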
http://wiki.opencog.org/w/AtomSpace
> Conceptually, knowledge in OpenCog is stored within large [weighted, labeled] hypergraphs with nodes and links linked together to represent knowledge. This is done on two levels: Information primitives are symbolized in individual or small sets of nodes/links, and patterns of relationships or activity found in [potentially] overlapping and nesting networks of nodes and links. (OCP tutorial log #2).
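A toy version of that "weighted, labeled hypergraph" idea, just to show the shape of it. The class and field names here are illustrative, not the actual OpenCog API:

```python
# Minimal sketch of an AtomSpace-style store: typed atoms carrying a
# truth-value weight, where links can target nodes OR other links --
# which is what makes it a hypergraph rather than a plain graph.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=(), strength=1.0, confidence=0.0):
        self.atom_type = atom_type        # label, e.g. "ConceptNode", "InheritanceLink"
        self.name = name                  # only nodes carry names
        self.outgoing = tuple(outgoing)   # a link's targets (atoms, possibly links)
        self.tv = (strength, confidence)  # the weight: a simple truth value

cat = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")
# "cat inherits from animal", believed strongly but not with full confidence:
inh = Atom("InheritanceLink", outgoing=(cat, animal), strength=0.9, confidence=0.8)
```

Because `outgoing` can contain links as well as nodes, a second link can point at `inh` itself, letting patterns of relationships be represented as atoms in their own right.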
http://www.izhikevich.org/publications/large-scale_model_of_...
Large-Scale Model of Mammalian Thalamocortical Systems
> The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
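The spiking neurons in that model are Izhikevich-type. His simple two-variable form (not the multicompartmental version used in the paper) is easy to run yourself; this is a plain Euler-integration sketch with the standard regular-spiking parameters:

```python
def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                        steps=1000, dt=0.5):
    """Euler integration of the simple Izhikevich neuron model:
        v' = 0.04*v^2 + 5*v + 140 - u + I
        u' = a*(b*v - u)
        if v >= 30 mV: v <- c, u <- u + d
    Defaults are the regular-spiking parameter set; returns spike times (ms).
    """
    v = -65.0
    u = b * v
    spikes = []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: reset membrane, bump recovery
            spikes.append(step * dt)
            v = c
            u += d
    return spikes
```

With a constant input current of 10, this fires tonically over the simulated 500 ms, which is the qualitative behavior the large-scale model composes a million times over.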
http://www.sciencebytes.org/2011/05/03/blueprint-for-the-bra...
Essentials of General Intelligence: The direct path to AGI
http://www.adaptiveai.com/RealAI_chap_ver2c.htm
>General intelligence, as described above, demands a number of irreducible features and capabilities. In order to proactively accumulate knowledge from various (and/ or changing) environments, it requires:
>1. Senses to obtain features from ‘the world’ (virtual or actual),
>2. A coherent means for storing knowledge obtained this way, and
>3. Adaptive output/ actuation mechanisms (both static and dynamic).
>Such knowledge also needs to be automatically adjusted and updated on an ongoing basis; new knowledge must be appropriately related to existing data. Furthermore, perceived entities/ patterns must be stored in a way that facilitates concept formation and generalization. An effective way to represent complex feature relationships is through vector encoding (Churchland 1995).
>Any practical applications of AGI (and certainly any real-time uses) must inherently be able to process temporal data as patterns in time – not just as static patterns with a time dimension. Furthermore, AGIs must cope with data from different sense probes (e.g., visual, auditory, and data), and deal with such attributes as: noisy, scalar, unreliable, incomplete, multi-dimensional (both space/ time dimensional, and having a large number of simultaneous features), etc. Fuzzy pattern matching helps deal with pattern variability and noise.
>Another essential requirement of general intelligence is to cope with an overabundance of data. Reality presents massively more features and detail than is (contextually) relevant, or that can be usefully processed. This is why the system needs to have some control over what input data is selected for analysis and learning – both in terms of which data, and also the degree of detail. Senses (‘probes’) are needed not only for selection and focus, but also in order to ground concepts – to give them (reality-based) meaning.
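The vector-encoding and fuzzy-pattern-matching points above can be sketched in a few lines: store features as vectors and accept a match when a noisy probe is similar enough, rather than requiring exact equality. The function names and the cosine-similarity choice are mine, purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def fuzzy_match(probe, stored_patterns, threshold=0.9):
    """Return the best-matching stored pattern, tolerating noise:
    the match succeeds only if similarity clears the threshold."""
    best = max(stored_patterns, key=lambda p: cosine_similarity(probe, p))
    return best if cosine_similarity(probe, best) >= threshold else None
```

A probe like (0.95, 0.1, 0.0) still matches a stored (1, 0, 0) despite the noise, while a probe that resembles nothing stored is rejected, which is the behavior the passage asks of a system facing noisy, unreliable, incomplete input.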
http://en.wikipedia.org/wiki/Hierarchical_temporal_memory
> A typical HTM network is a tree-shaped hierarchy of levels that are composed of smaller elements called nodes or columns. A single level in the hierarchy is also called a region. Higher hierarchy levels often have fewer nodes and therefore less spatial resolvability. Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns.
> Each HTM node has the same basic functionality. In learning and inference modes, sensory data comes into the bottom-level nodes. In generation mode, the bottom-level nodes output the generated pattern of a given category. The top level usually has a single node that stores the most general categories (concepts), which determine, or are determined by, smaller concepts in the lower levels that are more restricted in time and space. In inference mode, a node in each level interprets information coming in from its child nodes in the lower level as probabilities of the categories it has in memory.
>Each HTM region learns by identifying and memorizing spatial patterns - combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another.
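A stripped-down illustration of that last point: treat each "spatial pattern" as a set of co-active input bits, then count which pattern tends to follow which, giving a first-order temporal model. Real HTM uses sparse distributed representations and per-column learning; this toy version (names mine) only shows the memorize-then-predict loop:

```python
from collections import defaultdict

def learn_sequences(stream):
    """Memorize spatial patterns (sets of co-active bits) from a stream
    and count pattern-to-pattern transitions."""
    transitions = defaultdict(lambda: defaultdict(int))
    prev = None
    for bits in stream:
        pattern = frozenset(bits)   # a "spatial pattern": bits active together
        if prev is not None:
            transitions[prev][pattern] += 1
        prev = pattern
    return transitions

def predict_next(transitions, pattern):
    """Most frequently observed successor of a pattern, or None if unseen."""
    followers = transitions.get(frozenset(pattern))
    if not followers:
        return None
    return max(followers, key=followers.get)
```

After seeing the alternating stream {1,2}, {3,4}, {1,2}, {3,4}, ..., the model predicts {3,4} whenever {1,2} appears, which is the "temporal sequences of spatial patterns" idea in miniature.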
Nobody knows, because we don't know how to do it yet. There could be a "big breakthrough" tomorrow that more or less finishes it out, or it could take 100 years, or - worst case - Penrose turns out to be right and it's not possible at all.
> Also, are there useful books, courses or papers that go into general AI research?
Of course there are. See:
https://agi.mit.edu
https://agi.reddit.com
http://www.agi-society.org/
https://opencog.org/
https://www.amazon.com/Engineering-General-Intelligence-Part...
https://www.amazon.com/Engineering-General-Intelligence-Part...
https://www.amazon.com/Artificial-General-Intelligence-Cogni...
https://www.amazon.com/Universal-Artificial-Intelligence-Alg...
https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0...
https://www.amazon.com/Intelligence-Understanding-Creation-I...
https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657...
https://www.amazon.com/Unified-Theories-Cognition-William-Le...
https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Ma...
https://www.amazon.com/Singularity-Near-Humans-Transcend-Bio...
https://www.amazon.com/Emotion-Machine-Commonsense-Artificia...
https://www.amazon.com/Physical-Universe-Oxford-Cognitive-Ar...
See also the work on various "Cognitive Architectures", including SOAR, ACT-R, CLARION, etc.
https://en.wikipedia.org/wiki/Cognitive_architecture
"Neuroevolution"
https://en.wikipedia.org/wiki/Neuroevolution
and "Biologically Inspired Computing"
https://en.wikipedia.org/wiki/Biologically_inspired_computin...