Found in 5 comments on Hacker News
samuell · 2020-09-30 · Original thread
For people wanting to look into HTM (Hierarchical Temporal Memory), do check out Numenta's main website [1], in particular the papers [2] and videos [3] sections.

Otherwise, HTM inventor Jeff Hawkins' book "On Intelligence" [4] is one of the top three or so most fascinating books I've ever read. It doesn't cover HTM itself, just how the brain works at a conceptual level, but in a way I haven't seen anyone else explain. Jeff clearly has an ability to see the forest for the trees in a way that is not too commonly found. This is one of the reasons I think HTM might be on to something, although it of course has to prove itself in real life too.

But we should remember how long classic neural networks were NOT particularly successful and were nearly dismissed by a lot of people (including my university teacher, who was rather skeptical about them when I took an ML course some 12 years ago, while I personally believed in them a lot). We had to "wait" for years and years until enough people had put in enough work to figure out how to make them really shine.

[1] https://numenta.org/

[2] https://numenta.com/neuroscience-research/research-publicati...

[3] https://www.youtube.com/user/OfficialNumenta

[4] https://www.amazon.com/Intelligence-Understanding-Creation-I...

Edit: Fixed book link.

ckrailo · 2019-10-29 · Original thread
I wish there were more content about that on HN in general.

The book On Intelligence by Jeff Hawkins was a fantastic read on HTM and similar concepts. (https://amzn.to/2JyQDF3)

mindcrime · 2019-08-05 · Original thread
> I'm curious how close the research community is to general AI

Nobody knows, because we don't know how to do it yet. There could be a "big breakthrough" tomorrow that more or less finishes it out, or it could take 100 years, or - worst case - Penrose turns out to be right and it's not possible at all.

> Also, are there useful books, courses or papers that go into general AI research?

Of course there are. See:

https://agi.mit.edu

https://agi.reddit.com

http://www.agi-society.org/

https://opencog.org/

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Artificial-General-Intelligence-Cogni...

https://www.amazon.com/Universal-Artificial-Intelligence-Alg...

https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0...

https://www.amazon.com/Intelligence-Understanding-Creation-I...

https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657...

https://www.amazon.com/Unified-Theories-Cognition-William-Le...

https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Ma...

https://www.amazon.com/Singularity-Near-Humans-Transcend-Bio...

https://www.amazon.com/Emotion-Machine-Commonsense-Artificia...

https://www.amazon.com/Physical-Universe-Oxford-Cognitive-Ar...

See also the work on various "Cognitive Architectures", including SOAR, ACT-R, CLARION, etc.:

https://en.wikipedia.org/wiki/Cognitive_architecture

"Neuvoevolution"

https://en.wikipedia.org/wiki/Neuroevolution

and "Biologically Inspired Computing"

https://en.wikipedia.org/wiki/Biologically_inspired_computin...

daly · 2019-04-22 · Original thread
I have a long background in AI (robotics, PDP, expert systems, symbolic math, vision, planning).

There appear to be two classes of knowledge. Pattern knowledge, such as riding a bicycle, is what we tend to learn in ways similar to the current machine learning trend; in some ways, this is "deductive knowledge". Explicit knowledge, on the other hand, such as learning to reason about proofs, is symbolic and is what we tend to learn by being taught; in some ways, this is "inductive knowledge".

The current machine learning trend leans heavily on Pattern knowledge. I don't believe it will extend into the Explicit knowledge domain. I fear that once this distinction becomes important it will be seen as a "limit of AI", leading to yet another AI winter. I tried to bring this up with the OpenAI Gym community (https://gym.openai.com/) but it went nowhere.

My experience leads me to hold the very unpopular opinion that AI requires a self-modifying system. Computers differ from calculators because they can modify their own behavior. I'm of the opinion that there is an even deeper kind of self-modification that is important for general AI. The physical realization of this in animals is the ability to grow new brain connections based on experience. One side effect is that two identical self-modifying systems placed in different contexts will evolve differently. (A trivial example would be the notion of a "table", which is a wooden structure to one system and a spreadsheet to the other.) Since they evolve different symbolic meanings they can't "copy their knowledge" but have to transfer it by "teaching".
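A toy illustration of that last point (purely a hypothetical sketch, not anything from Numenta or actual neuroscience): two identical agents whose only "knowledge" is a rule table they rewrite from experience will diverge once placed in different contexts, so the same symbol ends up meaning different things to each of them.

    # Hypothetical sketch: two identical self-modifying agents diverge
    # when exposed to different contexts, so their internal meaning of a
    # symbol (e.g. "table") can no longer simply be copied between them.

    class Agent:
        def __init__(self):
            # The agent's only "knowledge": associations grown from experience.
            self.associations = {}

        def experience(self, symbol, context):
            # "Self-modification" here is just rewriting the agent's own rule table.
            self.associations.setdefault(symbol, []).append(context)

        def meaning_of(self, symbol):
            return self.associations.get(symbol, [])

    carpenter, accountant = Agent(), Agent()  # identical at the start
    carpenter.experience("table", "wooden structure with four legs")
    accountant.experience("table", "grid of spreadsheet cells")

    # Same symbol, divergent meanings: the knowledge has to be "taught",
    # not copied, between the two systems.
    print(carpenter.meaning_of("table"))
    print(accountant.meaning_of("table"))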

Self-modification allows for adaptation based on internal feedback rather than external patterns (e.g. imagination). It allows a kind of hardware implementation of "genetic algorithms" (https://en.wikipedia.org/wiki/Genetic_algorithm). It allows "Explicit knowledge" to be "compiled" into "Pattern knowledge". This effect can be seen when you learn a skill like music or knitting: after being taught a manual skill, you eventually "get it into your fingers", likely through self-modification, growing new neural pathways.
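Since the comment leans on genetic algorithms to make its point, here is a minimal, self-contained sketch of that idea (the "one-max" fitness function and all parameter values are illustrative choices, not anything from the comment): a population of bit strings is repeatedly selected, recombined, and mutated toward higher fitness.

    import random

    # Minimal genetic algorithm: evolve bit strings toward all ones ("one-max").
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 100, 0.02

    def fitness(genome):
        return sum(genome)  # number of 1-bits

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)  # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(genome):
        # Flip each bit independently with probability MUTATION_RATE.
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        # Keep the fitter half as parents, refill the rest with mutated offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
        if fitness(max(population, key=fitness)) == GENOME_LEN:
            break

    print("best genome:", max(population, key=fitness))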

Of all the approaches I've seen, I think Jeff Hawkins of Numenta (https://www.amazon.com/Intelligence-Understanding-Creation-I...) is on the right track. However, he needs to extend his theories to handle self-modification in order to get past the "pattern knowledge" behavior.

rptr_87 · 2018-11-17 · Original thread
I suggest you read the book "On Intelligence" by Jeff Hawkins on a similar topic:

https://www.amazon.com/Intelligence-Understanding-Creation-I...
