Found in 15 comments on Hacker News
> study textbooks. Do exercises. Treat it like academic studying

This. Highly recommend Russell & Norvig [1] for high-level intuition and motivation. Then Bishop's "Pattern Recognition and Machine Learning" [2] and Koller's PGM book [3] for the fundamentals.

Avoid MOOCs, but there are useful lecture videos, e.g. Hugo Larochelle on belief propagation [4].

FWIW, this is coming from someone who is a mechanical engineer by training but a self-taught programmer and AI researcher. I've been working in industry as an AI research engineer for ~6 years.

[1] https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

[2] https://www.amazon.com/Pattern-Recognition-Learning-Informat...

[3] https://www.amazon.com/Probabilistic-Graphical-Models-Princi...

[4] https://youtu.be/-z5lKPHcumo

henning · 2018-12-25 · Original thread
Would you enjoy something that gives a broad overview? Norvig's AI book https://www.amazon.com/Artificial-Intelligence-Modern-Approa... should give you a very broad perspective of the entire field. Many course websites have lecture materials and videos to go along with it that you may find useful.

The book website http://aima.cs.berkeley.edu/ has lots of resources.

But it sounds like you are specifically interested in deep learning. A Google researcher wrote a book on deep learning in Python aimed at a general audience - https://www.amazon.com/Deep-Learning-Python-Francois-Chollet... - which might be more directly relevant to your interests.

There's also what I guess you would call "the deep learning book". https://www.amazon.com/Deep-Learning-Adaptive-Computation-Ma...

(People have different preferences for how they like to learn and as you can see I like learning from books.)

(I apologize if you already knew about these things.)

I think you misunderstood me.

I do not believe "you need to understand all these deep and hard concepts before you start to touch ML." That is a distortion of what I said.

First point: ML is not a young field; the term was coined in 1959. Not to mention, the ideas are much older.*

Second point: ML/'AI' relies on a slew of concepts in maths. Take any first-year textbook -- I personally like Peter Norvig's. I find the breadth of the field quite astounding.

Third point: Most PhDs are specialists -- i.e., if I am getting a PhD in ML, I specialize in a concrete problem domain/subfield; I can't specialize in all subfields. For example, I work on event detection and action recognition in video models. During a PhD you must also pass a qualifying exam, which ensures you understand the foundations of the field. So comparing an undergraduate course to that is a straw-man argument.

If your definition of ML is taking a TF model and running it, then I believe we have diverging assumptions about what the point of a course in ML is. IMO, the point of an undergraduate major is to become acquainted with the field and be able to perform reasonably well in it professionally.

The reason so many companies (Google, FB, MS, etc.) are paying for this talent is that it is not easy to learn and takes time to master. Most people who just touch ML have a surface-level understanding.

I have seen people who excel at TF (applied to deep learning) without having an ML background, but even they have issues when it comes to understanding concepts in optimization, convergence, and model capacity that have a huge bearing on how their models perform.

* https://en.wikipedia.org/wiki/Machine_learning

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

jadedhacker · 2018-07-31 · Original thread
For what it's worth, I feel like we already have a version of that book by Peter Norvig:

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

mindcrime · 2017-10-29 · Original thread
What? No. Why in the world do people even ask this kind of question? To a first approximation, the answer to any "is it too late to get started with ...?" question is always "no".

> If no, what are the great resources for starters?

The videos / slides / assignments from here:

http://ai.berkeley.edu/home.html

This class:

https://www.coursera.org/learn/machine-learning

This class:

https://www.udacity.com/course/intro-to-machine-learning--ud...

This book:

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

This book:

https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-T...

This book:

https://www.amazon.com/Introduction-Machine-Learning-Python-...

These books:

http://greenteapress.com/thinkstats/thinkstats.pdf

http://www.greenteapress.com/thinkbayes/thinkbayes.pdf

This book:

https://www.amazon.com/Machine-Learning-Hackers-Studies-Algo...

This book:

https://www.amazon.com/Thoughtful-Machine-Learning-Test-Driv...

These subreddits:

http://artificial.reddit.com

http://machinelearning.reddit.com

http://semanticweb.reddit.com

These journals:

http://www.jmlr.org

http://www.jair.org

This site:

http://arxiv.org/corr/home/

> Any tips before I get this journey going?

Depending on your maths background, you may need to refresh some math skills, or learn some new ones. The basic maths you need includes calculus (including multi-variable calc / partial derivatives), probability / statistics, and linear algebra. For a much deeper discussion of this topic, see this recent HN thread:

https://news.ycombinator.com/item?id=15116379

Luckily there are tons of free resources available online for learning various maths topics. Khan Academy isn't a bad place to start if you need that. There are also tons of good videos on Youtube from Gilbert Strang, Professor Leonard, 3blue1brown, etc.

Also, check out Kaggle.com. Doing Kaggle contests can be a good way to get your feet wet.

And the various Wikipedia pages on AI/ML topics can be pretty useful as well.

cr0sh · 2017-01-11 · Original thread
TL;DR - read my post's "tag" and take those courses!

---

As you can see in my "tag" on my post - most of what I have learned came from these courses:

1. AI Class / ML Class (Stanford-sponsored, Fall 2011)

2. Udacity CS373 (2012) - https://www.udacity.com/course/artificial-intelligence-for-r...

3. Udacity Self-Driving Car Engineer Nanodegree (currently taking) - https://www.udacity.com/course/self-driving-car-engineer-nan...

For the first two (AI and ML Class) - these two MOOCs kicked off the founding of Udacity and Coursera (respectively). The classes are also available from each:

Udacity: Intro to AI (What was "AI Class"):

https://www.udacity.com/course/intro-to-artificial-intellige...

Coursera: Machine Learning (What was "ML Class"):

https://www.coursera.org/learn/machine-learning

Now - a few notes: For any of these, you'll want a good understanding of linear algebra (mainly matrices/vectors and the math to manipulate them), stats and probabilities, and to a lesser extent, calculus (basic info on derivatives). Khan Academy or other sources can get you there (I think Coursera and Udacity have courses for these, too - plus there are a ton of other MOOCs, plus MIT's OpenCourseWare).

Also - and this is something I haven't noted before - the terms "Artificial Intelligence" and "Machine Learning" don't necessarily mean the same thing. Much of what gets the "AI" label today revolves around artificial neural networks and deep learning - which are a subset of machine learning. Machine learning, though, also encompasses standard "algorithmic" learning techniques, like logistic and linear regression.

The reason neural networks are a subset of ML is that a trained neural network ultimately implements a form of logistic regression (categorization, true/false, etc.) or linear regression (a continuous range), depending on how the network is set up and trained. The power of a neural network comes from not having to find all of the dependencies (iow, the "function") yourself; instead the network learns them from the data. It ends up being a "black box" algorithm, but it can work with datasets that are much larger and more complex than what the algorithmic approaches allow for (that said, the algorithmic approaches are useful, in that they use much less processing power and are easier to understand - no use attempting to drive a tack with a sledgehammer).

With that in mind, the sequence to learn this stuff would probably be:

1. Make sure you understand your basics: Linear Algebra, stats and probabilities, and derivatives

2. Take a course or read a book on basic machine learning techniques (linear regression, logistic regression, gradient descent, etc).

3. Delve into simple artificial neural networks (which may be part of the machine learning curriculum): understand what feed-forward and back-prop are, how a simple network can learn logic (XOR, AND, etc.), and how a simple network can answer "yes/no" and/or categorical questions (the basic MNIST dataset); a tiny XOR sketch follows this list. Understand how they "learn" the various regression algorithms.

4. Jump into artificial intelligence and deep learning - implement a simple neural network library, learn TensorFlow and Keras, convolutional networks, and so forth...
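As a sketch of step 3, here's a tiny feed-forward network learning XOR with plain numpy; the hidden-layer size, learning rate, and iteration count are arbitrary choices, and back-prop is written out by hand rather than via a framework:

```python
# A minimal 2-4-1 network learning XOR - a sketch for building intuition,
# not production code. It may need more iterations or another seed to converge.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # 4 hidden -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # feed-forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # back-propagation of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);  b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```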

Now - regarding self-driving vehicles - they necessarily use all of the above, and more, including more than a bit of "mechanical" techniques: use OpenCV or another machine-vision library to pick out details of the road and other objects, which can then be processed by a deep-learning CNN. For example: have a system that picks out "road sign" objects from a camera, then categorizes them to "read" them, and uses that information to make decisions about how to drive the car (come to a stop, or keep to a set speed). In essence, you've just built a portion of Tesla's driver-assist system (the first project we did in the course I'm taking now was to "follow lane lines" - the main ingredient behind "lane assist" technology - using nothing but OpenCV and Python). You'll also likely learn about Kalman filters, pathfinding algorithms, sensor fusion, SLAM, PID controllers, etc.
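For a flavor of that lane-lines exercise, here's a minimal Canny-plus-Hough sketch with OpenCV. The input file "road.jpg" is a hypothetical placeholder, and the actual project adds color masking, region-of-interest cropping, and line averaging on top of this:

```python
# A minimal lane-line sketch: edge detection + probabilistic Hough transform.
import cv2
import numpy as np

img = cv2.imread("road.jpg")                      # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)          # smooth before edge detection
edges = cv2.Canny(blur, 50, 150)                  # Canny edge map

# Find line segments among the edge pixels
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

for line in lines if lines is not None else []:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)  # draw detected segments

cv2.imwrite("road_lines.jpg", img)
```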

I can't really recommend any books to you, given my level of knowledge. I've read more than a few, but most of them would be considered "out of date". One that is still being used in university level courses is this:

http://aima.cs.berkeley.edu/

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

Note that it is a textbook, with textbook pricing...

Another one that I have heard is good for learning neural networks with is:

https://www.amazon.com/Make-Your-Own-Neural-Network/dp/15308...

There are tons of other resources online - the problem is separating the wheat from the chaff, because some of the stuff is outdated or no longer considered useful. There are many research papers out there that can be bewildering. Until you know which is which, take them all with a grain of salt - research papers and websites alike. There's also the problem of finding diamonds in the rough: for instance, LeNet was created in the 1990s, in the middle of an AI winter, and some of the stuff written at the time isn't considered useful today - yet LeNet is foundational to today's ML/AI practice.

Now - history: You would do well to understand the history of AI and ML, the debates, the arguments, etc. The foundational work comes from McCulloch and Pitts' concept of an artificial neuron, and where that led:

https://en.wikipedia.org/wiki/Artificial_neuron

Also - Alan Turing anticipated neural networks of a kind that wasn't seen until much later:

http://www.alanturing.net/turing_archive/pages/reference%20a...

...I don't know if he was aware of McCulloch and Pitts' work, which came prior, as they were coming at the problem from the physiological side of things; a classic case where interdisciplinary work might have benefited all (?).

You might want to also look into the philosophical side of things - theory of mind stuff, and some of the "greats" there (Minsky, Searle, etc); also look into the books written and edited by Douglas Hofstadter:

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

There's also the "lesser known" or "controversial" historical people:

* Hugo De Garis (CAM-Brain Machine)

* Igor Aleksander

* Donald Michie (MENACE)

...among others. It's interesting - De Garis was a very controversial figure, and most of his work - for whatever it is worth - has kinda been swept under the rug. He built a few computers that were FPGA-based hardware neural-network machines that used cellular-automata a-life to "evolve" neural networks. There were only a handful of these machines made; aesthetically, their designs were as "sexy" as the old Cray computers (seriously).

Donald Michie's MENACE - interestingly enough - was a "learning computer" made of matchboxes and beads. It essentially implemented a simple form of reinforcement learning that learned how to play (and win at) noughts and crosses (tic-tac-toe). All in a physically (by hand) manipulated "machine".

Then there is one guy who is "reviled" in the old-school AI community on the internet (take a look at some of the old comp.ai newsgroup archives, among others). His nom de plume is "Mentifex", and he wrote something called "MIND.Forth" (and translated it to a ton of other languages), which he claimed was a real learning system/program/whatever. His real name is Arthur T. Murray, and he is widely considered to be one of the earliest "cranks" on the internet:

http://www.nothingisreal.com/mentifex_faq.html

Heck - just by posting this I might be summoning him here! Seriously - this guy gets around.

Even so - I'm of the opinion that it might be useful for people to know about him, so they don't go too far down his rabbit-hole; at the same time, I have a small feeling that there might be a gem or two hidden inside his system or elsewhere. Maybe not, but I like to keep a somewhat open mind about these kinds of things and not just dismiss them out of hand (but I still keep in mind the opinions of those more learned and experienced than me).

EDIT: formatting

showerst · 2016-11-22 · Original thread
Introduction to Algorithms (CLRS) - probably the closest you'll get for Algorithms, next to Knuth. It's also updated relatively often to stay cutting edge

https://mitpress.mit.edu/books/introduction-algorithms

AI: A Modern Approach (Russell & Norvig) - for classic AI stuff, although nowadays it might fade a bit with all the deep learning advances.

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

While it's not strictly CS, Tufte's The Visual Display of Quantitative Information should probably be on every programmer's shelf.

https://www.amazon.com/Visual-Display-Quantitative-Informati...

todd8 · 2016-10-09 · Original thread
Depending on your level of programming ability, one algorithm a day, IMHO, is completely doable. A number of comments and suggestions say that one per day is an unrealistic goal (yes, maybe it is) but the idea of setting a goal and working through a list of algorithms is very reasonable.

If you are just learning programming, plan on taking your time with the algorithms but practice coding every day. Find a fun project to attempt that is within your level of skill.

If you are a strong programmer in one language, find a book of algorithms using that language (some of the suggestions here in these comments are excellent). I list some of the books I like at the end of this comment.

If you are an experienced programmer, one algorithm per day is roughly doable. Especially so because you are trying to learn one algorithm per day, not produce working, production-level code for each algorithm each day.

Some algorithms are really families of algorithms and can take more than a day of study; hash-based lookup tables come to mind. First there are the hash functions themselves. That would be day one. Next there are several alternatives for storing entries in the hash table, e.g. open addressing vs. chaining; days two and three. Then there are methods for handling collisions - linear probing, secondary hashing, etc.; that's day four. Finally there are important variations - perfect hashing, cuckoo hashing, robin hood hashing, and so forth; maybe another 5 days. Some languages are less appropriate for playing around and can make working with algorithms more difficult; instead of a couple of weeks this could easily take twice as long. After learning other methods of implementing fast lookups, it's time to come back to hashing and understand when it's appropriate, when alternatives are better, and how to combine methods into more sophisticated lookup schemes.
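As a taste of the "days two and three" material, here's a minimal separate-chaining table in Python - a sketch for playing around, not a replacement for the built-in dict (which itself uses open addressing):

```python
# A minimal hash table with separate chaining: collisions land in per-bucket lists.
class ChainedHashTable:
    def __init__(self, n_buckets=16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # hash function -> bucket index; a real table would also resize
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))   # new key (or collision): chain it

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

# Open addressing would instead probe other slots in one flat array.
t = ChainedHashTable()
t.put("a", 1); t.put("b", 2)
print(t.get("a"))  # 1
```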

I think you will be best served by modifying your goal a bit and saying that you will work on learning about algorithms every day and cover all of the material in a typical undergraduate course on the subject. It really is a fun branch of Computer Science.

A great starting point is Sedgewick's book/course, Algorithms [1]. For more depth and theory try [2], Cormen and Leiserson's excellent Introduction to Algorithms. Alternatively, the theory is also covered by another book by Sedgewick, An Introduction to the Analysis of Algorithms [3]. A classic reference that goes far beyond these other books is of course Knuth [4], suitable for serious students of Computer Science, less so as a book of recipes.

After these basics, there are books useful for special circumstances. If your goal is to be broadly and deeply familiar with Algorithms you will need to cover quite a bit of additional material.

Numerical methods -- Numerical Recipes 3rd Edition: The Art of Scientific Computing by Press, Teukolsky, Vetterling, and Flannery. I love this book. [5]

Randomized algorithms -- Randomized Algorithms by Motwani and Raghavan. [6], Probability and Computing: Randomized Algorithms and Probabilistic Analysis by Michael Mitzenmacher, [7]

Hard problems (like NP) -- Approximation Algorithms by Vazirani [8]. How to Solve It: Modern Heuristics by Michalewicz and Fogel. [9]

Data structures -- Advanced Data Structures by Brass. [10]

Functional programming -- Pearls of Functional Algorithm Design by Bird [11] and Purely Functional Data Structures by Okasaki [12].

Bit twiddling -- Hacker's Delight by Warren [13].

Distributed and parallel programming -- this material gets very hard so perhaps Distributed Algorithms by Lynch [14].

Machine learning and AI related algorithms -- Bishop's Pattern Recognition and Machine Learning [15] and Russell & Norvig's Artificial Intelligence: A Modern Approach [16]

These books will cover most of what a Ph.D. in CS might be expected to understand about algorithms. It will take years of study to work though all of them. After that, you will be reading about algorithms in journal publications (ACM and IEEE memberships are useful). For example, a recent, practical, and important development in hashing methods is called cuckoo hashing, and I don't believe that it appears in any of the books I've listed.

[1] Sedgewick, Algorithms, 2015. https://www.amazon.com/Algorithms-Fourth-Deluxe-24-Part-Lect...

[2] Cormen, et al., Introduction to Algorithms, 2009. https://www.amazon.com/s/ref=nb_sb_ss_i_1_15?url=search-alia...

[3] Sedgewick, An Introduction to the Analysis of Algorithms, 2013. https://www.amazon.com/Introduction-Analysis-Algorithms-2nd/...

[4] Knuth, The Art of Computer Programming, 2011. https://www.amazon.com/Computer-Programming-Volumes-1-4A-Box...

[5] Press, Teukolsky, Vetterling, and Flannery, Numerical Recipes 3rd Edition: The Art of Scientific Computing, 2007. https://www.amazon.com/Numerical-Recipes-3rd-Scientific-Comp...

[6] https://www.amazon.com/Randomized-Algorithms-Rajeev-Motwani/...

[7] https://www.amazon.com/gp/product/0521835402/ref=pd_sim_14_2...

[8] Vazirani, https://www.amazon.com/Approximation-Algorithms-Vijay-V-Vazi...

[9] Michalewicz and Fogel, https://www.amazon.com/How-Solve-Heuristics-Zbigniew-Michale...

[10] Brass, https://www.amazon.com/Advanced-Data-Structures-Peter-Brass/...

[11] Bird, https://www.amazon.com/Pearls-Functional-Algorithm-Design-Ri...

[12] Okasaki, https://www.amazon.com/Purely-Functional-Structures-Chris-Ok...

[13] Warren, https://www.amazon.com/Hackers-Delight-2nd-Henry-Warren/dp/0...

[14] Lynch, https://www.amazon.com/Distributed-Algorithms-Kaufmann-Manag...

[15] Bishop, https://www.amazon.com/Pattern-Recognition-Learning-Informat...

[16] Russell & Norvig, https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

nlawalker · 2016-01-19 · Original thread
To anyone who's never done the Pacman projects: I highly recommend them[1]. They are an absolute blast and incredibly satisfying. Plus, if you don't know Python, they are a great way to learn.

The course I took used the Norvig text[2] as a textbook, which I also recommend.

[1] http://ai.berkeley.edu/project_overview.html. See the "Lectures" link at the top for all the course videos/slides.

[2] http://www.amazon.com/Artificial-Intelligence-Modern-Approac... Note that the poor reviews center on the price, the digital/Kindle edition, and the fact that the new editions don't differ greatly from the older ones. If you've never read it and you have the $$, a hardbound copy makes a great learning and reference text, and it's the kind of content that's not going to go out of date.

selleck · 2015-07-17 · Original thread
Peter Norvig's Artificial Intelligence:

http://www.amazon.com/Artificial-Intelligence-Modern-Approac...

Has plenty of examples in Python. You can also look at different Udacity courses. They have a couple dealing with ML in Python.

austinl · 2015-03-21 · Original thread
I recently implemented depth-first iterative deepening in an Artificial Intelligence class project to solve the classic missionaries and cannibals problem [0]. The professor remarked that while there have been some optimizations over the last few decades, using them can be quite messy – to the point where the combination of A* and iterative deepening is still commonly used in the field.

I'm fairly certain that the claim in the introduction – "Unfortunately, current AI texts either fail to mention this algorithm or refer to it only in the context of two-person game searches" – is no longer true.

From my current textbook (Artificial Intelligence: A Modern Approach [1]):

"Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit. It does this by gradually increasing the limit — first 0, then 1, then 2, and so on — until a goal is found... In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known."

[0] http://en.wikipedia.org/wiki/Missionaries_and_cannibals_prob...

[1] http://www.amazon.com/Artificial-Intelligence-Modern-Approac...

Jach · 2012-05-25 · Original thread
Combinatorial game theory would be the wrong tool for this, I believe, since this isn't a strictly finite turn-based combinatorial game. (And in my opinion combinatorial game theory is more useful for analysis and proof techniques than for creating AI algorithms, but I've only had one class on it.) The "fruitwalk-ab" bot mentioned in the blog uses minimax (with the alpha-beta optimization, of course), which is the bread-and-butter algorithm for these kinds of games, and I expected it to be in 1st place. Sure enough, it is at the moment. (Edit2: No longer.)
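For reference, the core of minimax with alpha-beta pruning is small. This is a generic sketch, not the contest bot; children() and evaluate() are stand-ins for the game-specific move generation and scoring:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:          # search horizon or terminal position
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:           # opponent would never allow this line
                break                   # beta cutoff: prune remaining children
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:           # alpha cutoff
                break
        return value
```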

In an intro AI course you'd learn about minimax and others, but skip paying any money and just read the basic algorithms here, and look up the wiki pages for even more examples: http://www.stanford.edu/~msirota/soco/blind.html (The introduction has some term definitions.)

Edit: Also, the obligatory plug for Artificial Intelligence: A Modern Approach http://www.amazon.com/Artificial-Intelligence-Modern-Approac...

fendrak · 2010-12-27 · Original thread
My Multiagent Systems class at UT Austin (taught by Peter Stone) discussed the Kiva system, along with many, many other topics pertaining to AI today.

A couple of videos about Kiva: http://www.raffaello.name/KivaSystems.html

A paper on Kiva: http://www.raffaello.name/Assets/Publications/CoordinatingHu...

http://www.cs.utexas.edu/~pstone/Courses/344Mfall10/assignme... has all of the readings for the class (with more on the 'resources' page). If you want to learn something about AI, it's certainly a good place to start!

If you're more into the algorithms side of AI, you should certainly read AI: A Modern Approach (http://www.amazon.com/Artificial-Intelligence-Modern-Approa...). It's a textbook, don't get me wrong, but it clearly explains many relevant algorithms in AI today with accompanying pseudocode and theory. If you've got a CS background, it's a great reference/learning tool. I bought mine for a class, and won't be returning it!
