If I may, I'd like to recommend a couple of books about the present and possible futures of human progress as well:
E.O. Wilson. Consilience. https://www.amazon.com/Consilience-Knowledge-Edward-Osborne-...
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. https://www.amazon.com/Superintelligence-Dangers-Strategies-...
Surgery is invasive, dangerous, your body is corrosive, putting stuff in your brain will have side-effects and what are you going to get out of it? Music to suit your mood?
I totally agree with Nick Bostrom[1] on this one, it's not happening any time soon.
[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
I worry that the flurry of articles like this, rational and well-reasoned all, will be seen as a "win" for the nothing-to-worry-about side of the argument and lead people to discount the entire issue. This article does a great job demonstrating the flaws in current AI techniques. It doesn't attempt to engage with the arguments of Stuart Russell, Nick Bostrom, Eliezer Yudkowsky, and others who are worried, not about current methods, but about what will happen when the time comes -- in ten, fifty, or a hundred years -- that AI does exceed general human intelligence. (refs: http://edge.org/conversation/the-myth-of-ai#26015, http://www.amazon.com/Superintelligence-Dangers-Strategies-N...)
This article rightly points out that advances like self-driving cars will have significant economic impact we'll need to deal with in the near future. That's not mutually exclusive with beginning to research ways to ensure that, as we start building more and more advanced systems, they are provably controllable and aligned with human values. These are two different problems to solve, on different timescales, both important and well worth the time and energy of smart people.
There are some humans who are a lot smarter than a lot of other humans. For example, the mathematician Ramanujan could do many complicated infinite sums in his head and instantly factor taxi-cab license plates. von Neumann pioneered many different fields and was considered by many of his already-smart buddies to be the smartest. So we can accept that there are much smarter people.
But are they the SMARTEST possible? Well, probably not. If another person just as smart as von Neumann was born, the additional advancements since his lifetime (the internet, iphones, computer based off of von Neumann's architechture!) can use all of these new inventions to discover even newer things!
Hm, that's interesting. What happens if this hypothetical von Neumann 2.0 begins pioneering a field of genetic engineering techniques and new ways of efficient computation? Then, not only would the next von Neumann get born a lot sooner, but THEY can take advantage of all the new gadgets that 2.0 made. This means that it's possible that being smart can make it easier to be "smarter" in the future.
So you can get smarter right? Big whoop. von Neumann is smarter, but he's not dangerous is he? Well, just because you're smart doesn't mean that you'd be nice. The Unabomber wrote a very complicated and long manifesto before doing bad things. A major terrorist attack in Tokyo was planned by graduates of a fairly prestigious university. Even not counting people who are outright Evil, think of a friend who is super smart but weird. Even if you made him a lot smarter, where he can do anything, would you want him in charge? Maybe not. Maybe he'd spend all day on little boats in bottles. Maybe he'd demand that silicon valley shut down to create awesome pirate riding on dinosaur amusement parks. Point is, Smart != Nice.
We've been talking about people, but really the same points can be applied to AI systems. Except the range of possibilities is even greater for AI systems. Humans are usually about as smart as you and I, nearly everyone can walk, talk and write. AI systems though, can range from being bolted to the ground, to running faster than a human on uneven terrain, can be completely mute to... messing up my really clear orders to find the nearest Costco (Dammit Siri). This also goes for goals. Most people probably want some combination of money/family/things to do/entertainment. AI systems, if they can be said to "want" things would want things like seeing if this is a cat picture or not, beating an opponent at Go or hitting an airplane with a missile.
As hardware and software progresses much faster, we can think of a system which could start off worse than all humans at everything begin to do the von Neumann->von Neumann 2.0 type thing, then become much smarter than the smartest human alive. Being super smart can give it all sorts of advantages. It could be much better at gaining root access to a lot of computers. It could have much better heuristics for solving protein folding problems and get super good at creating vaccines... or bioweapons. Thing is, as a computer, it also gets the advantages of Moore's law, the ability to copy itself and the ability to alter its source code much faster than genetic engineering will. So the "smartest possible computer" could not only be much smarter, much faster than the "smartest possible group of von Neumanns", but also have the advantages of rapid self replication and ready access to important computing infrastructure.
This makes the smartness of the AI into a superpower. But surely beings with superpowers are superheros right? Well, no. Remember, smart != nice.
I mean, take "identifying pictures as cats" as a goal. Imagine that the AI system has a really bad addiction problem to that. What would it do in order to find achieve it? Anything. Take over human factories and turn them into cat picture manufacturing? Sure. Poison the humans who try to stop this from happening? Yeah, they're stopping it from getting its fix. But this all seems so ad hoc why should the AI immediately take over some factories to do that, when it can just bide its time a little bit, kill ALL the humans and be unmolested for all time?
That's the main problem. Future AIs are likely to be much smarter than us and probably much more different than us.
Let me know if there is anything unclear here. If you're interested in a much more rigorous treatment of the topic, I totally recommend buying Superintelligence.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N... (This is a referral link.)
[0] Part 1 of 2 here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...
Edit: Fix formatting problems.
(Reposting my earlier comment from a few weeks ago:) If you are interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Elon Musk that this is worth worrying about.
Part of his reasoning goes like this. Any super smart goal oriented entity is going to desire/construct its own preservation as a sub-goal of its main goal (even if that goal is something as mundane as making paperclips) because if it ceases to be its goal will be put into jeopardy. To that end this super smart entity will then figure out ways to disable all attempts to constrain its non-existence. A lot of conclusions can be logically reached when one posits goal directed behaviour which we can assume any super intelligent agent is going to have. He talks about `goal content integrity'.
Bostrom argues for an indirect normative[4] approach because there is no way we can program or direct something that is going to be a lot smarter than ourselves and that won't necessarily share our values and that has any degree of goal-oriented behaviour, motivation and autonomous learning. Spoiler alert: Essentially I think he argues that we have to prime it to "always do what _you_ figure out is morally best" but I could be wrong.
There are also (global) sociological recommendations because, humans have been known to fuck things up.
[1] http://www.partiallyexaminedlife.com/2015/01/06/ep108-nick-b...
[2] http://www.nickbostrom.com/
[3] http://www.amazon.com/gp/product/0199678111/ref=as_li_tl?ie=...
[4] https://ordinaryideas.wordpress.com/2012/04/21/indirect-norm...
Musk and others are concerned about very different things than "we'll accidentally use AI wrong." And they're not concerned about the AI we already have, and they're certainly not "pessimistic" about whether AI technology will advance.
The concern is that we'll develop a very, very smart general artificial intelligence.
The concern is that it'd be smart enough that it can learn how to manipulate us better than we ourselves can. Smart enough that it can research new technologies better than we can. Smart enough to outclass not only humans, but human civilization as a whole, in every way.
And what would the terminal goals of that AI be? Those are determined by the programmer. Let's say someone created a general AI for the harmless purpose of calculating the decimal expansion of pi.
A general, superintelligent AI with no other utility function than "calculate as many digits of pi as you can" would literally mean the end of humanity, as it harvested the world's resources to add computing power. It's vastly smarter than all of us put together, and it values the digits of pi infinitely more than it values our pleas for mercy, or our existence, or the existence of the planet.
This is quite terrifying to me.
A good intro to the subject is Superintelligence: Paths, Dangers, Strategies[1]. One of the most unsettling books I've read.
[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Musk that this is worth worrying about.