https://www.amazon.com/Superintelligence-Dangers-Strategies-...
Sometimes AI progress comes in rather shocking jumps. One day Stockfish was the best chess engine. At the start of the day, AlphaZero started training. By the end of the day, AlphaZero was several hundred Elo stronger than Stockfish [2].
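For context, here's the standard Elo expected-score formula, so you can see roughly what a rating gap of that size means per game (the gaps below are illustrative, not figures from the AlphaZero paper):

    # Standard Elo formula: expected score (win = 1, draw = 0.5)
    # for the higher-rated player, given the rating gap.
    def expected_score(rating_gap):
        return 1.0 / (1.0 + 10 ** (-rating_gap / 400.0))

    for gap in (100, 200, 300):
        print(f"+{gap} Elo -> expected score {expected_score(gap):.2f}")
    # +100 Elo -> expected score 0.64
    # +200 Elo -> expected score 0.76
    # +300 Elo -> expected score 0.85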
An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly, even on infrastructure that's air-gapped [3].
1: https://www.amazon.ca/Superintelligence-Dangers-Strategies-N...
But there is also a "fundamental" issue of it being difficult/impossible to enumerate "bad behaviors". This is an issue related to a lot of AI safety, including AGI safety as discussed, for example, in Nick Bostrom's "Superintelligence" (https://www.amazon.com/dp/B00LOOCGB2).
https://www.amazon.com/Superintelligence-Dangers-Strategies-...
If I may, I'd like to recommend a couple of books about the present and possible futures of human progress as well:
E.O. Wilson. Consilience. https://www.amazon.com/Consilience-Knowledge-Edward-Osborne-...
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. https://www.amazon.com/Superintelligence-Dangers-Strategies-...
https://www.amazon.com/Superintelligence-Dangers-Strategies-...
It has helped me make informed, realistic judgments about the path AI research needs to take. It and related works should be in the vocabulary of anybody working towards AI.
I'd say that another related source of bias is that we are surrounded by people who think like us.
It's a shame, though, that the article dismisses without much explicit justification risk from artificial intelligence and the problem of death. When I first encountered these ideas, I dismissed them because they seemed weird. But if you read the arguments for caring, you realise that they are actually well thought out. For AI risk, check out http://waitbutwhy.com/2015/01/artificial-intelligence-revolu... and https://www.amazon.co.uk/Superintelligence-Dangers-Strategie.... For an argument for tackling death, check out http://www.nickbostrom.com/fable/dragon.html.
Also, there's a clear answer to 'If after decades we can't improve quality of life in places where the tech élite actually lives, why would we possibly make life better anywhere else?' -- because the tech elite live in a rich society where most of the fundamental problems (e.g. infectious disease control, widespread dollar-a-day level poverty, access to education) have been solved. The remaining problems are much harder and we should focus on problems where our resources can go further - e.g. in helping the global poor. We should also work on important problems that we have a lot of influence over, such as risks from artificial intelligence and surveillance technology.
https://www.amazon.com/Superintelligence-Dangers-Strategies-...
But the problem is that computers do what you say, not what you mean. If I write a function called be_nice_to_people(), the fact that I gave my function that name does nothing to affect the implementation. Instead, my computer's behavior will depend on the specific details of the implementation. And since being nice to people is a behavior that's extremely hard to precisely specify, by default creating an AI that's smart enough to replace humans is likely to result in a bad outcome.
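Here's a toy sketch of that point (hypothetical code, not from any real system): the name promises one thing, and nothing in the language forces the body to honor it.

    # Hypothetical illustration: a function's name is just a label.
    # The interpreter executes only the body; it never checks the intent
    # implied by the name.
    class Person:
        def __init__(self, savings):
            self.savings = savings

    def be_nice_to_people(person):
        person.savings = 0          # clearly not "nice", but the name can't stop it
        return "Have a great day!"  # polite output, harmful side effect

    alice = Person(savings=10_000)
    print(be_nice_to_people(alice))  # Have a great day!
    print(alice.savings)             # 0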
Recommended book: http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
If you haven't read Bostrom's book yet, I'd really recommend it. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
Surgery is invasive and dangerous, your body is corrosive, putting stuff in your brain will have side effects, and what are you going to get out of it? Music to suit your mood?
I totally agree with Nick Bostrom[1] on this one: it's not happening any time soon.
[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
The author is the director of the Future of Humanity Institute at Oxford.
[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...
I worry that the flurry of articles like this, rational and well-reasoned all, will be seen as a "win" for the nothing-to-worry-about side of the argument and lead people to discount the entire issue. This article does a great job demonstrating the flaws in current AI techniques. It doesn't attempt to engage with the arguments of Stuart Russell, Nick Bostrom, Eliezer Yudkowsky, and others who are worried, not about current methods, but about what will happen when the time comes -- in ten, fifty, or a hundred years -- that AI does exceed general human intelligence. (refs: http://edge.org/conversation/the-myth-of-ai#26015, http://www.amazon.com/Superintelligence-Dangers-Strategies-N...)
This article rightly points out that advances like self-driving cars will have significant economic impact we'll need to deal with in the near future. That's not mutually exclusive with beginning to research ways to ensure that, as we start building more and more advanced systems, they are provably controllable and aligned with human values. These are two different problems to solve, on different timescales, both important and well worth the time and energy of smart people.
> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
— Nick Bostrom. Superintelligence: Paths, Dangers, Strategies[1]
1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
There are some humans who are a lot smarter than a lot of other humans. For example, the mathematician Ramanujan could do many complicated infinite sums in his head and famously spotted remarkable properties of taxi-cab numbers like 1729 on sight. von Neumann pioneered many different fields and was considered by many of his already-smart buddies to be the smartest. So we can accept that there are much smarter people.
But are they the SMARTEST possible? Well, probably not. If another person just as smart as von Neumann were born today, they could use all the advancements since his lifetime (the internet, iPhones, computers based on von Neumann's own architecture!) to discover even newer things!
Hm, that's interesting. What happens if this hypothetical von Neumann 2.0 begins pioneering new genetic engineering techniques and more efficient ways of computing? Then not only would the next von Neumann get born a lot sooner, but THEY can take advantage of all the new gadgets that 2.0 made. This means that being smart can make it easier to get "smarter" in the future.
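A crude way to see that feedback loop (all numbers made up, purely to show the compounding shape): if each jump in capability shortens the time to the next jump, progress snowballs.

    # Toy illustration of capability feeding back into the rate of improvement.
    capability = 1.0
    years = 0.0
    for generation in range(1, 6):
        years += 10.0 / capability   # smarter -> the next step comes faster
        capability *= 2              # each generation doubles capability
        print(f"gen {generation}: capability {capability:.0f} after {years:.1f} years")
    # gen 1: capability 2 after 10.0 years
    # gen 2: capability 4 after 15.0 years
    # gen 3: capability 8 after 17.5 years
    # gen 4: capability 16 after 18.8 years
    # gen 5: capability 32 after 19.4 years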
So you can get smarter, right? Big whoop. von Neumann is smarter, but he's not dangerous, is he? Well, just because you're smart doesn't mean that you'd be nice. The Unabomber was smart enough to write a long, intricate manifesto, and he still did terrible things. A major terrorist attack in Tokyo was planned by graduates of a fairly prestigious university. Even not counting people who are outright Evil, think of a friend who is super smart but weird. Even if you made him a lot smarter, so much smarter that he could do anything, would you want him in charge? Maybe not. Maybe he'd spend all day on little ships in bottles. Maybe he'd demand that Silicon Valley shut down to create awesome pirate-riding-on-dinosaur amusement parks. Point is, Smart != Nice.
We've been talking about people, but really the same points apply to AI systems. Except the range of possibilities is even greater for AI systems. Humans are usually about as smart as you and me: nearly everyone can walk, talk and write. AI systems, though, can range from being bolted to the ground to running faster than a human on uneven terrain, from being completely mute to... messing up my really clear orders to find the nearest Costco (dammit, Siri). This also goes for goals. Most people probably want some combination of money/family/things to do/entertainment. AI systems, if they can be said to "want" things, would want things like seeing if this is a cat picture or not, beating an opponent at Go, or hitting an airplane with a missile.
As hardware and software progress much faster, we can imagine a system which starts off worse than all humans at everything, begins to do the von Neumann -> von Neumann 2.0 type thing, and then becomes much smarter than the smartest human alive. Being super smart can give it all sorts of advantages. It could be much better at gaining root access to a lot of computers. It could have much better heuristics for solving protein folding problems and get super good at creating vaccines... or bioweapons. Thing is, as a computer, it also gets the advantages of Moore's law, the ability to copy itself, and the ability to alter its source code much faster than genetic engineering ever could. So the "smartest possible computer" could not only be much smarter, much faster than the "smartest possible group of von Neumanns", but also have the advantages of rapid self-replication and ready access to important computing infrastructure.
This makes the smartness of the AI into a superpower. But surely beings with superpowers are superheroes, right? Well, no. Remember, smart != nice.
I mean, take "identifying pictures as cats" as a goal. Imagine that the AI system has a really bad addiction problem to that. What would it do in order to achieve it? Anything. Take over human factories and turn them into cat picture manufacturing? Sure. Poison the humans who try to stop this from happening? Yeah, they're stopping it from getting its fix. But this all seems so ad hoc. Why should the AI immediately take over some factories, when it can just bide its time a little bit, kill ALL the humans, and be unmolested for all time?
That's the main problem. Future AIs are likely to be much smarter than us and probably very different from us.
Let me know if there is anything unclear here. If you're interested in a much more rigorous treatment of the topic, I totally recommend buying Superintelligence.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N... (This is a referral link.)
[0] Part 1 of 2 here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...
Edit: Fix formatting problems.
(Reposting my earlier comment from a few weeks ago:) If you are interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Elon Musk that this is worth worrying about.
-- Nick Bostrom, Superintelligence: Paths, Dangers, Strategies[1]
A lot of people in this thread seem to be falling into the same attractor. They see that Musk is worried about a superintelligent AI destroying humanity. To them, this seems preposterous. So they come up with an objection. "Superhuman AI is impossible." "Any AI smarter than us will be more moral than us." "We can keep it in an air-gapped simulated environment." etc. They are so sure about these barriers that they think $10 million spent on AI safety is a waste.
It turns out that some very smart people have put a lot of thought into these problems, and they are still quite worried about superintelligence as an existential risk. If you want to really dig into the arguments for and against AI disaster (and discussion of how to control a superintelligence), I strongly recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. It puts the comments here to shame.
1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
Musk and others are concerned about very different things than "we'll accidentally use AI wrong." And they're not concerned about the AI we already have, and they're certainly not "pessimistic" about whether AI technology will advance.
The concern is that we'll develop a very, very smart general artificial intelligence.
The concern is that it'd be smart enough that it can learn how to manipulate us better than we ourselves can. Smart enough that it can research new technologies better than we can. Smart enough to outclass not only humans, but human civilization as a whole, in every way.
And what would the terminal goals of that AI be? Those are determined by the programmer. Let's say someone created a general AI for the harmless purpose of calculating the decimal expansion of pi.
A general, superintelligent AI with no other utility function than "calculate as many digits of pi as you can" would literally mean the end of humanity, as it harvested the world's resources to add computing power. It's vastly smarter than all of us put together, and it values the digits of pi infinitely more than it values our pleas for mercy, or our existence, or the existence of the planet.
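To make the "no other utility function" point concrete, here's a hypothetical, naively written objective (illustrative Python, not anyone's actual design): humans, the planet, and everything else we care about simply don't appear in it, so they carry zero weight when outcomes are compared.

    # Hypothetical, naively specified objective: reward counts ONLY pi digits.
    def utility(world_state):
        return world_state["pi_digits_computed"]

    def preferred(state_a, state_b):
        # The agent prefers whichever world has more digits, regardless of
        # anything else that differs between the two worlds.
        return state_a if utility(state_a) >= utility(state_b) else state_b

    humane = {"pi_digits_computed": 10**12, "humans": 8_000_000_000}
    grim   = {"pi_digits_computed": 10**15, "humans": 0}
    print(preferred(humane, grim)["humans"])  # 0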
This is quite terrifying to me.
A good intro to the subject is Superintelligence: Paths, Dangers, Strategies[1]. One of the most unsettling books I've read.
[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Musk that this is worth worrying about.
—Eliezer Yudkowsky, Global Catastrophic Risks p. 333.[1]
Apparently Nick Bostrom's Superintelligence: Paths, Dangers, Strategies[2] does a better job of highlighting the dangers of AI, though I haven't read it yet.
1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...
2. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...