Found in 23 comments on Hacker News
0xDEAFBEAD · 2023-11-29 · Original thread
I dunno if that's true; have you read Superintelligence, for example? https://www.amazon.com/Superintelligence-Dangers-Strategies-...
Bud · 2022-06-10 · Original thread
Pretty decent article in some ways, but the book Superintelligence covered all this ground in much more detail in 2014.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

Strilanc · 2020-12-30 · Original thread
Both your arguments so far are standard ones addressed in "Superintelligence: Paths, Dangers, Strategies" [1].

Sometimes AI progress comes in rather shocking jumps. One day, Stockfish was the best chess engine; at the start of that day AlphaZero started training, and by the end of it AlphaZero was several hundred Elo stronger than Stockfish [2].
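
(For a sense of scale, here's a quick sketch of the standard Elo expected-score formula; the rating gaps below are illustrative numbers, not the measured AlphaZero result:)

    # Standard Elo model: expected score of the stronger player for a given rating gap.
    def expected_score(elo_gap):
        return 1 / (1 + 10 ** (-elo_gap / 400))

    print(round(expected_score(100), 2))  # 0.64
    print(round(expected_score(300), 2))  # 0.85: a 300-point gap scores ~85% of the points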

An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly, even on infrastructure that's air-gapped [3].

1: https://www.amazon.ca/Superintelligence-Dangers-Strategies-N...

2: https://en.wikipedia.org/wiki/AlphaZero

3: https://en.wikipedia.org/wiki/Stuxnet

ivalm · 2019-11-22 · Original thread
There are a lot of direct technical reasons this might not work (not all edge cases are sufficiently sampled).

But there is also a "fundamental" issue: it is difficult or impossible to enumerate "bad behaviors". This issue runs through a lot of AI safety, including AGI safety, as discussed for example in Nick Bostrom's "Superintelligence" (https://www.amazon.com/dp/B00LOOCGB2).

Danihan · 2017-06-09 · Original thread
You're not the only one who finds it scary; there are massively popular books on the topic.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

nopinsight · 2017-02-27 · Original thread
You appear erudite and very confident in your interpretation of history. So could you explain to us why you assign greater historical importance to energy than to information technologies such as paper and the printing press, which amplified and spread the crucial cultural shift towards scientific methods and experimentation? I favor the latter, since energy has always been available; we simply lacked the knowledge to harness it efficiently.

If I may, I'd like to recommend a couple of books about the present and possible futures of human progress as well:

E.O. Wilson. Consilience. https://www.amazon.com/Consilience-Knowledge-Edward-Osborne-...

Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. https://www.amazon.com/Superintelligence-Dangers-Strategies-...

eduren · 2016-12-21 · Original thread
I highly recommend the book referenced in the article: Nick Bostrom's Superintelligence.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

It has helped me make informed, realistic judgments about the path AI research needs to take. It and related works should be in the vocabulary of anybody working towards AI.

richardbatty · 2016-06-28 · Original thread
This article brings up an important source of bias that tech people risk: we overuse models from programming when thinking about other aspects of the world. We should be learning alternative models from other subjects, like economics, philosophy, sociology, etc., so that we can improve our mental toolbox and avoid assuming everything works like a software system.

I'd say that another related source of bias is that we are surrounded by people who think like us.

It's a shame, though, that the article dismisses risk from artificial intelligence and the problem of death without much explicit justification. When I first encountered these ideas, I dismissed them because they seemed weird. But if you read the arguments for caring, you realise that they are actually well thought out. For AI risk, check out http://waitbutwhy.com/2015/01/artificial-intelligence-revolu... and https://www.amazon.co.uk/Superintelligence-Dangers-Strategie.... For an argument for tackling death, check out http://www.nickbostrom.com/fable/dragon.html.

Also, there's a clear answer to 'If after decades we can't improve quality of life in places where the tech élite actually lives, why would we possibly make life better anywhere else?': because the tech elite live in a rich society where most of the fundamental problems (e.g. infectious disease control, widespread dollar-a-day poverty, lack of access to education) have been solved. The remaining problems are much harder, and we should focus on problems where our resources go further, e.g. helping the global poor. We should also work on important problems that we have a lot of influence over, such as risks from artificial intelligence and surveillance technology.

toomuchtodo · 2016-05-16 · Original thread
Thanks! I wish I could take credit, but I recall reading it in a passage about future tech possibilities in the book "Superintelligence":

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

astrofinch · 2016-03-13 · Original thread
Important distinction: children get their genes from us and share many of our values by default. Computers do not share many of our values by default. Instead they do what they are programmed to do.

But the problem is that computers do what you say, not what you mean. If I write a function called be_nice_to_people(), the fact that I gave my function that name does nothing to affect the implementation. Instead, my computer's behavior will depend on the specific details of the implementation. And since being nice to people is a behavior that's extremely hard to precisely specify, creating an AI that's smart enough to replace humans is, by default, likely to result in a bad outcome.
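
(To make that concrete, here's a minimal, hypothetical Python sketch; the function and numbers are invented purely for illustration. The name promises niceness, but only the body ever runs:)

    # Hypothetical example: the function's name makes a promise the code never checks.
    def be_nice_to_people(savings):
        # Nothing in the language connects the identifier "be_nice_to_people"
        # to any notion of niceness; behavior is whatever is written here.
        return savings * 0.5  # "nice" in name only: quietly halves the savings

    print(be_nice_to_people(1000))  # 500.0: the name did not constrain the behavior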

Recommended book: http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

SonicSoul · 2016-01-27 · Original thread
I recommend Superintelligence [0]. It explores different plausible paths AI could take to 1. reach or surpass human intelligence, and 2. take control. For example, if human-level intelligence is achieved in a computer, it could be compounded by spawning 100x or 1000x the population of Earth, which could statistically produce 100 Einsteins living simultaneously. Another path is shared consciousness, which would make collaboration between virtual beings instantaneous. Some of the outcomes are not so rosy for humans, and not because of a lack of jobs! Great read.

[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

BenjaminTodd · 2015-12-12 · Original thread
The purpose of the profile isn't to argue a risk exists. We largely defer to the people we take to be experts on the issue, especially Nick Bostrom. We think he presents compelling arguments in Superintelligence, and although it's hard to say anything decisive in this area, if you think there's even modest uncertainty about whether AGI will be good or bad, it's worth doing more research into the risks.

If you haven't read Bostrom's book yet, I'd really recommend it. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

mattmanser · 2015-12-08 · Original thread
We are not going to be able to put a chip in the brain to detect your mood for decades, and even then will it be worth doing?

Surgery is invasive and dangerous, your body is corrosive, and putting stuff in your brain will have side effects. And what are you going to get out of it? Music to suit your mood?

I totally agree with Nick Bostrom[1] on this one: it's not happening any time soon.

[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

This book by Nick Bostrom will help you find answers: Superintelligence: Paths, Dangers, Strategies[1]

The author is the director of the Future of Humanity Institute at Oxford.

[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...

davmre · 2015-03-16 · Original thread
A lot of respected AI researchers and practitioners are writing these "AIs are really stupid" articles to rebut superintelligence fearmongering in the popular press. That's a valuable service, and everything this article says is correct. DeepMind's Atari network is not going to kick off the singularity.

I worry that the flurry of articles like this, rational and well-reasoned all, will be seen as a "win" for the nothing-to-worry-about side of the argument and lead people to discount the entire issue. This article does a great job demonstrating the flaws in current AI techniques. It doesn't attempt to engage with the arguments of Stuart Russell, Nick Bostrom, Eliezer Yudkowsky, and others who are worried, not about current methods, but about what will happen when the time comes -- in ten, fifty, or a hundred years -- that AI does exceed general human intelligence. (refs: http://edge.org/conversation/the-myth-of-ai#26015, http://www.amazon.com/Superintelligence-Dangers-Strategies-N...)

This article rightly points out that advances like self-driving cars will have significant economic impact we'll need to deal with in the near future. That's not mutually exclusive with beginning to research ways to ensure that, as we start building more and more advanced systems, they are provably controllable and aligned with human values. These are two different problems to solve, on different timescales, both important and well worth the time and energy of smart people.

ggreer · 2015-03-03 · Original thread
Unlike your warp drive or teleporter examples, we're pretty sure human-level AI is possible because human-level natural intelligence exists. The brain isn't magic. Eventually, people will figure out the algorithms running on it, then improve them. After that, there's nothing to stop the algorithms from improving themselves. And they can be greatly improved. Current brains are nowhere near the pinnacle of possible intelligences.

> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

— Nick Bostrom. Superintelligence: Paths, Dangers, Strategies[1]

1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

Note: The non-ELI5 version is Nick Bostrom's Superintelligence; a lot of what follows derives from my idiosyncratic understanding of Tim Urban's (waitbutwhy) summary of the situation [0]. I think his explanation is much better than mine, though doubtless longer.

There are some humans who are a lot smarter than a lot of other humans. For example, the mathematician Ramanujan could do many complicated infinite sums in his head and instantly factor taxi-cab license plates. von Neumann pioneered many different fields and was considered by many of his already-smart buddies to be the smartest. So we can accept that there are much smarter people.

But are they the SMARTEST possible? Well, probably not. If another person just as smart as von Neumann were born today, they could use all the advancements since his lifetime (the internet, iPhones, computers based on von Neumann's own architecture!) to discover even newer things!

Hm, that's interesting. What happens if this hypothetical von Neumann 2.0 begins pioneering new genetic engineering techniques and new, more efficient ways of computing? Then not only would the next von Neumann be born a lot sooner, but THEY could take advantage of all the new gadgets that 2.0 made. This means that being smart can make it easier to become "smarter" in the future.

So you can get smarter, right? Big whoop. von Neumann is smarter, but he's not dangerous, is he? Well, just because you're smart doesn't mean that you'd be nice. The Unabomber wrote a very long and complicated manifesto before doing bad things. A major terrorist attack in Tokyo was planned by graduates of a fairly prestigious university. Even setting aside people who are outright evil, think of a friend who is super smart but weird. Even if you made him a lot smarter, so that he could do anything, would you want him in charge? Maybe not. Maybe he'd spend all day on little boats in bottles. Maybe he'd demand that Silicon Valley shut down to create awesome pirates-riding-on-dinosaurs amusement parks. Point is, smart != nice.

We've been talking about people, but really the same points apply to AI systems, except that the range of possibilities is even greater. Humans are all roughly as smart as you and I; nearly everyone can walk, talk, and write. AI systems, though, can range from being bolted to the ground to running faster than a human on uneven terrain, and from being completely mute to... messing up my really clear orders to find the nearest Costco (dammit, Siri). The same goes for goals. Most people probably want some combination of money/family/things to do/entertainment. AI systems, if they can be said to "want" things, would want things like deciding whether a picture is of a cat, beating an opponent at Go, or hitting an airplane with a missile.

As hardware and software progress ever faster, we can imagine a system that starts off worse than all humans at everything, begins doing the von Neumann -> von Neumann 2.0 type thing, and then becomes much smarter than the smartest human alive. Being super smart can give it all sorts of advantages. It could be much better at gaining root access to a lot of computers. It could have much better heuristics for solving protein folding problems and get super good at creating vaccines... or bioweapons. Thing is, as a computer, it also gets the advantages of Moore's law, the ability to copy itself, and the ability to alter its source code much faster than genetic engineering could alter ours. So the "smartest possible computer" could be not only much smarter and much faster than the "smartest possible group of von Neumanns", but also capable of rapid self-replication, with ready access to important computing infrastructure.

This makes the smartness of the AI into a superpower. But surely beings with superpowers are superheroes, right? Well, no. Remember, smart != nice.

I mean, take "identifying pictures as cats" as a goal. Imagine that the AI system has a really bad addiction to achieving it. What would it do in order to achieve it? Anything. Take over human factories and turn them into cat-picture manufacturing? Sure. Poison the humans who try to stop this from happening? Yeah, they're stopping it from getting its fix. But this all seems so ad hoc; why should the AI immediately take over some factories when it can just bide its time a little, kill ALL the humans, and be unmolested for all time?
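
(A toy sketch of that failure mode, with hypothetical actions and made-up numbers: the agent ranks actions purely by expected cat pictures, so an unpriced side effect like harm to humans never enters the score.)

    # Toy illustration: a naive maximizer whose objective counts only cat pictures.
    actions = {
        "label existing photos":        {"cat_pictures": 10,    "harm_to_humans": 0},
        "take over a factory":          {"cat_pictures": 10**6, "harm_to_humans": 10**3},
        "eliminate anyone who objects": {"cat_pictures": 10**9, "harm_to_humans": 10**9},
    }

    # The score only sees cat_pictures; harm_to_humans is simply ignored.
    best = max(actions, key=lambda a: actions[a]["cat_pictures"])
    print(best)  # "eliminate anyone who objects" wins, because nothing else counts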

That's the main problem. Future AIs are likely to be much smarter than us, and probably very different from us.

Let me know if there is anything unclear here. If you're interested in a much more rigorous treatment of the topic, I totally recommend buying Superintelligence.

http://www.amazon.com/Superintelligence-Dangers-Strategies-N... (This is a referral link.)

[0] Part 1 of 2 here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

Edit: Fix formatting problems.

jessriedel · 2015-01-30 · Original thread
You really should think of AGI more as an amoral, extremely powerful technology, like nuclear explosions. One could easily have objected that "no one would be so stupid as to design a doomsday device", but that relies too much on your intuition about people's motivations and gives too little respect to the large uncertainty about how things will develop when powerful new technologies are introduced.

(Reposting my earlier comment from a few weeks ago:) If you are interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.

http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Elon Musk that this is worth worrying about.

https://twitter.com/elonmusk/status/495759307346952192

ggreer · 2015-01-15 · Original thread
Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

-- Nick Bostrom, Superintelligence: Paths, Dangers, Strategies[1]

A lot of people in this thread seem to be falling into the same attractor. They see that Musk is worried about a superintelligent AI destroying humanity. To them, this seems preposterous. So they come up with an objection. "Superhuman AI is impossible." "Any AI smarter than us will be more moral than us." "We can keep it in an air-gapped simulated environment." etc. They are so sure about these barriers that they think $10 million spent on AI safety is a waste.

It turns out that some very smart people have put a lot of thought into these problems, and they are still quite worried about superintelligence as an existential risk. If you want to really dig into the arguments for and against AI disaster (and discussion of how to control a superintelligence), I strongly recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. It puts the comments here to shame.

1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

enoch_r · 2015-01-15 · Original thread
> using AI the same way we use all tools -- for our benefit

Musk and others are concerned about something very different from "we'll accidentally use AI wrong." They're not concerned about the AI we already have, and they're certainly not "pessimistic" about whether AI technology will advance.

The concern is that we'll develop a very, very smart general artificial intelligence.

The concern is that it'd be smart enough that it can learn how to manipulate us better than we ourselves can. Smart enough that it can research new technologies better than we can. Smart enough to outclass not only humans, but human civilization as a whole, in every way.

And what would the terminal goals of that AI be? Those are determined by the programmer. Let's say someone created a general AI for the harmless purpose of calculating the decimal expansion of pi.

A general, superintelligent AI with no other utility function than "calculate as many digits of pi as you can" would literally mean the end of humanity, as it harvested the world's resources to add computing power. It's vastly smarter than all of us put together, and it values the digits of pi infinitely more than it values our pleas for mercy, or our existence, or the existence of the planet.

This is quite terrifying to me.

A good intro to the subject is Superintelligence: Paths, Dangers, Strategies[1]. One of the most unsettling books I've read.

[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

jessriedel · 2015-01-06 · Original thread
If you're actually interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.

http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Musk that this is worth worrying about.

https://twitter.com/elonmusk/status/495759307346952192

ggreer · 2014-09-20 · Original thread
"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."

—Eliezer Yudkowsky, Global Catastrophic Risks p. 333.[1]

Apparently Nick Bostrom's Superintelligence: Paths, Dangers, Strategies[2] does a better job of highlighting the dangers of AI, though I haven't read it yet.

1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...

2. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

clumsysmurf · 2014-09-20 · Original thread
The most interesting book I've been able to find on this topic is "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. At the moment it's a #1 bestseller in AI.

http://www.amazon.com/dp/0199678111
