by Nick Bostrom, Milan M. Cirkovic
ISBN: 0199606501
Found in 5 comments on Hacker News
0xDEAFBEAD · 2025-07-20 · Original thread
Someone should start a company selling USB sticks pre-loaded with lots of prepper knowledge of this type. In addition to making money, your USB sticks could make a real difference in the event of a global catastrophe. You could sell the USB stick in a little box which protects it from electromagnetic interference in the event of a solar flare or EMP.

I suppose the most important knowledge to preserve is knowledge about global catastrophic risks, so after the event, humanity can put the pieces back together and stop something similar from happening again. Too bad this book is copyrighted or you could download it to the USB stick: https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostro... I imagine there might be some webpages to crawl, however: https://www.lesswrong.com/w/existential-risk
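A minimal sketch of that crawling idea, using only the Python standard library (the seed URL, output folder name, and file-naming scheme here are purely illustrative assumptions, not anything specific to the site):

    # Illustrative sketch: fetch a handful of pages and save the raw HTML
    # into a folder you could then copy onto the USB stick.
    # START_URLS and OUT_DIR are assumptions made for this example.
    import os
    import urllib.parse
    import urllib.request

    START_URLS = ["https://www.lesswrong.com/w/existential-risk"]
    OUT_DIR = "prepper_archive"

    def save_page(url, out_dir=OUT_DIR):
        """Download one page and write its HTML to disk, named after its path."""
        os.makedirs(out_dir, exist_ok=True)
        parsed = urllib.parse.urlparse(url)
        name = (parsed.netloc + parsed.path).rstrip("/").replace("/", "_") or "index"
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = resp.read()
        with open(os.path.join(out_dir, name + ".html"), "wb") as f:
            f.write(data)

    if __name__ == "__main__":
        for url in START_URLS:
            save_page(url)

Point it at whatever pages matter and the result is a plain folder of HTML that needs no special software to read afterwards.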

A little bit of history/context around this.

The genesis of most of this public-facing, high-profile threat warning came right after Musk read Nick Bostrom's book Global Catastrophic Risks in 2011 [1]. That seems to have been the catalyst for his becoming publicly vocal about these concerns, which accelerated into the OpenAI issue after Bostrom published Superintelligence.

For years before that, the most outspoken chorus of concerned people were non-technical AI folks from Oxford's Future of Humanity Institute and what is now called MIRI, previously the Singularity Institute, with E. Yudkowsky as their loudest founding member. Their big focus had been on Bayesian reasoning and the search for so-called "Friendly AI." If you read most of what Musk puts out, it strongly mirrors what the MIRI folks have been putting out for years.

Almost across the board, you'll never find anything specific about how these doomsday scenarios would actually happen. They all just say something to the effect of: the AI reaches human level, then becomes either indifferent or hostile to humans, and poof, everything is paperclips/gray goo.

The language being used now is totally histrionic compared to where we, the practitioners of machine learning/AI/whatever you want to call it, know the state of things to be. That's why you see LeCun/Hinton/Ng/Goertzel etc. saying: no, really folks, there's nothing to be worried about for the foreseeable future.

In reality there are real existential issues, and there are real challenges in making sure that AI systems that are less than human-level don't turn into malware. But those aren't anywhere near immediate concerns - if ever.

So the short answer is: we're nowhere near the point where you need to worry about it.

Is it a good philosophical debate? Sure! However, it's like debating nuclear weapons proliferation with Newton.

[1] https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostro...

ggreer · 2014-09-20 · Original thread
"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."

—Eliezer Yudkowsky, Global Catastrophic Risks p. 333.[1]

Apparently Nick Bostrom's Superintelligence: Paths, Dangers, Strategies[2] does a better job of highlighting the dangers of AI, though I haven't read it yet.

1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...

2. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

ggreer · 2014-03-03 · Original thread
To summarize: At this point, humanity is its own greatest extinction risk. If we don't destroy ourselves in the next century, we will almost certainly inherit the stars.

For a much deeper treatment of this subject, I recommend Global Catastrophic Risks, edited by Nick Bostrom and Milan Ćirković. The overarching point is straightforward (see the paragraph above), but the details of each threat are interesting on their own.

1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...

MikeCapone · 2013-09-26 · Original thread
Indeed. If you want to find out more, the best book that I've found on this subject is _Global Catastrophic Risks_, edited by Nick Bostrom and Milan M. Cirkovic:

http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...

"In Global Catastrophic Risks 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues - policy responses and methods for predicting and managing catastrophes. "