Found in 2 comments on Hacker News
atombender · 2020-01-03 · Original thread
Read Blindsight [1] by Peter Watts. A much better book, in my opinion. It's without doubt the most existentially unnerving novel about alien intelligence that I've read.

I'm not going to spoil anything; suffice it to say that the object encountered in space is unlike anything portrayed in science fiction before. The exploration narrative goes far beyond Clarke's Rama, and Watts poses some very interesting philosophical questions along the way.

The book has some minor narrative issues that annoyed me, but it's still a great read.

[1] https://www.amazon.com/Blindsight-Peter-Watts/dp/0765319640

api · 2014-06-23 · Original thread
Great comment.

Things like the military bureaucracy or the financial system can be hostile to us because their incentives are very much out of line with humanistic goals.

I do think it's legitimate to be at least a little concerned that future, even more powerful, non-human AIs could be hostile in the same way. What happens when/if there are corporations out there run by alien computer minds that pursue goals with no overlap at all with human interests? Would an AI care about climate change? Would it care about the health of the world's biological ecosystem? Would it care about providing a decent life for its employees? Out of mere indifference, it could seem quite sociopathic and evil to any humans living under its influence. Imagine a super-intelligent mind whose goal function is maximizing short-term quarterly shareholder value. This isn't just a programmed-in goal function either... this is a form of intelligence whose embodiment is the corporation, so maximizing shareholder value is its survival and fertility imperative. In a sense you can't blame the thing, but you could blame its creators.

Humans can of course do the same, but even with sociopathic humans there is some sense of common overlapping interest; at the very least their motives are comprehensible. AIs may have motives that seem almost "Lovecraftian" to us: completely alien and bizarre. On top of that quasi-biological short-term shareholder-value imperative, try layering an Asperger's-like obsession with certain patterns in information, or some kind of bizarre AI-conceived religious mission. Extremely intelligent humans can be religious fundamentalists, so why couldn't similar forms of "functional madness" exist among non-human intelligences? One of the Pollyanna assumptions of the singularity crowd is that AI would necessarily be rational. Why?

Edit:

I cannot recommend this book enough. It's probably the best and most overlooked work of SF in the past 20 years:

http://www.amazon.com/Blindsight-Peter-Watts-ebook/dp/B003K1...

It deals very intelligently with this kind of thing. It's basically a monster story where the monster is something out of Ph.D.-level evolutionary ecology, and it manages to actually be scary. Very intellectually satisfying "reveal." :)

I think what it ultimately comes down to is this: we do not live in a post-scarcity world, and we won't for the foreseeable future. Given that our existence involves a certain amount of unavoidable haggling over resources, and given that our own labor is itself a resource, we should be a little concerned about what sorts of beings we might end up having to haggle with. This applies to human genetic engineering and augmentation as well, since those could also produce essentially alien intelligences.

I'm not against researching these topics. In fact I'd almost call myself a transhumanist / posthumanist in sentiment. But I do think it's advisable to give the matter some serious thought. Bad things can and do happen. We want to try to create positive outcomes, not blunder into the future.
