Found in 3 comments on Hacker News
PaulHoule · 2023-04-04 · Original thread
This old and obscure book foretells the "loss of control" scenario we are experiencing now

https://www.amazon.com/Eco-computer-Intelligence-Geoff-L-Sim...

in explicit opposition to the scenario from

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

I usually avoid posting links to YouTube because they don't work in all geographies, but this trailer is great if you can see it

https://www.youtube.com/watch?v=kyOEwiQhzMI

PaulHoule · 2023-02-28 · Original thread
What's funny about the Yudkowsky cult is that a lot was written about this subject circa 1970 (particularly the novel and movie https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project), including the bogus idea that there would be an "intelligence explosion" with

   dx/dt = k x^2
dynamics. (Quite literally, that equation produces a singularity at a finite time... But it's bogus because the exponential growth of Moore's law involved an exponential growth of inputs, and we're now in the eventual situation where the cost per transistor stops going down. NVIDIA's prices for 40-series cards are unattractive not just because of the company's perversity but because they are approaching this limit!)
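
For concreteness, here is a minimal sketch of why that equation blows up in finite time: separating variables gives x(t) = x0 / (1 - k x0 t), which diverges as t approaches 1/(k x0). The values of k and x0 below are arbitrary, chosen only for illustration.

    # Finite-time blowup of dx/dt = k * x**2 (separable ODE).
    # Closed form: x(t) = x0 / (1 - k*x0*t), singular at t* = 1/(k*x0).
    # k and x0 are arbitrary illustrative values, not taken from the comment.
    k, x0 = 0.5, 1.0
    t_star = 1.0 / (k * x0)  # singularity at t* = 2.0 for these values

    def x(t):
        return x0 / (1.0 - k * x0 * t)

    for t in (0.0, 1.0, 1.9, 1.99, 1.999):
        print(f"t = {t:6.3f}   x(t) = {x(t):12.3f}")
    # x(t) grows without bound as t -> t_star, unlike plain exponential growth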

This 1980s book

https://www.amazon.com/Eco-computer-Intelligence-Geoff-L-Sim...

refuted the "Forbin Project" scenario and made a pretty strong case that computers would evolve as distributed systems; if they became too powerful, the failure mode would be the system breaking down, not a single malevolent intelligence. In fact we see bits and pieces of that scenario today when IT failures and cyberattacks hit us like hurricanes.

There are numerous ridiculous things about the scenario, particularly around points 5 and 6. There has been a lot of wargaming around a "gain of function" bioweapon, which could be quite dangerous but would be most deadly to anyone working on it, would be unlikely to achieve a 100% kill rate (look how many epidemics and potential epidemics have fizzled), and would require extensive experimentation to perfect. (To approach his scenario you'd have to make 1000 possibly super-deadly pathogens.)

As for point 7, it will die if we die, unless something like Drexler's nanotechnology lets it perpetuate its technology base. Same thing: an advanced intelligence could possibly develop such a thing more quickly than we could, but not without an extended program of experimentation and development. It can't just wish it into existence.

As for morality and intelligence: my son (who grew up with horses) and I have talked about it, and we don't think morality has anything to do with human intelligence; it's actually an attribute animals have. The first time I fell off a horse, the horse seemed more upset about it than I was, and was exceptionally affectionate, even contrite, afterwards. Generally birds and mammals seem to have strong feelings when they violate the norms and expectations of their groups. Human intelligence is involved in ethics, but that's a different thing from morality.

PaulHoule · 2017-07-25 · Original thread
The lesson of history is that people don't learn from history.

In the 1960s there was talk of the "intelligence explosion", double-exponential growth, and all the rest of it. And that was when people didn't have the least idea of how long silicon had to run.

Winter will always be a few months away as long as you have people learning "machine learning" because Mark Cuban told them to.

This book from the 1980s

https://www.amazon.com/Eco-computer-Intelligence-Geoff-L-Sim...

makes a strong case that the 1960s "Forbin Project" scenario of a single malevolent intelligence won't happen; instead we will have multiple competing centers that will shirk their duties long before they start fighting each other and us.
