The book mentions Three Mile Island, where a problem in a secondary system (an added safety system) spread and caused the system as a whole to fail. This is a tongue-in-cheek way of illustrating a serious issue in systems design, though I wonder if that reading of what happened at Three Mile Island is a bit of a stretch? (And I may misremember the book.)
"The accident to unit 2 happened at 4 am on 28 March 1979 when the reactor was operating at 97% power. It involved a relatively minor malfunction in the secondary cooling circuit which caused the temperature in the primary coolant to rise..."
If you look at the stacks we build everything on top of, it’s insanely brittle and barely works. We simply lie to ourselves that it’s otherwise so we can get out of bed in the morning and actually ship something that’s useful to another human being.
If you don’t believe me, look at the organization or source code behind the black box you’ve built everything on. By some miracle it works, but barely.
I’ve found the best book that describes this is “The Systems Bible” — https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/d...
You’ll chuckle as you read it and learn to embrace the miracle that anything works.
Inevitably, when these become government programs, they simply throw money at the problem without addressing fundamental supply and demand issues, and accomplish very little except driving up prices. Then, when investors see prices going up and to the right, they rationally throw money at it in a way that makes even more money, which drives up prices even more.
There are so many charts like https://ritholtz.com/wp-content/uploads/2018/02/pricechanges... that clearly show the trend. It’s insane.
There’s a really good book I recommend called “Systems Bible” [https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/d...] that beautifully articulates this phenomenon in a way that applies to government policy, software systems, management, org structures, or any complex system of people or machines.
(I pulled my copy out just a few weeks ago so I could quote a different Gallism that was relevant to an HN discussion: https://news.ycombinator.com/item?id=18859680)
(If you want to pick it up, be aware that in its most current printing the title changed from Systemantics to The Systems Bible: https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/d...)
We present the Fundamental Law of Administrative Workings (F.L.A.W.): THINGS ARE WHAT THEY ARE REPORTED TO BE...
The net effect of this Law is to ensure that people in Systems are never dealing with the real world that the rest of us have to live in, but instead with a filtered, distorted, and censored version which is all that can get past the sensory organs of the System itself...
This effect has been studied in detail by a small group of dedicated General Systemanticists. In an effort to introduce quantitative methodology into this important area of research they have paid particular attention to the amount of information that reaches, or fails to reach, the attention of the relevant administrative officer or corresponding Control Unit.
The crucial variable, they have found, is the fraction Ro/Rs, where Ro represents the amount of Reality which fails to reach the Control Unit, and Rs equals the total amount of Reality presented to the System. The fraction Ro/Rs varies from zero (full awareness of outside reality) to unity (no reality getting through). It is known, naturally enough, as the COEFFICIENT OF FICTION.
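Tongue-in-cheek as it is, the arithmetic is simple enough to write down. A toy sketch of my own (the function name and example numbers are invented, not from the book):

```python
# Gall's Coefficient of Fiction: F = Ro / Rs, where Ro is the amount of
# Reality that fails to reach the Control Unit and Rs is the total
# amount of Reality presented to the System.
def coefficient_of_fiction(reality_blocked: float, reality_presented: float) -> float:
    return reality_blocked / reality_presented

print(coefficient_of_fiction(0, 100))    # 0.0 -- full awareness of outside reality
print(coefficient_of_fiction(100, 100))  # 1.0 -- no reality getting through
```

The interesting part, of course, is Gall's claim about where real Systems end up on that scale.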
The next big idea invariably seems to grow out of the next small idea; ideas that are big from the beginning almost never work.
That doesn't really make sense with respect to ideas. The real quote is about SYSTEMS, from this book:
The system is the realization of the idea. You can have a big idea, but you can't implement it all at once. TBL had a big idea, which necessarily meant a big system. So he grew it from a very small piece of code (the original HTTP, retroactively labeled 0.9, was ridiculously simple). There was an unbroken chain from small system to big system.
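For a sense of just how small that starting point was: a sketch of the original one-line protocol (request is a single `GET /path` line, response is the raw document, then the connection closes — no headers, no status codes). The toy server, path, and reply text here are my own invention; only the wire format follows the early protocol:

```python
import socket
import threading

def serve(conn):
    # The entire request is one line, e.g. "GET /hello\r\n" -- no
    # headers, no version string, nothing else.
    request = conn.makefile().readline()
    method, path = request.strip().split(" ", 1)
    # The entire response is the raw document; closing the socket
    # marks the end of it.
    conn.sendall(f"<h1>you asked for {path}</h1>".encode())
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: serve(server.accept()[0]), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /hello\r\n")       # the whole request
reply = client.makefile().read()        # read until the server closes
print(reply)                            # <h1>you asked for /hello</h1>
```

Everything HTTP has accreted since — versions, headers, caching, keep-alive — was grown on top of that.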
The misleading thing about Linux is that it IS IN FACT a big idea -- it's just not a technological idea. We already knew how to write monolithic kernels. The real innovation is the software development process. The fact that thousands of programmers can ship a working kernel with little coordination is amazing. That Linus wrote git is not an accident; he's an expert in software collaboration.
Linux is a universal hardware abstraction layer, which is an easy idea in theory, but extremely difficult in practice until Linus figured out how to make it work.
So Linux is a big idea too, as well as a small system that grew into a big system.
This reminds me of Paul Graham's advice: http://www.paulgraham.com/ambitious.html
Let me conclude with some tactical advice. If you want to take on a problem as big as the ones I've discussed, don't make a direct frontal attack on it. Don't say, for example, that you're going to replace email. If you do that you raise too many expectations. Your employees and investors will constantly be asking "are we there yet?" and you'll have an army of haters waiting to see you fail. Just say you're building todo-list software. That sounds harmless.
Empirically, the way to do really big things seems to be to start with deceptively small things. Want to dominate microcomputer software? Start by writing a Basic interpreter for a machine with a few thousand users. Want to make the universal web site? Start by building a site for Harvard undergrads to stalk one another.
I think that's pretty much in line with what's said here. You can have a big idea, a big 10-year goal, but you have to break it into steps. Gates had an explicit goal of "a PC on every desk" and Zuckerberg had an explicit goal of "connecting the world" (at some point, not at the very beginning). But they necessarily started small.
I really enjoyed these books, but I am not super well-read in this area, so there may be better ones out there. You could try searching for "systems theory" and see what other resources are out there.
Systems theory is a very broad topic, so you'll find it attached to many specific disciplines, but the general idea is that you can take a bunch of simple things, hook them together, and produce a "being" that has totally weird behavior in aggregate.
A great humorous look at how systems react is: http://www.amazon.com/The-Systems-Bible-Beginners-Guide/dp/0...
I found it very insightful, and very relevant to software engineering, too.
Oh yes. It's a classic that deserves to be much better known. Anybody engaged with complex systems - such as software or software projects - will find all kinds of suggestive things in there. As for "dry"... come now, it's hilarious and has cartoons.
Basically, just get it. Here, I'll help:
(They ruined the title but it's the same book.)
More here: http://news.ycombinator.com/item?id=182937
Edit: oh yeah, the book is http://www.amazon.com/Systems-Bible-Beginners-Guide-Large/dp...
One of its more famous aphorisms (relevant to the software startup crowd) is "A complex system that works is invariably found to have evolved from a simple system that worked".