There is a great book that I think should be on the desk of every single person (especially leadership) working anywhere that involves humans interacting with machines:
https://www.amazon.com/Field-Guide-Understanding-Human-Error...
It focuses on air crash investigations, but it's very useful for tech people in understanding the right way to approach incident investigations. It can be very easy to blame individuals ("stupid pilot shouldn't have dropped his iPad", etc.), but that focus prevents improving safety over the long term. Dekker's book is a great argument for, as here, treating what actually happened and why as a systemic thing, which provides much more fertile ground for making sure it doesn't happen again.
The author of the httpie blog post does not assign blame. In fact, he goes out of his way to explain his error, then suggests changes that would have prevented it, hopefully sparing others the same mistake.
So we can blame humans as you seem to want to do, or we can accept human behavior and design our systems to be more forgiving.
cdelsolar, at the top of this thread, wrote: "I get emails, every single day" and "It literally drives me crazy."
So what should cdelsolar do? Blame all the humans accidentally deleting their accounts, or find a way to design around it?
I know which approach I would take.
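To make "design around it" concrete, here's a minimal sketch of a soft-delete scheme in Python. The names and the 30-day grace period are my own illustration, not anything from cdelsolar's actual system:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    RETENTION = timedelta(days=30)  # hypothetical grace period

    @dataclass
    class Account:
        email: str
        deleted_at: Optional[datetime] = None  # None means the account is active

        def request_deletion(self) -> None:
            # Mark for deletion instead of destroying data on the spot.
            self.deleted_at = datetime.now(timezone.utc)

        def restore(self) -> bool:
            # One-click "undo" for anyone who deleted by accident.
            if self.deleted_at is not None and datetime.now(timezone.utc) - self.deleted_at < RETENTION:
                self.deleted_at = None
                return True
            return False  # past the grace period; the data really is gone

    def purge_expired(accounts: list[Account]) -> list[Account]:
        # A scheduled job does the irreversible part, not a stray click.
        cutoff = datetime.now(timezone.utc) - RETENTION
        return [a for a in accounts if a.deleted_at is None or a.deleted_at > cutoff]

The point is that the irreversible step is moved out of the user's click path entirely: within the grace period, "I accidentally deleted my account" becomes a restore() call instead of a daily support emergency.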
Let me suggest this book btw:
https://www.amazon.com/Field-Guide-Understanding-Human-Error...
It's about investigating airplane crashes, and in particular two different paradigms for understanding failure. It deeply changed how I think and talk about software bugs, and especially how I do retrospectives. I strongly recommend it.
And the article made me think of Stewart Brand's "How Buildings Learn": https://www.amazon.com/dp/0140139966
It changed my view of a building from a static thing to a dynamic system, changing over time.
The BBC later turned it into a 6-part series, which I haven't seen, but which the author put up on YouTube, starting here: https://www.youtube.com/watch?v=AvEqfg2sIH0
I especially like that in the comments he writes: "Anybody is welcome to use anything from this series in any way they like. Please don’t bug me with requests for permission. Hack away. Do credit the BBC, who put considerable time and talent into the project."
I think it's much more interesting to understand the subtle dynamics that result in bad outcomes. As an example, Sidney Dekker's book, "The Field Guide to Understanding Human Error" [1], makes an excellent case that if you're going to do useful aviation accident investigation, you have to reject the simple-minded approach of blame and instead look at the web of causes and experiences that leads to failure.
[1] https://www.amazon.com/Field-Guide-Understanding-Human-Error...
On another note, for years I wondered what the root cause of the financial meltdown was. Looking at it from this point of view, it's obvious that a number of things have to go wrong simultaneously, but it is not obvious beforehand which failed elements, broken processes, and bypassed limits will lead to catastrophe.
For your own business/life, think about things that you live with that you know are not in a good place. Add one more problem and who knows what gives.
This is not intended to scare or depress, but perhaps it will encourage some compassion when you hear about someone else's failure.
It's this sort of blame-driven, individual-focused, ask-the-unachievable answer that makes it impossible for organizations to move beyond a relatively low level of quality/competence. It's satisfying to say, because it can always be applied and always makes the speaker feel smart/superior. But its universal applicability is a hint that it's not going to actually solve many problems.
If you'd like to learn why and what the alternative is, I strongly recommend Sidney Dekker's "Field Guide to Understanding Human Error":
https://www.amazon.com/Field-Guide-Understanding-Human-Error...
His field of study is commercial airline accident review, so all the examples are about airplane crashes. But the important lessons are mostly about how to think about error and what sort of culture creates actual safety. The lessons are very much applicable in software. And given our perennially terrible bug rates, I'd love to see our thinking change on this.
Most problems are systemic, which is a nice way of saying “ultimately management’s fault”.
Most things that most people do, most of the time, are reasonable in the circumstances. Management creates the circumstances. “Human error” is a non-explanation.
Here’s a book on the topic, often called systems thinking: http://www.amazon.com/Field-Guide-Understanding-Human-Error/...
Getting even more bookish: firing "bad apples" for "human error" is a form of substituting an easier question for a harder one, as Kahneman describes in Thinking, Fast and Slow.