Found in 8 comments on Hacker News
mindcrime · 2021-03-08 · Original thread
Depending on the context, I'm a fan of the work of Douglas Hubbard, in his book How to Measure Anything[1]. His approach works out answers to questions that might otherwise be treated as "back of the napkin" estimates, but in a slightly more rigorous way. Note that there are criticisms of his approach, and I'll freely admit that it doesn't guarantee arriving at an optimal answer. But arguably the criticisms of his approach ("what if you leave out a variable in your model?", etc.) apply to many (most?) other modeling approaches.

On a related note, one of the last times I mentioned Hubbard here, another book came up in the surrounding discussion, which looks really good as well. Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin[2] - I bought a copy but haven't had time to read it yet. Maybe somebody who is familiar will chime in with their thoughts?



mindcrime · 2020-01-19 · Original thread
How To Measure Anything[1] by Douglas Hubbard.

The basic gist of the book goes something like this: in the real world (especially in a business setting) there are many things which are hard to measure directly, but which we may care about. Take, for example, "employee morale" which matters because it may affect, say, retention, or product quality. Hubbard suggests that we can measure (many|most|all|??) of these things by using a combination of "calibrated probability assessments"[2], awareness of nth order effects, and Monte Carlo simulation.

Basically, "if something matters, it's because it affects something that can be measured". So you identify the causal chain from "thing" to "measurable thing", have people who are trained in "calibrated probability assessment" estimate the weights of the effects in the causal chain, then build a mathematical model, and use a Monte Carlo simulation to work out how inputs to the system affect the outputs.

Of course it's not perfect, since estimation is always touchy, even using the calibration stuff. And you could still commit an error like leaving an important variable out of the model completely, or sampling from the wrong distribution when doing your simulation. But generally speaking, done with care, this is a way to measure the "unmeasurable" with a level of rigor that's better than just flat out guessing, or ignoring the issue altogether.



perl4ever · 2019-10-21 · Original thread
I don't think that we're on the same wavelength and no, we are not at all agreed on what a metric is, but here's a link to a book that might be interesting:

bordercases · 2017-11-20 · Original thread
I also like this guide: "Learning, Remembering, and Thinking". I recommend checking out his other work for a model of working through problems coming from physicists.

One more thing. Oftentimes the key step in thinking is figuring out what your questions are, and questions are always determined by what uncertainties you have in a domain, made as specifically relevant as you can make them.

I'm gonna quote Venkat Rao (of Breaking Smart and Ribbonfarm fame) from an article he deleted years ago:

> Real questions, useful questions, questions with promising attacks, are always motivated by the specific situation at hand. They are often about situational anomalies and unusual patterns in data that you cannot explain based on your current mental model of the situation… Real questions frame things in a way that creates a restless tension, by highlighting the potentially important stuff that you don’t know. You cannot frame a painting without knowing its dimensions. You cannot frame a problem without knowing something about it. Frames must contain situational information. There are two types of questions. Formulaic questions and insight questions. …. Formulaic questions can be asked without knowing much. If they can be answered at all, they can be answered via a formulaic process. …. Insight questions can only be asked after you develop situation awareness. They are necessarily local and unique to the situation.

The world is /extremely/ information rich, to the point of absurdity, and what fails is not the richness of our input data but rather our awareness of how we ought to use it. George Polya tried to teach his students how to problem-solve in mathematics by getting them to ask questions. By verbalizing his thought process he hoped to convey these principles, as well as giving them a standard template to prompt their cycle of questions. But to adhere to a strict plan like that is to defeat the point. The real point is to maintain a conversation with yourself, posing and refining your own questions until insight develops, and keeping yourself talking.

Ultimately I like to take an information-theoretic approach as the basis of my philosophy here. /Some/ information is /always/ going to be contained in /any/ comparison that I can make between two phenomena in the world. Most of this "information" would be considered noise relative to most reference frames. But it is always possible to extract /something/ from a situation by creating these tensions between yourself and your uncertainties in the world.

You can muddle around questioning things for awhile, but gradually things come up. The key is to let your uncertainty start off however it is and keep pruning away at it until your solution is sculpted from the clay. It can and will happen.

If you've ever tried doing Fermi Estimates (like those prescribed in , , ), then you'll be able to perceive the mindset that has significant transfer to many problems that have even just approximate answers.
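For anyone who hasn't tried one, here is the classic Fermi estimate (piano tuners in a city of about a million people), worked as plain arithmetic. Every number is a rough assumption pulled from thin air; the point is that the product of several order-of-magnitude guesses often lands within a factor of a few of reality.

```python
# Classic Fermi estimate: piano tuners in a city of ~1,000,000 people.
# Every factor below is a rough assumption, not data.
households = 1_000_000 / 2.5            # ~2.5 people per household
pianos = households * 0.05              # ~1 in 20 households owns a piano
tunings_per_year = pianos * 1           # each piano tuned about once a year
tunings_per_tuner = 2 * 5 * 50          # 2 tunings/day, 5 days/week, 50 weeks
tuners = tunings_per_year / tunings_per_tuner
print(round(tuners))                    # prints 40
```

Swapping any single assumption by a factor of two only moves the answer by a factor of two, which is the resilience that makes the technique useful.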

blowski · 2016-07-20 · Original thread
If they could be easily derived, we'd all be doing it all the time. Spend some time doing it before you apply for your next job (or salary review) and you might be pleasantly surprised at how well the conversation goes. I linked to "How to Measure Anything" in another comment, and that's a good read -

If you really can't find a way for your current job, then say how many downloads your open source project has got. Or how many comments or page views your blog gets. For some reason, employers get excited when I tell them "I'm in the top 3% on StackOverflow". (Yes, I know how ridiculous that sounds.)

But that guy who earns twice as much as you and does half the work? This is what he does. He talks in the language of the people who decide his salary, and that language involves specific numbers that matter to the business.

I'm a huge believer in going back to primary texts, and understanding where ideas came from. If you've liked a book, read the books it references (repeat). I also feel like book recommendations often oversample recent writings, which are probably great, but it's easy to forget about the generations of books that have come before that may be just as relevant today (The Mythical Man Month is a ready example). I approach the reading I do for fun the same way, Google a list of "classics" and check for things I haven't read.

My go-to recommendations:
- The Structure of Scientific Revolutions, Thomas Kuhn (1996)
- The Pragmatic Programmer, Andrew Hunt and David Thomas (1999)

Things I've liked in the last 6 months:
- How to Measure Anything, Douglas Hubbard (2007)
- The Mythical Man-Month: Essays on Software Engineering, Frederick Brooks Jr. (1975, but get the 1995 version)
- Good to Great, Jim Collins (2001)

Next on my reading list (and I'm really excited about it):
- The Best Interface Is No Interface, Golden Krishna (2015)

SkyMarshal · 2013-02-12 · Original thread
>because I'm not writing a check for something I can't measure.

This is interesting. My first response was that not everything of value can be measured [1], but then I thought better of it and realized there probably are ways to measure everything of value [2]; they're just not easy, obvious, or intuitive. And the odds of convincing a national educational bureaucracy, which does things as much for appearance and expedience as for effectiveness, are probably not great.


