Found in 11 comments on Hacker News
mindcrime · 2023-01-19 · Original thread
Maybe not "made me better at math" per se, but definitely "made me more enthusiastic about math":

The Universe Speaks in Numbers[1] by Graham Farmelo

I found this very motivating and insightful, in terms of developing even more of an appreciation for how much math underpins other branches of science. Not that that is a novel insight by any means... but the details of the incidents where breakthroughs in mathematics allowed further advances in physics, and the "back and forth" between the domains, were wildly interesting to me. Reading this book definitely helped motivate me to get serious about committing more time and focus to studying mathematics.

I also enjoyed the "counterpoint" book by Sabine Hossenfelder, Lost in Math[2]. I think these two books complement each other nicely.

Then the handful of additional (no pun intended) books that jump to mind would be:

- How Mathematicians Think by William Byers[3]

- How to Think Like a Mathematician by Kevin Houston[4]

- Discrete Mathematics with Applications[5] by Susanna Epp

- How Not To Be Wrong[6] by Jordan Ellenberg

- Introduction to Mathematical Thinking[7] by Keith Devlin

- How to Measure Anything[8] by Douglas Hubbard

[1]: https://www.amazon.com/Universe-Speaks-Numbers-Reveals-Natur...

[2]: https://www.amazon.com/Lost-Math-Beauty-Physics-Astray/dp/15...

[3]: https://www.amazon.com/How-Mathematicians-Think-Contradictio...

[4]: https://www.amazon.com/How-Think-Like-Mathematician-Undergra...

[5]: https://www.amazon.com/Susanna-S-Epp-Mathematics-Application...

[6]: https://www.amazon.com/How-Not-Be-Wrong-Mathematical/dp/0143...

[7]: https://www.amazon.com/Introduction-Mathematical-Thinking-Ke...

[8]: https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...

mindcrime · 2022-12-29 · Original thread
> I see this in tech all the time. It is indeed very hard to measure the return on investment for tooling and infrastructure in tech! Any infra work, whether it's splitting up your monolith into components, improved developer tooling, or fixing flaky builds, has a vague and hand-wavy return on investment and has to compete with "Well, customer FooBar will sign a $10M contract in Q2 if we build feature BashBaz instead," and now good luck as an engineer explaining how and when exactly your investment in developer tooling is going to make the company $10M in Q2.

I agree that it's hard to quantify those kinds of things, but I might quibble over the degree of difficulty. At the very least, some techniques for dealing with these kinds of "hard to quantify" scenarios are known, published, and have been analyzed to varying degrees. For a company the size of Southwest, with the amount of money that's at stake, I find it surprising that they wouldn't use such methods (or maybe they do and still managed to screw the pooch, I don't know).

What methods am I talking about? Well, you could start with scenario planning[1], then have domain experts assign costs to different scenarios and use calibrated probability estimation[2] to assign probabilities to those scenarios. Then you could use Monte Carlo simulation[3] to get an idea of the likely distribution of outcomes. From there you can calculate a range of likely costs (or returns, depending on which way you frame the problem) for dealing with, or not dealing with, specific scenarios.

This is a very high level and hand-wavy description, btw, of the methodology Douglas Hubbard[4] lays out in his book How to Measure Anything[5].
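
To make that a little more concrete, here's a minimal Python sketch of the scenario/Monte-Carlo step. Every scenario, probability, and cost range below is a made-up number for illustration, not anything from Hubbard's book:

    import random

    # Hypothetical, mutually exclusive scenarios: (description,
    # expert-assigned probability, 90% range for the cost impact).
    scenarios = [
        ("major outage during peak travel",  0.05, (50e6, 500e6)),
        ("degraded ops, manual workarounds", 0.30, (5e6, 50e6)),
        ("no major incident",                0.65, (0.0, 1e6)),
    ]

    def one_trial():
        # Pick a scenario in proportion to its probability, then draw
        # a cost from its range (uniform here for simplicity; a
        # lognormal is a more typical choice for cost distributions).
        _name, _p, (lo, hi) = random.choices(
            scenarios, weights=[s[1] for s in scenarios])[0]
        return random.uniform(lo, hi)

    runs = sorted(one_trial() for _ in range(100_000))
    print(f"expected cost:   ${sum(runs) / len(runs):,.0f}")
    print(f"95th percentile: ${runs[int(0.95 * len(runs))]:,.0f}")

The payoff isn't the point estimate, it's the distribution: you can read off how much of the expected cost sits in the tail, which is exactly the part that gut feel tends to ignore.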

[1]: https://en.wikipedia.org/wiki/Scenario_planning

[2]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...

[3]: https://en.wikipedia.org/wiki/Monte_Carlo_method

[4]: https://en.wikipedia.org/wiki/Douglas_W._Hubbard

[5]: https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...

mindcrime · 2021-03-08 · Original thread
Depending on the context, I'm a fan of the work of Douglas Hubbard in his book How to Measure Anything[1]. His approach involves working out answers to the kinds of questions that might otherwise be handled as "back of the napkin" estimates, but in a slightly more rigorous way. Note that there are criticisms of his approach, and I'll freely admit that it doesn't guarantee arriving at an optimal answer. But arguably the criticisms of his approach ("what if you leave out a variable in your model?", etc.) apply to many (most?) other modeling approaches.

On a related note, one of the last times I mentioned Hubbard here, another book came up in the surrounding discussion, which looks really good as well. Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin[2] - I bought a copy but haven't had time to read it yet. Maybe somebody who is familiar will chime in with their thoughts?

[1]: https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...

[2]: https://www.amazon.com/gp/product/0691129495/ref=ppx_yo_dt_b...

mindcrime · 2020-01-19 · Original thread
How To Measure Anything[1] by Douglas Hubbard.

The basic gist of the book goes something like this: in the real world (especially in a business setting) there are many things which are hard to measure directly, but which we may care about. Take, for example, "employee morale", which matters because it may affect, say, retention or product quality. Hubbard suggests that we can measure (many|most|all|??) of these things by using a combination of "calibrated probability assessments"[2], awareness of nth-order effects, and Monte Carlo simulation.

Basically, "if something matters, it's because it affects something that can be measured". So you identify the causal chain from "thing" to "measurable thing", have people who are trained in "calibrated probability assessment" estimate the weights of the effects in the causal chain, then build a mathematical model, and use a Monte Carlo simulation to work out how inputs to the system affect the outputs.
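
As a rough sketch of what that can look like in code (a deliberately toy causal chain; all the names and numbers here are invented for illustration, not from the book):

    import math
    import random

    def lognormal_from_ci(lo, hi):
        # Turn a calibrated 90% confidence interval into a lognormal
        # sampler: the interval endpoints sit 1.645 standard deviations
        # either side of the mean in log space.
        mu = (math.log(lo) + math.log(hi)) / 2
        sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)
        return lambda: random.lognormvariate(mu, sigma)

    # Toy causal chain: low morale -> extra attrition -> replacement cost.
    extra_departures = lognormal_from_ci(2, 10)         # per year
    cost_per_departure = lognormal_from_ci(20e3, 90e3)  # dollars

    samples = sorted(extra_departures() * cost_per_departure()
                     for _ in range(100_000))
    print(f"median:  ${samples[len(samples) // 2]:,.0f}")
    print(f"90% CI:  ${samples[5_000]:,.0f} .. ${samples[95_000]:,.0f}")

The calibrated estimators supply the 90% intervals; the simulation just propagates them through the chain so you end up with a distribution over the thing you actually care about.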

Of course it's not perfect, since estimation is always touchy, even using the calibration stuff. And you could still commit an error like leaving an important variable out of the model completely, or sampling from the wrong distribution when doing your simulation. But generally speaking, done with care, this is a way to measure the "unmeasurable" with a level of rigor that's better than just flat out guessing, or ignoring the issue altogether.

[1]: https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...

[2]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...

perl4ever · 2019-10-21 · Original thread
I don't think that we're on the same wavelength and no, we are not at all agreed on what a metric is, but here's a link to a book that might be interesting:

https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...

bordercases · 2017-11-20 · Original thread
I also like this guide: https://www.av8n.com/physics/thinking.htm "Learning, Remembering, and Thinking". I recommend checking out his other work for a model of how physicists work through problems.

One more thing. Oftentimes the key step in thinking is figuring out what your questions are, and questions are always determined by what uncertainties you have in a domain, made as specifically relevant as you can.

I'm gonna quote Venkat Rao (of Breaking Smart and Ribbonfarm fame) from an article he deleted years ago:

> Real questions, useful questions, questions with promising attacks, are always motivated by the specific situation at hand. They are often about situational anomalies and unusual patterns in data that you cannot explain based on your current mental model of the situation… Real questions frame things in a way that creates a restless tension, by highlighting the potentially important stuff that you don’t know. You cannot frame a painting without knowing its dimensions. You cannot frame a problem without knowing something about it. Frames must contain situational information. There are two types of questions. Formulaic questions and insight questions. …. Formulaic questions can be asked without knowing much. If they can be answered at all, they can be answered via a formulaic process. …. Insight questions can only be asked after you develop situation awareness. They are necessarily local and unique to the situation.

The world is /extremely/ information rich, to the point of absurdity, and what fails is not the richness of our input data but rather our awareness of how we ought to use it. George Polya tried to teach his students how to solve problems in mathematics by getting them to ask questions. By verbalizing his thought process, he hoped to convey these principles, as well as give them a standard template to prompt their cycle of questions. But to adhere to a strict plan like that is to defeat the point. The real point is to maintain a conversation with yourself, posing and refining your own questions until insight develops, and keeping yourself talking.

Ultimately I like to take an information-theoretic approach as the basis of my philosophy here. /Some/ information is /always/ going to be contained in /any/ comparison that I can make between two phenomena in the world. Most of this "information" would be considered noise relative to most reference frames. But it is always possible to extract /something/ from a situation by creating these tensions between yourself and your uncertainties in the world.

You can muddle around questioning things for a while, but gradually things come up. The key is to let your uncertainty start off however it is and keep pruning away at it until your solution is sculpted from the clay. It can and will happen.

If you've ever tried doing Fermi estimates (like those described in https://www.amazon.com/Street-Fighting-Mathematics-Educated-... , https://www.amazon.com/Art-Insight-Science-Engineering-Compl... , https://web.archive.org/web/20160309161649/http://www.its.ca... , https://www.amazon.com/How-Measure-Anything-Intangibles-Busi...), then you'll recognize the mindset, which transfers to many problems that have even just approximate answers.
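
If you've never done one, here's the classic toy example (piano tuners in Chicago), where every input is an order-of-magnitude guess, which is the whole point:

    # Classic Fermi estimate: how many piano tuners are in Chicago?
    population = 3e6                   # people in Chicago, roughly
    people_per_household = 2
    households_with_piano = 1 / 20     # rough guess
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_year = 2 * 5 * 50   # 2/day, 5 days/wk, 50 wks

    pianos = population / people_per_household * households_with_piano
    tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
    print(round(tuners))   # ~150 -- the right order of magnitude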

blowski · 2016-07-20 · Original thread
If they could be easily derived, we'd all be doing it all the time. Spend some time doing it before you apply for your next job (or salary review) and you might be pleasantly surprised at how well the conversation goes. I linked to "How to Measure Anything" in another comment, and that's a good read - https://www.amazon.co.uk/How-Measure-Anything-Intangibles-Bu....

If you really can't find a way for your current job, then say how many downloads your open source project has got. Or how many comments or page views your blog gets. For some reason, employers get excited when I tell them "I'm in the top 3% on StackOverflow". (Yes, I know how ridiculous that sounds.)

But that guy who earns twice as much as you and does half the work? This is what he does. He talks in the language of the people who decide his salary, and that language involves specific numbers that matter to the business.

I'm a huge believer in going back to primary texts and understanding where ideas came from. If you've liked a book, read the books it references (repeat). I also feel like book recommendations often oversample recent writings, which are probably great, but it's easy to forget about the generations of books that came before and may be just as relevant today (The Mythical Man-Month is a ready example). I approach the reading I do for fun the same way: Google a list of "classics" and check for things I haven't read.

My go to recommendations:

http://www.amazon.com/Structure-Scientific-Revolutions-50th-... - The Structure of Scientific Revolutions, Thomas Kuhn, (1996)

http://www.amazon.com/Pragmatic-Programmer-Journeyman-Master... - The Pragmatic Programmer, Andrew Hunt and David Thomas (1999)

Things I've liked in the last 6 months:

http://www.amazon.com/How-Measure-Anything-Intangibles-Busin... - How to Measure Anything, Douglas Hubbard (2007)

http://www.amazon.com/Mythical-Man-Month-Software-Engineerin... - The Mythical Man-Month: Essays on Software Engineering, Frederick Brooks Jr. (1975, but get the 1995 edition)

http://www.amazon.com/Good-Great-Some-Companies-Others/dp/00... - Good To Great, Jim Collins (2001)

Next on my reading list (and I'm really excited about it):

http://www.amazon.com/Best-Interface-No-brilliant-technology... - The Best Interface is No Interface, Golden Krishna (2015)

SkyMarshal · 2013-02-12 · Original thread
>because I'm not writing a check for something I can't measure.

This is interesting. My first response is that not everything of value can be measured[1], but then I thought better of it and realized there probably are ways to measure everything of value[2]; they're just not easy, obvious, or intuitive, and the odds of convincing a national educational bureaucracy, which does things as much for appearance and expedience as for effectiveness, are probably not great.

[1]: http://blogs.hbr.org/davenport/2010/10/what_cant_be_measured...

[2]: http://www.amazon.com/How-Measure-Anything-Intangibles-Busin...
