by Andrew Oram, Greg Wilson
ISBN: 9780596808310
Buy from O’Reilly
Found in 11 comments on Hacker News
elric · 2024-12-23 · Original thread
If you're interested in some scientific background to Software Engineering, I can recommend the book "Making Software" (O'Reilly) by Andy Oram & Greg Wilson. It's a bit old now, but it addresses and challenges many common beliefs about Software Engineering.

https://www.oreilly.com/library/view/making-software/9780596...

Jach · 2024-09-15 · Original thread
Things have costs; what's your underlying point? That one shouldn't create such a macro, even if it's a one-liner, because of unquantified costs or concerns...?

Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it). Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be an abused measure. Compared to other, more complex metrics like McCabe's cyclomatic complexity, or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.) But the correlations aren't 1, and indeed there's an important caveat against making programs too short. Though a value you didn't mention which I think can factor into cost is one of "power": shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often correspond to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.

I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought. It also seems rather unfair in the context of CL, because the way adoption of, say, new language features often happens in other ecosystems is by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where incidentally some macros come along for the ride. e.g. I think easy-routes' defroute is widely adopted for users of hunchentoot, but will never be for CL users in general because it's only relevant for webdev. And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code: the CL implementation is a one-liner and easy to understand (and like all macros rather easy to test/verify with macroexpand), but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
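For context, the nest one-liner really is that short; a sketch modeled on uiop's definition (from memory, so treat details as approximate), with a usage example showing how it unwinds rightward-marching nesting:

```lisp
;; Sketch of a nest macro in the style of uiop:nest: each form wraps
;; the one after it, so the last argument ends up innermost.
(defmacro nest (&rest things)
  (reduce (lambda (outer inner) `(,@outer ,inner))
          things :from-end t))

;; Instead of marching toward the right margin:
;;   (with-open-file (s "x") (let ((n (read s))) (when n (print n))))
;; one can write:
(nest
  (with-open-file (s "x"))
  (let ((n (read s))))
  (when n)
  (print n))
;; macroexpand shows it produces exactly the nested form above.
```

As the thread notes, macroexpand makes a macro like this easy to verify by eye.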

What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or leaving keys out, it's clear, and the use of a helper function aids debuggability (a pattern popularized most in call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quoted lists '(1 2 3) and the array reader macro #(1 2 3), which keeps me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like (1+ 3), that unevaluated list itself is the value, rather than the computation it expresses. It's ugly to quasiquote and unquote what are meant to be data-structure literals, so I just use the list/vector functions. That macro can be fixed on this point, though, by changing the "hash `,(read-..." text to "hash (list ,@(read-...". I'd also change the hash table key test.)
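For passerby readers, the quoted-literal flaw is easy to see at the REPL; a minimal illustration (the result comments reflect standard CL evaluation rules):

```lisp
;; With quote, elements are NOT evaluated: the sublist (1+ 3) is
;; itself the element, not the number 4 it would compute.
'(1 (1+ 3) 3)        ; => (1 (1+ 3) 3)

;; The list function evaluates its arguments:
(list 1 (1+ 3) 3)    ; => (1 4 3)

;; Quasiquote with unquote also works, but is noisy for what is
;; meant to read as a data-structure literal:
`(1 ,(1+ 3) 3)       ; => (1 4 3)
```

The same evaluation rule is why a reader macro that splices raw forms into a quoted structure stores expressions instead of their values.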

A basically identical version at the topmost level is here https://github.com/mikelevins/folio2/blob/master/src/maps-sy... that turns the map into an fset immutable map instead; minor changes would let you avoid needing to use folio2's "as" function.

neves · 2021-05-24 · Original thread
This well-written book from O'Reilly covers the same subject: https://www.oreilly.com/library/view/making-software/9780596...

It is from 2010; I don't know if there is really anything new on the subject.

rwoerz · 2020-07-24 · Original thread
We software engineers are still more like alchemists than chemists.

That list reminds me of [1], which rants about this state of affairs and [2] that puts many beliefs to the test.

[1] https://youtu.be/WELBnE33dpY

[2] https://www.oreilly.com/library/view/making-software/9780596...

ruraljuror · 2015-11-24 · Original thread
It seems a lot of these discussions surrounding testing are very anecdotal; this link being a prime example. I remember hearing an interview with Greg Wilson regarding his book Making Software, [1] the premise of which is to apply a more rigorous methodology to understanding what makes software work.

If I remember correctly from the interview (I think it is here[2]), one conclusion was that TDD doesn't have a clear benefit when you add it to a project. On the other hand, in a survey, TDD projects are more likely to succeed because it is a habit common to good developers. I hope I am capturing the subtlety there. Essentially, TDD is not a silver bullet, but rather a good habit shared by many good developers. That was enough to convince me of the merits.

It's another problem altogether to try to institute TDD for a project, especially for a team. Like so many things in programming, TDD could be used and abused. The same could be said for JavaScript or [insert proper noun here]. If misunderstood or used incorrectly, TDD could be a drain on the project. A benefit--and this ties back into the idea of TDD as a habit--is that it forces the code you write to have at least one other client. This requirement would alter the way you write code and arguably for the better.

[1] http://shop.oreilly.com/product/9780596808303.do

[2] https://blog.stackoverflow.com/2011/06/se-podcast-09/

chas · 2014-09-21 · Original thread
If you want an overview of the ideas behind this sort of research and a quick summary of some results, Greg Wilson gave a great talk on it[0].

I haven't read through the site to see what is there, but research on software engineering methodology and technique* uses approaches from management research in business, making it closer to psychology or sociology. For more information, the blog "It Will Never Work in Theory"[1] does a good job of highlighting these sorts of results that are directly useful, with some explanation of the tools being used to study software engineering practices. The book Making Software[2] goes into much more detail on software engineering research methodologies if you are interested.

*As opposed to CS theory research that could be used in software engineering, which is usually math.

[0] http://vimeo.com/9270320 [1] http://neverworkintheory.org/index.html [2] http://shop.oreilly.com/product/9780596808303.do

toolslive · 2013-12-01 · Original thread
It was mentioned in this talk: http://vimeo.com/9270320 and in the corresponding book "Making Software" http://shop.oreilly.com/product/9780596808303.do

have fun.

mikebike · 2013-04-25 · Original thread
There's "Making Software" by Oram and Wilson: http://shop.oreilly.com/product/9780596808303.do
icefox · 2012-08-23 · Original thread
Well, O'Reilly recently put out "Making Software: What Really Works, and Why We Believe It" http://shop.oreilly.com/product/9780596808303.do which is a collection of essays backed not by lore, but by actual scientific studies about software development. A few topics touched on in the book:

• How much time should you spend on a code review in one sitting?
• Is there a limit to the number of LOC you can accurately review?
• How much better/faster is pair programming?
• Does using design patterns make software better?
• Does test-driven development work as well as they say?
• How much do languages matter?
• What matters more: how far apart people are geographically, or how far apart they are in the org chart?
• Can code metrics predict the number of bugs in a piece of software?
• Which is better: offices or cubes?
• Does code coverage predict the number of bugs that will be later found?
• What is right/wrong with our bug tracking systems today?
• Why are graduates so lost in their first job?

If you haven't yet run across this book, I highly recommend you check it out. At least for me, it really meshed with my own quest to further delve into the mix of social and technical issues around software development. For more info on the book besides Amazon reviews etc., I also wrote up a blog entry last year which goes into more depth: http://benjamin-meyer.blogspot.com/2011/02/book-review-makin...

spenrose · 2011-11-09 · Original thread
Greg Wilson did a book on this topic, which I did not enjoy: http://shop.oreilly.com/product/9780596808303.do and a slideshow, which I recommend: http://www.slideshare.net/gvwilson/bits-of-evidence-2338367

My take is that there is much to learn from science about how to evaluate propositions regarding software engineering (most, but not all, of them are unsupported) but few new useful ideas.

Another reference along these lines: http://www.amazon.com/Facts-Fallacies-Software-Engineering-R...