Found in 4 comments on Hacker News
elric · 2024-12-23 · Original thread
If you're interested in some scientific background on Software Engineering, I can recommend the book "Making Software" (O'Reilly) by Andy Oram & Greg Wilson. It's a bit old now, but it addresses and challenges many common beliefs about Software Engineering.

https://www.oreilly.com/library/view/making-software/9780596...

Jach · 2024-09-15 · Original thread
Things have costs; what's your underlying point? That one shouldn't create such a macro, even if it's a one-liner, because of unquantified costs or concerns...?

Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it). Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be abused as a measure. When compared to other, more complex metrics like McCabe's cyclomatic complexity, or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.) But the correlations aren't 1, and indeed there's an important caveat against making programs too short.

One value you didn't mention, which I think can factor into cost, is "power": shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often corresponds to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.

I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought. It also seems rather unfair in the context of CL: in other ecosystems, adoption of, say, new language features often happens by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where some macros incidentally come along for the ride. E.g. I think easy-routes' defroute is widely adopted among users of hunchentoot, but will never be among CL users in general, because it's only relevant for webdev.

And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability, if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code: the CL implementation is a one-liner and easy to understand (and, like all macros, rather easy to test/verify with macroexpand), but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
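
To make the one-liner claim concrete, uiop's nest is essentially the following (paraphrased from memory; check uiop/utility.lisp for the canonical definition), and macroexpand-1 shows at a glance what it builds:

    (defmacro nest (&rest things)
      ;; Fold right-to-left, splicing each inner form into the tail of
      ;; the form outside it.
      (reduce #'(lambda (outer inner) `(,@outer ,inner))
              things :from-end t))

    (macroexpand-1 '(nest (let ((x 1)))
                          (when (plusp x))
                          (print x)))
    ;; => (LET ((X 1)) (WHEN (PLUSP X) (PRINT X)))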

What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or left-out keys, it's clear, and the use of a helper function aids debuggability (a pattern popularized most by call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quote-lists '(1 2 3) and the array reader macro #(1 2 3), a flaw that keeps me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like "(1+ 3)", that unevaluated list itself is the value, rather than the computation it's expressing. It's ugly to quasiquote and unquote what are meant to be data structure literals, so I just use the list/vector functions. That macro can be fixed on this count, though, by changing the "hash `,(read-..." text to "hash (list ,@(read-..."; see the sketch below. I'd also change the hash table key test.)
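
Here's a minimal sketch of that fix, not the exact code from the linked thread: the make-hash helper name and the equal key test are my own choices. The point is that the read forms get spliced in as ordinary arguments, so each element is evaluated before it lands in the table:

    ;; Helper function (aids debuggability, as noted above): builds an
    ;; EQUAL hash table from alternating keys and values. Duplicate keys
    ;; simply overwrite earlier ones.
    (defun make-hash (&rest kvs)
      (loop with h = (make-hash-table :test #'equal)
            for (k v) on kvs by #'cddr
            do (setf (gethash k h) v)
            finally (return h)))

    ;; Make } a terminating character so READ-DELIMITED-LIST stops on it.
    (set-macro-character #\} (get-macro-character #\)))

    ;; The {} reader macro: splice the forms with ,@ instead of quoting
    ;; the whole list, so (1+ 3) evaluates to 4 rather than staying the
    ;; literal list (1+ 3).
    (set-macro-character #\{
      (lambda (stream char)
        (declare (ignore char))
        `(make-hash ,@(read-delimited-list #\} stream t))))

    ;; {:a (1+ 3) :b "x"} now reads as (make-hash :a (1+ 3) :b "x"),
    ;; which evaluates to a hash table mapping :a -> 4 and :b -> "x".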

A basically identical version at the topmost level is here: https://github.com/mikelevins/folio2/blob/master/src/maps-sy... It turns the map into an fset immutable map instead; minor changes would let you avoid needing folio2's "as" function.

neves · 2021-05-24 · Original thread
This well-written book from O'Reilly covers the same subject: https://www.oreilly.com/library/view/making-software/9780596...

It is from 2010; I don't know if there's really anything new on the subject.

rwoerz · 2020-07-24 · Original thread
We software engineers are still more like alchemists than chemists.

That list reminds me of [1], which rants about this state of affairs, and [2], which puts many of these beliefs to the test.

[1] https://youtu.be/WELBnE33dpY

[2] https://www.oreilly.com/library/view/making-software/9780596...