Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it). Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be an abused measure. Compared with more complex metrics like McCabe's cyclomatic complexity or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.) But the correlations aren't 1, and indeed there's an important caveat against making programs too short.

One value you didn't mention, which I think can factor into cost, is "power": shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often correspond to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.
I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought. It also seems rather unfair in the context of CL, because the way adoption of, say, new language features often happens in other ecosystems is by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where incidentally some macros come along for the ride. e.g. I think easy-routes' defroute is widely adopted among users of hunchentoot, but will never be for CL users in general, because it's only relevant for webdev. And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability, if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code: the CL implementation is a one-liner and easy to understand (and like all macros rather easy to test/verify with macroexpand), but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
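From memory, the uiop one-liner looks roughly like this (a sketch only; the actual source may differ in docstring and details):

    ;; NEST folds its argument forms right to left, nesting each
    ;; form inside the previous one, to avoid marching indentation.
    (defmacro nest (&rest things)
      (reduce (lambda (outer inner) `(,@outer ,inner))
              things :from-end t))

    ;; (nest (let ((x 1))) (when (plusp x)) (print x))
    ;; expands to (let ((x 1)) (when (plusp x) (print x)))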
What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or leaving keys out, it's clear, and the use of a helper function aids debuggability (a style popularized most by call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quoted lists '(1 2 3) and the array reader macro #(1 2 3), a flaw that keeps me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like "(1+ 3)", that unevaluated list itself is the value, rather than the result of the computation it's expressing. It's ugly to quasiquote and unquote what are meant to be data structure literals, so I just use the list/vector functions instead. That macro can be fixed on this point, though, by changing the "hash `,(read-..." text to "hash (list ,@(read-...". I'd also change the hash table key test.)
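For reference, a minimal sketch of that fixed version might look like the following; the hash helper name and the details are my stand-ins for whatever the linked comment actually uses:

    ;; Read {k1 v1 k2 v2 ...} as a hash-table literal whose elements
    ;; ARE evaluated, unlike '(1 2 3) or #(1 2 3).
    (defun hash (kvs)
      ;; Helper function: easy to test and debug on its own,
      ;; in the spirit of call-with-* expansions.
      (let ((ht (make-hash-table :test #'equal))) ; EQUAL, not the default EQL
        (loop for (k v) on kvs by #'cddr
              do (setf (gethash k ht) v)) ; duplicate keys: last one wins
        ht))

    (set-macro-character #\{
      (lambda (stream char)
        (declare (ignore char))
        `(hash (list ,@(read-delimited-list #\} stream t)))))

    ;; Make } a terminating character, like ).
    (set-macro-character #\} (get-macro-character #\)))

With that, {:a (1+ 3) :b 2} reads as (hash (list :a (1+ 3) :b 2)), so (1+ 3) evaluates to 4 instead of staying behind as a literal list.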
A basically identical version, at the topmost level, is here: https://github.com/mikelevins/folio2/blob/master/src/maps-sy... It turns the map into an fset immutable map instead; minor changes would let you avoid needing folio2's "as" function.
It is from 2010; I don't know whether there is really anything new on the subject.
That list reminds me of [1], which rants about this state of affairs, and [2], which puts many beliefs to the test.
[1] https://youtu.be/WELBnE33dpY
[2] https://www.oreilly.com/library/view/making-software/9780596...
If I remember correctly from the interview (I think it is here[2]), one conclusion was that TDD doesn't show a clear benefit when you add it to a project. On the other hand, in a survey, TDD projects were more likely to succeed, because TDD is a habit common to good developers. I hope I am capturing the subtlety there: essentially, TDD is not a silver bullet, but rather a good habit shared by many good developers. That was enough to convince me of its merits.
It's another problem altogether to try to institute TDD for a project, especially for a team. Like so many things in programming, TDD can be used and abused; the same could be said for JavaScript or [insert proper noun here]. If misunderstood or used incorrectly, TDD can be a drain on a project. A benefit--and this ties back into the idea of TDD as a habit--is that it forces the code you write to have at least one other client, as in the toy sketch below. This requirement alters the way you write code, arguably for the better.
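As a toy illustration of that second-client point (hypothetical names; any language and test framework would do):

    ;; The test is written first and becomes a second client of
    ;; LEAP-YEAR-P, nudging it toward a clean, argument-driven interface.
    (defun test-leap-year-p ()
      (assert (leap-year-p 2000))        ; divisible by 400
      (assert (not (leap-year-p 1900)))  ; divisible by 100 but not 400
      (assert (leap-year-p 2012))        ; divisible by 4
      (assert (not (leap-year-p 2011)))
      :pass)

    ;; Only then the implementation, just enough to make the test pass.
    (defun leap-year-p (year)
      (and (zerop (mod year 4))
           (or (not (zerop (mod year 100)))
               (zerop (mod year 400)))))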
I haven't read through the site to see what is there, but research on software engineering methodology and technique* borrows its methods from the study of management practices in business, making it closer to psychology or sociology. For more on this, the blog "It Will Never Work in Theory"[1] does a good job of highlighting the sorts of results that are directly useful, and has some explanation of the tools being used to study software engineering practices. The book Making Software[2] goes into much more detail on software engineering research methodologies if you are interested.
*As opposed to CS theory research that could be used in software engineering, which is usually math.
[0] http://vimeo.com/9270320

[1] http://neverworkintheory.org/index.html

[2] http://shop.oreilly.com/product/9780596808303.do
Have fun.
If you haven't yet run across this book, I highly recommend you check it out. At least for me, it really meshed with my own quest to delve further into the mix of social and technical issues around software development. For more info on the book, besides Amazon reviews etc., I also wrote up a blog entry last year which goes into more depth: http://benjamin-meyer.blogspot.com/2011/02/book-review-makin...
My take is that there is much to learn from science about how to evaluate propositions regarding software engineering (most, but not all, of which are unsupported), but that it offers few new useful ideas.
Another reference along these lines: http://www.amazon.com/Facts-Fallacies-Software-Engineering-R...
https://www.oreilly.com/library/view/making-software/9780596...