Then a few weeks later I came to the conclusion that Python is the new Lisp when it comes to metaprogramming (and async in Python does the same thing he coded up with continuations). I think homoiconicity and the parentheses are a red herring; the real problem is that we're still stuck with parser generators that aren't composable. You really ought to be able to add
unless(X) { ... } to Java by adding one production to the grammar, a new node type for the AST, and a transformation for the compiler that rewrites it to if(!X) { ... }. The actual code would probably be smaller than the POM file if the compiler were built as if extensibility mattered. Almost all the examples in this book (which claims to be a tutorial for Common Lisp programming)
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
are straightforward to code up in Python. The main retort to this I hear from Common Lisp enthusiasts is that some CL implementations are faster, which is true. Still, most languages today have a big helping of "Lisp, the good parts". Maybe some day the Rustifarians will realize the wide-ranging impacts of garbage collection, not least that you can smack together an unlimited number of frameworks and libraries into one program and never have to think about making the memory allocation and deallocation match up.
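That unless(X) rewrite can be sketched with Python's own ast module. This is a hypothetical desugaring pass — the unless name and the rewrite rule are this sketch's assumptions, not anything Java or Python actually provides:

```python
import ast

class DesugarUnless(ast.NodeTransformer):
    """Hypothetical pass: treat `if unless(cond): ...` as sugar for
    `if not cond: ...` -- one new "production" plus one rewrite rule."""
    def visit_If(self, node):
        self.generic_visit(node)
        t = node.test
        if (isinstance(t, ast.Call) and isinstance(t.func, ast.Name)
                and t.func.id == "unless" and len(t.args) == 1):
            # Rewrite the test `unless(X)` into `not X`.
            node.test = ast.UnaryOp(op=ast.Not(), operand=t.args[0])
        return node

src = "if unless(ready): y = 'wait'\nelse: y = 'go'"
tree = DesugarUnless().visit(ast.parse(src))
ns = {"ready": True}
exec(compile(ast.fix_missing_locations(tree), "<desugared>", "exec"), ns)
print(ns["y"])  # ready is True, so `unless(ready)` does not fire -> go
```

The whole "language extension" is one visitor method — which is roughly the point: the hard part isn't the transform, it's that mainstream compilers give you nowhere to plug it in.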
Then I start thinking things like: "if he was using Clojure he wouldn't be having the problems with nconc that he talks about," "I can work most of the examples in Python because the magic is mostly in the functions, not the macros," and "I'm disappointed that he doesn't do anything that really transforms the tree."
(It's still a great book that's worth reading, but anything about Lisp has to be seen in the context that the world has moved on... Almost every example in https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro... can be easily coded up in Python because it was the garbage collection, the hashtables at your fingertips, and the first-class functions that changed the world, not the parens.)
Lately I've been thinking about the gradient running from tricks such as internal DSLs and simple forms of metaprogramming, which are weak beer compared to what you can do if you know how compilers work. For instance, you can work almost all the examples in
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
in Python without doing anything too out of the ordinary. I just finished
https://www.amazon.com/Lisp-Advanced-Techniques-Common/dp/01...
and came to almost the same conclusion: most of the macro use in that book falls into the syntactic-sugar or performance-optimization category. If I didn't have a lot of projects in the queue and people demanding my time I'd try coding up the ATN and Prolog examples in Python. (I sorta kinda did the "CLOS" example, in that I built a strange kind of meta-object facility that made "objects" backed by RDF triples.)
In Java I did enough hacking on this project
https://github.com/paulhoule/ferocity/blob/main/ferocity0/sr...
to conclude I could create something that people who think "Java Sux" and "Common Lisp Sux" would really hate. If I went forward on it, the plan would be to try "code golf, not counting POM file size" by dividing ferocity into layers (like that ferocity0, which is just enough to write a code generator that can stub the whole stdlib) so I could use metaprogramming to fight back against the bulkification you get from writing Java as typed S-expressions. (I'm pretty sure type erasure can be dealt with if you aren't interested in JDK 10+ features like var and switch expressions; on the other hand, a system like that doesn't have to be able to code-generate syntactic-sugar sorts of structures, because ferocity is all about being able to write syntactic sugar.)
That is, other programming languages have adopted many of the features of Lisp that made Lisp special such as garbage collection (Rustifarians are learning the hard way that garbage collection is the most important feature for building programs out of reusable modules), facile data structures (like the scalar, list, dict trinity), higher order functions, dynamic typing, REPL, etc.
People struggled to specify programming languages up until 1990 or so. Some standards were successful, such as FORTRAN, but COBOL was a hot mess that people filed lawsuits over, PL/I was a failure, etc. C was a clear example of "worse is better," with some kind of topological defect in the design such that there's a circularity in the K&R book that makes it confusing if you read it all the way through. Ada was a heroic attempt to write a great language spec, but people didn't want it.
I see the Common Lisp spec as the first modern language spec written by adults which inspired the Java spec and the Python spec and pretty much all languages developed afterwards. Pedants will consistently deny that the spec is influenced by the implementation but that's absolutely silly: modern specifications are successful because somebody thinks through questions like "How do we make a Lisp that's going to perform well on the upcoming generation of 32 bit processors?"
In 1980 you had a choice of Lisp, BASIC, PASCAL, FORTRAN, FORTH, etc. C wound up taking PL/I's place. The gap between (say) Python and Lisp is much smaller than the gap between C and Lisp. I wouldn't feel that I could do the macro-heavy stuff in
https://www.amazon.com/Lisp-Advanced-Techniques-Common/dp/01...
in Python but I could write most of the examples in
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
pretty easily.
Maybe a use case for new AI models could be creating more old fashioned expert systems written in LISP or Prolog that are easier for humans to audit. Everything tends to come back full circle.
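A toy of what "easier for humans to audit" means: in a classic forward-chaining expert system, every rule and every derived fact is inspectable data. This is a minimal sketch; the medical rules here are made up for illustration:

```python
# Minimal forward-chaining rule engine, the skeleton under classic
# expert systems. Each rule is (set_of_premises, conclusion); the
# facts and rules below are invented examples, not a real knowledge base.
RULES = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all satisfied,
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
print(sorted(result))
```

Because the rule base is just data, auditing an inference means reading a handful of tuples — a very different proposition from auditing model weights.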
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
2. Generative Programming (https://www.amazon.com/Generative-Programming-Methods-Tools-...)
3. PAIP (https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...)
4. Lisp In Small Pieces (https://www.amazon.com/Lisp-Small-Pieces-Christian-Queinnec/...)
5. The C Programming Language (https://www.amazon.com/Programming-Language-Dennis-M-Ritchie...)
https://github.com/onetrueawk/awk - The UNIX philosophy of small tools, DSLs, CS theory: state machines / regular expressions, Thompson's construction, ...
https://github.com/mirrors/emacs - Both a program and a VM for a programming language: hooks, before/after/around advice, modes, asynchronous processing with callbacks, ... Worth thinking about the challenges of designing interactive programs for extensibility.
https://github.com/rails/rails - Metaprogramming DSLs for creating powerful libraries, again a lesson in hooks (before_save etc.), advices (around_filter etc.), ...
https://github.com/git/git - The distributed paradigm, lots of CS theory again: hashing for ensuring consistency, DAGs everywhere, ... By the way, the sentence "yet the underlying git magic sometimes resulted in frustration with the students" is hilarious in the context of a "software architecture" course.
Any computer algebra system - the idea of a http://en.wikipedia.org/wiki/Canonical_form
Any computer graphics engine - linear algebra
...
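As a taste of the git item above: an object's identity really is just a hash over a typed header plus the raw content, which you can reproduce in a few lines:

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Compute the id git assigns to a blob: SHA-1 over the header
    'blob <size>\\0' followed by the raw content bytes."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Matches `echo "hello world" | git hash-object --stdin`
print(git_blob_id(b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Same content, same id, anywhere in the world — that one property is what makes the distributed DAG of commits consistent without coordination.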
There are loads of things one can learn from those projects by studying the source in some depth, but I can't think of anything valuable one could learn by just drawing pictures of the modules and connecting them with arrows. There are also several great books that explore real software design issues rather than that kind of pretentious BS. They all come from acknowledged all-time master software "architects," yet almost none of them ever find diagrams or "viewpoints" useful for saying what they want to say, and they all walk you through real issues in real programs:
http://www.amazon.com/Programming-Addison-Wesley-Professiona...
http://www.amazon.com/Paradigms-Artificial-Intelligence-Prog...
http://www.amazon.com/Structure-Interpretation-Computer-Prog...
http://www.amazon.com/Unix-Programming-Environment-Prentice-...
http://www.amazon.com/Programming-Environment-Addison-Wesley...
To me, the kind of approach pictured in the post seems like copying methods from electrical or civil engineering to appear more "serious," without giving due consideration to whether they are really helpful for real-world software engineering. The "software engineering" class that taught that kind of diagram-drawing was about the only university class I never got any use from; in fact, I had enough industry experience by the time I took it that it just looked silly.
Graham's On Lisp is a really interesting book
https://paulgraham.com/onlisptext.html
which is allegedly about programming with macros, but I'd say 80% of the time he implements something with closures and then makes a macro-based implementation that performs better. That 80% can be done in Python, and the other 20% you wouldn't want to do in Python because Python already has those features... And if you wanted to implement meta-objects in Python, you'd do it Pythonically.
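A typical instance of that 80%: the memoize utility, which On Lisp builds with closures (and then rebuilds with macros for speed). In Python a plain closure is the whole story — a sketch, not Graham's code:

```python
from functools import wraps

def memoize(fn):
    """Closure-based memoization: the cache lives in the enclosing
    scope, exactly like the Lisp version's captured hash table."""
    cache = {}
    @wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))  # 1548008755920, instant with memoization
```

No macro layer needed, because first-class functions and decorators already give you the hook the macro was providing.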
Graham unfortunately doesn't work any examples that involve complex transformations on expression trees, because those are hard, and if you want to work that hard you're better off looking at the Dragon book.
You can work almost all the examples in Norvig's Common Lisp book
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
in Python and today Norvig would advocate that you do.
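For instance, the book's pat-match pattern matcher goes over almost line for line. This is a rough Python sketch, not Norvig's code — variables are strings starting with "?", and a dict of bindings plays the role of the book's association list:

```python
FAIL = None  # the book uses `fail`; None works the same way here

def pat_match(pattern, inp, bindings=None):
    """PAIP-style pattern matching over nested lists of strings."""
    if bindings is None:
        bindings = {}
    # A variable matches anything, consistently with prior bindings.
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings if bindings[pattern] == inp else FAIL
        return {**bindings, pattern: inp}
    # Atoms (and empty lists) must match exactly.
    if pattern == inp:
        return bindings
    # Lists match element by element, threading the bindings through.
    if isinstance(pattern, list) and isinstance(inp, list) and pattern and inp:
        first = pat_match(pattern[0], inp[0], bindings)
        if first is FAIL:
            return FAIL
        return pat_match(pattern[1:], inp[1:], first)
    return FAIL

print(pat_match(["?x", "is", "?y"], ["python", "is", "acceptable"]))
# {'?x': 'python', '?y': 'acceptable'}
```

The magic is recursion, first-class data, and hashtables — nothing here needed a macro.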