That is, other programming languages have adopted many of the features that made Lisp special: garbage collection (Rustifarians are learning the hard way that garbage collection is the most important feature for building programs out of reusable modules), facile data structures (like the scalar/list/dict trinity), higher-order functions, dynamic typing, a REPL, and so on.
People struggled to specify programming languages up until 1990 or so. Some standards were successful, such as FORTRAN's, but COBOL was a hot mess that people filed lawsuits over, PL/I was a failure, etc. C was a clear example of "worse is better," with some kind of topological defect in the design such that there's a circularity in the K&R book that makes it confusing if you read it all the way through. Ada was a heroic attempt to write a great language spec, but people didn't want it.
I see the Common Lisp spec as the first modern language spec written by adults, and it inspired the Java spec, the Python spec, and pretty much every language developed afterwards. Pedants will consistently deny that the spec is influenced by the implementation, but that's absolutely silly: modern specifications are successful because somebody thinks through questions like "How do we make a Lisp that's going to perform well on the upcoming generation of 32-bit processors?"
In 1980 you had a choice of Lisp, BASIC, PASCAL, FORTRAN, FORTH, etc. C wound up taking PL/I's place. The gap between (say) Python and Lisp is much smaller than the gap between C and Lisp. I wouldn't feel that I could do the macro-heavy stuff in
https://www.amazon.com/Lisp-Advanced-Techniques-Common/dp/01...
in Python but I could write most of the examples in
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
pretty easily.
Maybe a use case for new AI models could be creating more old-fashioned expert systems, written in Lisp or Prolog, that are easier for humans to audit. Everything tends to come back full circle.
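A sketch of why such systems are auditable: the rules are plain data and the chain of reasoning is explicit. This is my own toy Common Lisp example, not any particular system:

    ;; Each rule: (premises -> conclusion). A human can read the whole
    ;; knowledge base directly.
    (defparameter *rules*
      '(((fever cough) -> flu)
        ((flu rest)    -> recovering)))

    (defun fire-rules (facts)
      ;; Forward-chain: keep firing rules until no new fact is added.
      (loop
        (let ((changed nil))
          (loop for (premises nil conclusion) in *rules*
                when (and (subsetp premises facts)
                          (not (member conclusion facts)))
                  do (push conclusion facts)
                     (setf changed t))
          (unless changed (return facts)))))

    ;; (fire-rules '(fever cough rest))
    ;; => (RECOVERING FLU FEVER COUGH REST)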
There is a fairly simple program in
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
that solves word problems using the methods of the old AI. The point is that it is efficient and effective to use real math operators and not expect to fit numbers through the mysterious bottleneck of neural encoding.
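To make that concrete, here is a toy in the same spirit (my own sketch, far smaller than Norvig's STUDENT program): turn one fixed sentence shape into an equation, then solve it with exact arithmetic.

    ;; Handles "the sum of N and x is M": build (= (+ N x) M), then
    ;; isolate x with a real subtraction -- no neural encoding of numbers.
    (defun parse-problem (words)
      (let ((n (parse-integer (nth 3 words)))
            (m (parse-integer (nth 7 words))))
        `(= (+ ,n x) ,m)))

    (defun solve-for-x (equation)
      (destructuring-bind (eq (plus n var) m) equation
        (declare (ignore eq plus var))
        (- m n)))

    ;; (solve-for-x (parse-problem '("the" "sum" "of" "4" "and" "x" "is" "11")))
    ;; => 7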
2. Generative Programming (https://www.amazon.com/Generative-Programming-Methods-Tools-...)
3. PAIP (https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...)
4. Lisp In Small Pieces (https://www.amazon.com/Lisp-Small-Pieces-Christian-Queinnec/...)
5. The C Programming Language (https://www.amazon.com/Programming-Language-Dennis-M-Ritchie...)
While it deals with classical AI techniques, it is worth working through this book, especially the AI example chapters where Norvig teaches how to go from specification to implementation and iterate on the design to fix problems. Backed by Common Lisp, which allows this quick iteration by getting out of your way, this book is one way to fall in love with programming.
Warning: once you are done with this book, be prepared to handle less powerful systems, and I am not even implying that CL is the most powerful programming environment.
This book goes into some of the more advanced uses of macros, and I don't believe most of it carries over to other "lisps".
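For anyone who hasn't read it, the title itself names the book's foundational idiom: a closure as private state. A minimal example of the pattern (mine, not the book's):

    (defun make-counter ()
      (let ((count 0))    ; LET over...
        (lambda ()        ; ...LAMBDA: COUNT is captured, private state
          (incf count))))

    ;; (defparameter *c* (make-counter))
    ;; (funcall *c*) => 1
    ;; (funcall *c*) => 2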
I really loved the section on reader macros!! That's a topic that doesn't get enough attention from people coming to Common Lisp.
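For anyone who hasn't played with them: a reader macro hooks the reader itself, changing how text becomes forms before ordinary macros ever run. The classic small demonstration is making [a b c] read as (list a b c):

    ;; Make ] terminate a list the way ) does, then make [ collect
    ;; everything up to the matching ] and wrap it in LIST.
    (set-macro-character #\] (get-macro-character #\)))

    (set-macro-character #\[
      (lambda (stream char)
        (declare (ignore char))
        (cons 'list (read-delimited-list #\] stream t))))

    ;; Now [1 2 (+ 1 2)] reads as (list 1 2 (+ 1 2)) => (1 2 3)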
I don't believe Clojure, for example, supports user-defined reader macros; at least I can't remember it having them the last time I used it (circa 2011).
EDIT: it looks like Clojure does have reader macros now. Clojure just keeps getting better :)
http://en.wikibooks.org/wiki/Learning_Clojure/Reader_Macros
In addition to Let Over Lambda, my Common Lisp reading list includes:
http://www.amazon.ca/Lisp-Small-Pieces-Christian-Queinnec-eb...
http://www.amazon.ca/Compiling-Continuations-Andrew-W-Appel-...
and http://www.amazon.ca/Paradigms-Artificial-Intelligence-Progr...
I'd love to hear if anyone else has book recommendations in a similar vein. I'm in the middle of a month off to read books and research papers, so this is pretty timely for me. :)
https://github.com/onetrueawk/awk - the UNIX philosophy of small tools, DSLs, CS theory: state machines / regular expressions, Thompson's construction ...
https://github.com/mirrors/emacs - Both a program and a VM for a programming language; hooks, before/after/around advice, modes, asynchronous processing with callbacks, ... Worth thinking about the challenges of designing interactive programs for extensibility (the CLOS sketch after this list shows the before/after/around idea in miniature).
https://github.com/rails/rails - Metaprogramming DSLs for creating powerful libraries; again a lesson in hooks (before_save etc.) and advice (around_filter etc.), ...
https://github.com/git/git - The distributed paradigm, lots of CS theory again: hashing for ensuring consistency, DAGs everywhere, ... By the way, the sentence "yet the underlying git magic sometimes resulted in frustration with the students" is hilarious in the context of a "software architecture" course.
One of the computer algebra systems - the idea of a canonical form (http://en.wikipedia.org/wiki/Canonical_form)
One of the computer graphics engines - linear algebra
...
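The hook/advice idea mentioned for emacs and rails above is easy to see in miniature with CLOS method combination. A sketch with made-up names, not code from either project:

    (defgeneric save (record))

    (defmethod save ((record t))            ; primary behavior
      (format t "saving ~a~%" record))

    (defmethod save :before ((record t))    ; like Rails' before_save hook
      (format t "validating ~a~%" record))

    (defmethod save :around ((record t))    ; like around advice / around_filter
      (format t "begin transaction~%")
      (call-next-method)
      (format t "commit~%"))

    ;; (save 'user) prints: begin transaction / validating USER /
    ;; saving USER / commit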
There are loads of things one can learn from those projects by studying the source in some depth, but I can't think of anything valuable one could learn by just drawing pictures of the modules and connecting them with arrows. There are also several great books that explore real software design issues rather than that kind of pretentious BS. They all come from acknowledged all-time master software "architects," yet almost never find diagrams or "viewpoints" useful for saying what they want to say, and they all walk you through real issues in real programs:
http://www.amazon.com/Programming-Addison-Wesley-Professiona...
http://www.amazon.com/Paradigms-Artificial-Intelligence-Prog...
http://www.amazon.com/Structure-Interpretation-Computer-Prog...
http://www.amazon.com/Unix-Programming-Environment-Prentice-...
http://www.amazon.com/Programming-Environment-Addison-Wesley...
To me, the kind of approach pictured in the post seems like copying methods from electrical or civil engineering to appear more "serious," without giving due consideration to whether they are really helpful for real-world software engineering. The "software engineering" class that taught this kind of diagram-drawing was about the only university class I never got any use from; in fact, I had enough industry experience by the time I took it that it just looked silly.
Talking about stuff like this is bound to sound esoteric, I think, so I want to put this disclaimer upfront: I detest esotericism. I can only assume that your problems are similar to mine, so I can only suggest what works for me. It might not completely work out for you in the end, but it's worth a try for sure.
Concentration: the problem of not being able to keep distracting thoughts away can be lessened with meditation. I came across this suggestion in the book Pragmatic Thinking and Learning [1] and have found an excellent CD to listen to called Guided Mindfulness Meditation [2] by Jon Kabat-Zinn.
I tend to avoid meditation because for a while I seem to do fine, and as long as I do fine it just feels like a waste of time, time that I could invest in reading a book. But eventually I always end up having so many distracting thoughts that I cannot learn anymore. I've now had this problem crop up often enough, with meditation always helping, that I'm a lot more willing to spend the time and meditate. I want to emphasize that for *me* it was necessary to get to the dead end and suffer from it to become willing to change something. Maybe you can relate.
Structure: well, the way you write about it sounds a little bit rigid to me. I tensed up imagining all that structure you strive for, and I'm thinking you should relax a little bit. Or at least I should (and do). So maybe we are different in this regard.
I do think you should sit back a bit and think about what really interests you deep down in your heart. I assume you've been working too much on hopelessly boring stuff, because with that I can relate again. I've been working on a little server in Erlang, but at some point I couldn't bring myself to work on it further. Well, I could, but all the time I felt something was wrong.
As I'm happy to learn interesting programming languages and have heard all the hype about Lisp for so long (I'm looking at you, pg), I finally gave in and started reading Practical Common Lisp [3] and now Paradigms of Artificial Intelligence Programming [4], and what can I say. I see now that what disappoints me in Erlang, but also in other languages, is having one paradigm and/or a rigid set of rules forced upon me. In the case of Erlang that might be perfectly fine, as the language can make certain guarantees that way. I've realized, though, that I would much rather enjoy the Lisp-ish freedom while molding a solution. So this is my story of disappointment and fresh wind.
One quick addition at the end: an xkcd comic [5] describes a solution (see the alt text of the image) that delays access to certain websites (like reddit and HN for me) but does not block them completely. It just delays the access (more discussion on the xkcd blog [6]). This serves to destroy the notion of instant reward that these stupid little bits of new information might give you, however irrelevant they may be. I've found this helpful because sometimes in the past I've procrastinated the hell out of the day. I got fed up with repeatedly spending hours on unproductive stuff and feeling sorry for the time afterwards. See the pattern? I needed to run into this problem several times before I decided that I had to change something. I don't want to make some grand point here; I just find this pattern interesting.
What I have done is take an existing little Chrome extension called delaybot, which by default only delays for rand(1.5) seconds, and change the delay to 30 seconds. This worked wonders in the beginning. I say in the beginning because I've since disabled the extension, as it is now getting in my way. No, this is not the procrastinator disabling a helpful little tool. :-) I've found that since I picked up meditation again I don't run into this problem anymore anyway. I also tend to bookmark away a lot of actually interesting discussions to read later, which of course I never do. I do this bookmarking and closing of tabs because otherwise I accumulate too many tabs.
Not all is great, though: the article made me realise that I'm a little bit too hard on myself when I'm exerting willpower. I try to go through the aforementioned Lisp books fast (as there are more still to come), and at some point I notice that I can't bring myself to read much more. To me this looks similar to the cookie experiment, where a group of people is less productive after exerting willpower in a previous task.
So, to conclude: even if not all is roses, I can say with certainty that meditation is the single most helpful tool for increasing my productivity. It changes me from being helpless to being more in control of what I'd like to do with my time.
Regarding your lack of passion: man, search your feelings. If you find something that really interests you, you probably won't think much about what other people could do better than you. That AI book [4] I'm reading? It features ancient techniques at the point where I am right now, but it's still a great read and I'm learning a heck of a lot. That's what keeps me going. Also, Lisp.
Phew, that was long.
I would love to hear feedback. :-)
[1] http://www.amazon.com/Pragmatic-Thinking-Learning-Refactor-P...
[2] http://www.amazon.com/Guided-Mindfulness-Meditation-Jon-Kaba...
[3] http://www.gigamonkeys.com/book/
[4] http://www.amazon.com/Paradigms-Artificial-Intelligence-Prog...
[6] http://blog.xkcd.com/2011/02/18/distraction-affliction-corre...
EDIT: I've changed the formatting because it renders with long lines otherwise.
"Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp" by Peter Norving
http://www.amazon.com/Paradigms-Artificial-Intelligence-Prog...
Guess Lisp is 'really' making a comeback.
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
in Python without doing anything too out of the ordinary. I just finished
https://www.amazon.com/Lisp-Advanced-Techniques-Common/dp/01...
and came to almost the same conclusion: most of the macro use in that book falls into the syntactic-sugar or performance-optimization category. If I didn't have a lot of projects in the queue and people demanding my time, I'd try coding up the ATN and Prolog examples in Python (I sorta kinda did the "CLOS" example, in that I built a strange kind of meta-object facility that made "objects" backed by RDF triples).
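To illustrate the syntactic-sugar category (my own trivial example, not from the book): a WHILE macro adds nothing you couldn't write by hand, it just tidies the spelling:

    (defmacro while (test &body body)
      ;; Pure sugar over LOOP: no new capability, just nicer syntax.
      `(loop (unless ,test (return))
             ,@body))

    ;; (let ((i 0))
    ;;   (while (< i 3)
    ;;     (print i)
    ;;     (incf i)))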
In Java I did enough hacking on this project
https://github.com/paulhoule/ferocity/blob/main/ferocity0/sr...
to conclude I could create something that people who think "Java Sux" and "Common Lisp Sux" would really hate. If I went forward on that, it would be to try "code golf, not counting POM file size" by dividing ferocity into layers (like that ferocity0, which is enough to write a code generator that can stub the whole stdlib) so I could use metaprogramming to fight back against the bulkification you get from writing Java as typed S-expressions. (I'm pretty sure type erasure can be dealt with if you aren't interested in JDK 10+ features like var and switch expressions; on the other hand, a system like that doesn't have to be able to code-generate syntactic-sugar sorts of structures, because ferocity is all about being able to write syntactic sugar.)