Found in 4 comments on Hacker News
anchpop · 2019-07-16 · Original thread
This probably isn't it, but it sounds a lot like a compiler. If you're interested in that stuff you might be interested in checking out "Advanced Compiler Design and Implementation" [0] or the /r/programminglanguages subreddit.


nostrademons · 2016-04-27 · Original thread
It's been pretty common for compilers to require 3 different intermediate representations (parse tree, MIR, and LIR). My copy of Advanced Compiler Design and Implementation [1] (over 10 years old at this point) lists the 3-IR architecture on page 8. It references Sun's SPARC compilers, DEC's Alpha compilers, Intel's x86 compilers, and SGI's MIPS compilers as implementations that use it.

What has changed are the boundaries. Traditionally, semantic analysis (including typechecking) operated on the parse tree, which closely resembled the human-written source. Optimization operated on the MIR, which would then be lowered to an architecture-specific LIR via register allocation and instruction selection, after which another round of optimization (instruction scheduling, etc.) would be applied. The point of all of this was to support multiple language front-ends on top of a single compiler. In the 80s and 90s, compilers were often written by the hardware vendor, so they would tightly optimize the MIR and LIR for their own architecture, and use the parse-tree => MIR lowering to support multiple surface languages.
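To make that pipeline concrete, here's a toy sketch (all names hypothetical, and the "register allocator" is deliberately naive) of one expression passing through those three levels: a nested parse tree, flattened to three-address MIR, then mapped onto registers in a LIR:

```python
# Toy illustration of the three-IR pipeline:
# parse tree -> MIR (three-address code) -> LIR (register-based pseudo-assembly).

def lower_to_mir(node, temps, code):
    """Flatten a nested expression tree into three-address MIR tuples."""
    if isinstance(node, str):           # leaf: a variable name
        return node
    op, lhs, rhs = node
    a = lower_to_mir(lhs, temps, code)
    b = lower_to_mir(rhs, temps, code)
    t = f"t{len(temps)}"                # fresh temporary
    temps.append(t)
    code.append((op, t, a, b))          # e.g. ("mul", "t0", "b", "c")
    return t

def lower_to_lir(mir):
    """Map MIR names onto (unbounded) machine registers and pick concrete
    opcodes -- real register allocation and instruction selection are far
    more involved than this."""
    regs, lir = {}, []
    def reg(name):
        if name not in regs:
            regs[name] = f"r{len(regs)}"
            lir.append(("load", regs[name], name))
        return regs[name]
    opcode = {"add": "ADD", "mul": "MUL"}
    for op, dst, a, b in mir:
        ra, rb = reg(a), reg(b)
        regs[dst] = f"r{len(regs)}"
        lir.append((opcode[op], regs[dst], ra, rb))
    return lir

# parse tree for: a + b * c
tree = ("add", "a", ("mul", "b", "c"))
mir = []
result = lower_to_mir(tree, [], mir)
# mir: [("mul", "t0", "b", "c"), ("add", "t1", "a", "t0")]
lir = lower_to_lir(mir)
```

In the traditional vendor-compiler split described above, the optimizer would chew on the MIR tuples, while everything inside `lower_to_lir` would be architecture-specific.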

LLVM chose a LIR that's closer to what many compilers had been using as MIR, moved the optimization passes into the LIR, and hid the architecture-specific backends within the LLVM project itself, out of sight of language designers. It could do this because of open-source: with a shared body of compiler code owned by everyone and gradual consolidation of the hardware market, it became easier to contribute your backend to LLVM than to maintain your own compiler stack and fight for adoption. (GCC actually had a fairly similar architecture first, but the GCC IR was very difficult to comprehend if you weren't a GCC maintainer, which made it impractical as a compilation target for outside projects. They generated C code instead and let GCC compile it.) That in turn made it much easier to write a compiler and experiment with language design, since you only had to figure out how to translate your language to LLVM's IR rather than work out the details of scheduling and register allocation. That, in turn, allowed greater complexity in language features: Swift and Rust ship language features that go beyond what was cutting-edge research less than 10 years ago, and do so in production languages that you can use now. And so it's not surprising that they're now re-introducing a MIR to manage the additional complexity introduced by the new language features.
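A rough sketch of why that lowered the barrier so much: a front-end only has to walk its own tree and print IR text. The snippet below (a hand-rolled toy, not the real LLVM APIs -- real projects use the C++ API or bindings like llvmlite) emits LLVM-flavored SSA lines for the same `a + b * c` expression:

```python
# Hypothetical toy front-end: translate a tiny expression language into
# LLVM-style IR text, leaving register allocation and scheduling to the
# backend -- the division of labor described above.

def emit(node, lines, n):
    """Return the SSA name holding node's value; n is a one-item counter
    used to generate fresh %1, %2, ... names."""
    if isinstance(node, str):
        return f"%{node}"               # leaf: reference a named value
    op, lhs, rhs = node
    a = emit(lhs, lines, n)
    b = emit(rhs, lines, n)
    n[0] += 1
    dst = f"%{n[0]}"
    lines.append(f"  {dst} = {op} i32 {a}, {b}")
    return dst

lines = []
res = emit(("add", "a", ("mul", "b", "c")), lines, [0])
ir = "\n".join(lines)
# ir:
#   %1 = mul i32 %b, %c
#   %2 = add i32 %a, %1
```

Twenty lines of tree-walking instead of a register allocator and a scheduler per target -- that's the experimentation-friendly interface the comment is describing.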


nickik · 2014-11-15 · Original thread
A good intro to compilers is Engineering a Compiler. It focuses more on optimization than most intro books, and it's all SSA.

If you want to go all in on SSA optimization, there is a book called the Static Single Assignment Book, written by a whole list of compiler writers. It's not finished, but there is still a lot of information.

You can find it here:

Or you can go with the classic, Advanced Compiler Design & Implementation. See here:

All of them will teach you a lot about LLVM.
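If "SSA" is new to you: it just means every variable is assigned exactly once, which makes data flow explicit for the optimizer. A minimal hypothetical renamer for straight-line code (no branches, so no phi-functions) shows the idea:

```python
# Toy SSA renaming for straight-line code: each assignment to a variable
# gets a fresh numbered version, and uses refer to the latest version.
# Real SSA construction also inserts phi-functions at control-flow joins.

def to_ssa(stmts):
    """stmts: list of (dest, [operand, ...]) pairs. Returns renamed pairs."""
    version, out = {}, []
    for dst, operands in stmts:
        # rewrite each operand to its current version (if it has one)
        operands = [f"{v}{version[v]}" if v in version else v for v in operands]
        version[dst] = version.get(dst, 0) + 1
        out.append((f"{dst}{version[dst]}", operands))
    return out

# x = a + b; x = x * c; y = x + a
code = [("x", ["a", "b"]), ("x", ["x", "c"]), ("y", ["x", "a"])]
ssa = to_ssa(code)
# ssa: [("x1", ["a", "b"]), ("x2", ["x1", "c"]), ("y1", ["x2", "a"])]
```

Once every name has a single definition, optimizations like constant propagation and dead-code elimination become much simpler -- which is why the books above build everything on it.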

tlrobinson · 2008-01-25 · Original thread
Compilers is one of the few classes I really regret not taking in college. Fortunately, I discovered the unprotected URL to my university's online lectures site (no you can't have it) and might just have to watch this semester's class...

I started watching them, and the professor recommends this book (and the new edition of the Dragon Book):
