Found in 5 comments on Hacker News
vsupalov · 2017-11-20 · Original thread
Good point. But that's not limited to automation. As with data engineering or software development, you're likely to overdo it and build something which doesn't agree with reality if you try to build the whole thing at once. Better to start by building the smallest possible deployment pipeline (a "skeleton pipeline", as it's called in the Continuous Delivery book, which is from 2010 but still the best thing on the topic out there) [1].

You start by covering the most essential needs, just enough that it becomes useful and usable. Then you iterate from there, building something which suits your team, company & product, learning and adapting in the process.


jjmiv · 2017-06-29 · Original thread
devops is more of a principle than a job. that's my opinion though.

i've been reading through this book and it does a decent job of covering the principles for CI/CD: builds, tests, and releases:

you'll run across various "thought leaders" in devops, and it's important to remember that a) each employer treats devops and CI/CD differently, and you'll want to learn their practices as you bring your own ideas to the culture, and b) you should form your own opinions; just because thought leaders and books are out there, it's still important to learn what you like to do and improve how you like to do it.

sciurus · 2016-08-29 · Original thread
If you haven't read it already, read Continuous Delivery.

MalcolmDiggs · 2015-04-10 · Original thread
Just started reading "Continuous Delivery" by Jez Humble, et al. Really wish I had grabbed this years ago; it's a great primer on what (for me) is a confusing topic.
joshpadnick · 2014-07-17 · Original thread
In my opinion, this book[1] is the authority on continuous integration and continuous deployment.

Continuous Integration is fundamentally about creating a tight feedback loop between your developers and your code. When you program in your IDE, the instant you write uncompilable code, you get red squigglies, so you're getting an instant feedback loop on something you just wrote.

CI is the same thing, but at a higher level. The instant you commit your code, some automated process should take over and start analyzing / compiling / testing your code, looking for things to give you feedback on. If your code doesn't even compile -- one of the first milestones of CI -- you should know that immediately.

Since you just committed it, making the fix is easy. Compare that to a developer who downloads your code the next day, can't compile it, comes and bugs you about it, etc...
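The commit-triggered flow described above can be sketched as a small post-commit script. This is a minimal, fail-fast sketch; `compile` and `run_tests` are illustrative stubs standing in for your real build commands (e.g. `mvn compile` / `mvn test` on a Maven project):

```shell
#!/bin/sh
# Sketch of a post-commit CI step: run each stage in order and stop at
# the first failure, so the committer gets feedback as early as possible.
set -e

# Stubs standing in for real build-tool invocations (assumptions, not a
# specific CI product's API):
compile()   { echo "compiling..."; }
run_tests() { echo "running tests..."; }

compile      # fail fast if the code doesn't even compile
run_tests    # then run the whole suite on every commit

echo "build passed"
```

In a real setup the CI server (Jenkins, etc.) runs something like this on every push and emails the committer when any stage exits nonzero.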

As far as some real-world use cases go, we just set up Jenkins for a new Java project we're writing. It runs an automated build that compiles and executes all unit tests on any commit to GitHub, on any branch. It's a little slower than I'd like -- our still-growing app takes a full 3 minutes to compile and give feedback.

But it's been great. For example, the GitHub client on Mac OS X doesn't recognize when I change uppercase letters to lowercase and vice versa, so while my local compiles worked fine, my repo actually had a failing build. Once I committed, I got an automated email within 5 minutes telling me the build failed, and I fixed it. Without CI, I might not have found out about that issue for weeks, making the change more difficult.

For production deployment, we're still in alpha, but we've got a 1-button push to deploy. Again, slower than it should be -- in this case 5 minutes -- but the automation is awesome and makes any deployment -- whether a hot fix or a new release -- that much more pleasant.

Regarding the performance, I see it as a win just to get anything automated, however slow it may be. Because once you're there, you can always look for ways to optimize it. For example, our current build process re-downloads dependencies every single time. This could clearly be cached. When it's a priority for us, we'll do it.
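That caching fix can be sketched in a few lines of shell. The `CACHE_DIR` path is illustrative, and the commented-out command assumes a Maven build (`-Dmaven.repo.local` is Maven's standard property for pointing at an alternate local repository); adapt it to whatever build tool you use:

```shell
#!/bin/sh
# Sketch: stop re-downloading dependencies on every CI build by reusing a
# persistent local repository that survives between builds.
CACHE_DIR="${HOME}/.ci-cache/m2"   # illustrative path, not a convention
mkdir -p "$CACHE_DIR"

# With Maven, point the build at the cache instead of a fresh temp dir:
#   mvn -Dmaven.repo.local="$CACHE_DIR" test
echo "dependency cache at: $CACHE_DIR"
```

The first build still downloads everything; every build after that only fetches dependencies that changed, which usually recovers a big chunk of that 3-minute feedback loop.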

