Found in 8 comments on Hacker News
josephcooney · 2017-02-03 · Original thread
I bought it from O'Reilly... it came with mobi and epub versions. http://shop.oreilly.com/product/0636920041528.do
torinmr · 2016-12-25 · Original thread
The SRE Book (http://shop.oreilly.com/product/0636920041528.do) has a number of chapters on load balancing that are well worth reading.

One thing that the book covers that I think this article glossed over is the fact that in sufficiently large systems there's never a single "load balancer" - instead there are many layers of load balancing systems at different levels of the stack. E.g.:

DNS load balancing -> high capacity network-level load balancing -> shared reverse HTTP proxy -> application server -> database (with a "load balancer" internal to the application server load balancing among DB replicas).
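That layered chain can be sketched roughly like this - a toy Python model in which each tier independently picks one target from the tier below. All names are invented for illustration, and each tier just round-robins; real systems add health checks, weights, and locality awareness at every layer.

```python
import itertools

def round_robin(targets):
    """Return a picker that cycles endlessly through a tier's targets."""
    cycle = itertools.cycle(targets)
    return lambda: next(cycle)

# Tiers, outermost first (all names hypothetical).
pick_vip = round_robin(["vip-1", "vip-2"])               # DNS -> network-level LB VIPs
pick_proxy = round_robin(["proxy-a", "proxy-b"])         # VIP -> shared reverse HTTP proxies
pick_app = round_robin(["app-1", "app-2", "app-3"])      # proxy -> application servers
pick_db = round_robin(["db-replica-1", "db-replica-2"])  # app server -> DB replicas

def route_request():
    """Trace one request through every load-balancing layer."""
    return [pick_vip(), pick_proxy(), pick_app(), pick_db()]

print(route_request())  # e.g. ['vip-1', 'proxy-a', 'app-1', 'db-replica-1']
```

The point of the layering is that each tier balances over a different failure domain, so no single component has to see (or survive) all the traffic.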

remus · 2016-11-14 · Original thread
This is mostly speculation based on having read this http://shop.oreilly.com/product/0636920041528.do

Hopefully someone who actually knows what they're talking about will be along shortly!

> Are they charged the same as external customers or do they get a 'wholesale' rate?

I'd be quite surprised if internal customers are charged a markup. Presumably the whole point of operating an internal service is that you lower the cost as much as possible for your internal customers.

> As internal clients, do they run under the same conditions as external clients? Or is there a shared internal server pool that they use?

From the above book, it seems that the hardware is largely abstracted away, so most services aren't really aware of servers. I assume there's some separation between internal and external customers, but at a guess that'd largely be because the external-facing services are forks of existing internal tools that have been untangled from other internal services.

> Do they get any say in the hardware or low-level configuration of the systems they use? (ie. if someone needs ultra low latency or more storage, can they just ask Joe down the hall for a machine on a more lightly loaded network, or with bunch more RAM, for the week?)

As above, the hardware is largely abstracted away. From memory, teams usually say "we think we need ~x hrs of cpu/day, y Gbps of network,..." then there's some very clever scheduling that goes on to fit all the services on to the available hardware. There's a really good chapter on this in the above book.
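A minimal sketch of that "declare resources, let the scheduler place you" model - teams state requirements, and a first-fit-decreasing pass packs services onto machines. All numbers and names here are invented, and real cluster schedulers (like the one described in the book) are far more sophisticated than this greedy bin-packing.

```python
def make_machines(n, cpu=16, ram=64):
    """Build a fleet of identical machines (capacities are hypothetical)."""
    return [{"name": f"m{i}", "cpu": cpu, "ram": ram} for i in range(n)]

# Each service declares what it thinks it needs (all figures invented).
services = [
    {"name": "search", "cpu": 10, "ram": 40},
    {"name": "cache",  "cpu": 6,  "ram": 30},
    {"name": "cron",   "cpu": 2,  "ram": 4},
    {"name": "web",    "cpu": 8,  "ram": 16},
]

def schedule(services, machines):
    """First-fit decreasing by CPU: place the biggest requests first."""
    placement = {}
    for svc in sorted(services, key=lambda s: s["cpu"], reverse=True):
        for m in machines:
            if m["cpu"] >= svc["cpu"] and m["ram"] >= svc["ram"]:
                m["cpu"] -= svc["cpu"]  # reserve the declared resources
                m["ram"] -= svc["ram"]
                placement[svc["name"]] = m["name"]
                break
        else:
            placement[svc["name"]] = None  # nothing fits: unschedulable
    return placement

print(schedule(services, make_machines(3)))
```

The upshot for the questions above: a team asking for "more RAM for the week" just changes its declared requirements, and the scheduler finds somewhere to put it - nobody hands out specific machines.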

> Do they have the same type of performance constraints as the ones encountered by gitlab?

Presumably it depends entirely on the software being written.

kyrra · 2016-11-14 · Original thread
I think this is because most companies do pager duty wrong. I highly recommend the Google SRE book[0] (notes here[1]; chapter 11 covers oncall/pager duty). One thing mentioned in this book is compensation for being oncall. At Google we get fairly decent pay compensation for holding the pager, enough that it can incentivize people to be on the rotation.

(I'm a software engineer at google who is oncall at this moment)

[0] http://shop.oreilly.com/product/0636920041528.do

[1] http://danluu.com/google-sre-book/

jzelinskie · 2016-10-15 · Original thread
If you liked this post and want to know the full details of this change in ops, please read the SRE book[0]. It's a great read for both devs and ops and can immediately help you make changes to company policy for the better.

[0]: http://shop.oreilly.com/product/0636920041528.do

geerlingguy · 2016-04-11 · Original thread
Looks like it's up on O'Reilly's site: http://shop.oreilly.com/product/0636920041528.do - also on Amazon.
sargun · 2016-04-11 · Original thread
This is still largely at commodity prices / performance points. It's been quite some time since any of their hardware has looked consumer-oriented, but comparing this to what enterprises buy, it's apples and oranges.

[1] http://shop.oreilly.com/product/0636920041528.do

[2] http://research.google.com/pubs/pub35290.html