For example, Austin's Measuring and Managing Performance in Organizations gives a helpful three-party model for understanding how simplistic measurement-by-numbers goes awry. He starts with a Principal and an Agent, then adds a Customer as the third party; the net effect is that as the Principal becomes more and more energetic about enforcing a numerical management scheme, the Customer is at first better served and then served much worse.
As a side effect he recreates, or at least overlaps with, the "Equal Compensation Principle" (described in Milgrom & Roberts' Economics, Organization and Management). Put briefly: give a rational agent more than one thing to do, and they will do only the thing that is most profitable for them. To avoid this problem you need perfectly equal compensation across their alternatives, but that's flawed too, because you rarely want an agent to divide their time into exactly equal shares.
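A toy sketch (my own construction, not from Milgrom & Roberts) makes the corner-solution logic concrete: with linear per-task pay rates and a fixed time budget, a rational agent piles the whole budget onto the single best-paying task, so any task paid even slightly less gets zero effort:

```python
# Toy model of the Equal Compensation Principle (illustrative only).
# An agent with a fixed time budget and linear per-task pay rates
# maximises income by spending all time on the best-paid task.

def allocate_time(rates, budget=1.0):
    """Return the income-maximising time split (ties: first task wins)."""
    best = max(range(len(rates)), key=lambda i: rates[i])
    return [budget if i == best else 0.0 for i in range(len(rates))]

# Unequal pay: the lower-paid task is ignored entirely.
print(allocate_time([3.0, 5.0]))   # [0.0, 1.0]

# Even exactly equal pay doesn't help here -- the split is arbitrary,
# not the mix of work the principal actually wanted.
print(allocate_time([4.0, 4.0]))   # [1.0, 0.0]
```

The point is that nothing short of perfectly equal rates keeps both tasks in play at all, and even then the resulting split is arbitrary -- which is exactly the fragility the principle describes.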
Then there's the annoyance that most goals that get set are just made the hell up. Just yanked out from an unwilling fundament. Which means you're not planning, you're not being objective, you're not creating comparative measurement. It's a lottery ticket with delusions of grandeur. In Wheeler & Chambers' Understanding Statistical Process Control, the authors emphasise that you cannot improve a process that you have not first measured and then stabilised. If you don't have a baseline, you can't measure changes. If the process isn't stable, you can't tell whether changes are meaningful or just noise. As they put it, more pithily:
> This is why it is futile to try and set a goal on an unstable process -- one cannot know what it can do. Likewise it is futile to set a goal for a stable process -- it is already doing all that it can do! The setting of goals by managers is usually a way of passing the buck when they don't know how to change things.
That last sentence summarises pretty much how I feel about my strawperson impressions of OKRs.
https://www.amazon.com/Understanding-Statistical-Process-Con..., though I prefer Montgomery's Introduction to Statistical Quality Control as a much broader introduction with less of an old-man-yells-at-cloud vibe -- https://www.amazon.com/Introduction-Statistical-Quality-Cont...
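Wheeler's baseline-then-stability argument can be made concrete with an XmR ("individuals and moving range") chart, the technique his book centres on. The sketch below is my own, with made-up numbers; the 2.66 scaling constant is the standard XmR value:

```python
# Minimal XmR (process behaviour chart) limits, illustrating how a measured,
# stable baseline lets you separate signal from routine noise.

def xmr_limits(values):
    """Centre line and natural process limits from a baseline of individual values."""
    centre = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2, where d2 = 1.128 is the bias constant for subgroups of size 2
    return centre, centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

def is_signal(value, lcl, ucl):
    """A point outside the natural process limits is a signal, not noise."""
    return value < lcl or value > ucl

# Hypothetical weekly cycle times (days); limits come out around 2.1 and 7.0.
baseline = [4.2, 5.1, 3.8, 4.9, 4.4, 5.3, 4.1, 4.7]
centre, lcl, ucl = xmr_limits(baseline)
```

Every baseline point falls inside the limits, so the week-to-week wiggles are noise; only something like a 9.0 would be a change worth explaining. And setting a "goal" of, say, 3.0 for this process is asking it for something it demonstrably cannot do -- which is Wheeler's point.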
This was the only book worth reading when I was researching metrics for our team at work.
TL;DR: Don't use performance metrics for human beings. You almost certainly won't get what you want, and you'll probably get nasty side effects instead.
If anything, this is a brilliant example of how applying measurable incentives can distort motivations and make people do stupid things to please whatever metrics are being measured.
Left to their own devices, these very smart and ambitious people would no doubt make up their own minds about the value of their time and ensure they don't waste it milling about when they're busy -- going to lunch early, late, or in the middle, or lingering to chat with someone in the queue when they're not. Instead, they're now forcing themselves to fit a stupid "penalty window" to save a few bucks, because that's what the incentive system in place dictates.
Measurements are a very, very dangerous beast. Apply with caution.
(Great book on the topic: http://www.amazon.co.uk/Measuring-Managing-Performance-Organ... )
Dysfunction may arise when you are unable to measure all the relevant dimensions of the work being performed. People will often shift their effort to the dimensions that are measured and ignore the remaining tasks, no matter how important they are. The result is less value delivered than in a scenario with no measurement-based incentives at all.
The author singles out software development as an area especially prone to dysfunction.