According to Hennessy & Patterson, 90% of performance gains come from better architecture. That's the tock. The other 10% come from clock speed [1], which is a side effect of better fab, the tick. But you mostly want better fab to get more transistors so you can build a better architecture.
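A back-of-the-envelope way to see how that attribution works (numbers here are made up for illustration, not taken from the book): speedups compose multiplicatively, so once you know the clock contribution you can divide it out to get the architecture contribution.

    # Hypothetical figures, purely illustrative; not data from Hennessy & Patterson.
    overall_speedup = 3.0   # new chip vs. old chip on the same workload
    clock_speedup = 1.2     # new clock / old clock (the "tick" side effect)

    # Gains compose multiplicatively, so the architectural ("tock") share
    # is whatever remains after dividing out the clock gain.
    arch_speedup = overall_speedup / clock_speedup
    print(f"architecture: {arch_speedup:.2f}x, clock: {clock_speedup:.2f}x")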
So here's Intel's problem. They sink a huge amount of money (fab costs follow a Moore's Law of their own) into upgrading their fabs before they can release a new chip. Then they have nothing new to sell for a year or more while they fit the new architecture to the new process.
In the nineties they smoothed out demand by selling up-clocked versions of old chips, training consumers to think that more MHz = more faster. That's less effective now that clock speeds have stabilized, so they've been pitching lower power consumption instead. It's not really the same pitch, because "faster CPU" in the 1990s really meant "it can run new software."
It will be interesting to see how this model fares in the cloud era. If most CPUs end up living in data centers, most purchasers will choose based on power consumption and pay less attention to architecture.
[1] Until clock speed stopped changing. Clock speeds may even drop for a while to make parallel engineering easier. cf. http://www.amazon.com/Computer-Architecture-Quantitative-App...
You're welcome.