The soft side of performance

Many suggestions and guidelines have been written about improving the performance of software, and very little code is written these days without some measures in place to increase performance. Unfortunately, the steps taken are usually laboratory-like: they work under test conditions, but not in real life, when the end users have to work with the software.
I would like to find out if anyone is familiar with a softer approach to software performance. For instance:

  • Is neat code (code that adheres to coding and naming conventions) more likely to perform better, because everyone on the team understands it and is therefore able to design highly performant software?
  • Is there empirical evidence that static code analysis leads to better-performing software; again, not just in the lab, but in real life?
  • Does something similar hold true for dynamic code analysis?

Coding guidelines are for the benefit of humans; the processor doesn't care at all what you name your variables, but being able to see their type at a glance can make humans less puzzled and hence introduce fewer bugs. Some guidelines can actually make code slower.
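To make that last point concrete, here is a minimal Python sketch (the guideline and both function names are hypothetical) showing how a well-intentioned rule such as "never mutate your arguments" trades speed for safety:

```python
import timeit

# Hypothetical guideline: "functions must never mutate their arguments."
# Following it means defensively building a copy, which costs time and memory.

def scale_in_place(values, factor):
    # Mutates the caller's list: fast, but violates the guideline.
    for i in range(len(values)):
        values[i] *= factor
    return values

def scale_defensively(values, factor):
    # Returns a new list, leaving the argument untouched: safer, but slower.
    return [v * factor for v in values]

data = list(range(100_000))
print(timeit.timeit(lambda: scale_in_place(data, 1.0), number=100))
print(timeit.timeit(lambda: scale_defensively(data, 1.0), number=100))
```

Neither version is "wrong"; the point is that the stylistic rule has a measurable runtime cost that a team would only notice by actually timing it.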

Whether analysis really helps you optimize your program for real-world performance depends on what you do with the analysis and what you mean by 'real world'. Users often care about things developers didn't consider, such as load time and memory use.
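As an illustration, here is a small Python sketch of measuring two of those user-visible metrics, load time and peak memory, for a hypothetical piece of startup work (the workload itself is made up for the example):

```python
import time
import tracemalloc

# A lab benchmark might only time the hot loop; end users also feel
# startup cost and memory footprint. This sketch measures both for a
# hypothetical workload: building a large lookup table at "load time".

tracemalloc.start()
start = time.perf_counter()

# Hypothetical startup work the user has to sit through.
lookup_table = {i: str(i) for i in range(1_000_000)}

elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"load time:   {elapsed:.2f} s")
print(f"peak memory: {peak / 1_048_576:.1f} MiB")
```

A program that wins the lab benchmark can still lose on these numbers, which is exactly the lab-versus-real-life gap the question is about.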

Nonetheless, I don't think research in computer science has delved into this area, and not all performance guidelines actually improve performance. But one could conduct an experiment with two teams, one adhering to strict coding guidelines and the other working without any such guidelines, and test the hypothesis that the team following the guidelines produces, on average, better-performing code.
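If someone actually ran that experiment, the analysis could be as simple as a two-sample t-test on the measured runtimes. A minimal sketch, using simulated placeholder data rather than real measurements:

```python
import numpy as np
from scipy import stats

# Sketch of analysing the proposed experiment. The numbers below are
# simulated placeholders, not real results: each array would hold the
# measured runtimes (in seconds) of the programs each team produced.
rng = np.random.default_rng(seed=42)
team_with_guidelines = rng.normal(loc=1.8, scale=0.3, size=30)
team_without_guidelines = rng.normal(loc=2.0, scale=0.4, size=30)

# Welch's t-test: do the mean runtimes of the two teams differ?
t_stat, p_value = stats.ttest_ind(
    team_with_guidelines, team_without_guidelines, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The hard part would not be the statistics but controlling for everything else that differs between two teams, which may be why nobody seems to have published such a study.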