Measuring and evaluating the impact of compiler optimisations on power dissipation and energy consumption is a difficult task.
Traditionally, the impact of optimisations has been evaluated in terms of performance, measured in execution cycles, while energy consumption and power dissipation have been neglected. It has been argued that if an optimisation improves the performance of an application, the reduced execution time will also yield an overall reduction in energy consumption.
However, this assumption may not hold in the presence of complex instructions that cause significantly increased activity in the underlying microarchitecture. In such cases, trade-offs between performance gains and energy consumption need to be considered. There are also optimisations that leave the cycle count unchanged but modify the instruction encoding in order to optimise for power or energy.
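To make the trade-off concrete, consider the basic relation between energy, average power dissipation and execution time; the 10% and 20% figures below are purely illustrative assumptions, not measured values:

\[
E = \bar{P}\,t, \qquad E' = \bar{P}'\,t' = (1.2\,\bar{P})(0.9\,t) = 1.08\,E
\]

That is, if an optimisation shortens execution time by 10% but the denser, more complex instructions raise the average power dissipation by 20%, the "faster" code actually consumes 8% more energy than the original.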
In our research we tackle the problem of measuring and evaluating the effects of such optimisations by building simulation tools that allow us to quickly and accurately evaluate the power dissipation and overall energy consumption of an application. With such tools, a compiler can quickly choose an optimal set of optimisations from a huge search space with the aim of reducing power dissipation, and an operating system can select the best combination of energy-reducing measures (e.g. dynamic voltage and frequency scaling, DVFS). Furthermore, we want to devise a strategy for building such simulation tools with minimal effort, so that our results are practical for adoption by industry.
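As a rough illustration of what such a simulation tool computes, the following sketch estimates energy from an instruction trace using a simple instruction-level cost model (per-instruction base costs plus an inter-instruction switching overhead, in the spirit of Tiwari-style models). All cost values, opcodes and traces are hypothetical placeholders, not characterisation data from a real processor or from our own tools.

```python
# Hypothetical per-instruction base energy costs (nanojoules per execution).
BASE_COST_NJ = {
    "add": 1.0,
    "mul": 2.6,
    "load": 3.1,
    "store": 3.4,
    "branch": 1.8,
}

# Hypothetical inter-instruction ("circuit-state") overhead paid whenever the
# opcode changes between consecutive instructions.
SWITCH_OVERHEAD_NJ = 0.4


def estimate_energy(trace):
    """Estimate total energy (nJ) for a trace of executed opcodes."""
    total = 0.0
    prev = None
    for op in trace:
        total += BASE_COST_NJ[op]
        if prev is not None and op != prev:
            total += SWITCH_OVERHEAD_NJ
        prev = op
    return total


if __name__ == "__main__":
    # Two hypothetical schedules of the same computation: the second executes
    # one instruction fewer but mixes opcodes more, so it pays more switching
    # overhead -- exactly the kind of trade-off a compiler would have to weigh.
    schedule_a = ["load", "load", "add", "add", "mul", "store"]
    schedule_b = ["load", "mul", "add", "store", "load"]
    print("schedule A:", estimate_energy(schedule_a), "nJ")
    print("schedule B:", estimate_energy(schedule_b), "nJ")
```

A real tool would derive the cost tables by characterising the target processor and would model microarchitectural activity in far more detail, but even this sketch shows how a compiler or operating system could rank candidate code versions by estimated energy rather than by cycle count alone.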