Virtualization was one of those pioneering efforts that changed the dynamics of computing and delivered significant operational and energy efficiencies. There is, however, a misconception that Data Center energy efficiency measures stop with Virtualization.
From a server and storage point of view, two other measures can contribute to significant savings:
- Retirement of legacy servers
- Optimization between production and test & development systems
Servers hosting legacy applications that are no longer in use should be decommissioned, as they neither offer business value nor contribute to uptime. If they are being kept only to satisfy audit or compliance requirements for historical data, Information Lifecycle Management (ILM) strategies should be deployed instead: archive the data to low-cost storage and retire the hardware.
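As a minimal sketch of what such an ILM step could look like, the snippet below bundles a legacy application's data directory into a compressed, checksummed archive on cheaper storage before the host is decommissioned. The paths are hypothetical, and a real deployment would also define retention periods and retrieval procedures.

```python
import hashlib
import tarfile
from pathlib import Path

# Hypothetical locations: data of a decommission-candidate legacy app
# and a mount point for low-cost archive storage.
LEGACY_DATA = Path("/srv/legacy-app/data")
ARCHIVE_DIR = Path("/mnt/archive/legacy-app")

def archive_legacy_data() -> Path:
    """Bundle the legacy data into a compressed tarball and record its checksum."""
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    bundle = ARCHIVE_DIR / "legacy-app-data.tar.gz"

    # Compress the whole data directory into a single archive file.
    with tarfile.open(bundle, "w:gz") as tar:
        tar.add(LEGACY_DATA, arcname=LEGACY_DATA.name)

    # Store a SHA-256 checksum alongside the bundle so auditors can
    # verify integrity years later without the original server.
    digest = hashlib.sha256()
    with bundle.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    (ARCHIVE_DIR / "legacy-app-data.sha256").write_text(
        f"{digest.hexdigest()}  {bundle.name}\n"
    )
    return bundle

if __name__ == "__main__":
    print(f"Archived to {archive_legacy_data()}")
```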
Just as we observe that nearly 55% of power consumption comes from non-computing loads, which adversely impacts PUE (if only 45% of total facility power reaches the IT equipment, PUE works out to roughly 1/0.45 ≈ 2.2), our experience working with Data Centers shows that as much as 70% of server and storage power consumption goes to test & development systems. There are a few reasons for this:
- Older servers reaching end-of-life are reallocated for non-production use. However, they are energy inefficient and should be replaced with newer, more energy-efficient equipment. The Total Cost of Ownership (TCO) will almost certainly be lower, as in all likelihood the Annual Maintenance Contract (AMC) charges will also be lower (a back-of-envelope comparison follows this list).
- Full-sized production databases are cloned for test & development purposes when only a subset would suffice. Worse, just as many back-ups are taken of the clones as of the production databases themselves (see the subsetting sketch below)!
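To make the TCO point in the first bullet concrete, here is a rough cost comparison of keeping an aging server on maintenance versus replacing it. Every figure (power draw, electricity tariff, PUE, AMC charges, purchase price) is an assumed, illustrative value, not measured data; plug in your own numbers.

```python
# Illustrative 3-year cost comparison: keep an aging server vs. replace it.
# All numbers below are assumptions for the sake of the example.

HOURS_PER_YEAR = 24 * 365
TARIFF = 0.12          # assumed electricity price, $/kWh
PUE = 2.0              # assumed facility PUE (each IT watt costs ~2 watts at the meter)
YEARS = 3

def total_cost(avg_watts, annual_amc, purchase_price=0.0):
    """Energy + maintenance + (optional) acquisition cost over the period."""
    energy_kwh = avg_watts / 1000 * HOURS_PER_YEAR * PUE * YEARS
    return energy_kwh * TARIFF + annual_amc * YEARS + purchase_price

# Assumed profiles: an end-of-life server kept on AMC vs. a newer,
# more efficient replacement bought outright.
old_server = total_cost(avg_watts=500, annual_amc=2000)
new_server = total_cost(avg_watts=250, annual_amc=500, purchase_price=4000)

print(f"Keep old server : ${old_server:,.0f} over {YEARS} years")
print(f"Replace with new: ${new_server:,.0f} over {YEARS} years")
```

Under these assumed numbers the replacement comes out cheaper over three years, mostly through lower energy and AMC costs; with real site data the break-even point will of course shift.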
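On the second bullet, subsetting can be as simple as copying only a recent window of data into a test schema instead of cloning the whole database. The sketch below assumes a PostgreSQL database reached through psycopg2, with a hypothetical connection string, an existing `test` schema and an `orders` table carrying an `order_date` column; it is illustrative only, and a real pipeline would typically also mask sensitive columns.

```python
import psycopg2

# Hypothetical connection string; adjust to your environment.
PROD_DSN = "dbname=prod host=db.example.internal user=etl"

SUBSET_SQL = """
    -- Create a test copy containing only the last 90 days of data,
    -- instead of cloning (and then backing up) the full production table.
    DROP TABLE IF EXISTS test.orders_subset;
    CREATE TABLE test.orders_subset AS
    SELECT *
    FROM public.orders
    WHERE order_date >= now() - interval '90 days';
"""

def build_test_subset():
    # The connection context manager commits the transaction on success.
    with psycopg2.connect(PROD_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(SUBSET_SQL)
    conn.close()

if __name__ == "__main__":
    build_test_subset()
```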
Going beyond Virtualization, significant power reductions can be achieved through better operating procedures for test & development, retirement of legacy servers, replacement of older equipment with energy-efficient models and, of course, improvements to cooling.