PUE (Power Usage Effectiveness): a valid "Green IT" indicator for the Cloud?
What is PUE?
The acronym is not well known in the IT world; it is probably only starting to be understood by data center experts. But in the current climate, it is likely to take center stage.
IT weighs more and more heavily in greenhouse gas (GHG) emissions, and the storage of information accounts for a significant share of that. The European Commission has estimated data center consumption in Europe at 76.8 TWh, i.e. 2.7% of EU electricity demand, with a projection of close to 100 TWh in 2030, i.e. an increase of 28%.
At this time of massive migration from on-premises infrastructures to the cloud, it is important to assess the potential benefits of this storage model from a "Green IT" angle as well.
PUE is defined by dividing the total amount of energy needed to run a data center by the energy needed to run the "equipment" it contains:
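Written as a formula, this gives:

$$\mathrm{PUE} = \frac{\text{Total facility energy}}{\text{IT equipment energy}}$$

A PUE of 1.0 would mean that every watt entering the facility reaches the IT equipment; real installations are always above 1.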
Since the energy required to run the servers themselves is largely incompressible when the equipment is recent, data center managers have an interest in focusing on the energy efficiency of the facility. PUE is a key indicator, obviously not the only one, for monitoring that efficiency.
In the total power of the installation, we will find:
- Data center hardware,
- Power supply components,
- Cooling systems,
- Lighting systems.
In the energy needed to power the computer equipment (a toy calculation follows this list), we will find:
- Storage equipment,
- Networking equipment,
- Control equipment.
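To make the ratio concrete, here is a toy calculation in Python. The figures are purely hypothetical annual readings; the component names simply mirror the two lists above.

```python
# Toy PUE calculation (hypothetical annual energy figures, in MWh).
facility_energy = {
    "it_equipment": 10_000,  # servers, storage, networking, control gear
    "power_supply": 900,     # UPS and distribution losses
    "cooling": 4_500,        # cooling systems
    "lighting": 100,         # lighting systems
}

total = sum(facility_energy.values())
pue = total / facility_energy["it_equipment"]
print(f"PUE = {total} / {facility_energy['it_equipment']} = {pue:.2f}")
# -> PUE = 15500 / 10000 = 1.55
```

With these made-up numbers, cooling dominates the overhead, which is why the improvement levers listed below target it first.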
The most obvious course of action is to try to lower the numerator, i.e. the energy required to operate the installation:
- By improving cooling systems,
- By using energy-saving lighting,
- By replacing obsolete hardware,
- ….
But in our view, it is above all the move to the Cloud that should be promoted, and especially to certain Clouds.
The Cloud and its "good" but variable PUE
Hyperscalers (whose data centers often exceed 5,000 servers: essentially Azure, Google Cloud, and AWS) have PUE scores close to 1. Other Cloud providers are much less virtuous, and most data centers sit between 1.6 and 1.8 (see Vertiv).
It is extremely difficult to calculate, but the PUE of an "on-prem" infrastructure, given the size of the installation and the baseline resources that must necessarily be deployed, is mechanically far higher.
The energy required for an on-prem infrastructure delivering an equivalent level of service has been put at a factor of up to 100 by a working group (ASHRAE Technical Committee 9.9).
The enthusiasm for the Cloud in companies stems both from the immense plasticity of a Cloud infrastructure, whose implementation requires (almost) no effort, and from its far superior energy efficiency, especially when it comes to hyperscalers!
So why are some people still slow to migrate?
The oldest companies have the most complicated on-prem legacy systems. They sometimes carry mainframe heritages: COBOL, DB2, Oracle databases with PL/SQL that has accumulated over the years… These architectures are inextricably linked to the company's ERP, CRM, or core system. And at the other end of the scope, there are myriads of tools that no one really knows are still useful or not.
To move effectively to the Cloud, the challenge for these companies will be twofold:
- Gaining a view of what is useful and what is not, in order to carry out massive decommissioning, one of the cornerstones of preparing a migration,
- Gaining a granular view of the data transport/transformation processes within the legacy systems.
1- Indeed, a detailed understanding of the data processing chains, combined with log analysis, makes it possible to detect dead branches and simplify the source architecture. You then only have to "move" to the Cloud what really needs to be moved. Either way, it is a good first step.
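Point 1 can be framed as a graph reachability problem. Below is a minimal sketch: the lineage map and the set of datasets observed in access logs are hypothetical inputs, as no specific tool or format is prescribed here.

```python
# Minimal sketch: detecting "dead branches" in a legacy data architecture.
# Inputs (both hypothetical here):
#   lineage  - for each dataset/flow, the upstream datasets it reads from
#   consumed - datasets that access logs show are actually read downstream
from collections import deque

lineage = {
    "report_sales": ["agg_sales"],
    "agg_sales": ["raw_orders", "raw_customers"],
    "agg_returns": ["raw_orders"],  # produced, but never consumed
    "raw_orders": [],
    "raw_customers": [],
}
consumed = {"report_sales"}

def live_nodes(lineage, consumed):
    """Walk upstream from the consumed datasets; whatever is reached is live."""
    live, queue = set(consumed), deque(consumed)
    while queue:
        node = queue.popleft()
        for upstream in lineage.get(node, []):
            if upstream not in live:
                live.add(upstream)
                queue.append(upstream)
    return live

dead = set(lineage) - live_nodes(lineage, consumed)
print("candidates for decommissioning:", sorted(dead))
# -> candidates for decommissioning: ['agg_returns']
```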
2- In addition, a detailed understanding of the data processing chains will also make it possible to rebuild identical flows in the Cloud quite simply. The difficulty will therefore be to perform a technical parsing (a scan) that deconstructs the complexity ("reverse engineering"), then to rebuild the flows in the Cloud (BigQuery for Google, Redshift for Amazon, Azure SQL for Azure…). This can be done in a quasi-automated way by "translating" the source code into generic SQL and attaching specific scripts to reproduce the subtleties of the source code.
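As an illustration of this kind of quasi-automated translation, here is a minimal sketch using the open-source sqlglot library, one possible tool among others (not named above). The Oracle fragment is invented, and the exact output depends on the library version.

```python
# Minimal sketch: transpiling a legacy Oracle SQL fragment to Cloud dialects.
# pip install sqlglot
import sqlglot

# Hypothetical fragment extracted from a legacy PL/SQL flow.
oracle_sql = (
    "SELECT NVL(amount, 0) AS amount, "
    "TO_CHAR(order_date, 'YYYY-MM-DD') AS order_day "
    "FROM orders"
)

# Rebuild the same statement for each target Cloud warehouse.
for dialect in ("bigquery", "redshift", "tsql"):
    print(dialect, "->", sqlglot.transpile(oracle_sql, read="oracle", write=dialect)[0])
```

Functions with no one-to-one equivalent in the target dialect are exactly where the "specific scripts" mentioned above come in.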
Much will remain to be done, but our point is that current techniques for introspecting the systems in place make it possible to carry out projects of this kind, even if they seem excessively complex at first sight.
We believe that the Cloud is accessible to all types of companies, even those weighed down by a seemingly impregnable legacy system. The first step will be to analyze and simplify that legacy with tools, not with the "paper and pencil" method. This will be a real vector for modernizing the information system, but also a step towards improving the company's real Power Usage Effectiveness. PUE also varies from one Cloud provider to another, and ecological ambition must weigh in your choice, because it will be a factual contribution to reducing GHG emissions.