My previous article dealt with 'Green IT' and 'IT for Green'. I received a very thoughtful comment pointing out that 'Green IT' is about what we can do in the IT department to improve our own environmental footprint, while 'IT for Green' is about what ICT can do to reduce the environmental footprint outside its normal scope. Examples include reducing the need for travel, reducing energy consumption through telemetering, optimizing the fuel consumption of fleet vehicles, telemedicine for patients and optimizing supply chain processes with RFID technologies.
Today I want to focus on 'Green IT'. One part of this is 'paper-free workflow', including EDI, printing on both sides of the page for handouts and plenty of other initiatives. The other part is related to the IT infrastructure, and it's this I want to look at now.
The first step is to virtualize servers, replacing physical servers dedicated to a single application with virtual machines. At Orange we have created more than 10,000 virtual machines. By doing so, we reduced the number of physical servers by a factor of 10 and increased the utilization of each remaining server from 15% to 60%. This gave us a saving of 30 GWh, which is enough electricity to supply a city of 30,000 people. The second step is to deploy virtual desktops, which helps postpone desktop or laptop renewal and reduce waste. At Orange we have enthusiastically deployed desktop virtualization for nearly 100,000 users.
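As a rough plausibility check, the consolidation arithmetic above can be sketched in a few lines. The VM count and 10:1 ratio come from the text; the average power draw per retired server is an illustrative assumption, not an Orange figure:

```python
# Plausibility check of the server-consolidation savings described above.
# The 10,000 VMs and the 10:1 consolidation ratio are from the text;
# the ~380 W average draw per retired server is an illustrative assumption.

vms = 10_000                    # virtual machines created (from the text)
hosts = vms // 10               # physical hosts after a 10:1 consolidation
retired = vms - hosts           # physical servers switched off: 9,000
avg_draw_kw = 0.38              # assumed average draw per retired server (kW)
hours_per_year = 24 * 365

saved_gwh = retired * avg_draw_kw * hours_per_year / 1e6
print(f"Estimated annual saving: {saved_gwh:.1f} GWh")
```

With that assumed draw, 9,000 retired servers running around the clock land close to the 30 GWh figure quoted above.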
Once this is done, we can of course consolidate data centers and move from several dozen to fewer than a handful. The advantage is obvious: by concentrating data centers we can reduce overall energy consumption. And we can locate these very large data centers in countries with low carbon emissions (why not choose France, where 80% of the electricity is produced from nuclear energy, leading to quite low CO2 emissions per kWh?). And of course, within the country, we need to choose the region with the most temperate climate, not too cold in winter, not too hot in summer (if it is France, why not choose Normandy, for another D-Day?).
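The point about grid carbon intensity is easy to put in numbers. The sketch below uses purely illustrative figures (the gCO2/kWh intensities and the annual load are assumptions for the sake of the comparison, not official statistics):

```python
# Why grid carbon intensity matters as much as raw consumption.
# All figures below are illustrative assumptions: an annual load of
# 30 GWh, and rough gCO2/kWh intensities for a nuclear-heavy grid
# (France) versus a coal-heavy grid.

annual_consumption_gwh = 30
intensity_g_per_kwh = {"nuclear-heavy grid (France)": 60,
                       "coal-heavy grid": 800}

# GWh -> kWh is x1e6; grams -> tonnes is /1e6, so tonnes = GWh x gCO2/kWh.
emissions_t = {grid: annual_consumption_gwh * g
               for grid, g in intensity_g_per_kwh.items()}

for grid, tonnes in emissions_t.items():
    print(f"{grid}: {tonnes:,.0f} tCO2/year")
```

Under these assumptions, the same data center emits more than ten times as much CO2 on the coal-heavy grid, which is the whole argument for choosing the country before choosing the cooling.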
This is exactly the route we are following at Orange. We are also making sure that our data centers have a very good Power Usage Effectiveness (PUE), the ratio of the total power entering a data center to the power used to run the IT equipment within it (e.g. around 1.2 for Google's data centers compared to 2.0 for conventional data centers).
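The PUE ratio just described can be written as a one-line function; the sample readings below simply reproduce the 1.2 and 2.0 figures from the text:

```python
# PUE as defined above: total facility power divided by IT equipment power.
# A PUE of 1.0 would mean every watt entering the building reaches the
# computers; everything above 1.0 is cooling, power distribution, lighting...

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness of a data center."""
    return total_facility_kw / it_equipment_kw

print(pue(1200, 1000))  # 1.2: Google-class efficiency
print(pue(2000, 1000))  # 2.0: a conventional data center
```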
Of course all this IT architecture needs to be operated on a 'Cloud Ready Network' offering high performance and high scalability: let's say a robust virtual private network based on fiber and gigabit Ethernet technologies. At the end of the day, we get a very sustainable IT infrastructure with excellent environmental characteristics.
But let us look at it again: a large data center with huge data processing capability inside... doesn't this ring a bell? Back in the eighties, we called it an IBM 3090 or a Bull DPS 8. And a large fleet of virtual desktops giving access only to central applications hosted in the data center: isn't that what we called IBM 3270 terminals? Of course the ergonomics have been dramatically improved, but the principle is almost the same: the applications and the intelligence are in the data center, and the terminal is linked to the server by a faultless network.
And now let us focus on the network. What we need is a shared network that is very secure and priced 'as a service'. Something perhaps like an X.25 network, with pricing on a 'per packet' basis? Could the ultimate Cloud Computing model be close to the old mainframe-desktop-packet network architecture that enabled the emergence of 'as a service' applications, such as the 10,000 apps launched on the French Minitel in the 1980s, another kind of virtual desktop with per-minute pricing (pay as you go!)?
Of course one could easily argue that the classic 64 kbps bandwidth of an X.25 access is peanuts compared to the 100 Mbps FTTH available in South Korea today, and that iPhones and smartphones are light years from the green-on-black screen of the 3270 terminal or the alphanumeric Minitel terminal. But it is interesting to see that the centralized-decentralized-centralized cycle runs in periods of approximately 15 years, and that the winning paradigm of 2010 is not that far from the winning one of 1980. The 1995 paradigm, by contrast, was the apogee of the heavy-client, light-server model, with most of the intelligence located in the desktop or the emerging laptop, at the dawn of (but before) the Internet era.
Just some food for thought at the dawn of the Cloud Computing era! But do not take it too seriously...