Virtual Machines (VMs): the Russian nested doll concept


Some of you may already be familiar with the Perl Purity Test: are you a Perl geek, and if so, how much of a geek are you? The part of this test that struck me most concerns Perl's 'eval' function, which makes it possible to evaluate on the fly code that is not analysed when the program is compiled. The test asks: have you ever written self-modifying evals? Not often, but yes! And then: have you ever nested an eval inside another eval? I have to admit that yes, on rare occasions, I have. And finally: have you ever used more than five nested evals? There I had to admit, oh no, I've never even thought of that! So why implement recursive evals? I still don't know why. But when I think of other situations, I sometimes wonder whether there might be an advantage to putting an object within an object of the same kind, and so on, a bit like Russian nested dolls. This concept struck me as viable, and even advantageous, when applied to virtual machines (VMs).
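For the curious, the nesting idea is easy to reproduce. Here is a minimal sketch in Python rather than Perl (both have an eval that works on strings); the arithmetic expression is purely illustrative:

```python
# Three nesting levels: eval() evaluates a string whose code
# itself calls eval() on another string, which calls eval() again.
code = "eval('eval(\\'1 + 2\\') * 10')"

result = eval(code)  # innermost eval -> 3, then 3 * 10 -> 30
print(result)
```

Each level strips one layer of quoting, much like opening one doll to find the next.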

 

With server virtualisation (ESX-type hypervisors, Hyper-V, XenServer, etc.) and application virtualisation (ThinApp, App-V, jails, etc.), we are actually already putting VMs inside physical machines, or application bubbles within system bubbles. Today, though, we're still at a single level of nesting in officially released and supported software. This means that we're putting VMs in hypervisors, but not VMs in VMs, or VMs in VMs in VMs!

Nevertheless, we are starting to see two-level structures appearing here and there, such as running VMs within an ESX that is itself a VM in VMware Workstation. Why only two levels? For several reasons, but above all because many processor virtualisation features (Intel VT-x or AMD-V) are no longer available after the first hypervisor layer. Today, a single hypervisor always makes the best use of processor and memory resources: one simply works better than two. "Distributed" mechanisms are still very complex and therefore very expensive. Paravirtualised hypervisors make it easier to imagine structures with more than two nesting levels, since their guests do not depend on the hardware extensions that are lost after the first layer. Drivers are another significant obstacle to the virtualisation of virtual machines. However, new kinds of drivers are being developed that no longer talk to the hardware directly (in supervisor mode), but interact with the kernel through standardised APIs (in user mode), a bit like devices do on a USB bus.
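On Linux you can see this limitation for yourself: Intel VT-x shows up as the 'vmx' CPU flag and AMD-V as 'svm' in /proc/cpuinfo, and inside a guest these flags are normally absent unless the hypervisor explicitly passes nested virtualisation through. A minimal sketch, assuming a Linux host (the detect_virt helper is my own illustration, not a standard API):

```python
# Detect hardware virtualisation support from the CPU flag list on Linux.
# 'vmx' = Intel VT-x, 'svm' = AMD-V; a guest without nested virtualisation
# enabled will typically report neither.

def detect_virt(cpuinfo_text):
    """Return which virtualisation extension is exposed, or None."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

try:
    with open("/proc/cpuinfo") as f:
        print(detect_virt(f.read()))
except FileNotFoundError:
    print("not a Linux host")
```

If this prints None inside a VM, a second-level hypervisor would have to run without hardware assistance, which is exactly the performance wall described above.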

In fact, we can see this very clearly in the evolution of virtualisation architectures. First came VMs on a physical server, then VMs confined within a capacity "envelope" (a Resource Pool in VMware terms) that can be moved from one physical machine to another (vMotion, Live Migration). It's also the case for products such as LabManager, where groups of VMs can be transferred from one physical server to another, together with their stored data, network configuration, ACLs, and firewall rules, within the same datacenter. More recently, these groups of virtual resources can be moved to other geographic locations, or to other providers around the world, using Cloud Computing.

 

We know how much more restrictive managing physical universes is than managing virtual ones. It's already been four years since the Virtual Appliance concept first appeared. Today, it is directly integrated into the most recent version of vCenter and greatly simplifies making 'ready to use' environments available. With LabManager, it's possible to go even further and export entire groups of VMs in one go. If virtual elements can be deployed so quickly, and if we can even make them 'ready to use', the next step is to create entire 'ready to use' information systems, with pre-configured networks, sized data storage and quality of service, and pre-deployed business applications (directory, file servers, web servers, e-mail, firewall, monitoring, inventory, incident management, customer relations, accounting, and of course the user workstations). It could be a complete range of pre-configured, ready-to-use 'all-in-one' systems: 'off-the-shelf' information systems.

 

Of course, it's possible to imagine putting all of that directly on a company's physical machines. But we're now in the era of centralisation and super-datacenters. Given how powerful linked clones and data deduplication have become, there is a clear advantage to consolidating all of these 'ready to use' multi-VM systems inside container VMs. These completely structured and autonomous information systems (CPU, RAM, disk, network, drivers, security) would be available in just a few minutes.

 

The ability to put VMs in VMs (with a hypervisor layer between each one) opens up the most advanced possibilities for building every conceivable architecture and meta-organisation universe. Each VM's resources can now be modified on the fly, which means that each universe could be adjusted at will while keeping the advantages of resource sharing and of compartmentalisation from the physical resources and from other virtual universes. In addition, the OVF standard is rich enough to make this structure compatible with all resource providers. So nothing in hardware or software management is holding VMs back from going beyond being just an OS to becoming universal containers.
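To give an idea of why OVF fits this role, here is a skeletal and heavily abridged descriptor sketch. The element names (Envelope, References, VirtualSystemCollection, VirtualSystem) come from the OVF 1.x envelope schema; the identifiers ('office-in-a-box', 'mail-server', etc.) are invented for illustration. VirtualSystemCollection is what lets a single package describe a whole group of VMs, exactly the 'off-the-shelf information system' imagined above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <!-- one File entry per virtual disk shipped in the package -->
    <File ovf:id="file1" ovf:href="mail-server-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the VMs below</Info>
    <!-- Disk elements referencing the files above -->
  </DiskSection>
  <NetworkSection>
    <Info>Logical networks shared by the whole group</Info>
    <!-- Network elements: LAN, DMZ, ... -->
  </NetworkSection>
  <VirtualSystemCollection ovf:id="office-in-a-box">
    <Info>A complete 'ready to use' information system</Info>
    <VirtualSystem ovf:id="mail-server">
      <!-- virtual hardware description of one VM -->
    </VirtualSystem>
    <VirtualSystem ovf:id="file-server">
      <!-- another VM in the same package -->
    </VirtualSystem>
  </VirtualSystemCollection>
</Envelope>
```

Because the format is vendor-neutral, the same package can in principle be imported by any OVF-aware platform, which is what makes the 'universal container' claim plausible.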

Blogger Anonymous
