Virtual Machines (VMs): the Russian nested doll concept
Some of you may already be familiar with the Perl Purity Test: are you a Perl geek, and if so, how much of a geek are you? The part of this test that struck me the most concerned Perl's 'eval' function, which makes it possible to evaluate, at run time, Perl code that is not analysed when the program is compiled. The test asks: have you ever written self-modifying evals? Not often, but yes! And then: have you ever nested an eval inside another eval? I have to admit that yes, on rare occasions, I have. And finally: have you ever used more than five nested evals? There I had to admit, oh no, I've never even thought of that! So why implement recursive evals? I still don't know why. But it makes me wonder whether, in other situations, there might be an advantage to putting an object within the same kind of object, and so on -- a bit like Russian nested dolls. This concept struck me as viable, and even advantageous, when applied to virtual machines (VMs).
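To make the nested-doll idea concrete, here is a rough Python analogue of those nested Perl evals (Python's eval() plays the role of Perl's; the helper function is my own illustration, not from the test):

```python
# Build a string of N nested eval() calls around a trivial expression,
# then evaluate it -- each layer evaluates the layer inside it,
# like Russian dolls.

def nested_eval(depth):
    """Wrap "1 + 1" in `depth` layers of eval() and evaluate the result."""
    expr = "1 + 1"
    for _ in range(depth):
        expr = f"eval({expr!r})"  # repr() quotes the inner code as a string
    return eval(expr)

print(nested_eval(5))  # five levels of nesting, still just 1 + 1
```

However deep the nesting, the result is the same; the only thing that grows is the machinery around it, which is precisely the question the rest of this post asks about VMs.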
With server virtualisation (ESX-type hypervisors, Hyper-V, XenServer, etc.) and application virtualisation (ThinApp, App-V, Jail, etc.), we are already putting VMs inside physical machines, or application bubbles within system bubbles. Today, we're still at a stage with just one officially supported level of nesting. This means that we're putting VMs in hypervisors, but not VMs in VMs, let alone VMs in VMs in VMs! Nevertheless, two-level structures are starting to appear here and there, as when an ESX host runs inside a VMware Workstation VM and itself runs VMs. Why only two levels? For several reasons, but above all because many processor access management features (such as the Intel VT-x and AMD-V hardware-assisted virtualisation extensions) are not exposed inside the guest, so a nested hypervisor cannot take advantage of them.
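You can actually observe this limit from inside a guest. A minimal sketch, assuming a Linux system: hardware-assisted virtualisation shows up as the 'vmx' (Intel) or 'svm' (AMD) CPU flag in /proc/cpuinfo, and inside a typical VM the flag is simply absent because the hypervisor does not pass it through:

```python
# Check whether this (possibly virtual) machine sees hardware
# virtualisation support. Inside most guests this returns False,
# which is why a second-level hypervisor cannot run efficiently.

def hw_virt_available(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises the vmx (VT-x) or svm (AMD-V) flag."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print(hw_virt_available())
```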
In fact, we can see this very clearly in the evolution of virtualisation architecture. First with VMs on a physical server, and then with VMs on physical servers that are confined in a capacity "envelope" (Resource Pool for VMware) and that can be moved from one physical machine to another (vMotion, Live Migration). It's also the case for products such as LabManager, where groups of VMs can be transferred from one physical server to another with their stored data, network, ACLs, and firewall rules within the same Datacenter. More recently, these groups of virtual resources can be moved to other geographic locations or to other providers in the world using Cloud Computing.
We know that managing physical universes is much more restrictive than managing virtual ones. It's already been four years since the Virtual Appliance concept first appeared. Today, it is directly integrated into the most recent version of vCenter and greatly facilitates making 'ready to use' environments available. With LabManager, it's possible to go even further and export entire groups of VMs all at once. If virtual elements can be deployed so quickly, and if we can even make them 'ready to use', the next step is to create entire 'ready to use' information systems with pre-configured networks, dimensioned data storage and service quality, and deployed professional applications (directory, file servers, web servers, e-mail, firewall, monitoring, inventory, incident management, customer relations, accounting, and of course the user workstations). It could be a complete range of pre-configured, ready-to-use 'all-in-one' systems, or 'off-the-shelf' information systems.
Of course, it's possible to imagine putting all of that directly on a company's physical machines. But we're now in the era of centralisation and super-Datacenters. There is clearly an advantage to consolidating all of these 'ready to use' multi-VMs in other container VMs when you see how powerful linked clones and data deduplication are. These completely structured and autonomous information systems (CPU, RAM, disk, network, drivers, security) would be available in just a few minutes.
The ability to put VMs in VMs (with a hypervisor layer between each one) offers the most advanced possibility for creating all possible architecture and meta-organisation universes. Each VM's resources can now be modified on the fly, which means that each universe would be adjustable as desired while maintaining the advantages of resource sharing and compartmentalisation with respect to physical resources and other virtual universes. In addition, the OVF standard is rich enough to make this structure completely compatible with all resource suppliers. So, in terms of hardware and software management, there is nothing holding VMs back from being more than just an OS: they can become universal containers.
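To give an idea of that richness, here is a heavily trimmed sketch of an OVF 1.0 envelope packaging several VMs as one deployable unit via a VirtualSystemCollection; all file names and ids are invented for illustration, and most mandatory hardware sections are omitted:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative skeleton only; not a complete, deployable descriptor. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="file1" ovf:href="web-server-disk1.vmdk"/>
    <File ovf:id="file2" ovf:href="mail-server-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the package</Info>
    <Disk ovf:diskId="disk1" ovf:fileRef="file1" ovf:capacity="20971520"/>
    <Disk ovf:diskId="disk2" ovf:fileRef="file2" ovf:capacity="20971520"/>
  </DiskSection>
  <!-- A VirtualSystemCollection groups several VMs into one unit -->
  <VirtualSystemCollection ovf:id="ready-to-use-infrastructure">
    <Info>A pre-configured multi-VM information system</Info>
    <VirtualSystem ovf:id="web-server">
      <Info>Front-end web server</Info>
    </VirtualSystem>
    <VirtualSystem ovf:id="mail-server">
      <Info>E-mail server</Info>
    </VirtualSystem>
  </VirtualSystemCollection>
</Envelope>
```

It is exactly this collection mechanism that makes the 'off-the-shelf information system' idea more than a thought experiment: the whole multi-VM universe travels as a single, vendor-neutral package.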
April 14, 2010
I just found this blog a while ago when a good friend suggested it to me. I have been a regular reader ever since.
March 19, 2010
Vincent,
Thoughtful article, but would not a better model be to strip out as many layers as possible? Even to this day, hiding in data centres, there are mainframes running an operating system called zOS, and that operating system supports many different applications used in turn by many thousands of users. The equivalent of ESX, which we called VM in the 1980s, has disappeared into the hardware and is now a way of logically partitioning the machine; it's used mainly for testing new versions of the OS. zOS itself is inherently capable of running many applications at once and sharing resources according to pre-set policy, but the metaphor of virtual machines and the overhead of a second layer of OSs are not there.
After all, the definition of an operating system is that which manages the sharing and control of hardware resources amongst all the applications it supports, and provides an API or a set of service calls to simplify the writing of applications.
Today we're using virtualisation to handle two disparities in scale:
Firstly, when we have a number of small applications that we want to pack onto one server. We don't trust Windows to run them side by side, so we use ESX and separate them into virtual machines.
But ultimately, wouldn't it be better to have just one OS that can actually run multiple applications and allocate resources according to policies we set? Of course, we need the OS and we need the apps!
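This 'one OS with resource policies' idea already exists in embryonic form on Linux in the shape of control groups (cgroups). A minimal sketch, assuming the v1 cpu controller is mounted at the conventional path; the group name and share value here are purely illustrative, and writing to the real hierarchy requires root:

```python
# Give a named group of processes a relative CPU weight via cgroups v1.
# On a real system, cgroup_root would be /sys/fs/cgroup/cpu and the
# kernel would enforce the policy; here the path is parameterised so
# the mechanics can be shown (and tested) without privileges.

import os

def set_cpu_shares(group, shares, cgroup_root="/sys/fs/cgroup/cpu"):
    """Create a cgroup directory and write its relative CPU weight."""
    path = os.path.join(cgroup_root, group)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "cpu.shares"), "w") as f:
        f.write(str(shares))

# e.g. set_cpu_shares("small-app", 512)  # half the default weight of 1024
```

One OS, many applications, policy-driven sharing -- no hypervisor layer involved.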
Secondly, we have big applications that need to run on many servers. That's a different problem. In this case, wouldn't it be easier to have an operating system that could manage lots of servers as a single resource and support suitably written applications directly? The applications don't need to know what they're running on - all they see is the API (there's a bit of this in Amazon's EC2 service).
This OS, of course, would manage processors and storage across multiple sites and provide fault tolerance.
There is another dimension to server virtualisation a la ESX: saving complete system images as files makes them transportable. But if you have a single OS controlling all your processors, you can have an inherently fault-tolerant environment. Transporting an application actually becomes meaningless, because the OS has de-localised it.
This second vision is what application level virtualisation tries to deliver, but it's still an extra product layered on top of lots of individual OSs.
But I fully realise that this isn't where we are. I suggest that the reason we use server virtualisation is because we're dealing with applications that weren't written that way, running on OSs that are still fundamentally single server. It's perfectly valid, and a low risk step to reducing cost, but there's a lot of waste at every level there.
Perhaps part of the real vision of a "cloud" is what I have described above, not just from the end user's point of view, but from the application's as well.
I predict that OS virtualisation will be of historical interest well within a decade, and possibly much sooner. VMware's focus on the management layer and application level acquisitions suggests they share this view.
I would welcome your thoughts.