Virtualization is undoubtedly a very useful technology. However, technology is only as good as its ability to integrate into the larger whole, and integration is often enabled by further abstraction and encapsulation of system components.

Benefits of Abstraction and Encapsulation

Just as in software engineering with object-oriented methods, abstraction and encapsulation are valuable in IT. We decompose systems into components in a model that allows state and/or functionality to be encapsulated, so that the entire system can be made resilient and can scale as needed. Virtualization of “machines” provides abstraction by separating an instantiation of an operating system from physical hardware; the associated hardware is no longer a distinguishing feature of the machine, nor is it particularly important. This is achieved by abstracting and encapsulating functionality normally provided directly by hardware and hardware-specific software, e.g. device drivers. The operating system instantiation thus becomes a virtual machine. Because it is encapsulated, the virtual machine can be moved independently of the hardware: it can be loaded on one piece of hardware one day and, following a failure for example, loaded on another piece of hardware at a different time. But this encapsulation should not require the enclosure of applications or data. The machine acts as a host for the applications, which in turn create, manipulate, transform, and present data.
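
To make that boundary concrete, here is a minimal conceptual sketch in Python. The class and attribute names are hypothetical and do not correspond to any hypervisor’s API; the point is only that the capsule contains the OS image, carries no hardware identity, and treats applications and data as attachments rather than contents.

    # Conceptual sketch only: hypothetical classes, not any hypervisor's API.

    class Host:
        """A physical server; interchangeable from the VM's point of view."""
        def __init__(self, name):
            self.name = name

    class VirtualMachine:
        """Encapsulates the OS image, but not the hardware, apps, or data."""
        def __init__(self, os_image):
            self.os_image = os_image      # what the capsule contains
            self.host = None              # deliberately not part of its identity
            self.attachments = []         # applications/data live outside the capsule

        def place_on(self, host):
            self.host = host              # re-hosting changes nothing inside the VM

        def attach(self, resource):
            self.attachments.append(resource)

    vm = VirtualMachine("debian-base")
    vm.place_on(Host("rack1-server3"))
    vm.attach("app-volume")
    vm.attach("data-volume")

    # After a hardware failure, the same capsule moves; apps and data re-attach.
    vm.place_on(Host("rack2-server7"))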

Virtualization is now a buzzword applied along the technological evolutionary chain to other areas like the desktop, storage, and applications, each attempting to provide greater value and increased abstraction. While virtualization can provide cost savings, where the lines are drawn determines how resilient the overall system will be. If individual components are enclosed in the virtualization capsule, it may be difficult to make them resistant to failure through redundancy, or to meet recovery time objectives, because they are enclosed in a larger whole that enforces a serial recovery process. So what is encapsulated under these different models of virtualization, and where and when is separation of the various components needed? Well, it depends on what is important for a given use case.
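
A toy calculation makes the recovery-time point plain. The restore times below are hypothetical, purely for illustration: components sealed into one capsule must be restored in sequence, while separable components can be restored side by side.

    # Hypothetical restore times in minutes; illustrative only.
    restore_minutes = {"os": 30, "application": 45, "data": 120}

    # Everything inside one capsule: restores happen one after another.
    serial_rto = sum(restore_minutes.values())

    # Separated components: each can be restored independently, in parallel.
    parallel_rto = max(restore_minutes.values())

    print(f"Encapsulated together: ~{serial_rto} min to recover")    # ~195 min
    print(f"Kept separable:        ~{parallel_rto} min to recover")  # ~120 min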

To understand why this might be important, consider the original two-tier client/server architecture. Some processing moved to the client, but each client formed a connection to the database server. As applications and clients multiplied, the back-end processing capability of the database server could become overwhelmed. Along came three-tier architectures, which split the back end into two layers, separating the application servicing from the database processing. The middleware running on application servers can then be scaled out across multiple servers as demand requires. If some abstraction model enforced full encapsulation of both functions, scalability and resilience would be limited.
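
A minimal sketch of the three-tier idea follows, with hypothetical names and no real middleware: many client requests are spread across a pool of application servers, which share one database tier, so the middle tier can be scaled out without touching the other two.

    import itertools

    class DatabaseServer:
        def query(self, sql):
            return f"rows for: {sql}"

    class AppServer:
        """Middle tier: business logic, scaled out as demand grows."""
        def __init__(self, name, db):
            self.name, self.db = name, db

        def handle(self, request):
            return self.db.query(f"SELECT ... /* {request} via {self.name} */")

    db = DatabaseServer()
    pool = [AppServer(f"app{i}", db) for i in range(3)]   # add servers as needed
    dispatch = itertools.cycle(pool)                      # naive round-robin

    for request in ["order-1", "order-2", "order-3", "order-4"]:
        print(next(dispatch).handle(request))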

Encapsulation and Deployment Scenarios

What does this have to do with virtualization? Consider where the boundaries of encapsulation are drawn between components, completely aside from application architecture. Consider this both between the client and server (of whatever kind), and between the operating system, application, and data (whatever is delivered to clients). For server virtualization, think about the 3-tuple of OS, application, and data. Can these be separated? For now, let’s set aside applications that embed configuration in OS-specific data structures, e.g. a registry. A reasonable system integrator would separate OS, application, and data onto separate disk drives. Or the application and data may reside elsewhere on a network and be used locally. This degree of separation has benefits, especially for the data. During the recovery phase of a business continuity plan, services are restored at an alternate site. Because of the difficulty of cold-metal restores to dissimilar hardware, many large organizations will purchase two or more of each system: one for production, and one for test/recovery at the backup site. If you separate and replicate, then the application and/or data can be “attached” to any suitable operating system on any suitable hardware platform.
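
Here is a sketch of the separate-and-replicate idea. The volume names are hypothetical and real replication would be handled by storage or backup tooling; the point is that the 3-tuple lives on distinct volumes, and during recovery the replicated application and data volumes are attached to whatever suitable OS and hardware are standing by at the alternate site.

    # Hypothetical model of the OS/application/data 3-tuple on separate volumes.
    production = {
        "os":          "vol-os-prod",     # tied to the machine, not replicated here
        "application": "vol-app-prod",    # replicated to the alternate site
        "data":        "vol-data-prod",   # replicated to the alternate site
    }

    def replicate(volume):
        """Stand-in for storage- or backup-based replication."""
        return volume.replace("-prod", "-dr")

    # At the alternate site, a standby machine with its own OS volume.
    recovery_host = {"os": "vol-os-standby"}

    # Recovery: attach the replicated application and data to the standby OS.
    recovery_host["application"] = replicate(production["application"])
    recovery_host["data"] = replicate(production["data"])

    print(recovery_host)
    # {'os': 'vol-os-standby', 'application': 'vol-app-dr', 'data': 'vol-data-dr'}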

The different virtualization models are summarized in the figure below. Note: all sibling nodes are encapsulated, that is, they exist side by side within one entity, e.g. a virtual machine or client. Parent/child relationships show some separation, that is, the nodes work together, but a child node can easily be moved/joined to a different instantiation of its parent. The chain of siblings represents the flow of architectural elements.

Virtualization Models

In the server model, option #1 is the most restrictive, option #4 the most flexible, and the other two fall in between. Xen Enterprise is an example of a product that enforces some degree of encapsulation, not by preventing separation of OS, application, and data, but by constraining how you are able to separate them. For example, while it allows applications, data, and even the OS to reside on iSCSI LUNs, in some situations the OS cannot access independent iSCSI LUNs (i.e. not under its direct control) through a software iSCSI initiator. This seems to work fine with Windows server operating systems and Red Hat Enterprise Linux (and presumably CentOS), but not with Debian. In the latter case, you are limited to Xen storage repositories, which are not truly portable to other environments: you have to go to tape! This greatly restricts your recovery options, and even the flexibility of your architecture. If you want to move the data to another system, e.g. a physical instantiation, you have to copy it. These kinds of weaknesses can force you to choose another product or operating system.
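
For reference, this is roughly what guest-side access to an independent LUN looks like with a software initiator. The sketch assumes the open-iscsi tools are installed in the guest, and the portal address and target IQN are placeholders; it is exactly this kind of access that worked for some guest operating systems but not others in the scenario above.

    import subprocess

    # Placeholder values; substitute your storage array's portal and target IQN.
    PORTAL = "192.168.1.50"
    TARGET = "iqn.2001-05.com.example:storage.lun1"

    # Discover targets offered by the portal (open-iscsi software initiator).
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", PORTAL], check=True)

    # Log in to the target; the LUN then appears to the guest as a block device.
    subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                    "-p", PORTAL, "--login"], check=True)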

Do any restrictions exist based on the model chosen, rather than on a specific implementation? They certainly do. Consider the case of a highly mobile workforce. A typical desktop or application virtualization model may not work in some situations, given where the line is drawn between client and network (if any). Centralized desktops or applications require nearly ubiquitous connectivity for mobile workers. Portable desktop virtualization products, such as VMware ACE, seem to have the right mix of attributes for such situations: a VM can function for a configurable amount of time in a disconnected state, yet remains time-bound. That last attribute can be used for strict software asset control, so VMs do not walk away with an unlimited lifetime.
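
The time-bound behaviour can be modelled simply. The sketch below is not how VMware ACE implements its policy; it is just a conceptual check, with hypothetical names and values, showing how a client can keep working offline for a configurable window and then refuse to run until it checks in again.

    from datetime import datetime, timedelta

    OFFLINE_GRACE = timedelta(days=14)    # configurable disconnected window

    def may_run(last_checkin, now=None):
        """Allow offline use only within the grace period after the last check-in."""
        now = now or datetime.now()
        return now - last_checkin <= OFFLINE_GRACE

    # A VM that checked in 3 days ago may still run; one 30 days adrift may not.
    print(may_run(datetime.now() - timedelta(days=3)))    # True
    print(may_run(datetime.now() - timedelta(days=30)))   # False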

If your workforce is centralized, desktop and/or application virtualization offers increased control and easier deployment, both of which save money. None of the models are bad, but some may be wrong; choose wisely!
