Virtual Computing Overview (Archived Report)



by Lynn Greiner

Docid: 00011145

Publication Date: 1805

Report Type: TUTORIAL


In today’s data center,
space, power, and climate control are expensive and
pervasive issues. Underutilized servers combined with an ever-growing need for
processing power add to the challenges. Virtualization helps solve these
problems by allowing one machine to act as though it were many machines, each
running separate operating systems and applications. It is also key to the
"software defined" infrastructure trend. The technology itself is
not new, but developments in hardware and software have made it more attractive
to the enterprise.

Report Contents:

Executive Summary


Virtual computing comes in many forms, but the benefits of each are
the same: virtualization allows users to optimize their computing resources.
They may be addressing what looks like many servers (hosting Web sites, for
example) that in reality co-exist on the same physical machine. Or, they may be
addressing what they think is a single storage volume that in reality spans an
entire storage area network. In both cases, virtualization is creating the
illusion that one is many – or many are one – to make the most efficient use of
the available resources.
It sounds complicated, and the technical machinations behind the process are
not simple to understand. But vendors have developed tools that conceal the
complexity and make virtualization practical.



Virtual computing
was developed in the mainframe era
as a way of getting maximum benefit from expensive hardware. Today, although
hardware is cheaper, companies still face the challenge of maximizing their
investment and minimizing expense.

A virtual computer is not really a physical
entity. Rather, it is a software representation of a physical entity, so many
instances of it can run on a single machine, just as you can run multiple
instances of any other piece of software, dependent on available resources. Each
virtual machine has access to the physical resources of its host, such as
network interfaces, disks, and printers, but runs as an application under a host
operating system. The virtual machines do not have to be running the same
operating system as their host machine, either. For example, a PC running
Microsoft Windows 7 can host virtual machines running Linux, Windows 2000, or
any other operating system that runs on the native hardware. 

Microsoft’s Virtual PC and Virtual Server (now
both obsolete), and its current virtualization platform, Hyper-V, for
example, allow users to create several test environments on the same system,
without interfering with each other. Technicians testing various software
configurations can create several virtual machines on their test system, install
each setup under evaluation on its own “machine,” switch between them
at will, and, if a virtual machine becomes corrupted, simply erase it
and create another. Beta testers who do not have several computers at their
disposal can test unstable products, such as early versions of operating
systems, on virtual machines on the machines they use day-to-day without jeopardizing their other work.

Virtual Server, and now Hyper-V, are also touted as a way to
let companies upgrade off old hardware running obsolete operating systems like NT 4.0 to modern systems
running Windows Server. They can create virtual NT servers on the Windows
Server machine, gaining the benefit of newer hardware and a more stable underlying
operating system while still retaining the ability to run legacy applications.

Maximizing the use of existing hardware is
another area in which virtualization shines. For example, a New York company
gave new life to an old mainframe by installing virtualization software from
VMware (majority owned by Dell Technologies) on it and running 60
Linux-based virtual Web servers. Each virtual server hosts its own sites, and if
it has issues, it does not affect any of the other virtual machines, or its host machine.

Networks can be virtualized as well, split into
tiers, each with its own service levels and security. In storage, a Storage Area Network (SAN) is the
epitome of virtualization. To the user, dozens, or even hundreds, of disks appear
as one, and yet management software allows administrators to provision space on
the fly, expanding or contracting that “one” disk as required. Most
backup software vendors, such as Veritas, also virtualize tape to speed backups.
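The "many are one" illusion described above can be illustrated with a toy model. The sketch below is purely illustrative, not any vendor's implementation: a logical volume concatenates several physical disks but presents a single flat address space to its user.

```python
# Toy model of storage virtualization: several disks appear as one.
# Illustrative only; it does not handle writes spanning a disk boundary.
class LogicalVolume:
    def __init__(self, disk_sizes):
        # Each underlying "disk" is modeled as a bytearray.
        self.disks = [bytearray(size) for size in disk_sizes]

    @property
    def size(self):
        # To the user, the capacity is one big volume.
        return sum(len(d) for d in self.disks)

    def write(self, offset, data):
        # Map a flat logical offset onto the physical disk that holds it.
        for disk in self.disks:
            if offset < len(disk):
                disk[offset:offset + len(data)] = data
                return
            offset -= len(disk)
        raise ValueError("write past end of volume")

vol = LogicalVolume([100, 100, 100])   # three disks appear as one
vol.write(150, b"x")                   # lands on the second disk
```

A real SAN adds on-the-fly provisioning on top of this mapping: the administrator can grow or shrink that "one" disk without the user noticing.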

Current View


Competition is hot in the virtualization world. Expectations are high, making
vendors stretch to create the most transparent, easily manageable virtualization
solutions in their marketplaces. Even the open source movement has gotten into
the game with Xen, an open source hypervisor that began its life as an Intel-funded
research project at Cambridge University in the UK. Xen has broad support from
vendors because of its advanced technology and small server footprint. The
commercial support incarnation of Xen, XenSource, was acquired in October 2007
by Citrix Systems. Citrix claims that
99 percent
of the Fortune 500 has deployed its technologies.

In March 2011, Xen.org released Xen Cloud Platform (XCP) 1.0. Developed as
part of the Xen Cloud Project introduced in 2009, XCP provided a
full-featured solution for small and medium size enterprises wanting to build
private clouds, as well as open source enthusiasts, universities and researchers
wanting to experiment with cloud computing. In April 2013, the Xen Project
became a Collaborative Project of the Linux Foundation, and in June 2013,
XenServer, a superset of XCP, was fully open sourced. XCP was replaced
by full XenServer; users of the newest version, XCP 1.6, had a direct upgrade
path. Citrix offered a commercially supported version, Citrix CloudPlatform,
which it sold to Accelerite in February 2016; however, it still offers
commercial support to XenServer users. Accelerite promised to honor existing
support and partner agreements for CloudPlatform, and committed to its
ongoing development. Since then, it has integrated the product with Kubernetes and Docker, as well as
Datera Elastic Data Fabric storage software. The current version is 4.7.1.

In 2013, the Open Virtualization Alliance was founded to increase awareness
and adoption of the Kernel-Based Virtual Machine (KVM) as a choice for
enterprise-class virtualization solutions.
It was a Linux Foundation Collaborative Project. Its over 250 members included IBM, Intel, NetApp,
Red Hat, Hewlett Packard Enterprise (HPE), and many others. KVM converts the Linux kernel into a bare metal
hypervisor. The Alliance was dissolved on December 1, 2016
after achieving its original goals.

Virtualization is now the default approach for new server
deployments and is the foundation platform for cloud computing. Companies are
also using the technology for disaster recovery, hosting their virtualized data
center on a few offsite physical machines instead of recreating the entire
hardware environment. IDC says that virtual machine and cloud infrastructure grew 18.7 percent year over year in
2014, 21.9 percent in 2015, and what it now calls Cloud IT grew 21.7 percent in 2017. Gartner has stated that
over 75
percent of server workloads are now virtualized, and that the
server virtualization market is approaching saturation.

Management is less of a challenge today. Most vendors
now offer tools that support virtual machines, helping mitigate the risks of
the technology. The potential for major problems exists because these servers
may be running at 70 percent of capacity or more at all times; one escalating
application could push the machine over the edge. In addition, technologies like VMware's vMotion
allow workloads to be moved to more suitable hardware when they stress the host
machine, often with zero downtime.

Security has been a major concern. Gartner said
in 2010 that 60 percent of virtual servers
were less secure than the physical machines they replaced. It
was expected that, as
the category matured, this would decline to 30 percent. Security vendors
have since stepped up and offered
ways to protect virtual infrastructures with minimum effect on performance,
to the point that analysts now say that, as long as the VMs are kept patched and
up to date, this concern has receded.

Another major issue is software and operating system licensing. Today,
many vendors demand that users purchase licenses for each virtual server as though it
were a standalone machine. This cuts the potential cost savings considerably,
and creates a record-keeping nightmare in environments where virtual machines
are constantly built and blown away. Microsoft announced some
concessions, allowing hosting of virtual machines on some editions of
Windows Server 2003 and higher, and introduced "XP Mode"
desktop virtualization in Windows 7; this includes a Windows XP
license. It has also modified its licensing models for some other
products, such as SQL Server, but there can still be confusion as to the
rules. Windows 10 Enterprise, Professional, and Education
versions include Hyper-V; however, the user is responsible for acquiring
licenses for the virtual machines.

IT staffers must become more versatile, too. Where before a technician may
have been responsible for a single application on a single server, with
virtualization the scope likely expands to encompass all of the virtual
machines, and their applications, on a physical server, as well as the
virtualization platform itself.

In the storage world, things have advanced even
more quickly. A SAN, by
its nature, is virtualized, and vendor tools to manage the structure are a given
that, if not completely mature, are on the whole robust. Virtual tape libraries
(VTLs) consist of banks of disks that look like tapes to backup
software. Since disk is much faster than tape, backups can be completed within
the ever-shrinking window, and then the data can be moved from the virtual tape
to a physical tape. VTLs in conjunction with backup software do automatic
provisioning, unlike systems in which a disk-to-disk copy is performed onto a
storage array. In those cases, the array must be managed and provisioned
manually; a VTL handles all space allocations. 
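The disk-staged flow described above can be sketched as a toy model. The class and labels below are purely illustrative, not any VTL vendor's API: backups land quickly on disk "tapes" inside the backup window, and migration to physical tape happens later.

```python
# Toy model of a virtual tape library (VTL). Backup software "sees"
# tapes, but writes actually land on fast disk and are migrated to
# physical tape outside the backup window. Illustrative only.
class VirtualTapeLibrary:
    def __init__(self):
        self.disk_stage = {}   # virtual tapes, held on disk
        self.physical = {}     # copies moved to real tape

    def backup(self, tape_label, data):
        # Fast path: the "tape" write is really a disk write,
        # so the backup finishes within the shrinking window.
        self.disk_stage[tape_label] = data

    def migrate(self, tape_label):
        # Later, at leisure, copy the virtual tape to physical tape.
        self.physical[tape_label] = self.disk_stage.pop(tape_label)

vtl = VirtualTapeLibrary()
vtl.backup("MON-001", b"nightly backup")
vtl.migrate("MON-001")
```

As the report notes, the VTL itself handles the space allocation for the disk stage, unlike a manually provisioned disk-to-disk array.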

Software defined networking is now being offered by most
networking vendors, overlaying software on what can be relatively generic
network hardware. Currently, the industry is debating the merits of multiple
approaches, with Cisco touting its more proprietary, hardware-centric approach,
while others like VMware favor a more open software-based structure.

Hyperconverged systems virtualize all components, and are
growing rapidly according to IDC, although since the growth is from virtually
nothing, the initial percentages are deceptive. Suffice it to say that the dollar value
of these shipments exceeded $356 million in Q4 2015, grew to $697 million in Q4
2016, and total revenues for 2016 surpassed $2.2 billion. For 2017, that annual figure increased to over $12.5 billion.



Although virtualization has been around for a long time, it did not enjoy significant
market penetration until the SAN introduced the concept to the enterprise. It
became more respectable when, in 2003, Microsoft purchased the virtual machine
assets of Connectix and used them to build its own Virtual PC product. In
April 2006, Microsoft released Virtual Server 2005 R2 as a free
download, and in the first month, over 200,000 copies were
retrieved. In August 2009, it released Hyper-V Server 2008 R2, also at no
charge. Virtual PC 2007, now defunct, was also free, as is the version
released with Windows 7, Windows Virtual PC, which is available with
or without XP Mode. Under Windows 8, a client version of Hyper-V replaces
Virtual PC in enterprise versions. VMware’s loyal user base has also remained
steadfast, encouraged by its release of a free version of VMware Server, and by its
decision to also release its ESX (now vSphere Hypervisor) hypervisor at no charge. VMware vSphere 5,
released in July 2011, was downloaded by over half a million customers in its
first five months; the current version is vSphere 6.7, released in
April 2018. Products from Parallels (formerly SWsoft), such as
the now discontinued Parallels Virtuozzo,
have also gained a following. The Macintosh
audience has also been addressed with the release of VMware Fusion
for the Mac and Parallels Desktop for Mac. Oracle’s VirtualBox
5.0, which includes support for
additional operating systems as well as security and performance enhancements, was
released in July 2015, and the most recent maintenance release,
v5.2.12, came out in May 2018.
Oracle also offers Oracle VM Server for x86, a direct competitor to VMware, with
no licensing fees, though it does charge for support.

With average standalone server utilization hovering between 10 and 15 percent, virtualization,
which can bring the figure up to 70 percent or more, attracted attention
among cost-conscious CIOs. The overhead generated by the original virtualization technology (up to 35
percent of the processor’s power to sustain the virtual machines), however, made it less
attractive upon examination. 
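The consolidation economics behind these figures can be worked through with simple arithmetic. The overhead and utilization targets below come from the report; the server count and 12 percent average are illustrative assumptions.

```python
import math

# Back-of-the-envelope consolidation math. Report figures: standalone
# servers average 10-15 percent utilization; early virtualization
# overhead ran up to 35 percent of the processor; hypervisors cut it
# to roughly 5 percent. Server count and 12% average are assumptions.
physical_servers = 20
avg_utilization = 0.12      # assumed average standalone utilization
target_utilization = 0.70   # post-consolidation load target

def hosts_needed(overhead):
    # Total useful work, in "whole server" units.
    workload = physical_servers * avg_utilization
    # Each host loses `overhead` of its capacity to virtualization,
    # and is loaded only to the target fraction of what remains.
    usable_per_host = (1 - overhead) * target_utilization
    return math.ceil(workload / usable_per_host)

print(hosts_needed(0.35))   # original technology: 6 hosts
print(hosts_needed(0.05))   # hypervisor-based: 4 hosts
```

Under these assumptions, the lower hypervisor overhead shaves a third off the consolidated host count, which is why the technology became attractive on re-examination.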

That changed with the development of hypervisor technology. A hypervisor is a
stripped-down operating system kernel that runs directly on the hardware instead
of making calls for resources through the host operating system. The
administrator decides what percentage of each resource should be allocated to
virtual machines, and the hypervisor does the rest. It can cut the
virtualization performance hit on a CPU to five percent or less.
Developments from Intel and AMD that handle virtualization tasks in
the chipset also help alleviate these problems.
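As a minimal illustration, on Linux these chipset assists show up as CPU feature flags: "vmx" indicates Intel VT-x and "svm" indicates AMD-V. The sketch below reads /proc/cpuinfo; it is Linux-only and purely illustrative.

```python
# Linux-only sketch: detect the Intel/AMD hardware virtualization
# assists by inspecting the CPU feature flags in /proc/cpuinfo.
# "vmx" = Intel VT-x, "svm" = AMD-V.
def hw_virt_support(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(hw_virt_support(f.read()) or "no virtualization flags found")
    except FileNotFoundError:
        print("/proc/cpuinfo not available (non-Linux system)")
```

Modern hypervisors such as Hyper-V require these extensions to be present (and enabled in firmware).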

Microsoft has built hypervisor
support into Windows Server 2008 and higher with its Hyper-V
technology. Hyper-V is included with some SKUs of Windows
Server 2008 and up, or as a separate download, but is only supported for
64-bit systems. A client version is also offered with some SKUs of Windows 8,
Windows 8.1, and Windows 10.
And, of course, VMware's ESX (now known as vSphere Hypervisor) technology is also hypervisor-based.

Xen is based on hypervisor technology, and is
backed by major vendors such as Hewlett Packard Enterprise
(HPE), IBM, and Oracle.
Chipmakers such as AMD, IBM, and, of course, Intel, have built Xen hooks into
their next-generation chips, as well as providing other virtualization
support at the hardware level. With so much support, this free virtualization tool
will go a long way toward pushing the technology into the home and the enterprise.

The latest trend in virtualization is the Software Defined Data Center
(SDDC), in which all components, from servers to storage to network, are
virtualized. It features automated provisioning and management through a
framework of policy-based management of data center applications and services.
This technology is of particular value for cloud-based
enterprises and providers.



Virtualization is now a mainstream technology, thanks in large part to cloud computing.

Any company with a SAN already benefits from it
as well, and devices such as VTLs
will help protect the huge volumes of data that must be maintained in today’s
regulatory environment. Their speed, however, must be balanced against their
drawbacks: cost of the solution, and potential additional licensing costs. Some
VTLs are software-only products, and you supply the disk array that is to become
the virtual tape. Others provide all of the pieces, at a higher cost but with
fewer potential integration problems.

With the advent of the hypervisor and support
from processor manufacturers, concerns about virtual machine performance have been alleviated. Intel
and AMD deliver chips that include
virtualization technology, and modern hypervisors require them. Windows 7’s XP
Mode virtualization depended on this hardware technology, as does Hyper-V.

There are many logical places for virtual machines in an enterprise. Research
and development and testing, where unstable early versions of software
under development mean regular rebuilding of machines; disaster recovery
sites, where application environments can be replicated on virtual machines;
and under-utilized, single-function servers are all candidates for possible
virtualization. Wherever consolidation
must occur, there is an opportunity to look at virtualization. Wherever there’s
a cloud implementation, virtualization is a necessity.

Software vendors, however, must realize that their oppressive licensing of
virtual machines hinders acceptance, and continue to make adjustments. Customer pressure
will probably be a driver here. Microsoft has already made a start –
it has announced that dual-core
machines will only need a single license, and that Software Assurance
customers will be permitted to run four virtual machines on the
desktop if they purchase Windows Vista or Windows 7 Enterprise, and XP Mode is
available with the Pro, Ultimate and Enterprise SKUs of Windows 7. Client
Hyper-V is available for the 64-bit Pro and Enterprise versions of Windows 8, 8.1, and 10. Microsoft also permits two virtual machines on a Windows Server 2012
or Windows Server 2016 Standard host, and
an unlimited number on the Datacenter edition, and continues to make changes
to further streamline the licensing model. For example, SQL Server
Enterprise allows unlimited instances, and System Center Server
Management Suite Enterprise manages all VMs on one physical server
with one license.

The trend towards subscription-based software as a service is going a long way towards alleviating licensing issues. Customers pay
for what they use, and the prices include license fees.

It is critical, however, to realize
that some application vendors still do not support some or all of their
products on virtual machines. Any migrations need to be carefully
researched to ensure that application support is available in the
desired configuration.

When planning to move to virtualization, it is critical to look at
management of both the physical and virtual infrastructure.
If your enterprise solution does not recognize virtual machines, there could be
problems and point solutions may be necessary. Virtualized machines can be more
sensitive to hardware failure – a single glitch in a piece of hardware could
affect dozens of virtual systems. Robust failover in real-time is a necessity,
as are tools to monitor and manage heterogeneous environments. Again, vendors are addressing this area;
Microsoft System Center Virtual Machine Manager,
CA’s option to manage VMware virtual environments, VMware's native tools, and HPE’s management suite
that manages physical and virtual machines from one interface are a few examples. And as with the systems
they manage, many tools are moving to a cloud-based, software as a service model.


About the Author


Lynn Greiner is Vice President, Technical Services
for a division of a multi-national corporation, and also an award-winning
computer industry journalist. She is a member of Faulkner’s Advisory Panel.
