Copyright 2019, Faulkner Information Services. All Rights Reserved.
Publication Date: 1912
Report Type: TUTORIAL
The slow but steady retreat from conventional on-premise data centers to
"virtual data centers" is real and measurable. A virtual
data center is not a physical facility but a philosophy founded on
the desire of business executives to shrink their information
technology (IT) footprint in order to lower costs and enable the enterprise to pursue its "core competencies," which
often do not include IT.
While the term "virtual data center" (VDC) is of relatively recent
vintage, the concept is actually decades old.
As the adjective "virtual" might imply, a virtual data center is
not a physical facility; it is, instead, a philosophy
founded on the desire of business executives to shrink their
information technology (IT) footprint by: (1) moving computing and
computers outboard of the traditional data center; or (2)
improving hardware utilization through a process called,
appropriately, "virtualization." In most
cases, the impetus behind the adoption of the virtual data center is
lowering costs and enabling the enterprise to pursue its "core competencies," which
often do not include IT.
In broadest terms, the idea of a virtual data center encompasses four
basic phenomena: outsourcing, mobility, cloud computing, and virtualization.
With outsourcing, the responsibility for certain data
center functions (notably network security) is transferred to a
third-party managed services provider (MSP). Outsourcing relieves the need to recruit and retain scarce (and
often high-priced) technical talent while providing an always-appreciated level of cost certainty.
With mobility, enterprise personnel – empowered by
remote computing technology such as laptops, PDAs, smartphones, and
tablets – are
free to conduct business from their homes or other extra-enterprise locations.
With cloud computing, applications
that would normally be run "on-premise" at an enterprise data center
are instead run "in the cloud." According to the US National Institute of Standards and Technology (NIST), cloud computing services are delivered via three prominent service models:
- Software as a Service (SaaS) – The capability provided to the consumer
is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin
client interface such as a Web browser (e.g., web-based e-mail).
- Platform as a Service (PaaS) – The capability provided to the consumer
is to deploy onto the cloud infrastructure consumer-created or acquired
applications created using programming languages and tools supported by the provider.
- Infrastructure as a Service (IaaS) – The capability provided to the
consumer is to provision processing, storage, networks, and other
fundamental computing resources where the consumer is able to deploy and
run arbitrary software, which can include operating systems and applications.
In addition to Public Cloud services, like Amazon Web
Services (AWS) or Microsoft Azure, which cater to multiple clients,
enterprises can create a customized cloud environment, either a:
- Private Cloud, dedicated to enterprise interests only; or
- Hybrid Cloud, combining and integrating public and private clouds.
With virtualization, a software-level "hypervisor" partitions a
real server or mainframe into multiple "virtual machines."
Virtualization technology, which can also be applied to storage
systems, is the engine that drives today’s virtual computing.
Virtualization technology first became popular in the 1970s when IBM
released its VM/370 operating system. Originally intended as
a test tool for IBM engineers, VM/370 enabled a real mainframe to
be split into hundreds of virtual machines
capable of running other IBM operating systems such as SVS, MVS,
DOS, and CMS. This capability was exploited by one IBM client,
RCA Corporation, to develop a disaster recovery strategy in which
RCA’s central mainframe was used to emulate the operating
environment of satellite data centers, thus allowing the ready
recovery of those data centers in the event of a disaster.
Today, most enterprise environments feature both:
- A conventional data center, which is usually reserved for
high-volume processing, the execution of proprietary applications,
and the handling of sensitive or confidential data.
- A virtual data center, in which, for example, common
applications like e-mail and enterprise resource planning (ERP)
are performed by SaaS providers, and the local server
"farm" is "virtualized" by replacing dedicated
real servers with virtual machines.
While the idea may seem farfetched, the ultimate aim is to replace
today’s data center with its virtual counterpart: a universe in which
every employee has access to a "thin client," i.e.,
"dumb," terminal through which he or she can connect to a computing
superstructure that will process all enterprise transactions while
maintaining the confidentiality and integrity of the enterprise
data. The virtual data center vision is to transform computing into
a public utility, like water and gas. In fact, one of the early
incarnations of cloud computing was referred to as "utility computing."
Today, the primary technology enabling and encouraging the
development of virtual data centers is, appropriately, virtualization.
Virtualization is a software mechanism for improving hardware utilization,
allowing the elimination of servers and other equipment and reducing the overall
hardware expense. There are five primary forms of virtualization: server
virtualization, storage virtualization, data "deduplication," desktop virtualization, and network virtualization.
The enterprise data center is overcrowded,
hosting dozens of servers connected by miles of cable. Many of these
servers are single-purpose systems, serving
the needs of a single operating system, information system, user application, or
user community. As such, servers are often underutilized and overresourced,
considering the requirements they place on power utilization, air conditioning,
floor space, and IT support services. Server Virtualization is the process
of dividing a physical, or real, server into multiple virtual servers, or
machines (VMs). Since each virtual machine is capable of performing the
functions of a real server, the number of real servers may be reduced, often by
a double-digit factor.
A variation on the VM theme, containers are "simple lightweight virtual elements that also share a
server." According to analyst Tom Nolle, containers "have been exploding in popularity because they use fewer server
resources than VMs, and thus allow more applications to be packed into a given server."1
Figure 1. Server Virtualization
Server virtualization enables enterprises to:
- Reduce the number of physical, or real, servers
- Lower server-related power and air conditioning costs
- Recover data center floor space normally allocated to server hardware
- Shorten the server data backup process
- Improve server reliability, availability, and serviceability
- Lessen the demand for IT support services
- Decrease the total cost of server ownership
- Achieve "sustainability" (or environmental) goals and mandates
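The consolidation arithmetic behind these benefits can be illustrated with a short Python sketch. The figures below – per-VM resource demands, host capacity, and the simple first-fit placement strategy – are purely illustrative assumptions, not measurements from any vendor's product:

```python
# Hypothetical sketch: estimate how many physical hosts are needed when
# single-purpose servers are consolidated as virtual machines.
# All capacity figures here are illustrative assumptions.

def consolidate(vm_demands, host_capacity):
    """First-fit packing of VM (cpu, ram_gb) demands onto identical hosts.

    Returns a list of hosts, each a list of the VM demands placed on it.
    """
    hosts = []
    for cpu, ram in vm_demands:
        for host in hosts:
            used_cpu = sum(c for c, _ in host)
            used_ram = sum(r for _, r in host)
            if used_cpu + cpu <= host_capacity[0] and used_ram + ram <= host_capacity[1]:
                host.append((cpu, ram))
                break
        else:
            hosts.append([(cpu, ram)])  # no room anywhere: start a new physical host
    return hosts

# Twelve underutilized servers, each needing 2 vCPUs and 4 GB RAM,
# packed onto hosts with 16 cores and 64 GB RAM apiece.
demands = [(2, 4)] * 12
hosts = consolidate(demands, (16, 64))
print(f"{len(demands)} physical servers reduced to {len(hosts)} host(s)")
```

In this invented scenario a dozen real servers collapse onto two virtualization hosts – the "double-digit factor" reduction the text describes, in miniature.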
Storage virtualization is the
process of aggregating real storage space, which may exist on a
variety of disk arrays, into a single, logical storage pool,
permitting centralized, cross-device provisioning and management, and
eliminating the formation of unusable disk fragments. Storage virtualization is designed to
improve storage utilization and flexibility, increase application uptime, reduce
administrative overhead, preserve the investment in existing storage
infrastructure, and facilitate the introduction and integration of new storage
systems. Storage virtualization enables enterprises to:
- Aggregate storage resources into a single, virtualized storage pool.
- Provide a consistent presentation of storage to virtual machines.
- Deliver high-performance access to virtual machine disks.
- Perform live migrations of virtual machine disk files across storage arrays.
- Support multiple connectivity options.
- Eliminate virtual machine storage I/O bottlenecks and free up valuable storage capacity.
- Reduce the total cost of storage ownership.
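As a rough illustration of the pooling idea, the following Python sketch aggregates two hypothetical disk arrays into one logical pool and provisions a volume that transparently spans both devices. The class, array names, and sizes are invented; real storage virtualization layers operate at the block level with far more sophistication:

```python
# Hypothetical sketch of storage virtualization: several real disk arrays
# are aggregated into one logical pool from which volumes are provisioned
# without regard to which physical device backs them.

class StoragePool:
    def __init__(self, arrays):
        # arrays: {name: free_gb} for each physical disk array
        self.arrays = dict(arrays)
        self.volumes = {}  # volume name -> list of (array, gb) extents

    def provision(self, name, size_gb):
        """Allocate a logical volume, spanning arrays if necessary."""
        if size_gb > sum(self.arrays.values()):
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        for array, free in self.arrays.items():
            take = min(free, remaining)
            if take:
                extents.append((array, take))
                self.arrays[array] -= take
                remaining -= take
            if remaining == 0:
                break
        self.volumes[name] = extents
        return extents

pool = StoragePool({"array_a": 500, "array_b": 300})
# A 600 GB volume spans both arrays transparently.
print(pool.provision("erp_data", 600))
```

The application sees one volume; the pool quietly stitches it together from free space on whichever arrays have room – the essence of cross-device provisioning.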
Data deduplication is the process of recognizing and refusing to store redundant
data objects. Instead of storing a duplicate object, deduplication
software simply references the location of the original object. Data deduplication enables enterprises to:
- Reduce their initial storage investment.
- Prolong the interval between storage upgrades.
- Increase data transmission speed and lower bandwidth costs.
- Store more data online, for longer periods.
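The reference-instead-of-copy mechanism can be shown in a minimal Python sketch. The hash-indexed store below is a simplification (real deduplication products chunk data and manage reference counts), and all names are hypothetical:

```python
import hashlib

# Hypothetical sketch of data deduplication: each incoming block is hashed;
# a block already present is recorded as a reference to the original copy
# rather than being stored again.

class DedupStore:
    def __init__(self):
        self.blocks = {}   # digest -> block bytes (stored once)
        self.refs = []     # one digest per write, referencing the stored block

    def write(self, block: bytes) -> bool:
        """Store a block. Returns True if it was new, False if deduplicated."""
        digest = hashlib.sha256(block).hexdigest()
        new = digest not in self.blocks
        if new:
            self.blocks[digest] = block
        self.refs.append(digest)
        return new

store = DedupStore()
for chunk in [b"report-q1", b"report-q1", b"report-q2", b"report-q1"]:
    store.write(chunk)

print(f"{len(store.refs)} writes, {len(store.blocks)} unique blocks stored")
```

Four logical writes consume only two blocks of physical storage; the other two writes are satisfied by references to the originals.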
Desktop virtualization is a process by which enterprise users
can remotely access their office PC environment (applications and data). Convenient for users, desktop virtualization also simplifies the
provisioning and management of "teleworkers", easing the burden on
enterprise IT staff.
Network virtualization is the process of combining network hardware, like
routers and switches, with network software to create logical, virtual
networks (VLANs) that can readily integrate into specific virtual environments.
The slow but steady retreat from conventional (on-premise)
data centers to virtual data centers is real and measurable. Server and storage
virtualization, in particular, are being viewed as instruments for:
- Cutting equipment costs
- Decreasing software license fees
- Lowering utility bills
- Freeing floor space
- Achieving a "greener" IT environment, which is also image enhancing for the enterprise
- Providing a less expensive – and more reliable – disaster recovery capability
- Reducing the number of IT workers
On the subject of workforce reduction, enterprise management is
continuing to examine data center outsourcing opportunities,
engaging, for example, lower-priced foreign workers as application
programmers and service desk personnel.
In another virtual data center development, more and more
enterprises are expanding their "telework," or
telecommuting, programs. According to one estimate, one-third of all employees currently enjoy the option of working full-
or part-time from their home or other remote location.
Reflecting the cloud
revolution, many, if not most, prominent application providers are now
offering their products in two forms: their customary package,
suitable for installation in the client’s data center, and a SaaS package, which enables the client
to use the software without the hassle of installing and maintaining
it – a virtual data center approach. In certain cases, providers are
offering SaaS-only software.
While virtualization – especially the
server variety – is gaining in popularity, more work is necessary to
effect proper virtualization management. The
situation is similar to the early 1990s when client/server computing
captured the IT community. While client/server (or, more
generally, distributed computing) was, itself, a promising virtual
data center concept, the advocates of distributed computing failed
to appreciate the role of systems management in making the whole thing work. While some dismissed old-fashioned mainframe computing, veteran "mainframers"
understood the vital contribution of disciplines such as change
management, configuration management, problem management, capacity
planning, and simple backup and recovery – disciplines developed and
matured over decades, and disciplines designed to effect computing
"command and control."
Converged Infrastructure Systems Enhance
Virtual Data Center Command and Control
Converged infrastructure (CI) is the combination and integration of server,
storage, and networking resources into a single, standalone system.
Hyperconverged infrastructure (HCI) takes the CI concept a step farther by integrating additional technologies, such as:
- Data protection products (particularly backup)
- Deduplication appliances (for space saving)
- Wide area network (WAN) optimization appliances
- Solid state drive arrays
- Public cloud gateways
- Replication appliances or software.2
Both CI and HCI are
commanding the attention of IT departments looking to simplify IT
operations, and business leaders looking to lower IT costs.
Although the proponents of CI/HCI would likely reject the
comparison, CI/HCI systems might be described as modern-day mainframes where the
distinction between computing components is blurred by a central command and control structure.
While the long-term objective of CI/HCI providers is to remake the enterprise
data center into a less complex, more maintainable facility, CI/HCI systems are
often deployed to create so-called "islands of capability,"3
supporting, for example, a virtual desktop or server virtualization initiative,
or providing disaster recovery for mission-critical applications. Many CI/HCI systems are augmenting data center assets, not replacing them.
Over the next few years, the small- to medium-sized business (SMB) community
is expected to explore converged infrastructure, especially if the total cost of
ownership (relative to more traditional IT infrastructures) is low. In order to
meet this goal, CI providers are going to have to supply turnkey systems coupled
with superior customer service.
Software-Defined Data Center Incorporates Virtual and Physical Resources
In some circles, the term "virtual data center" is being replaced by
"software-defined data center" (SDDC). According to Forrester Research, "An SDDC is an integrated abstraction layer that defines a complete data center by means of a layer of software that presents the resources of the data center as pools of virtual
and physical resources, and allows them to be composed into
arbitrary user-defined services."4
Forrester believes that an SDDC must incorporate
legacy physical (i.e., non-virtualized) resources. "Vendors that are
currently focused exclusively on virtualized infrastructure would rather
sidestep this aspect of an SDDC, and while they can be quite successful without
incorporating physical resources, the solutions will remain incomplete and less
useful until these legacy artifacts can be incorporated in the SDDC."5
IaaS + Cloud + HCIS = New
Virtual Enterprise Data Center
Patrick Nelson reports in Network World that according to Gartner analyst Michael Warrilow,
IaaS, cloud, and [hyperconverged integrated systems (HCIS)] might combine to create an
enterprise data center revolution. The notion that “the data center will no
longer be the center of the universe” is a possibility. "A higher mix of elastic
and scalable cloud infrastructure” may substitute. “The data center becomes less
physical and becomes more software defined, and becomes more virtual – a mix of
on and off premises.”6
The Virtual Data Center Concept Is Ever-Evolving
The virtual data center concept, which began with mainframe-based virtual
machine technology, is ever-evolving. Two new candidates for inclusion in the
VDC space are "Edge Computing" and "Shadow IT," as each represents a unique
departure from classic data center operations.
Edge computing is, straightforwardly, computing at the network edge. According to
Gartner, "Edge computing describes a computing topology in which information
processing and content collection and delivery are placed closer to the sources
of this information."7
Edge computing is tied to another emergent technology, the Industrial Internet of Things (IIoT), in which industrial components are transformed into
smart machines capable of collecting and processing data locally and
transmitting it to a central data center, or the cloud. Given the sheer volume
of data being collected by sensors and other intelligent devices, it only makes
sense to conduct as much processing "onsite" as possible; in other words, to
shift processing to the network edge.
If edge computing sounds like the latest incarnation of distributed
computing, it is. The principal difference between edge computing and earlier
distributed forms is that edge computing is essential to certain use cases. The
most frequently cited example involves self-driving or autonomous cars, in which
the onboard AI systems must make immediate, often life-and-death, decisions
based on vehicle sensor data. There is literally no time to transmit data to the
cloud for processing. The processing must take place within the vehicle, or at the network edge.
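A minimal Python sketch of the edge pattern shows how raw sensor samples might be reduced to a compact summary before anything crosses the network. The readings, threshold, and summary fields below are invented for illustration:

```python
# Hypothetical sketch of edge computing: raw sensor readings are reduced
# to a compact summary at the network edge, so only the summary (and any
# alarms) crosses the WAN to the central data center or cloud.

def summarize_at_edge(readings, alarm_threshold=90.0):
    """Aggregate raw readings locally; forward only the summary and alarms."""
    alarms = [r for r in readings if r >= alarm_threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "alarms": alarms,  # only out-of-range readings travel upstream
    }

raw = [71.2, 70.9, 71.5, 95.3, 70.8]   # five raw samples from a sensor
summary = summarize_at_edge(raw)
print(f"condensed {summary['count']} readings into 1 summary, "
      f"{len(summary['alarms'])} alarm(s) forwarded")
```

Scaled up to thousands of sensors, this local-first processing is what makes the data volumes of the Industrial Internet of Things tractable.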
Shadow IT is the term applied to an Employee-Defined Data Center. More
specifically, shadow IT refers to employees' unauthorized use of third-party,
usually cloud, software and services. As with the bring your own device (BYOD)
movement, the danger derives from the fact that shadow IT applications are not
only unsupported by enterprise IT, but their existence and use, in many
instances, is unknown to enterprise IT. Shadow IT operations violate, in a
fundamental fashion, foundational IT governance disciplines such as
configuration management and change management – which require knowing precisely
which information systems and applications constitute the official enterprise computing environment.
Enterprise employees establish their own virtual (shadow) data center for a
variety of reasons. As analyst Daniel Davis speculates:
- "Perhaps they aren’t satisfied with their company-approved options, or
they aren’t even aware there are such options.
- "Maybe the company doesn’t actually have a sanctioned solution for a particular need.
- "Or maybe the employee just likes the app they’re used to."8
Whatever the rationale, employees no longer feel constrained to use
enterprise-endorsed applications. As a result, enterprising employees often
elect to build their own de facto virtual data center.
Building a virtual data center requires
careful planning since, in certain instances, the promise of lower
costs – the chief selling point – may be illusory. As analyst
Alan Murphy observes, "Once virtualization hardware and software is acquired, operational expenses can grow unbounded; headcount can increase or existing staff may require training to administer the new virtual machine platforms. Management of these new tools can be a long-term recurring cost, especially if the virtualization is done in-house. There can be additional growth requirements for the application and storage networks as these virtual machines begin to burden the existing infrastructure. Unexpected and unplanned costs can be a serious problem when implementing or migrating from physical to virtual machines, hindering or even completely halting deployment."9
To help combat the
problem, analyst John Maxwell urges an enterprise’s "virtual administrators" to
"control [virtual machine (VM)] sprawl. Wasted resources,
such as abandoned and powered-down VMs, are challenging to identify in virtual
data centers. They consume expensive software, hardware and storage resources
and will continue to accumulate unless stopped. Eventually, VM sprawl and
overcrowded VMs reduce performance and increase cost. A good IT pro spends
some time implementing policy controls, identifying wasted resources, and
orchestrating their elimination."10
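The policy Maxwell describes – identifying abandoned, powered-down VMs – can be sketched as a simple inventory scan. The inventory format, dates, and 30-day retention window below are hypothetical assumptions, not features of any particular management product:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a VM sprawl control: flag virtual machines that
# have been powered down longer than a retention window as candidates
# for reclamation. The inventory data here is invented.

def sprawl_candidates(inventory, now, max_idle_days=30):
    """Return names of VMs powered off longer than max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [name for name, (state, last_seen_on) in inventory.items()
            if state == "powered_off" and last_seen_on < cutoff]

now = datetime(2019, 6, 1)
inventory = {
    "erp-prod":  ("powered_on",  datetime(2019, 6, 1)),
    "test-old":  ("powered_off", datetime(2019, 1, 15)),   # abandoned
    "dev-spare": ("powered_off", datetime(2019, 5, 25)),   # recently idle
}
print(sprawl_candidates(inventory, now))
```

Only the long-idle VM is flagged; the recently powered-down one survives the retention window – the kind of policy control Maxwell recommends automating.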
To realize the cost savings and other benefits that should accrue
when deploying a virtual data center, some experts suggest hiring a
"Virtualization Architect," someone who understands
virtualization technology, including its risks, and can formulate a
strategy for transitioning from conventional to virtual data center
operations. One of the major issues, as analyst Jason Langone
explains, is monitoring. "Many
organizations run their critical production systems on a virtualized
infrastructure, leveraging the many benefits of server virtualization. However, many organizations also fail to correctly monitor their VI in
the same holistic fashion as they have historically done with their
physical environment and its hosted services."11
1 Tom Nolle. "The Virtual Data Center Put in Perspective."
UBM. October 3, 2017.
2 Scott D. Lowe. Hyperconverged Infrastructure for
Dummies: SimpliVity Special Edition. John Wiley & Sons, Inc. 2014:36.
3 Allen Bernard. "Is Converged Infrastructure the Future of the
Data Center?" CIO. March 19, 2013.
4 Richard Fichera with Doug Washburn and Eric Chi. "The
Software-Defined Data Center Is the Future of Infrastructure Architecture."
Forrester Research. November 12, 2012:3.
5 Ibid. p.15.
6 Patrick Nelson. “Gartner
Tips Virtual Data Centers As Future.” Network World.
June 13, 2016.
7 David W. Cearley, Brian Burke, Samantha Searle, and Mike J.
Walker. "Top 10 Strategic Technology Trends for 2018." Gartner. October 3, 2017.
8 Daniel Davis. "What Is Shadow IT? How IT Leaders Can Overcome
the Top Five Collaboration Challenges."
IBM. March 30, 2016.
9 Alan Murphy. "Keeping Your Head Above the Cloud: Seven Data
Center Challenges to Consider Before Going Virtual."
F5 Networks. 2008.
10 John Maxwell. "Top Seven Tips for Optimizing
Virtual Data Centers." The Data Center Journal. October 13, 2014.
11 Jason Langone. "Designing the Virtual Data Center – Part
1." Virtualization.info. October 5, 2009.
About the Author
James G. Barr is a leading business continuity analyst and
business writer with more than 30 years’ IT experience. A member
of "Who’s Who in Finance and Industry," Mr. Barr has
designed, developed, and deployed business continuity plans for a
number of Fortune 500 firms. He is the author of several books,
including How to Succeed in Business BY Really Trying, a member
of Faulkner’s Advisory Panel, and a senior editor for Faulkner’s
Security Management Practices. Mr. Barr can be reached via
e-mail at email@example.com.