Storage Technologies

by James G. Barr

Docid: 00011568

Publication Date: November 2022

Report Type: TUTORIAL


Data is not only growing, it’s exploding – in both volume and importance.
According to the US Government Accountability Office (GAO), emerging
global data storage needs are currently estimated at around 97 trillion
gigabytes. By 2025, the demand is expected to double. At present,
enterprise storage needs are being served by a variety of storage
technologies including direct attached storage (DAS), storage area
networks (SAN), network attached storage (NAS), solid-state drives (SSD),
software defined storage (SDS), storage virtualization, hyperconverged
storage, and tape. Additional technologies, like synthetic DNA and 5D
glass, are under development.

Report Contents:

Executive Summary

Data is not only growing, it’s exploding – in both volume and importance.
According to the US Government Accountability Office (GAO):

  • “Emerging global data storage needs … are currently estimated [at]
    around 97 trillion gigabytes.”
  • “By 2025, the demand is expected to double.”1

Data is a key asset at the heart of enterprise value creation and one
that must be managed with the same kind of acumen as employees,
facilities, or finances.

At present, enterprise storage needs are being served by a variety of
storage technologies, including:

  • Direct attached storage (DAS)
  • Storage area networks (SAN)
  • Network attached storage (NAS)
  • Solid-state drives (SSD)
  • Software defined storage (SDS)
  • Storage virtualization
  • Hyperconverged storage
  • Tape

Other technologies, like synthetic DNA and 5D glass, are under
development.



Direct Attached Storage

Direct Attached Storage (DAS) is storage connected directly to a
computer. Familiar forms include:

  • Hard drives
  • Solid-state drives
  • CD/DVD drives
  • Flash drives

DAS solutions are ideal for creating local backups.2
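
As a minimal sketch of DAS in practice, the following Python snippet
archives a directory to a locally attached drive using only the standard
library; the function name and paths are illustrative, not part of any
product:

```python
import shutil
from pathlib import Path

def back_up(source_dir: str, backup_root: str) -> str:
    """Archive source_dir as a .tar.gz under backup_root
    (e.g., a directly attached USB or secondary hard drive)."""
    src = Path(source_dir)
    dest = Path(backup_root) / src.name
    # shutil.make_archive appends the ".tar.gz" suffix itself and
    # returns the full path of the archive it created.
    return shutil.make_archive(str(dest), "gztar", root_dir=src)
```

Because the backup target is just another local path, no network stack
or file-sharing protocol is involved, which is both DAS’s chief
convenience and its chief limitation.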

Storage Area Networks

Storage area networks (SAN) connect one or more dispersed storage devices
to a server in a way that makes them behave as though they are physically
under the control of the server’s operating system. There is effectively
no limit to the number of optical jukeboxes, tape libraries, or disk
arrays that can be included in a storage area network. Just as a LAN can
be used to
connect clients to servers, a SAN can be used to connect servers to
storage, servers to each other, and storage to storage. SANs are very
well suited to data-intensive environments such as video editing, online
transaction processing, data warehousing, storage management, and server
clustering applications. They are also very useful for backing up data.

Network Attached Storage

Network attached storage (NAS) behaves like a small, special-purpose
server that is optimized for file sharing. It uses file-based
protocols (SMB, NCP, AFP, CIFS, FTP, NFS, or HTTP) to transfer files from
a single storage drive or an array of drives situated in a specialized
enclosure. NAS devices include the I/O ports needed to support the
transfer protocol of the user’s choice. Because a NAS device does not
require a dedicated host computer in order to function, it is one of the
most flexible options available.

Solid-State Drives

Solid-state drives (SSD) are built from silicon and have no moving
parts, which makes them much faster than mechanical devices such as
spinning disk drives or magnetic tape. SSDs typically consist of large
aggregations of flash memory and are well matched to applications where
high-performance storage is crucial to overall solution success.
Relative to mechanical drives with similar storage capacities, SSDs are
still fairly expensive.

Software Defined Storage

Software defined storage (SDS) allows software to automate storage
decisions and tasks such as managing storage policy, provisioning drives,
and determining the best method of distributing data among the available
storage hardware. For example, physical storage assignments can be
optimized by data type, frequency of access, volume of a particular type
of data, or other factors. When combined with one of the aforementioned
technologies, SDS allows an enterprise to maximize storage potential while
minimizing costs.
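
A toy illustration of the policy idea, written in Python with
hypothetical tier names and thresholds (none of this comes from a
specific SDS product), might look like this:

```python
# Toy software-defined storage policy: route an object to a storage tier
# based on its data type and how often it is accessed. The tier names
# ("ssd", "hdd", "tape") and thresholds are illustrative assumptions.
def choose_tier(data_type: str, accesses_per_day: float) -> str:
    if accesses_per_day >= 100:
        return "ssd"    # hot data: low-latency flash
    if data_type == "archive" or accesses_per_day < 1:
        return "tape"   # cold data: cheapest cost per gigabyte
    return "hdd"        # warm data: spinning disk
```

A real SDS layer would evaluate such rules continuously and migrate data
between tiers as access patterns change, rather than deciding once at
write time.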

Storage Virtualization

Storage virtualization improves storage utilization and flexibility,
increases application uptime, reduces administrative overhead, preserves
the investment in existing storage infrastructure, and facilitates the
introduction and integration of new storage systems. Storage
virtualization enables enterprises to:

  • Consolidate storage resources into a single, virtualized storage pool.
  • Offer a consistent presentation of storage to virtual machines.
  • Provide high performance access to virtual machine disks.
  • Perform live migrations of virtual machine disk files across storage
  • Support multiple connectivity options, including FC and iSCSI SAN,
    NFS, and internal storage disk.
  • Eliminate virtual machine storage I/O bottlenecks and free up valuable
    storage capacity.
  • Reduce the total cost of storage ownership.
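
The pooling concept behind the first bullet can be sketched in a few
lines of Python. The device names, sizes, and first-fit placement rule
below are all illustrative assumptions, not how any particular
hypervisor or array actually works:

```python
# Toy virtualized storage pool: several backing devices are consolidated
# into one logical pool, and each volume is placed on whichever device
# still has room (first fit). Capacities are in gigabytes.
class StoragePool:
    def __init__(self, devices):
        self.free = dict(devices)   # device name -> free capacity (GB)
        self.allocations = {}       # volume name -> (device, size)

    @property
    def total_free(self):
        return sum(self.free.values())

    def allocate(self, volume, size_gb):
        for dev, free in self.free.items():
            if free >= size_gb:
                self.free[dev] -= size_gb
                self.allocations[volume] = (dev, size_gb)
                return dev
        raise RuntimeError("pool exhausted")
```

The consumer of the pool sees only total capacity; which physical device
backs a given volume is an internal detail, which is what makes live
migration and consolidation possible.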

Hyperconverged Storage

According to CDW, “Hyperconverged storage (HCS) is the next step up from
storage virtualization and SDS. HCS utilizes the cloud to combine the
functions of computing, virtualization, and storage as a physical unit
that can be managed as a single system. This is a type of
software-defined storage because each node has a software layer running
virtualization software identical to all other nodes in the cluster. This
software virtualizes the resources in the individual node and shares them
with the other nodes, allowing storage and other resources to be used as a
single storage or compute pool.”3


Tape

Tape is the enterprise’s eternal storage medium. While
advances in disk and flash technology have served to marginalize interest
in this decades-old form, tape remains not only relevant but essential to
many enterprise operations, especially archival data storage. As analyst
Samuel Greengard observes:

“[All] storage devices and use cases eventually lead back to tape – at
least, for the foreseeable future. While tape is not as flexible or
convenient as hard drives, SSDs, and other media, it remains
cost-effective and highly reliable.

“Tape won’t ever threaten hard drives and SSD for dominance, but it will
remain at the center of storage – and provide a strong insurance policy
for the likes of Google.”4

Current View


Big Data, AI & IoT

The ever-expanding demand to store, process, and share large volumes of
data across the enterprise is being driven by multiple forces, including:

  • Big Data, especially the processing of customer and
    financial data.
  • Artificial Intelligence, particularly the formation
    of machine learning data sets.
  • IoT, the Internet of Things phenomenon in which even
    the smallest device or component is becoming smart and addressable.

The Big Four Factors

Analyst John Edwards reports that according to Allan Buxton, director of
forensics at data recovery firm Secure Data Recovery Services, there are
four fundamental forces influencing the storage industry. In Edwards’
words, “Four distinct factors are currently driving the evolution in
storage technology:

  1. “Cost,
  2. “Capacity,
  3. “Interface speeds, and
  4. “Density.”

“Hard disk manufacturers are competing with solid-state drive (SSD)
makers by decreasing access and seek times while offering higher storage
capacities at a lower cost. Both SSD and hard disk makers tout improved
reliability, but there is no clear-cut winner in real world tests.”5

Never Having to Delete

A recurring theme among storage users – and the storage providers who are
attempting to accommodate them – is to never delete data. For example,
analyst Sascha Brodsky reports that “Toshiba used a type of microwave
recording … to push HDD capacity to 18 terabytes in its Nearline hard
disc drives.” According to Jacky Lee, a Toshiba marketing manager, “The
larger capacity allows consumers the ability to consolidate digital
content from several devices into one HDD. This makes it easier to
organize and back up valuable content.”6

Storage’s Achilles Heel

Storage media are fragile, susceptible to physical damage as well as
normal data erosion. As reported by analyst Samuel Greengard:

  • “A typical hard drive will operate only about three to five years
    before failing.
  • “Portable disk storage technologies such as CDs and DVDs generally
    hold data for 10 to 25 years.
  • “Flash storage – which includes drives, cards, and SSDs – degrades
    with use, rather than with age. This means that the more a user writes
    and rewrites to the device, essentially using it for its intended
    purpose, the greater the risk of failure.”7

Compounding this media stability problem are the planned and unplanned
obsolescence of media recorders and players. Most PCs are no longer
equipped with built-in floppy disc or CD drives; thus, even if an
enterprise were in possession of readable legacy media, data extraction
would be troublesome.

As a consequence of naturally deteriorating storage media, enterprises
may be forced to periodically re-record critical data, especially
financial and operational data where governmental and other third-party
interests insist on long retention periods, and data must be delivered
on demand in readable form.
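
One way such periodic re-recording might be verified is to capture a
cryptographic digest when data is archived and compare it before and
after each migration. The sketch below uses Python’s standard library;
the helper names are illustrative:

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file, read in chunks so large archives
    do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded_digest: str) -> bool:
    """Compare a file against the digest captured when it was archived;
    a mismatch signals media degradation or a faulty copy."""
    return fingerprint(path) == recorded_digest
```

Running such a fixity check after every media migration gives an audit
trail that the re-recorded copy is bit-for-bit identical to the
original.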

Dead Data

By one estimate, “Up to 90 percent of the data generated by computers and
other digital systems is never accessed again; it simply lies idle,
consuming ever-growing mountains of storage media or servers.”8
This phenomenon presents storage planners with a real dilemma and should
prompt a comprehensive review of enterprise data storage policies and
practices. Relevant questions include:

  • Can the enterprise reduce the amount of data collected and stored?
  • Can the enterprise effectively and efficiently shuttle little-used
    data from disk to tape?
  • Can the enterprise effect these data relocations without compromising
    data integrity, security, and privacy?
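
To ground the first two questions, a rough way to estimate how much
little-used data a file tree holds is to scan file-access times. This is
a sketch only: many filesystems update atime lazily or not at all, so
the results are approximate.

```python
import os
import time

def stale_files(root: str, days: int = 365):
    """Yield (path, size_in_bytes) for files whose last recorded access
    (st_atime) is older than the given number of days."""
    cutoff = time.time() - days * 86400
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff:
                yield path, st.st_size
```

Summing the sizes yielded by such a scan gives a first-order estimate of
how much data could be shuttled from disk to tape.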



Just as current storage technologies are evolving, new technologies are
under development.

Synthetic DNA

Analyst Sang Yup Lee reveals that DNA (the basic building blocks of life)
can store “massive amounts of data at a density far exceeding that of
electronic devices.”9 In describing the concept, the GAO
reminds us that “In nature, DNA has been storing information since life
began.” In terms of data storage, “The same coding system can be used to
store digital information in an artificial DNA strand – created in a lab,
not by a biological organism. To read the data, established technology
known as sequencing can decode [the] DNA [please see Figure 1].” By one
estimate, “DNA can hold over 11 trillion gigabytes in a cubic inch of
space.”

Figure 1. Synthetic DNA Data Write and Read Process

The letters A, C, G, and T represent the components of the
genetic code.

Source: GAO | GAO-22-105954
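
The write and read steps in Figure 1 can be illustrated with a toy
two-bits-per-base mapping. Real DNA storage systems add error correction
and avoid troublesome sequences (such as long runs of the same base), so
this is a sketch of the coding idea only:

```python
# Map each pair of bits to one DNA base: two bits per base, so one byte
# becomes four bases. This simple mapping is illustrative, not the
# scheme any particular lab uses.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Write step: turn bytes into a strand of A/C/G/T characters."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Read (sequencing) step: recover the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

At two bits per base, the information density claims above follow from
how tightly bases pack in physical space rather than from the coding
itself.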

Glass Storage

Analyst Sascha Brodsky reports, “Researchers have created ‘5D’ data
storage technology that could allow 500 terabytes of data to be written to
a CD-sized glass disc.”11

Elaborating on the underlying process, the GAO adds that “Data can also
be stored in quartz glass using a fast and precise laser, similar to the
kind used for vision correction surgery. The laser makes etchings that
represent digitally coded ones and zeros. This method is called 5D because
it uses five unique attributes of the etchings. Three of the attributes
relate to the locations of the etchings on the glass, equivalent to the X,
Y, and Z coordinates of a 3D graph. DVD storage uses a similar system, but
glass storage has more capacity because the laser creates multiple layers
of data using two additional attributes – the size and orientation of
the etchings.”

Quantum Memory

Another forward-looking – at this juncture, speculative – storage
technology is quantum memory. As analysts Jane McCallion and Chris
Finnamore explain, “This technology uses quantum photonics, which uses
light particles themselves (photons) to transmit and store data. Using
quantum states for data storage is tricky, in part due to the fact quantum
physics relies on particles behaving in unusual ways. However, some
progress has been made.”13



In evaluating emerging storage technologies, enterprise planners should:

  • Fully determine how storage architecture choices impact energy
    consumption, an important element in estimating total cost of
    ownership.
  • Prefer solid-state storage over disk-based systems for high-traffic
    data stores where the technology is stable and delivered by a
    reputable vendor.
  • Perform a pilot using “other people’s” storage (i.e., Infrastructure
    as a Service) to evaluate and validate storage solution design
    assumptions when implementing a new storage-intensive service.
  • Look for storage models in which the storage is fully integrated with
    other key computing system elements, including processing and
    networking.
  • Include team members with a deep IT security background from the
    earliest stages of planning for any significant evolution of the
    storage architecture.


About the Author


James G. Barr is a leading business continuity analyst
and business writer with more than 30 years’ IT experience. A member of
“Who’s Who in Finance and Industry,” Mr. Barr has designed, developed, and
deployed business continuity plans for a number of Fortune 500 firms. He
is the author of several books, including How to Succeed in Business by
Really Trying, a member of Faulkner’s Advisory Panel, and a senior
editor for Faulkner’s Security Management Practices. Mr. Barr can be
reached via e-mail.
