Storage Area Networks



by Lynn Greiner

Docid: 00016891

Publication Date: January 2018

Report Type: TUTORIAL


With today’s ever-increasing storage needs and ever-shrinking backup
windows, companies have outgrown the capabilities of traditional
direct-attached storage. A storage area network (SAN) is one solution to this
challenge. SANs provide a storage infrastructure that resides on its own private
network, is not tied to any one server, and can be allocated and reallocated as
required. It would have been difficult to provision the cloud without SANs. A
SAN is not necessarily easy to build or manage, however. This report
describes the technologies involved and discusses some of the key
implementation issues.

Report Contents:

Executive Summary


A storage area network, or SAN, is a specialized network that enables fast, reliable
access among servers and external or independent storage resources.


In this type of network, a storage device is not the exclusive property of
any one server; rather, storage devices are shared among all networked servers
as peer resources. Just as a LAN can be used to connect clients to servers, a
SAN can be used to connect servers to storage, servers to each other, and
storage to storage.

This arrangement offers a number of benefits. Redundancy is an inherent part
of SAN architectures, which also allows for easier scalability while preserving
ubiquitous data access. Also, with storage centralized, management of the data
for tasks such as optimization, reconfiguration, and backup/restore is more
efficient. All of this makes SANs highly suited for data-intensive environments
such as video editing, pre-press, online transaction processing (OLTP), data
warehousing, storage management, and server clustering. SANs are a foundation
technology for today’s cloud infrastructures.

SANs are particularly useful for data needing regular backups. Previously, there were only two
choices: either a tape drive had to be installed and maintained on every server
or the data was moved across the network to a dedicated backup server, which
consumed bandwidth. Performing backups over the LAN can be slow – and
disruptive to clients and applications. A daily backup can suddenly introduce gigabytes of data
into normal LAN traffic. With SANs, organizations can have the best of both
worlds: high-speed backups over a dedicated network and central management.



SANs have existed for years in the mainframe environment in the form of
Enterprise Systems Connection (ESCON). In mid-range environments, the high-speed
data connection was primarily SCSI (Small Computer System Interface) – a
point-to-point connection, which is severely limited in terms of the number of
connected devices it can support as well as the distance between devices.

In a traditional storage environment, a server controls the storage devices
and administers requests and backup. With a SAN, instead of being involved in
the storage process, the server simply monitors it. By optimizing the box at the
head of the SAN to do only file transfers, users are able to get much higher
transfer rates over such transports as Fibre Channel.

Using Fibre Channel as the connection between storage devices also increases
distance options. While traditional SCSI allows only a 25-meter (about 82 feet)
distance between machines and Ultra2 SCSI allows only a 12-meter distance (about
40 feet), Fibre Channel supports spans of 10 kilometers (about 6.2 miles),
making it suited to building campus-wide storage networks. SCSI can only connect
up to 16 devices, whereas Fibre Channel can link as many as 126. By combining
LAN networking models with the core building blocks of server performance and
mass storage capacity, SAN eliminates the bandwidth bottlenecks and scalability
limitations imposed by previous SCSI bus-based architectures.
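As a loose illustration (not from the original report), the distance and device-count constraints above can be captured in a small lookup table. The figures are the ones quoted in this report; real limits vary by cable type, speed, and topology:

```python
# Rough transport limits as cited in this report (illustrative only;
# actual limits depend on cable type, speed, and topology).
TRANSPORT_LIMITS = {
    "SCSI":          {"max_distance_m": 25,     "max_devices": 16},
    "Ultra2 SCSI":   {"max_distance_m": 12,     "max_devices": 16},
    "Fibre Channel": {"max_distance_m": 10_000, "max_devices": 126},  # FC-AL loop
}

def transports_for(distance_m: int, devices: int):
    """Return the transports that satisfy a distance/device-count requirement."""
    return [
        name for name, lim in TRANSPORT_LIMITS.items()
        if distance_m <= lim["max_distance_m"] and devices <= lim["max_devices"]
    ]

# A campus-wide link with 40 devices rules out parallel SCSI:
print(transports_for(distance_m=2_000, devices=40))  # ['Fibre Channel']
```

This is why campus-wide storage networks effectively require Fibre Channel rather than SCSI bus attachments.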

Over the years, the speed of Fibre Channel has increased from 1G bps to
32G bps (the 16G bps spec also reduced power consumption by at least 25 percent
compared with 8G, and is backward compatible with the older 4G and 8G
standards), and the supported distance has grown beyond the original standards
to about 75 miles. Speed
and distance are increased using such technologies as Asynchronous Transfer Mode
(ATM) and SONET-framed DWDM links in combination with fiber link extenders at
each end. The Fibre Channel ANSI standard for 32G bps speeds, with
inter-datacenter connectivity up to 10,000 meters, was published in 2014, and
compliant products reached the market in 2016. Products running Gen 6 FC, with speeds up to
128GFC, were demonstrated in a plugfest in June 2016,
along with a proof-of-concept of NVMe over fabric. The T11 INCITS technical committee is currently
defining the Gen 7 FC specification, which is expected in 2017, with products
hitting the market in 2019.

An additional technology, iSCSI, is bringing the SAN downmarket and
making it accessible to smaller organizations. Coupling SCSI with IP
is proving to be a cost-effective way to get into SANs for companies
that have previously lacked the cash or the expertise to delve into
Fibre Channel. The Fibre Channel over Ethernet (FCoE) specification was
approved as a standard in June 2009 and ratified by INCITS as an ANSI standard
in 2010.

NVM Express (NVMe) over Fabrics, announced in September 2014, is designed to
allow SSDs to connect over fabrics. The Fibre Channel Industry Association has
announced a working group in the INCITS T11 Committee to align Fibre Channel
connections using NVMe over Fabrics. In July 2015, a plugfest tested 17 devices
against version 1.2 of the spec; 10 successfully completed the tests,
and in December 2017 a plugfest validated connectivity to FC Gen 6 products.

SAN Features

The following are key features of SANs:

  • Storage and archival traffic are routed over a separate network,
    off-loading the majority of data traffic from the enterprise LAN.
  • Backup and restoration of data files are faster, and bulk data movement
    is improved.
  • Storage management is centralized, and disaster recovery is supported.
  • A shared data storage pool can be easily accessed by remote workstations
    and servers.
  • The SAN can be easily expanded to a virtually unlimited size with hubs or
    switches.
  • Nodes on the SAN can be easily added or removed with minimal disruption to
    the active network.
  • For totally redundant operation, the SAN can be easily configured to
    support mission-critical applications.
  • Costs are lower than for server-based storage.

Figure 1 depicts a typical SAN configuration.

Figure 1. Storage Area Network Configuration



As the SAN concept has evolved, it has moved beyond association with any
single technology. In fact, just as LANs and WANs use a diverse mix of
technologies, so can SANs. This mix can include FDDI, ATM, and IBM’s Serial
Storage Architecture (SSA), as well as Fibre Channel. More recently, SONET and
DWDM have been added to the mix. SAN architectures also allow for the use of a
number of underlying protocols, including TCP/IP and all the variants of SCSI.

Instead of dedicating a specific kind of storage to one or more servers, a
SAN allows different kinds of storage – mainframe disk, tape, and RAID, and more
recently solid state disks (SSDs) – to be
shared by different kinds of servers, such as Windows, UNIX, and OS/390. With
this shared capacity, organizations can acquire, deploy, and use storage devices
more efficiently and cost-effectively.

SANs also let users with heterogeneous storage platforms utilize all of the
available storage resources. This means that within a SAN users can backup or
archive data from different servers to the same storage system. They can also
allow stored information to be accessed by all servers, create and store a
mirror image of data as it is created, and share data between different
platforms.

By decoupling storage from computers, workstations, and servers, and taking
storage traffic off the operations network, organizations gain a
high-performance storage network and improve the performance of the LAN. These
features reduce network downtime and productivity losses while extending current
storage resources. In effect, the SAN does in a network environment what
traditionally has been done in a back-end I/O environment between a server and
its own storage subsystem. The result is high speed, high fault tolerance, and
high reliability.

With a SAN, there is no need for a physically separate network to handle
storage and archival traffic. This is because the SAN can function as a virtual
subnet that operates on a shared network infrastructure. For this to work,
however, different priorities or classes of service must be established.
Fortunately, both Fibre Channel and ATM provide the means to set different
classes of service.

Although early implementations of SANs were local or campus-based, there
is no technological reason why they cannot be extended much farther over the
WAN. SANs have already been extended over much wider areas – perhaps they will
extend globally in the future.

SANs also promise easier and less expensive network administration. In
traditional networks,
administrative functions are labor-intensive and time-consuming, and IT
organizations typically have to replicate management tools across multiple
server environments. With a SAN, only one set of tools is needed, which
eliminates the need for replication and associated costs. However, immature
standards can create other management headaches.

SAN Components

Several components are required to implement a SAN; a typical
enterprise SAN might be configured as follows: A Fibre
Channel adapter is installed in each server. These are connected via the
server’s PCI bus to the server’s operating system and applications. Because
Fibre Channel’s transport-level protocol wraps easily around SCSI frames, the
adapter appears to be a SCSI device.

The adapters are connected to interconnected Fibre Channel switches (two or
more of these switches are known as a fabric), running over
fiber-optic cable or copper coaxial cable. Category 5 cable, the high-end
twisted pair rated for Fast Ethernet and 155M-bps ATM, can also be used.

A LAN-free backup architecture may include some type of automated tape
library that attaches to the fabric via Fibre Channel. This machine typically
includes a mechanism capable of feeding data to multiple tape drives and may be
bundled with a front-end Fibre Channel controller. Existing SCSI-based tape
drives can be used also through the addition of a Fibre Channel-to-SCSI bridge.

Storage management software running in the servers performs contention
management by communicating with other servers via a control protocol to
synchronize access to the tape library. The control protocol maintains a master
index and uses data maps and time stamps to establish the server-to-hub
connections. Many control protocols are vendor-specific; however, the storage
industry is moving toward standardizing on the SMI-S protocol from the Storage
Networking Industry Association, which is currently supported by over 500
products.
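To make the contention-management idea concrete, here is a toy sketch (entirely hypothetical; real products use vendor protocols or SMI-S) of a master index that serializes access to shared tape drives among backup servers using ownership records and timestamps:

```python
import time

class TapeLibraryArbiter:
    """Toy model of a SAN control protocol: a master index that
    serializes access to shared tape drives among backup servers.
    (Hypothetical sketch; not a real vendor protocol.)"""

    def __init__(self, drives):
        # master index: drive name -> (owner, timestamp) or None when free
        self.index = {d: None for d in drives}

    def acquire(self, server, drive):
        """Grant the drive to the server if no other server holds it."""
        if self.index[drive] is None:
            self.index[drive] = (server, time.time())
            return True
        return False  # another server holds the drive; caller must retry

    def release(self, server, drive):
        """Free the drive, but only if this server actually owns it."""
        owner = self.index[drive]
        if owner and owner[0] == server:
            self.index[drive] = None

arbiter = TapeLibraryArbiter(["drive0", "drive1"])
assert arbiter.acquire("serverA", "drive0")      # serverA wins drive0
assert not arbiter.acquire("serverB", "drive0")  # serverB must wait
arbiter.release("serverA", "drive0")
assert arbiter.acquire("serverB", "drive0")      # now serverB can back up
```

The real protocols add data maps and recovery logic, but the core job is the same: preventing two servers from writing to one tape drive at once.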

From the switch, a standard Fibre Channel protocol, Fibre Channel-Arbitrated
Loop (FC-AL), functions similarly to token ring to ensure collision-free data
transfers to the storage devices. The switch or hub also contains an embedded
SNMP agent for reporting to network management software.

Much like Ethernet switches in LAN environments, Fibre Channel switches provide fault
tolerance in SAN environments. Port-bypass functionality
automatically bypasses a problem port and avoids most faults. Stations can be
powered off or added to the loop without serious loop effects. Storage
management software is used to mediate contention and synchronize
data – activities necessary for moving backup data from multiple servers to
multiple storage devices. 

To achieve full redundancy in a Fibre Channel SAN, two fully independent,
redundant loops must be cabled by interconnecting switches in a fabric. This scheme provides
at least two independent paths for
data with fully redundant hardware. Most disk drives and disk arrays targeted
for high availability environments have dual ports specifically for this
purpose.
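The dual-path idea can be sketched in a few lines (an illustrative model only; real multipathing drivers are far more involved): when the primary fabric fails, I/O falls over to the second, independent path.

```python
def pick_path(paths):
    """Return the first healthy path to a dual-ported device.
    paths: list of (name, healthy) tuples, primary path first."""
    for name, healthy in paths:
        if healthy:
            return name
    raise RuntimeError("no path to device")  # both fabrics down

# Two independent fabrics, as in a fully redundant design:
paths = [("fabric-A", False), ("fabric-B", True)]  # fabric A has a fault
print(pick_path(paths))  # fabric-B
```

With both fabrics healthy, traffic simply stays on the primary path.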

Remote Data Replication

The ability to access or replicate data at long distances enables users to
expand their backup capabilities, migrate data remotely, plan for disaster
recovery, and efficiently utilize IT resources across the WAN. This can be
accomplished in two ways, depending on the choice of WAN technology.

Synchronous data replication ensures data integrity by allowing source and
copied storage volumes to remain in sync with one another. This is accomplished
by a pair of link extenders that convert the short-haul copper or multi-mode
optical signal to a long haul, single-mode signal, and vice versa. An internal
digital signal conditioner re-times the signal over the long-haul connection to
eliminate jitter. Another way to implement remote data replication over long
distances is through the use of ATM on fiber WANs.
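The difference between synchronous replication and its asynchronous alternative comes down to when the write is acknowledged. A minimal sketch (names and data structures are illustrative, not from any product):

```python
def write_synchronous(local, remote, block):
    """Sync replication: acknowledge only after BOTH volumes hold the
    block, so source and copy never diverge (toy list-based model)."""
    local.append(block)
    remote.append(block)        # must complete before the ack
    return "ack"

def write_asynchronous(local, remote_queue, block):
    """Async replication: acknowledge after the local write; the copy
    catches up later, trading a possible lag window for lower latency."""
    local.append(block)
    remote_queue.append(block)  # shipped to the remote site in the background
    return "ack"

local, remote = [], []
write_synchronous(local, remote, b"block-1")
assert local == remote  # volumes are in sync at ack time
```

Synchronous replication is why jitter elimination and signal re-timing matter: the remote write sits on the latency-critical path of every acknowledgment.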


Zoning

A key feature of SANs is zoning, a term used by some switch companies to
denote the division of a SAN into subnets that provide different levels of
connectivity between specific hosts and devices on the network. In effect,
routing tables are used to control access of hosts to devices. This gives IT
managers the flexibility to support the needs of different groups and
technologies without compromising data security. Zoning can be performed by
cooperative consent of the hosts or can be enforced at the switch level. In the
former case, hosts are responsible for communicating with the switch to
determine if they have the right to access a device.

There are several ways to enforce zoning. With hard zoning, which delivers
the highest level of security, IT managers program zone assignments into the
flash memory of the switch. This ensures that there can be absolutely no data
traffic between zones.

Virtual zoning provides additional flexibility because it is set at the
individual port level. Individual ports can be members of more than one virtual
zone, so groups can have access to more than one set of data on the SAN.
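The membership rule behind zoning is simple to state: two ports may communicate only if they share at least one zone. A short sketch (the zone table and port names are hypothetical):

```python
# Hypothetical zone table: zone name -> set of member ports.
# With virtual zoning a port may appear in more than one zone.
ZONES = {
    "finance": {"host1", "array1"},
    "backup":  {"host1", "host2", "tape1"},
}

def can_talk(port_a, port_b, zones=ZONES):
    """Two ports may communicate only if they share at least one zone."""
    return any(port_a in members and port_b in members
               for members in zones.values())

assert can_talk("host1", "array1")      # same zone: traffic allowed
assert not can_talk("host2", "array1")  # no common zone: traffic blocked
```

Hard zoning enforces exactly this check in the switch itself, while cooperative zoning relies on hosts to honor it.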

Broadcast zoning can be used to restrict the scope of broadcasts. For
example, IP ARP (Address Resolution Protocol) broadcasts can be kept from SCSI
ports on the switch. These IP broadcasts can otherwise cause storage devices to
behave unpredictably.


SAN Management

The basic tools needed to manage systems on the Fibre Channel fabric are
available through the familiar SNMP (Simple Network Management Protocol)
interface. The FC-AL MIB (Management Information Base) approved by the Internet
Engineering Task Force (IETF) extends SNMP management capabilities to the
multi-vendor SAN environment. New vendor-specific MIBs will emerge as products
are developed with new management features. Of course,
GUI-based management systems will play a key role in managing storage networks.

The Storage Networking Industry Association (SNIA) is promoting the
eXtensible Access Method (XAM) to provide interoperability,
information security and storage transparency.


Testing

Historically, SAN test solutions were developed in-house, using real servers,
storage equipment, and internally
developed software. With the increasing scale of SAN infrastructures, even Fibre
Channel SAN equipment manufacturers and SAN solution integrators needed a way to
perform interoperability and scalability testing that is less resource-intensive
and physically challenging. Off-the-shelf test platforms are available
that enable realistic characterization of fabric performance. 

In addition, plugfests hosted by industry associations give vendors the
opportunity to test their products’ compatibility.


InfiniBand

InfiniBand, short for "infinite
bandwidth," is a bus technology that provides the basis for an input/output
(I/O) fabric designed to increase the aggregate data rate between servers and
storage devices. The point-to-point linking technology allows server vendors to
replace outmoded system buses with InfiniBand to greatly multiply total I/O
traffic compared with legacy system buses such as PCI and its successors
PCI-X and the newer PCI Express (PCIe).

The original PCI bus standard supports up to 133MB per second across the
installed PCI slots, providing shared bandwidth of up to 566MB per second,
while PCI-X 2.0 permits a maximum bandwidth of 4.3GB per second,
PCIe v3.0 offers 16GB per second across a 16-lane slot, and Fibre Channel
offers bandwidth up to 28G bps.

PCIe 4.0, released in 2017, doubles the bandwidth of PCIe 3.0,
and PCIe 5.0, expected in 2019, doubles the transfer rate yet again. In
contrast, InfiniBand utilizes a 2.5G bps per-lane wire speed with multi-lane
link widths. With a four-lane (4x) link, for example, SDR InfiniBand offers
10G bps of raw bandwidth; InfiniBand 100G EDR provides 100G bps. The InfiniBand
specification supports both copper and fiber implementations.

In addition, Fourteen Data Rate (FDR) InfiniBand delivers 56G bps per link,
and Enhanced Data Rate (EDR) InfiniBand, at 100G bps, was added to the spec in
2013. 200G bps HDR is on the horizon.

The I/O fabric of the InfiniBand architecture
takes on a role similar to that of the traditional mainframe-based channel
architecture, which used point-to-point cabling to maximize overall I/O
throughput by handling multiple I/O streams simultaneously. The move to
InfiniBand means that I/O subsystems need no longer be the bottleneck to
improving overall data throughput for server systems.

In addition to performance, InfiniBand promises other benefits such as lower
latency, easier and faster sharing of data, built in security and quality of
service, and improved usability through a form factor that makes components much
easier to add, remove, or upgrade than today’s shared-bus I/O cards.

InfiniBand technology works by connecting host-channel adapters to target
channel adapters. The host-channel adapters tend to be located near the servers’
CPUs and memory, while the target channel adapters tend to be located near the
systems’ storage and peripherals. A switch located between the two types of
adapters directs data packets to the appropriate destination based on
information that is bundled into the data packets themselves.

The connection between the host-channel and target-channel adapters is the
InfiniBand switch, which allows the links to create a uniform fabric
environment. One of the key features of this switch is that it allows data to be
managed based on variables such as service level agreements and a destination
identifier. In addition, InfiniBand devices support both packet and connection
protocols to provide a seamless transition between the system area network and
external networks.

The InfiniBand specification is the culmination of the combined efforts of
about 43 companies that belong to the InfiniBand Trade Association led by
industry leaders Intel, Hewlett-Packard Enterprise, IBM, and Oracle. InfiniBand
will coexist with the wide variety of existing I/O standards that are already
widely deployed in user sites. Likewise, InfiniBand fabrics can be expected to
coexist with newer I/O standards, including PCI-X, PCIe, Gigabit Ethernet, and
10, 40 and 100 Gigabit Ethernet. On the fall 2015 list of the Top 500
supercomputers, 47 percent used InfiniBand interconnect, including 45 percent of
the 73 most powerful petaflop-class systems. On the November 2016 list, the
share had fallen to 37 percent of the Top 500, but it rocketed to 77
percent on the November 2017 list.

The key advantage of the InfiniBand architecture,
however, is that it offers a new approach to I/O efficiency. Specifically, it
replaces the traditional system bus with an I/O fabric that supports parallel
data transfers along multiple I/O links. Furthermore, the InfiniBand
architecture offloads CPU cycles for I/O processing, delivers faster memory
pipes and higher aggregate data-transfer rates, and reduces management overhead
for the server system.


Industry leaders are coalescing around new technology advances to bring
performance and interoperability to storage networking. By offering intelligent
storage networking products and offerings, they hope to free companies from the
increasing challenges of managing business- and mission-critical data. The
introduction of 16G-bps switch speeds for storage enables users to realize and
leverage increasing speed and bandwidth capabilities across their entire
infrastructures.
The INCITS FC-SW-4 standard, approved in October 2005, established
the foundation for building
interoperable, multi-vendor switch fabrics; products are now on the
market. INCITS FC-SW-5 has been published, and INCITS FC-SW-6
was published in August 2016. The INCITS FC-SW-7 project was initiated
in October 2015. 16G bps technology addressed growing performance requirements
and enabled the development of intelligent, interoperable products that connect
with existing equipment and, through standards-based auto-negotiation, extend
existing SAN installations rather than replacing them. The 16G standard was
completed in the spring of 2011, and the first plugfest was held in the fall of
2011. The Fibre Channel Industry Association published the 32G standard in
2014; compliant products hit the market in 2016. In the near future, it expects
a 128G standard combining four 32G lanes into a single connection. This new
standard will also include the first standardized forward error correction.

A significant trend in building storage networks is the use of IP.
Most companies have developed,
or are in the process of developing SAN products that work over ubiquitous Internet
Protocol (IP) networks. IP SANs are considerably less
expensive than Fibre Channel SANs, increasing the technology’s penetration into
smaller organizations.

iSCSI (Internet Small Computer System Interface) uses the IP
networking infrastructure to quickly transport large amounts of block storage
(SCSI) data over existing LANs and WANs. With the potential to support all major
networking protocols, iSCSI can unify network architecture across an entire
enterprise, reducing the overall network cost and complexity. To ensure
reliability, iSCSI can use known network management tools and utilities that
have been developed for IP networks.
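Conceptually, iSCSI simply encapsulates block-level commands in payloads carried over an ordinary TCP/IP session. The framing below is deliberately NOT the real iSCSI PDU layout (which uses a 48-byte basic header segment); it is a toy encoding to illustrate the encapsulation idea:

```python
import struct

def encode_read_request(lun: int, lba: int, blocks: int) -> bytes:
    """Toy framing of a block-read command for transport over TCP.
    NOT the real iSCSI PDU format; purely illustrative."""
    # big-endian: 1-byte LUN, 8-byte logical block address, 2-byte count
    return struct.pack(">BQH", lun, lba, blocks)

def decode_read_request(payload: bytes):
    """Reverse the toy framing on the target side."""
    return struct.unpack(">BQH", payload)

wire = encode_read_request(lun=0, lba=2048, blocks=16)
assert decode_read_request(wire) == (0, 2048, 16)
# In a real deployment this payload rides an ordinary TCP/IP session,
# which is why iSCSI can reuse existing IP management tooling.
```

Because the transport is plain TCP/IP, the same monitoring and management utilities used for any IP network apply unchanged.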

While standards have been developed for Fibre Channel technology, different
interpretations of the standards by vendors have resulted in products that do
not work together on the same network. As a result, interoperability problems
still plague the SAN industry. Many vendors are now working on standards through
organizations such as the Storage Networking Industry Association (SNIA) and
the Fibre Channel Industry Association (FCIA). Others have invested heavily in
interoperability labs where vendors can see
how well their equipment works with components from other vendors.
Dell EMC and IBM,
for example, have devoted extensive resources to establish such facilities.

But plug-and-play compatibility still has not been entirely realized,
although there has been significant progress. Customers cannot yet confidently
mix and match switches and hubs from various vendors, for example, and there are
still problems at the storage-management level. This area is being addressed by
SNIA’s Storage Management Initiative (SMI), which is driving the first common
and interoperable management backbone for large scale, multi-vendor storage
networks. In October 2004, SNIA and the International Committee for Information
Technology Standards (INCITS) approved management standard ANSI INCITS 388-2004,
American National Standard for Information Technology Storage Management, and in
September 2005 version 1.1 of the spec added a number of features, including
policy management, standard ways of mirroring and taking snapshots of data, disk
partitioning, provisioning, and switch port management. This standard has since
been revised and released as ANSI INCITS 388:2008, and a proposal for revisions
to it was accepted in November 2010. SMI-S version 1.3 was designated ANSI
INCITS 388:2011, and SMI-S 1.5 has been accepted as ISO/IEC 24775:2014. Version
1.6 has been published, the 1.6.1 rev 6 technical position was released in
November 2016, and scoping has begun on version 2.0. SMI-S 1.7.0 revision 5 is
under development by a technical working group, and SMI-S 1.8 is in the
planning stages.

Current View


The SAN market, like all external storage, has suffered a
slowdown, but over the past few years failures and
consolidations have weeded out many of the less viable vendors, leaving a core
who are dedicated to the technology and who have the wherewithal to support it
properly. Vendors such as HPE have had considerable success in streamlining
provisioning and management.

Still, the growth of storage
virtualization and dynamic resource allocation makes a SAN an effective solution
for applications that transfer large blocks of data, and whose storage needs
experience spikes. Analysts have seen a trend towards smaller
"starter" SANs of a terabyte or two, where the benefits of SAN
technology justify the expense for such a comparatively small amount of storage.
SANs are also foundation technologies for the cloud.

Standards provide hope that SAN
management will be even easier, and that all products will interoperate in a
relatively seamless manner. Today, it takes expertise to install
and run a SAN; the industry is slowly moving towards more straightforward
implementations.
New to the industry are server SANs, which combine compute and shared
storage. In 2015, the global server SAN market was projected to grow at a
compound annual growth rate of 53.2 percent over the following five years.
Some analysts believe that they will slowly replace traditional SANs over the
next decade.



The external storage market slipped about 6 percent in
2016. Some analysts see that as part of an ongoing slow decline, but others
believe there will be growth – a 3.5 percent CAGR between 2016 and 2026, mainly
driven by North America. In Q3 2017, the market recovered slightly,
with 4.1 percent growth. Storage system sales to hyperscale datacenters
accounted for 22.7 percent of global spend.

The supposed battle between the SAN
and network attached storage (NAS) has resulted in each genre finding its niche
which, in the case of NAS, can even be within a SAN. 

Some analysts suggest that companies building new SANs may spurn
Fibre Channel and go straight to iSCSI, since it is cheaper to buy, install and
manage. However, they add, it is unlikely that existing Fibre Channel SANs will
be replaced. 

Instead, they believe that Fibre Channel and IP SANs will
interoperate, thanks to technologies that interface between them. Fibre Channel and IP-based SANs can be
interconnected using one of two protocols: FC-IP, which sends Fibre Channel data
through an IP tunnel, or iFCP, which enables IP networking’s management
capabilities. Most vendors currently support FC-IP. However,
it is telling that, when Broadcom announced that it was acquiring Brocade in
2016 (the deal closed in
November 2017), it said it would only
keep the Fibre Channel components of the business. The
datacenter networking business was subsequently sold to
Extreme Networks.

In addition, the Fibre Channel over Ethernet (FCoE) standard has been
developed and approved by the T11 technical committee as INCITS FC-BB-5 to allow
mapping of Fibre Channel over full duplex Ethernet; it was ratified in May 2010
as an ANSI standard (ANSI INCITS 462:2010). INCITS FC-BB-6 was released as ANSI
INCITS 509-2014 in October 2014.

All-flash and hybrid SANs are
now the technologies of choice, driving the
development of flash-optimized solutions from mainstream vendors.

In the future, especially with the growth of the cloud, we can look forward to larger and larger
storage requirements – petabyte sizes
and more – necessitating stronger management
solutions. Server SANs combining compute and shared storage will continue to
make inroads, driven by the increasing number of datacenters and the
need for improved data storage management.

As the pendulum swings back towards server attached
storage, as evidenced by the growth of converged and hyperconverged systems in
public and private clouds, standalone SANs may become an endangered species. It
will be a slow decline, however – companies will not replace expensive SANs
wholesale, but rather phase them out over long periods.



For any information-intensive business, a SAN can make good sense. However, it
is a good idea to begin with a simple installation where the need is immediate
and then expand its use to the rest of the organization as appropriate. SANs
are expensive and complex to install and manage, so the novice should look for
outside help to make sure it is done right, and ensure staff are properly
trained. A badly implemented SAN can be a threat to corporate data.

Platform independence is a key element of the SAN product selection
process. This means the product should support the most popular platforms,
including Microsoft Windows Server, Solaris, Linux, IBM AIX, and Hewlett
Packard Enterprise HP-UX. Products should be Web-enabled to provide remote management of SAN
devices anytime, anywhere, giving IT managers the capability for 24×7 proactive
management and monitoring of the SAN’s health, facilitating SAN adoption
throughout the enterprise.

To increase the ability to detect and control all aspects of the enterprise
from one interface, the selected SAN product should be able to seamlessly
integrate with storage management applications and enterprise-wide management
applications. The product should also integrate with storage management
applications from vendors such as CA Technologies, Dell
EMC, and Veritas, as
well as enterprise management applications from vendors such as BMC,
CA Technologies, Hewlett Packard Enterprise, and IBM Tivoli. This ability
enables IT managers to monitor and manage the LAN, WAN, and SAN in their
enterprise from a common console.
In the decision-making process, products should be compared based on their
support of available open standards, including CIM, XML, HTML, SNMP, CORBA, SES,
RMI, Management Server, Fibre Alliance MIB, and Standard Fibre Alliance .dll for
host bus adapters (HBAs). These standards are used for in-band and out-of-band
discovery and device communication of SAN devices. By incorporating each
standard into the product, the vendor is able to manage other vendors' devices
and provide a wider communication flow to storage and enterprise management
applications.
Since virtualization is now a mainstream technology, it is
important to confirm that the chosen SAN technology can function in
and with a virtual environment.

The move to standalone SANs provides a new level of
scalability to system administrators, allowing a much greater degree of freedom
than the traditional attached storage paradigm, although it
faces competition from server SANs. Fibre Channel technology
provides the basic foundation for this shift. The evolution
of SANs requires management of interconnect devices from different vendors
across islands of storage to create enterprise-wide SANs. Accordingly, it is
wise to evaluate vendors based on their commitment to SNIA’s Storage
Management Initiative.

When hiring new IT people to administer the SAN, it is recommended that
preference be given to individuals who have completed
SNIA's Storage Networking Certification Program (SNCP). This is the
storage industry’s leading vendor-independent certification program for storage
networking, which was developed to establish standards and expectations for
measuring the storage networking expertise of IT professionals. The program
allows IT professionals to demonstrate their storage networking skill level to
current and future employers by equipping them with the expertise required to
successfully complete the SNIA certification process and exams.


About the Author


Lynn Greiner is Vice President, Technical Services
for a division of a multi-national corporation, and is also an award-winning
computer industry journalist. She is a member of Faulkner’s Advisory Panel.
