2014 Gartner Magic Quadrant for Data Center Networking


Data center networking requirements have evolved rapidly, with emerging technologies increasingly focused on supporting more automation and simplified operations in virtualized data centers. We focus on how vendors are meeting the emerging requirements of data center architects.

Market Definition/Description

This document was revised on 2 May 2014. The document you are viewing is the corrected version. For more information, see the Corrections page on gartner.com.

Data center networking requirements are evolving rapidly after a period of architectural stability that lasted at least 15 years. While speed, density and scale increased during that period, the underlying architecture relied on an oversubscribed three-tier hierarchical approach — using server access switches, an aggregation layer and an intelligent Layer 3 switching core.

Today, the data center network market is being transformed by new architectures, technologies and vendors offering solutions that address:

  • The increasing requirement to improve and simplify network operations activities to align more closely with business goals and broader data center orchestration agility
  • The changing size and density within the data center
  • Shifts in application traffic patterns

What’s Changed?

During the past 12 months, there has been a significant amount of change in the data center networking market. Several acquisitions have been completed or are in progress, involving Alcatel-Lucent, IBM, Extreme Networks and Enterasys Networks. In addition, many of the vendors included in this Magic Quadrant announced or released major components of their software-defined networking (and related technology) strategies, while others made significant enhancements to existing software-defined networking (SDN) offerings. Many vendors also now use merchant silicon within significant portions of their switching portfolios. As a result, differentiation between vendor solutions is now relatively balanced between software (management, provisioning, automation and orchestration) and hardware (bandwidth, capacity and scalability).

There has been a significant increase in interest from Gartner clients in the broad capabilities and open interfaces delivered via SDN. Search volume for SDN on gartner.com is now higher than searches for MPLS, WAN optimization, application delivery controller and router (see “Gartner Analytics Trends: Interest Is Gaining Momentum for Software-Defined Networking”). Interest in these SDN technologies is now shifting from Type A Gartner clients to Type B (see Note 1), who often cite the following drivers when exploring SDN and related technologies:

  • Faster provisioning of workloads in the data center
  • Improved management and visibility
  • Improved traffic engineering or capacity optimization of their networks
  • Reduced expenditures on networking hardware/software
  • Reduced operational expenditures to operate networks
  • Improved application performance
  • Reduced vendor lock-in at the hardware and software layers

SDN provides several different approaches to deliver a more agile network infrastructure. Rather than completely rearchitecting the physical network, software-centric overlay technologies are emerging as a frequent discussion point with network designers and data center architects (see “VMware’s NSX Could Be a Small Step or Giant Leap for VMware” [Note: This document has been archived; some of its content may not reflect current conditions.]). Several vendors included in this Magic Quadrant provide overlay network capabilities, which typically integrate the provisioning of network and compute resources for a more agile infrastructure. While this is an important development, it is also important to consider how various overlay solutions are implemented, as the overlay is still fully dependent on a physical underlay network, and issues of network control and visibility are critical to ensure the reliability of overlay solutions.
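The underlay dependency noted above has concrete consequences. As one illustration, consider VXLAN, a common overlay encapsulation: each tenant frame is wrapped in outer IP, UDP and VXLAN headers, so the physical underlay must carry roughly 50 extra bytes per frame. The sketch below (header sizes per RFC 7348; it assumes IPv4 transport and no outer VLAN tag) shows why underlay networks supporting overlays are typically configured with a larger MTU:

```python
# VXLAN overlay overhead, per RFC 7348 (sizes in bytes).
# The underlay IP MTU must accommodate the full encapsulated tenant
# frame: inner Ethernet header + VXLAN + UDP + outer IP header.
INNER_ETHERNET = 14   # the encapsulated tenant frame's own L2 header
VXLAN_HEADER = 8
OUTER_UDP = 8
OUTER_IPV4 = 20      # assumes IPv4 transport (IPv6 would add 20 more)

OVERHEAD = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4  # 50 bytes

def required_underlay_mtu(overlay_ip_mtu: int) -> int:
    """Minimum underlay IP MTU so overlay hosts can use overlay_ip_mtu
    (e.g., the standard 1500) without fragmentation."""
    return overlay_ip_mtu + OVERHEAD

print(required_underlay_mtu(1500))  # -> 1550
```

This is also why overlay troubleshooting requires visibility into the underlay: a single undersized MTU on one physical hop silently breaks overlay traffic that the overlay controller reports as healthy.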

What Is Required in New Data Center Networks?

During the past several years, several factors have significantly impacted data center networking hardware and software requirements. First, data center networks must address an increased business appetite for faster, catalog- and service-based delivery of IT services. This is driven by increasingly real-time business requirements and the availability of viable options outside of traditional corporate IT (i.e., infrastructure as a service [IaaS] for compute, and SaaS for applications). This has exposed suboptimal network operations paradigms (including static and manual provisioning and configuration activities), which lengthen service delivery times, lower network availability, increase operational expenditures and make it increasingly difficult to scale the environment. In addition, there is a growing disconnect between the data center network and the performance, availability and provisioning needs of the applications running on it.

Second, the size and density of data centers are changing, with several macrolevel trends driving both the expansion and contraction of data centers:

  • Server and data center consolidation require IT organizations to centralize compute resources and reduce the number of physical data centers, resulting in fewer, but larger, corporate data centers.
  • Increasing compute density using multicore, multisocket servers, combined with virtualization and storage convergence, is reducing the physical footprint required. Workloads that used to take multiple racks of servers are now being delivered within a portion of a single rack.
  • The migration of applications toward external cloud services also reduces the space requirements within the corporate data center.
  • Application traffic patterns are shifting from predominantly user-to-application (north/south) to both user-to-application and application-to-application (north/south and east/west). In addition, these traffic flows become less predictable with time as automated provisioning tools and general maintenance activities result in a more randomized distribution of workloads.

While new technology and business model innovation is critical, vendors also need to be concerned with providing migration plans from currently deployed architectures to the new ones. The increasing density drives the need for higher-speed interfaces. New server connections are now typically 10 Gigabit Ethernet (GbE), with uplinks from top of rack (ToR) or blade switches migrating to 40GbE. The use of server virtualization drives the first level of workload aggregation into the physical server host (usually at a 10:1 ratio or higher), which leads to higher network utilization for traffic exiting the physical server network interface card (NIC). This significantly reduces the need for additional dedicated physical aggregation layers in the network infrastructure. In addition, enterprises are increasingly evaluating more cost-effective and rightsized data center networks with fixed-form-factor core switches (see “Rightsizing the Enterprise Network”).
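The interface-speed shifts described above can be made concrete with a simple oversubscription calculation. The port counts below are a hypothetical example (a 48-port 10GbE ToR switch with 40GbE uplinks), not figures from the report:

```python
# Oversubscription ratio for a top-of-rack (ToR) switch: total
# server-facing (downlink) capacity divided by total uplink capacity.
# Port counts are a hypothetical illustration, not from the report.

def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """A ratio above 1.0 means the uplinks are oversubscribed."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# 48 x 10GbE server ports, 4 x 40GbE uplinks: 480G down vs. 160G up
print(oversubscription_ratio(48, 10, 4, 40))  # -> 3.0, i.e., 3:1

# Adding two more 40GbE uplinks brings the same chassis to 2:1
print(oversubscription_ratio(48, 10, 6, 40))  # -> 2.0
```

As east-west traffic grows, designers push this ratio down (toward the 1:1 of a non-blocking fabric) by adding uplink capacity or flattening the topology, which is one reason the report highlights the migration of uplinks to 40GbE.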

Application Changes

Applications have become more distributed, increasingly independent of specific servers and more elastic in their deployment. With no physical dependency on network connections, it is more difficult to specify network requirements, which is the leading driver toward integrating storage gateway capabilities into the ToR or blade switch. Newer applications, such as big data workloads, also have more stringent bandwidth, latency and interface buffer requirements than traditional applications. In addition, the need to handle east-west traffic efficiently has produced new approaches, including higher-performance, low-latency ToR switches; the emergence of one- or two-tier physical switching architectures; the increasing use of fixed-form-factor core switches; and more intelligence and traffic forwarding at the server access layer (through virtual chassis or chassis clustering solutions). All these approaches improve server-to-server performance and, in some cases, evolve the data center network toward providing a homogeneous set of capabilities for all connected compute resources.

Long-Term Innovation and Choice

Beyond being seen as the solution for today’s network operations challenges, SDN and related technologies offer an opportunity for transformational change within the networking marketplace. The decoupling of hardware and software represents the potential for a fundamental improvement in how networks are designed, procured, managed and evolved. The long-term innovation that could emerge from an open, SDN-based marketplace is clearly disruptive to today’s hardware-centric model. Modern data center solutions can take advantage of significantly streamlined and custom-built data center software images. This approach should lead to a more efficient and reliable data center infrastructure (see “It’s Time to Rethink Your Data Center Network Software”). It also increases customer options, with opportunities to decouple hardware and software purchases, as illustrated by announcements from vendors such as Cumulus Networks and Pica8, whose network operating systems run on commodity switching hardware (see “Dell and Cumulus Networks Aim to Take ‘BYO Switching’ Mainstream”). We have described an environment that has undergone substantial change and that offers the opportunity to deliver networking capabilities in very different, more agile and cost-effective ways.
