
Software-Defined Will Disrupt Your Datacenter

By: Paul Teich

The term “software-defined” is broadening beyond the network to include other data center functions: storage, computing, and the data center itself. The implications of software-defined architecture (SDx) for data center operations are profound. SDx has its roots in Network Function Virtualization (NFV), which, as its name suggests, started in the network and is built on virtualization technology, but SDx now reaches into storage, computing, and overall data center architecture. To understand the transformative nature of SDx, I’ll start at the beginning, with virtualization, and from there describe composable services at cloud data center scale.

Being Virtual

The word “virtual” in Middle English meant “possessing certain virtues,” where a virtue is a useful quality. Today we use the word virtual in a general sense to describe things that appear to be the real thing, but may not in fact be complete or even real. In practical use, virtual often means “close enough to the real thing,” having many of its qualities (virtues) but perhaps not all of them.

In the world of software and programming, the word virtual was pressed into service with an interesting twist – virtual is used to describe a software abstraction that behaves like a given set of hardware and software resources, even if the real resources are not available. Virtual hardware resources may be emulated in software, and entire computers may be emulated within another computer as a “virtual machine” (VM).

VMs are designed to run the same software as a specific physical computer, be it a smartphone or a server. The biggest difference is that software running in a VM does not have direct access to the physical hardware or to the device drivers running underneath the VM. A VM may tell a guest OS (an OS running within the VM) that it has sole access to hardware resources which, in reality, are shared among many VMs. An application running on a guest OS in a VM is almost certainly unaware of the exact hardware being used for memory, storage, networking, and even compute.

There are no widely used methods for application software to define an optimal set of VM hardware resources. The opposite usually happens: applications tune themselves to whatever hardware they believe to be present on the machine, real or virtual.
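
As a minimal Python sketch (assuming a Linux guest; the worker and cache figures are purely illustrative), this is the usual pattern: the application measures the hardware it can see at startup, and inside a VM those numbers are simply whatever virtual resources the hypervisor exposes.

```python
import os

# A minimal sketch of the usual pattern: the application sizes itself to the
# hardware it observes at startup. Inside a VM these figures are the virtual
# resources the hypervisor exposes, not the physical host's.
logical_cpus = os.cpu_count() or 1

# Total memory in bytes via POSIX sysconf (works on a Linux guest).
total_mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

# Hypothetical tuning decisions derived from the observed (virtual) hardware.
worker_threads = max(1, logical_cpus - 1)
cache_budget_bytes = int(total_mem_bytes * 0.25)

print(f"Sizing for {logical_cpus} vCPUs and {total_mem_bytes >> 30} GiB RAM: "
      f"{worker_threads} workers, {cache_budget_bytes >> 20} MiB cache")
```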

To cope with this, many IT organizations and cloud services offer a selection of VM templates, each optimized for a different balance of compute, network, and storage utilization. An application is typically profiled against the likely templates to find the closest fit for efficiency, performance, service quality, or other desired run-time attributes. The fewer the templates, the more likely applications are to be over-served, with more resources dedicated to them than they need, which erodes overall system efficiency and performance.
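
To illustrate, here is a hypothetical Python sketch of that closest-fit selection. The template names and resource figures are invented, but they show how a sparse template catalog forces over-provisioning.

```python
# Hypothetical VM templates: the smallest template that still covers the
# measured demand wins, so a sparse catalog forces over-provisioning.
TEMPLATES = {
    "small":  {"vcpus": 2,  "ram_gib": 4,   "net_gbps": 1},
    "medium": {"vcpus": 8,  "ram_gib": 32,  "net_gbps": 5},
    "large":  {"vcpus": 32, "ram_gib": 128, "net_gbps": 10},
}

def closest_fit(profile: dict) -> str:
    """Return the smallest template whose resources cover the profile."""
    def covers(t: dict) -> bool:
        return all(t[key] >= profile[key] for key in profile)

    candidates = [name for name, t in TEMPLATES.items() if covers(t)]
    if not candidates:
        raise ValueError("no template is large enough for this workload")
    # Rank by total capacity (crude, but enough for illustration) so the
    # least over-provisioned template is chosen.
    return min(candidates, key=lambda name: sum(TEMPLATES[name].values()))

# Measured demand: 3 vCPUs, 6 GiB RAM, 1 Gbps. With only three templates the
# best fit is "medium", well over double the resources actually needed.
print(closest_fit({"vcpus": 3, "ram_gib": 6, "net_gbps": 1}))
```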

Containers are a form of virtualization. Put simply, containers fold the generic services of a guest OS into the virtualization framework so that applications run directly in a container without an intermediate guest OS layer; under the covers, containers are still VM technology. Everything in this article applies to traditional VM products as well as the newer container products.

Network Function Virtualization (NFV) Decouples Network Apps from Switches

NFV was the first, simple step in separating tightly bound network applications from core switching functions. Just as mainstream IT organizations started using VMs to control server sprawl, their networking counterparts were wrestling with a bewildering array of network appliances.

The solution was relatively simple: use VM technology to move applications that are not directly involved in moving and directing traffic out of the network routers and switches. These applications still need to sit in the data path, but they do not need to live in the same box as the switch or router.
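
As a rough illustration, the Python sketch below (with invented addresses, ports, and policy) shows the shape of such a decoupled function: a plain software proxy that sits in the data path, applies a simple policy, and relays traffic, with no dependence on the switch hardware it used to be bundled with.

```python
# A minimal sketch of a network function pulled out of the switch and run as
# ordinary software in a VM: a TCP forwarder that sits in the data path and
# applies a simple policy before relaying traffic to its real destination.
# Addresses, ports, and the blocklist are hypothetical.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)      # where the switch steers traffic
UPSTREAM_ADDR = ("10.0.0.20", 80)    # the real service behind this function
BLOCKED_PREFIXES = ("203.0.113.",)   # illustrative policy: drop these sources

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve() -> None:
    listener = socket.create_server(LISTEN_ADDR)
    while True:
        client, (client_ip, _) = listener.accept()
        if client_ip.startswith(BLOCKED_PREFIXES):
            client.close()           # policy decision made entirely in software
            continue
        upstream = socket.create_connection(UPSTREAM_ADDR)
        threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```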


