
What Can't Be Virtualized?

By: Chris Piedmonte

Virtualization of compute, network and storage resources is a key component of modern cloud management. Virtualization allows computing resources to be better aligned with the workload at hand, improves the security and isolation of applications and data, and simplifies administration and management, among many other advantages.

Some of these advantages and other aspects of virtualization are covered in this issue of Pipeline magazine by Paul Teich in his article, "Software Defined Disruption," and by Dan Baker in his article, "The Four Layers of Virtualization." However, some components of modern information technology infrastructure do not lend themselves well to virtualization, or pose a potential security risk if virtualized. This article touches on some of these components, the characteristics that make them difficult or undesirable to virtualize, and how to integrate them into modern cloud management systems.

Components

Most of the components that do not lend themselves well to virtualization are built on highly specialized hardware designed to deliver exceptional cost-performance or unique capabilities.

This includes field-programmable gate array (FPGA) logic from Altera (now part of Intel) and Xilinx; special-purpose co-processors such as graphics processing units (GPUs) from AMD and NVIDIA and the Intel Xeon Phi parallel vector accelerator; application-specific integrated circuits (ASICs); and, finally, security and trust components such as hardware security modules (HSMs).


FPGA technology has been used in computing for decades. It allows logic and data processing to be implemented directly in hardware, rather than as application code running on a microprocessor. Processing data with FPGAs can accelerate throughput by orders of magnitude and yield large cost-performance advantages. The technology has applications in real-time signal processing, high-speed automated trading, complex system simulation and cryptography, to name a few. Some of the earliest implementations were developed in the 1990s by SRC Computers, the last computer company founded by the father of supercomputing, Seymour Cray. Today the technology is widely available in PCIe form-factor devices from companies such as Nallatech, Lattice Semiconductor and others.
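
As a rough illustration of where that throughput advantage comes from, the short Python sketch below models a FIR filter, a staple of real-time signal processing, in software. On an FPGA, the sample window is a shift register and each tap is a dedicated multiply-accumulate circuit, so all the products are computed in parallel and one result emerges per clock cycle no matter how long the filter is. The coefficients and test signal here are invented for illustration.

    from collections import deque

    def fir_filter(samples, taps):
        # Software model of a hardware FIR pipeline. In an FPGA, the
        # window below is a shift register and each tap has its own
        # multiplier, so every product is computed in the same clock cycle.
        window = deque([0.0] * len(taps), maxlen=len(taps))
        for s in samples:
            window.appendleft(s)  # one shift per "clock tick"
            yield sum(c * x for c, x in zip(taps, window))

    # A 4-tap moving-average filter over a made-up test signal
    print(list(fir_filter([1, 2, 3, 4, 5], [0.25] * 4)))

A CPU running this loop computes the taps one after another; the hardware version does them all at once, which is the source of the orders-of-magnitude speedups described above.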

GPUs and other types of co-processors are another means of accelerating specific information technology workloads. These co-processors are designed to take on work that can be executed more efficiently by their specialized processing capabilities than by general-purpose CPUs. Typically used to accelerate graphics or data management workloads, co-processors of this type are finding their way into virtualized desktop servers and dedicated data management appliances. NVIDIA is considered by many to be the current leader in GPU technology; however, it continues to be challenged by AMD and smaller players in the GPU market. The Intel Xeon Phi co-processor is designed to provide massively parallel computing capabilities, useful in data management applications and other workloads suited to parallel computing techniques. Like FPGAs, these devices are typically implemented as PCIe add-on cards that can be installed in servers within the data center.
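
To make the offload pattern concrete, here is a minimal Python sketch using the CuPy library, assuming an NVIDIA GPU and the cupy package are available. Data is copied across the PCIe bus into device memory, the arithmetic executes on the GPU rather than the CPU, and the result is copied back to the host:

    import numpy as np
    import cupy as cp  # GPU-backed counterpart to NumPy

    n = 10_000_000
    host = np.random.random(n).astype(np.float32)  # data in CPU (host) memory

    dev = cp.asarray(host)       # copy across PCIe into GPU memory
    result = cp.sqrt(dev) * 2.0  # arithmetic executes on the GPU, not the CPU
    back = cp.asnumpy(result)    # copy the result back to host memory

The PCIe transfers in and out are the price of the offload, which is why this approach pays off only when the device-side computation is large relative to the data moved.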


