By: Paul Teich
The term “software-defined” is broadening beyond networking to include other data center functions: storage, computing, and the data center itself. The implications of software-defined architecture (SDx) for data center operations are profound. SDx has its roots in Network Function Virtualization (NFV), which, as its name suggests, started in the network and is built on virtualization technology. However, SDx also affects storage, computing, and general data center architecture. To understand the transformative nature of SDx, I’ll start at the beginning, with virtualization, and from there describe composable services at cloud data center scale.
The word “virtual” in Middle English meant “possessing certain virtues,” where a virtue is a useful quality. Today we use the word virtual in a general sense to describe things that appear to be the real thing, but may not in fact be complete or even real. In practical use, virtual often means “close enough to the real thing,” having many of its qualities (virtues) but perhaps not all of them.
In the world of software and programming, the word virtual was pressed into service with an interesting twist – virtual is used to describe a software abstraction that behaves like a given set of hardware and software resources, even if the real resources are not available. Virtual hardware resources may be emulated in software, and entire computers may be emulated within another computer as a “virtual machine” (VM).
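To make the idea concrete, here is a toy sketch (not from the article, and far simpler than any real hypervisor) of hardware emulated purely in software: a one-register “CPU” and its “RAM” exist only as Python state, yet programs written for this imaginary machine still run.

```python
# A toy "virtual CPU": one emulated register, an emulated memory, and three
# instructions, interpreted entirely in software.

def run(program):
    """Execute a list of (opcode, operand) pairs on the emulated machine."""
    acc = 0          # emulated accumulator register
    memory = {}      # emulated RAM
    for op, arg in program:
        if op == "LOAD":       # put a constant in the accumulator
            acc = arg
        elif op == "ADD":      # add a constant to the accumulator
            acc += arg
        elif op == "STORE":    # write the accumulator to an emulated address
            memory[arg] = acc
    return memory

if __name__ == "__main__":
    # "Hardware" that exists only as software state.
    print(run([("LOAD", 2), ("ADD", 40), ("STORE", 0x10)]))   # {16: 42}
```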
VMs are designed to run the same software as a specific physical computer, be it a smartphone or a server. The biggest difference is that software running in a VM does not have direct access to physical hardware or to the software device drivers running underneath the VM. A VM may tell a guest OS (an OS running within the VM) that it has sole access to hardware resources that are, in reality, shared among many VMs. An application running on a guest OS in a VM is almost certainly unaware of the exact hardware being used for memory, storage, networking, and even compute.
There are no widely used methods for application software to define an optimal set of VM hardware resources. The opposite usually occurs: applications tune themselves to the hardware they believe to be present on a machine, real or virtual.
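As a rough illustration of that tuning pattern (the function names here are mine, not from the article), an application typically just asks the operating system what it has and sizes itself accordingly; inside a VM, those answers describe virtual resources, not the physical host.

```python
# Minimal sketch: size the application to the hardware the (possibly virtual)
# machine reports. Assumes a Linux/POSIX guest for the memory query.
import os

def pick_worker_count():
    """Size a worker pool from the CPUs the machine claims to have."""
    visible_cpus = os.cpu_count() or 1   # in a VM this is the vCPU allocation
    return max(1, visible_cpus - 1)      # leave one CPU for housekeeping

def visible_memory_bytes():
    """RAM the machine claims to have (Linux/POSIX only)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

if __name__ == "__main__":
    print("workers:", pick_worker_count())
    print("memory (GiB):", round(visible_memory_bytes() / 2**30, 1))
```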
To cope with this, many IT organizations and cloud services implement a selection of VM templates, each optimized for a different application balance of compute, network, and storage resource utilization. Typically, an application is profiled against the likely templates to find the closest fit for efficiency, performance, service quality, or other desired run-time attributes. Fewer templates means that applications are more likely to be over-served – given more resources than they need – which erodes overall system efficiency and performance.
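A hypothetical sketch of that template matching follows; the template names, sizes, and the simple scoring rule are assumptions for illustration, not anyone’s actual catalog. The idea is to pick the smallest predefined template that still covers the application’s profiled demand.

```python
# Pick the VM template that over-provisions a profiled application the least.
# Template shapes are (vCPUs, GiB RAM, GiB disk); all values are made up.

TEMPLATES = {
    "general": (4,  16,  100),
    "compute": (16, 32,  100),
    "memory":  (8,  128, 200),
    "storage": (4,  16,  2000),
}

def closest_fit(demand):
    """Return the smallest template (summed resources as a crude cost proxy)
    whose resources cover the profiled demand in every dimension."""
    candidates = {
        name: sum(have) for name, have in TEMPLATES.items()
        if all(h >= d for h, d in zip(have, demand))
    }
    if not candidates:
        raise ValueError("no template is large enough; the application would be under-served")
    return min(candidates, key=candidates.get)

if __name__ == "__main__":
    # Profiled demand of 6 vCPUs, 24 GiB RAM, 80 GiB disk:
    # "compute" fits with the least waste among these templates.
    print(closest_fit((6, 24, 80)))
```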
Containers are a form of virtualization. Put simply, containers fold the generic services of a guest OS into the VM framework so that applications can run directly in a container without an intermediate guest OS layer, but containers are still VM technology. Everything in this article applies to traditional VM products as well as to newer container products.
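One way to see this from inside a running container (a Linux-only sketch based on the cgroup v2 interface, an assumption on my part rather than anything in the article) is that the process still sees the machine’s CPUs, while its actual allowance is whatever the container runtime configured.

```python
# Compare the CPUs a containerized process can see with the CPUs it may use.
# Assumes Linux with cgroup v2 mounted at /sys/fs/cgroup.
import os
from pathlib import Path

def effective_cpu_limit():
    """CPUs this container may use, falling back to the visible CPU count."""
    cpu_max = Path("/sys/fs/cgroup/cpu.max")
    if cpu_max.exists():
        quota, period = cpu_max.read_text().split()
        if quota != "max":
            return int(quota) / int(period)   # e.g. "200000 100000" -> 2.0 CPUs
    return float(os.cpu_count() or 1)         # unconstrained: the whole (virtual) machine

if __name__ == "__main__":
    print("visible CPUs:", os.cpu_count())
    print("usable CPUs :", effective_cpu_limit())
```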
NFV was the first simple step toward separating tightly bound network applications from core switching functions. As mainstream IT organizations started using VMs to control server sprawl, their networking counterparts were dealing with a bewildering array of network appliances.
The solution was relatively simple: use VM technology to decouple the applications that are not directly involved in moving and directing traffic from the routers and switches themselves. These applications still need to be in the data path, but they do not need to be in the same box as the switch or router.
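As a toy sketch of that separation (all names, ports, and the filtering rule are hypothetical), a network function such as traffic inspection can run as an ordinary service on its own VM: upstream gear simply forwards connections to it, so it stays in the data path without living in the switch chassis.

```python
# A minimal "virtual appliance": a filtering proxy that inspects traffic and
# forwards what it allows to the real service. Purely illustrative.
import socket
import threading

BLOCKED_PREFIXES = (b"EVIL",)      # stand-in for a real inspection policy
UPSTREAM = ("127.0.0.1", 9000)     # the service behind the virtual appliance

def handle(client):
    data = client.recv(4096)
    if data.startswith(BLOCKED_PREFIXES):     # the "network function": inspect and drop
        client.close()
        return
    with socket.create_connection(UPSTREAM) as upstream:
        upstream.sendall(data)                 # pass allowed traffic along the data path
        client.sendall(upstream.recv(4096))
    client.close()

def serve(port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```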