The potential benefits of this new paradigm are huge, including decreased CAPEX and OPEX and a faster rate of innovation.
The anatomy of a software-defined network
The networking community has habitually solved new problems by inventing new protocols, while the software industry has advanced by developing solid abstractions, e.g., transaction
managers, message buses, garbage collection, and abstract data types, that can be reused as building blocks for a variety of functions. Tail-f’s Network Control System, the
SDN control system in Deutsche Telekom’s TeraStream project, makes use of six abstractions, which provide the foundation for a declarative, data model-driven implementation of
network services:
- Centralization. Implementation is logically centralized in the sense that there's an API for creating, modifying and deleting services without the need to resort to distributed programming. (The actual realization of services, beneath the API, is invisible to service developers but can exploit distribution techniques such as clustering.)
- Data-structure representations. The API gives programmers access to configuration and state information for network services and resources. Crucially, this information is supplied in the form of conventional programming data structures, such as trees and graphs, that don't require distributed programming.
- Data models. Semantic descriptions of network services and resources exist in the form of stringent data models, written in the YANG language (IETF RFC 6020), which contain structural information, type information and integrity constraints.
- Mappings. A data model-driven mapping from service operations to network state changes is done in two stages: first, operations on service data structures are translated into operations on network data structures; second, the latter are converted into sequences of device-specific commands that are deployed in the network.
- Transactions. Each operation on a service data structure results in a change set comprising the updated service data structure, the correspondingly updated network data structures and the correspondingly deployed network state update. Each change set is applied as an atomic transaction: either every change is executed without failure or none is executed. These transactions encompass the deployed state changes in the network, which is precisely where transactional guarantees are most valuable, since distributed error recovery is inherently difficult to manage.
- Data stores. All data structures representing service and network states are stored in a logically centralized data store, which is kept consistent with the state of the network at all times. Consistency is guaranteed by transactional updates and by efficient detection and reconciliation of out-of-band changes to network devices.
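To illustrate how a data model can drive validation, here is a minimal Python sketch. The schema and helper names are invented for this example; the actual models in NCS are written in YANG, not Python. Each leaf carries a type and an integrity constraint, loosely mirroring YANG's typed leaves and "range"/"must" checks:

```python
# Hypothetical schema mirroring a YANG-style model: each leaf has a
# type and an integrity constraint. Not the NCS API.
SERVICE_SCHEMA = {
    "name": {"type": str, "check": lambda v: len(v) > 0},
    "vlan": {"type": int, "check": lambda v: 1 <= v <= 4094},
    "bandwidth_mbps": {"type": int, "check": lambda v: v > 0},
}

def validate(instance, schema):
    """Return a list of violations; an empty list means the instance is valid."""
    errors = []
    for leaf, spec in schema.items():
        if leaf not in instance:
            errors.append(f"missing leaf: {leaf}")
        elif not isinstance(instance[leaf], spec["type"]):
            errors.append(f"bad type for {leaf}")
        elif not spec["check"](instance[leaf]):
            errors.append(f"constraint violated: {leaf}")
    return errors
```

Because every service operation is checked against such a model before anything touches the network, malformed requests are rejected up front rather than discovered device by device.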
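The two-stage mapping and the atomic change-set semantics can be sketched together in a small Python toy. This is illustrative only, not the NCS API: `map_service`, `render_commands` and `Transaction` are invented names, and devices are simulated as in-memory lists of applied commands:

```python
# Illustrative toy, not the Tail-f NCS API: devices are simulated as
# in-memory lists of applied commands.

def map_service(service):
    """Stage 1: translate a service operation into network data structures."""
    return [{"device": d,
             "path": f"interfaces/{service['iface']}",
             "vlan": service["vlan"]}
            for d in service["devices"]]

def render_commands(net_ops):
    """Stage 2: convert network data structures into device-specific commands."""
    return [(op["device"], f"set {op['path']} vlan-id {op['vlan']}")
            for op in net_ops]

class Transaction:
    """Apply (device, command) pairs atomically: on any failure,
    already-applied commands are rolled back."""
    def __init__(self, devices):
        self.devices = devices      # device name -> list of applied commands
        self.applied = []

    def apply(self, commands):
        try:
            for dev, cmd in commands:
                if dev not in self.devices:
                    raise KeyError(f"unknown device: {dev}")
                self.devices[dev].append(cmd)
                self.applied.append((dev, cmd))
        except Exception:
            for dev, cmd in reversed(self.applied):
                self.devices[dev].remove(cmd)   # roll back partial changes
            raise
```

The rollback loop is the essence of the all-or-nothing guarantee: a failure partway through leaves no device holding a half-deployed service.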
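Keeping the data store consistent with the network comes down to detecting and reconciling out-of-band changes. A minimal sketch, assuming intended and actual device configuration are available as flat dictionaries (`diff_config` and `reconcile` are hypothetical helpers, not NCS functions):

```python
def diff_config(intended, actual):
    """Return (missing, unexpected): entries to restore on the device,
    and out-of-band additions to remove."""
    missing = {k: v for k, v in intended.items() if actual.get(k) != v}
    unexpected = {k: v for k, v in actual.items() if k not in intended}
    return missing, unexpected

def reconcile(intended, actual):
    """Drive the actual device configuration back to the intended state."""
    missing, unexpected = diff_config(intended, actual)
    for key in unexpected:
        del actual[key]             # remove out-of-band additions
    actual.update(missing)          # restore overwritten or missing entries
    return actual
```

In a real control system the same diff can instead be reported to an operator, or absorbed into the data store, depending on policy.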
Real-time OSS differs from traditional OSS in that it offers fully automated service fulfillment and largely automated service assurance (manual fault management should, at best, be a contingency), thanks to the guaranteed data quality supplied by an SDN control system. All of its modules, from service, fault, performance and workflow management to resource inventory and data warehousing, rely on the components of the control system's data store.
The distinct network architecture and SDN technologies used in Deutsche Telekom's TeraStream project enable new services to be realized through network programming rather than re-architecting. Greatly shortened innovation cycles will allow centralized programming to replace individual service-node management, shortening time to market and reducing management costs, all in the name of delivering dependable, flexible network services to customers.