Secondly, in order to plan infrastructure investment, comply with regulatory requirements and engage in proactive dialogue with both the market at large and individual customers, the large-scale behaviour of these future semi-autonomous networks must be observed, measured, understood and forecast. This necessitates a global view.
OSS is therefore likely to change from a set of software silos handling different aspects of the service lifecycle into a set of functions that consume data produced by local automatic management functions and instrumentation, using it to provide intelligence to the business and its processes. It is true that much of the “write” side of what is currently considered OSS functionality – service activation and provisioning – is thoroughly addressed by the current plans for NFV; this is largely what MANO is about. The “read” side of the equation – which covers everything from service assurance, fault finding and capacity planning through to handling exceptional circumstances – is not.
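To make the distinction concrete, the following is a minimal sketch of what one such “read-side” function might look like: a service-assurance check that consumes telemetry produced by local automatic management functions. The event shape, field names and threshold here are illustrative assumptions, not part of any NFV or MANO specification.

```python
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    service_id: str     # customer-facing service instance (hypothetical field)
    vnf_instance: str   # VNF reporting the measurement
    metric: str         # e.g. "latency_ms"
    value: float

def assurance_alerts(events, latency_threshold_ms=50.0):
    """Yield breaches of the latency threshold.

    The "write" side (activation, provisioning) is assumed to be
    handled elsewhere by MANO; this function only reads and
    interprets state it did not create.
    """
    for event in events:
        if event.metric == "latency_ms" and event.value > latency_threshold_ms:
            yield (event.service_id, event.vnf_instance, event.value)

sample = [
    TelemetryEvent("svc-001", "vFW-3", "latency_ms", 12.0),
    TelemetryEvent("svc-001", "vRouter-1", "latency_ms", 87.5),
]
for alert in assurance_alerts(sample):
    print("assurance breach:", alert)
```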
Where we find ourselves today, then, is that the traditional network is likely to persist for some time – at minimum acting as transport for NFV and therefore tacitly participating in service delivery – and its OSS, complete with its myriad data problems, will persist with it. In any case, regardless of the longevity of this “legacy”, the health and expansion of the physical network and infrastructure will continue to be managed by processes stewarded by human beings.
This matters because it could well pose a significant risk to the timetable for the expected benefits of NFV – service agility, for example, could easily find itself at the mercy of the rigidity of the physical and “legacy” network layers.
Furthermore, precisely because service activation and resource provisioning will ultimately be handled automatically, a mechanism is required that allows the operational and provisioned state reported by automatic management functions in the NFV stack to be viewed holistically.
This all points to a requirement for a complete end-to-end view of both the overall state of the infrastructure and the provisioned state of the virtual services it contains, provided by a “big data” infrastructure able to store and distribute both telemetry and service state, across the whole network, for consumption by next-generation OSS.
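As a sketch of what that distribution layer might carry, the snippet below wraps raw records from both physical and virtual domains in a common envelope, so that a next-generation OSS consumer can correlate telemetry with provisioned service state. The field names, source identifiers and the in-memory list standing in for a distributed store are assumptions for illustration only.

```python
import json
import time

def envelope(domain, source, kind, payload):
    """Wrap a raw record with the metadata an end-to-end consumer
    needs: which domain it describes, what produced it, and when."""
    return {
        "domain": domain,          # "physical" or "virtual"
        "source": source,          # e.g. a VNFM, an EMS, a legacy OSS feed
        "kind": kind,              # "telemetry" or "service_state"
        "timestamp": time.time(),
        "payload": payload,
    }

# Stands in for a distributed store or stream fed by producers
# across the whole network.
store = [
    envelope("virtual", "vnfm-1", "service_state",
             {"service_id": "svc-001", "state": "active"}),
    envelope("physical", "ems-west", "telemetry",
             {"port": "ge-0/0/1", "rx_errors": 4}),
]

# A next-generation OSS function reads across both worlds at once,
# e.g. pulling all telemetry for capacity planning:
telemetry = [record for record in store if record["kind"] == "telemetry"]
print(json.dumps(telemetry, indent=2))
```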
OSS isn’t just a software layer; it’s an inherent part of the functioning of service providers. Whether it’s delivered within, across or outside of the NFV stack is an important question but, given the momentum of next-generation, virtualised networks, it is secondary to the pragmatic challenge of how to keep it functioning with a minimum of cost and risk.
If OSS is to be ready to support the widespread delivery of NFV services to the market, urgent attention must be paid to the twin concerns of readying the traditional OSS and designing the new.
Decades of trying (and spending) have failed to “fix” the traditional OSS – it remains, for most providers, a costly, sprawling estate of partially integrated systems of extremely variable quality. Virtualisation at last creates an opportunity to access the value it contains by providing focus and demanding innovation: focus on the requirements that emerge from the immediate need to integrate the traditional and virtualised worlds; innovation to prevent this work from simply being another failed attempt to “transform the OSS”.
The precise nature of this focus and innovation clearly deserves urgent analysis; but since a correct and complete view of the state of the existing network infrastructure is something only the traditional OSS can, in principle, provide, this would seem a worthwhile goal.
Constructing such a view from the complex and divergent data in the traditional OSS requires a fresh look at how data is handled, and a reconsideration of the idea that the only way to build and maintain a cogent picture of a domain is through monolithic information and data models.
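One hedged sketch of what such a fresh look might mean, assuming two hypothetical silos with their own record shapes: rather than forcing everything into one master schema up front, each silo’s records are left as they are and a thin correlation layer links them by shared identifiers.

```python
# Records kept in each silo's native shape; no master schema.
# Silo names, fields and the matching rule are illustrative assumptions.
inventory = [{"circuit": "CKT-77", "site": "LON-01"}]   # inventory silo
faults = [{"alarm_id": 9, "resource": "CKT-77"}]        # fault-management silo

def correlate(inventory_records, fault_records):
    """Build a cross-silo picture by linking records on a shared
    identifier, instead of first migrating both silos into a
    single monolithic information model."""
    links = []
    for inv in inventory_records:
        for fault in fault_records:
            if fault["resource"] == inv["circuit"]:
                links.append({"circuit": inv["circuit"],
                              "site": inv["site"],
                              "alarm_id": fault["alarm_id"]})
    return links

print(correlate(inventory, faults))
```

The point is not the trivial join itself, but where the domain knowledge lives: in a small, replaceable linkage rule rather than in a monolithic model that every silo must be bent to fit.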
Regardless, CSPs with the vision to imagine and execute such plans – rather than simply ignoring the OSS until the last possible moment – will be those most likely to realise the benefits of virtualisation and emerge as winners from the incoming tidal wave of change. Those that don’t are likely to find themselves drowned by it, their existing systems a dead weight rather than a springboard to the future.