At the turn of the century, a network database was a static thing in which somewhere between 30 percent (core network) and 70 percent (outside plant) of the records were bad or outdated. Processes to clean the data and reconcile discrepancies between databases were expensive and time-consuming. Few inventory systems were open to other systems for queries or synchronization with anything more sophisticated than database dumps and ETL tools. We were starting to synchronize the databases with the network elements through auto-discovery, but ran into the problem that discovery consumed too much of the network elements' scarce computing power. The prognosis was good for the new generation of OSSs, but poor for legacy systems.
In 2021, all modern network elements, whether virtual or physical, can be automatically queried, or can even announce themselves to the northbound systems. They also have the processing power to announce configuration changes, or even real-time state. Domain controllers are in play that automatically synchronize with these elements and make the information freely available to other systems. We pretty much solved this one.

While hardware cost per unit of capability dropped by nearly twelve orders of magnitude between 1975 and 2000, software cost per function point dropped by less than three orders of magnitude. Each transition to a new software technology, such as object-oriented programming and reusable architectures (including J2EE and .NET), generally contributed about a 20 percent reduction in the cost to produce and maintain software. But the prognosis for bringing this cost down further was fair, with web-based user interfaces, component-based software technologies, and better system-to-system interfaces coming into focus.
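Those better system-to-system interfaces did arrive. To make “automatically queried” concrete, the sketch below polls a network element for its interface configuration over RESTCONF (RFC 8040); the device address, the credentials, and the choice of the standard ietf-interfaces data model are illustrative assumptions, not a reference to any particular vendor's implementation.

```python
# Minimal sketch: polling a network element over RESTCONF (RFC 8040).
# The device address and credentials below are hypothetical placeholders.
import requests

DEVICE = "https://192.0.2.10"          # example address (TEST-NET-1 range)
AUTH = ("oss-reader", "example-pass")  # placeholder credentials

def fetch_interfaces():
    # RFC 8040 defines the /restconf/data resource and the YANG JSON encoding.
    url = f"{DEVICE}/restconf/data/ietf-interfaces:interfaces"
    resp = requests.get(
        url,
        auth=AUTH,
        headers={"Accept": "application/yang-data+json"},
        verify=False,   # a real OSS would validate the device certificate
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Print each interface name and whether it is administratively enabled.
    for intf in fetch_interfaces()["ietf-interfaces:interfaces"]["interface"]:
        print(intf["name"], intf.get("enabled"))
```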
The rapidly decreasing cost of the underlying computing hardware, combined with the generalization of service-oriented architectures to include many internal APIs (not just APIs to external systems), led to the creation of cloud-native software architectures. In these, software is broken up into small, separately defined, developed, and deployed microservices that have defined APIs between them—kind of a super-SOA architecture. Yes, it is incredibly inefficient in its use of computing resources. But it is so much easier to build, test, deploy, and maintain that the approach is well worth it. This is the single most important technology change in the history of software: separate software components that can be specified, developed, deployed, and evolved easily, decreasing the cost by about an order of magnitude. Advances in service meshes, which simplify the communications infrastructure between the microservices, and in integrated security are further reducing the cost and time to develop, deploy, and evolve these microservices.
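As a sketch of what “small, separately deployed microservices with defined APIs between them” looks like in code, here is a minimal, hypothetical inventory service; the framework (Flask), the endpoint name, and the data are illustrative assumptions, not a prescribed OSS design.

```python
# Minimal sketch of a single microservice with a defined API, in the
# cloud-native style described above. Flask and the /inventory endpoint
# are illustrative choices only.
from flask import Flask, jsonify

app = Flask(__name__)

# An in-memory stand-in for the service's own data store. In a real
# deployment each microservice owns its data and exposes it only via its API.
INVENTORY = {
    "ne-0001": {"type": "router", "vendor": "example", "state": "in-service"},
}

@app.route("/inventory/<ne_id>", methods=["GET"])
def get_network_element(ne_id):
    # Other microservices (or a service-mesh sidecar acting on their behalf)
    # call this endpoint instead of reaching into a shared database.
    element = INVENTORY.get(ne_id)
    if element is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": ne_id, **element})

if __name__ == "__main__":
    # Each microservice is built, tested, and deployed on its own; in
    # production it would run in a container behind the service mesh.
    app.run(port=8080)
```

Because the service owns its data and exposes only its API, it can be rebuilt, tested, and redeployed independently of every other component, which is where the order-of-magnitude reduction in cost and time comes from.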
With these major advances from the last 20 years, have we solved all the major challenges in OSS today? No, we face new challenges today and will face more in the future. But the prognosis is good.
In the optical, packet, and radio technologies, work is underway to break the larger network elements into smaller pieces that can come from multiple vendors. These hardware and software pieces have specified interfaces and, in many cases, defined requirements that allow them to be put out for open bid as “white box” solutions. Disaggregation is happening both horizontally (breaking a box up into multiple boxes with the communication path threaded through them) and vertically (separating the hardware from its control software). For OSSs, this means a larger,