Standardised common models are predominantly monolithic - they aim to cover the full breadth of a domain, even if they allow for vertical specialisation.
This makes these models tightly coupled: changes ripple through the models and, therefore, through any components that use them, so change is expensive.
Perhaps a better approach for common models would be an ensemble: architectures in which individual, loosely coupled models with restricted scope are standardised, alongside mechanisms to tie them together. Crucially, actually tying them together is left to be done in situ, when components are deployed in implementations of actual systems.
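To make this concrete, the sketch below shows in Python what an ensemble of two deliberately small models might look like; the model and field names (PhysicalPort, ConnectivityService, Link) are hypothetical and are not drawn from any published standard.

```python
from dataclasses import dataclass

# Model A: restricted to physical inventory; knows nothing about services.
@dataclass(frozen=True)
class PhysicalPort:
    port_id: str
    shelf: str
    slot: int

# Model B: restricted to connectivity services; knows nothing about ports.
@dataclass(frozen=True)
class ConnectivityService:
    service_id: str
    bandwidth_mbps: int

# The standardised "mechanism to tie them together": a generic, typed
# association between identifiers drawn from any two models.
@dataclass(frozen=True)
class Link:
    relationship: str
    source_ref: str
    target_ref: str

# The tying itself happens in situ, in the deploying system,
# not inside either standardised model.
port = PhysicalPort(port_id="port-7", shelf="A", slot=3)
service = ConnectivityService(service_id="svc-42", bandwidth_mbps=1000)
links = [Link("terminates-on", service.service_id, port.port_id)]
```

Neither small model refers to the other; the coupling lives entirely in the Link instances created at deployment, so either model can evolve without forcing a change on the other.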
A great deal of the rigidity in implementations of standard models arises from the data technologies used to implement them - which fail to separate the concerns of data representation and storage. For example, the schema in a relational database describes both a data model and how it is stored (in tables). Data technologies exist in which this is not the case - and there is plenty of evidence that separating storage and representation concerns very significantly reduces the cost of change.
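The contrast can be sketched in a few lines of Python, using SQLite purely as an illustration; the table and attribute names are invented for the example, and the second half stands in for any technology that stores generic statements rather than model-shaped tables.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Relational style: the schema fixes the model and its storage layout at once,
# so adding an attribute to the model means migrating the table.
db.execute("CREATE TABLE port (port_id TEXT PRIMARY KEY, shelf TEXT)")
db.execute("ALTER TABLE port ADD COLUMN slot INTEGER")  # model change = storage change

# Representation separated from storage: facts are held as generic
# subject-predicate-object statements, so the model can gain attributes
# without touching the storage schema.
db.execute("CREATE TABLE statement (subject TEXT, predicate TEXT, object TEXT)")
db.executemany(
    "INSERT INTO statement VALUES (?, ?, ?)",
    [
        ("port-7", "shelf", "A"),
        ("port-7", "slot", "3"),
        ("port-7", "owning-domain", "access"),  # new model attribute, no ALTER needed
    ],
)
print(db.execute(
    "SELECT predicate, object FROM statement WHERE subject = 'port-7'"
).fetchall())
```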
Common information and data models are already an essential element in the construction of multi-vendor systems, but to date they have suffered from deficiencies that must be overcome if the substantial increase in automation expected over the next five to ten years is to be achieved and to yield the expected benefits.
Large, monolithic models describing large, poorly bounded domains, together with the rigid, tightly coupled data technologies used to embody them, have contributed significantly to this shortfall in the benefits of common models.
New technologies exist that make it possible to design and standardise small models that address specific, well-bounded parts of a domain of interest and that can be loosely coupled together when they are deployed.
This is a design approach that closely matches the desired mode of operation of future networks and supports the agility we expect from them.
However, it requires a radical reconsideration of how the models for these new domains are designed, specified and described. Currently, most model standardisation activity is informed by deeply ingrained, often implicit assumptions that stem from the data technologies available to eventually embody these models in running systems.
These technologies have developed at an astonishing pace in less than a decade; not in labs and R&D departments, but in production systems in business environments. The piece-by-piece integration that has historically been required leads to jigsaw-puzzle solutions in which each piece is trapped in its unique position. The benefits that our industry ultimately expects from next-generation networks depend on finding a way to achieve genuine interoperability.
In the light of this, a radical re-evaluation of how standards bodies, architects and implementers think about common information models is surely long overdue.