The first challenge is secure access to network data. This may sound trivial,
because every network element provides some combination of interfaces through
which inventory and configuration data can be gathered. The challenge arises
because the network integrity solution requires access to all of the configuration
data. The combination of Command Line Interfaces, SNMP, web services APIs,
and legacy interfaces such as TL-1 requires the data collection environment to
support every kind of encryption and authentication available. Furthermore, the
data collection architecture needs to be flexible enough to provide additional
security when data is aggregated to a central site.
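To illustrate, a data collection layer might hide these protocol differences behind one collector interface. The sketch below is a minimal, hypothetical example: the Collector interface and SshCliCollector class are assumptions of this illustration, and only the SSH/CLI path is fleshed out, using the well-known paramiko library. SNMPv3, web-services, and TL-1 collectors would plug into the same interface with their own encryption and authentication settings.

```python
# Hypothetical sketch of a protocol-agnostic collector layer. Only the
# SSH/CLI collector is fleshed out, using the paramiko library; SNMP,
# web-services, and TL-1 collectors would implement the same interface.
from abc import ABC, abstractmethod

import paramiko  # third-party SSH library


class Collector(ABC):
    """Common interface that every protocol-specific collector implements."""

    @abstractmethod
    def collect(self) -> str:
        """Return the raw configuration data for one network element."""


class SshCliCollector(Collector):
    """Gathers configuration over an encrypted, authenticated SSH session."""

    def __init__(self, host: str, username: str, password: str, command: str):
        self.host = host
        self.username = username
        self.password = password
        self.command = command  # e.g. "show running-config"

    def collect(self) -> str:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(self.host, username=self.username, password=self.password)
        try:
            _stdin, stdout, _stderr = client.exec_command(self.command)
            return stdout.read().decode("utf-8", errors="replace")
        finally:
            client.close()
```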
The next challenge is to provide flexible normalization and comparison logic.
Each system that a service provider uses has its own data model, which
is an abstract version of what is supposed to be in the network. Inventory
systems contain a resource model that is used to design new services. Fault
management systems contain a model that is used for root cause analysis and
problem resolution. Performance management and assurance systems contain
models of network utilization. Each of these databases is an
abstraction of the network data, and each needs to be accurate. By extracting
raw configuration data from the network and applying flexible normalization
techniques, a centralized solution can ensure that these models stay
synchronized with the network and with each other. Traditionally, these
problems have been addressed with separate projects per system. Not only has
this approach been expensive, but it has also failed to provide a level of
network integrity that would allow performance, fault, assurance, and inventory
systems to all contribute to effective customer reporting and management
strategies.
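As a concrete illustration, the sketch below shows one way raw, vendor-specific configuration records might be normalized into a common shape and then compared against one system's data model. The vendor names, field mappings, and the normalize and diff helpers are hypothetical assumptions, not a prescribed schema.

```python
# Illustrative sketch: normalize raw per-vendor config records into one shape,
# then report discrepancies against a system's model of the same element.
# Vendor names, field names, and mapping rules are hypothetical examples.

def normalize(raw: dict, vendor: str) -> dict:
    """Map vendor-specific keys onto a common, comparable record."""
    if vendor == "vendor_a":
        return {"hostname": raw["sysName"], "port_count": int(raw["numPorts"])}
    if vendor == "vendor_b":
        return {"hostname": raw["node-id"], "port_count": int(raw["slots"]) * 8}
    raise ValueError(f"no normalization rules for vendor {vendor!r}")


def diff(network_record: dict, model_record: dict) -> dict:
    """Return the fields where a system's model disagrees with the network."""
    return {
        key: (model_record.get(key), network_value)
        for key, network_value in network_record.items()
        if model_record.get(key) != network_value
    }


# Example: the inventory model believes the element has 24 ports, but the
# normalized network data shows 48 -- a discrepancy to reconcile.
network = normalize({"sysName": "edge-01", "numPorts": "48"}, "vendor_a")
inventory_model = {"hostname": "edge-01", "port_count": 24}
print(diff(network, inventory_model))  # {'port_count': (24, 48)}
```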
The third challenge is scale. Maintaining a high level of Network Integrity
requires the service provider to achieve scalability in three important areas:
scalable data collection, rapid analysis and reconciliation, and scalable
resolution.
Scalable data collection requires the data for the entire network to be collected
regularly. Traditional approaches have treated this as a background task, with
baselines collected at some interval. This might work well for asset
utilization or flow-through improvements, but it will not address security threats or
service assurance concerns. In order to achieve configuration data integrity, one
more thing is required: data from individual elements needs to be collected and
analyzed very quickly. To put this another way, the Network Integrity solution
has to be optimized for both size and speed.
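As an illustration of optimizing for both size and speed, the sketch below parallelizes collection across elements and analyzes each result the moment it arrives, rather than waiting for a full network baseline. It uses Python's standard concurrent.futures module; collect_config and analyze are hypothetical stand-ins for the collection and comparison logic described above.

```python
# Sketch: collect from many elements in parallel and analyze each result as
# soon as it arrives, instead of waiting for a full-network baseline.
# collect_config() and analyze() are placeholders for the collection and
# normalization/comparison logic discussed above.
from concurrent.futures import ThreadPoolExecutor, as_completed


def collect_config(element: str) -> str:
    return f"raw config of {element}"  # placeholder for a real collector


def analyze(element: str, raw_config: str) -> None:
    print(f"analyzed {element}: {len(raw_config)} bytes")  # placeholder


elements = [f"element-{n}" for n in range(100)]

with ThreadPoolExecutor(max_workers=20) as pool:
    futures = {pool.submit(collect_config, e): e for e in elements}
    # as_completed yields each collection the moment it finishes, so the
    # analysis of fast elements is never blocked behind slow ones.
    for future in as_completed(futures):
        element = futures[future]
        analyze(element, future.result())
```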
Speed is important because configuration processes are running concurrently.