details required to meet the needs outlined above. Let’s look at the most common options, each illustrated with a brief sketch following the list:
- Common monitoring technologies utilize aggregate metrics collected from network elements (routers and switches), commonly via standards-based interfaces such as SNMP. The advantage here is simplicity, but the disadvantage is also simplicity: this data provides no detail on which services or applications are active on a particular service link.
- Another approach is to use flow records, such as NetFlow, which are issued by edge routers. NetFlow can provide details on which application/service is being used and which end addresses are using it, on a transaction or session basis. The disadvantages are twofold: first, NetFlow leaves out important details such as response time, and second, it creates compute load on the network elements that can impact their performance, especially during unexpected high-traffic situations such as a denial-of-service attack.
- Another alternative is testing agents that create synthetic traffic and measure an approximation of the customer experience. Their advantage is that the performance of this synthetic traffic can be measured accurately, so degradations are readily recognized. The major shortfall is that they do not measure actual customer traffic and cannot indicate why a degradation is happening. They also generate non-revenue network load, and thus cannot be deployed exhaustively to monitor every combination of path and application/service type.
- Providers’ traditional focus has been placed on the use of signaling traffic as a proxy for measuring service activity and delivery. The advantage of this approach is that it provides a very complete record of the sessions and service activities initiated across the MPLS service links, without creating load on the managed elements or adding traffic to the network. On the downside, signaling data does not provide the details necessary to recognize degradations, nor does it provide a basis for direct troubleshooting of performance issues. Another drawback is that not all traffic in an IP network generates signaling traffic, leaving significant blind spots in the view of actual network resource use.
- Finally, there is an approach that combines the best of all these worlds – deep packet inspection (DPI). The idea behind DPI is that dedicated instrumentation devices, attached at key traffic aggregation points, listen passively to all traffic and assemble a highly granular, complete view of all network- and application-layer transactions, including volumetric, response time, and latency measurements, along with details of all the underlying physical and virtual network constructs. This is a very complete set of data upon which to base customer-aware service performance assurance. The biggest challenge with applying DPI is identifying the optimal locations for deploying instrumentation, so that the most coverage can be established with the lightest total capital outlay.
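To make the first option concrete, here is a minimal Python sketch of what aggregate-metric monitoring yields: two hypothetical polls of an interface byte counter (IF-MIB::ifHCInOctets) turned into an average utilization figure. The device, counter values, and link speed are invented for illustration; a real poller would retrieve the samples over SNMP, for example with a library such as pysnmp.

```python
# Illustrative sketch: deriving link utilization from two SNMP counter samples.
# The counter values and link speed below are hypothetical; in practice the
# samples would be fetched by an SNMP poller querying IF-MIB::ifHCInOctets.

from dataclasses import dataclass

@dataclass
class CounterSample:
    timestamp: float      # seconds since epoch, when the poll completed
    in_octets: int        # IF-MIB::ifHCInOctets (64-bit inbound byte counter)

def utilization_pct(first: CounterSample, second: CounterSample,
                    link_speed_bps: float) -> float:
    """Average inbound utilization between two polls, as a percentage."""
    delta_bytes = second.in_octets - first.in_octets   # 64-bit counters rarely wrap
    delta_secs = second.timestamp - first.timestamp
    bits_per_sec = (delta_bytes * 8) / delta_secs
    return 100.0 * bits_per_sec / link_speed_bps

# Two five-minute polls of a 100 Mb/s MPLS access link (values are made up).
t0 = CounterSample(timestamp=1_700_000_000, in_octets=9_000_000_000)
t1 = CounterSample(timestamp=1_700_000_300, in_octets=9_750_000_000)

print(f"Average utilization: {utilization_pct(t0, t1, 100e6):.1f}%")
# The result says how busy the link was, but nothing about which services
# or applications generated the traffic -- the blind spot noted above.
```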
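The flow-record approach can be sketched the same way. The record fields below mirror what a typical NetFlow v5/v9 export carries (addresses, ports, protocol, byte and packet counts); the sample flows and the port-to-application map are hypothetical. Classification by destination port shows who is talking to what, but there is no response-time field to aggregate.

```python
# Illustrative sketch: summarizing exported flow records (NetFlow-style) into
# a per-application byte count. Sample records and port map are hypothetical.

from collections import Counter
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: int        # 6 = TCP, 17 = UDP
    in_bytes: int
    packets: int

PORT_APPS = {443: "HTTPS", 80: "HTTP", 1433: "SQL Server", 5060: "SIP"}

def top_applications(flows: list[FlowRecord]) -> Counter:
    """Tally bytes per application, classifying by well-known destination port."""
    totals: Counter = Counter()
    for f in flows:
        app = PORT_APPS.get(f.dst_port, f"port {f.dst_port}")
        totals[app] += f.in_bytes
    return totals

flows = [
    FlowRecord("10.1.1.5", "10.2.2.9", 51514, 443, 6, 1_200_000, 900),
    FlowRecord("10.1.1.7", "10.2.2.9", 51922, 443, 6, 800_000, 600),
    FlowRecord("10.1.1.5", "10.2.3.4", 52110, 1433, 6, 300_000, 250),
]

for app, total in top_applications(flows).most_common():
    print(f"{app:12s} {total:>10,d} bytes")
# Notice there is no way to report response time from these fields alone.
```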
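A synthetic testing agent can be as simple as a scripted probe that times a connection and compares the result to a target. The endpoint, port, and threshold below are assumptions chosen for illustration; a production agent would run such probes on a schedule across many paths and report the results centrally.

```python
# Illustrative sketch: a minimal synthetic test agent that times a TCP
# connection to a service endpoint and flags degradation against a threshold.

import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the TCP connect time in milliseconds (raises OSError on failure)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

THRESHOLD_MS = 150.0   # assumed service-level target for this probe

try:
    elapsed = tcp_connect_ms("app.example.net", 443)   # hypothetical endpoint
    status = "OK" if elapsed <= THRESHOLD_MS else "DEGRADED"
    print(f"{status}: connect took {elapsed:.1f} ms")
except OSError as exc:
    print(f"FAILED: {exc}")
# As noted above, this measures the probe's own traffic, not the customer's,
# and a breach tells you that the path is slow -- not why.
```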
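For the signaling-as-proxy approach, a sketch might tally session activity from captured signaling messages, SIP in this hypothetical example. The messages are invented; the point is that the tally shows which sessions were attempted and set up, but carries none of the user-plane detail needed to recognize or troubleshoot a degradation.

```python
# Illustrative sketch: counting session attempts, setups, and teardowns from
# a (hypothetical) feed of SIP signaling messages.

from collections import Counter

signaling_messages = [
    "INVITE sip:bob@branch-a.example.com SIP/2.0",
    "SIP/2.0 100 Trying",
    "SIP/2.0 200 OK",
    "ACK sip:bob@branch-a.example.com SIP/2.0",
    "INVITE sip:carol@branch-b.example.com SIP/2.0",
    "SIP/2.0 486 Busy Here",
    "BYE sip:bob@branch-a.example.com SIP/2.0",
]

def session_activity(messages: list[str]) -> Counter:
    """Tally session attempts, successful setups, and teardowns."""
    tally: Counter = Counter()
    for msg in messages:
        if msg.startswith("INVITE"):
            tally["attempts"] += 1
        elif msg.startswith("SIP/2.0 200"):
            tally["setups"] += 1
        elif msg.startswith("BYE"):
            tally["teardowns"] += 1
    return tally

print(session_activity(signaling_messages))
# Shows which sessions were initiated, but nothing about how they performed,
# and traffic that never signals is invisible to this view entirely.
```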
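Finally, a rough illustration of what DPI-style passive instrumentation derives from observed packets: an application classification plus a per-transaction response time. The packet records and the payload signatures are simplified stand-ins for what a dedicated capture device would decode at a traffic aggregation point.

```python
# Illustrative sketch of DPI-style analysis: classify a passively observed
# packet by payload signature and compute a request/response time.
# The packet records below are hypothetical stand-ins for captured traffic.

from dataclasses import dataclass

@dataclass
class Packet:
    timestamp: float   # capture time in seconds
    src: str
    dst: str
    payload: bytes

def classify(payload: bytes) -> str:
    """Very small signature check standing in for full protocol decode."""
    if payload.startswith((b"GET ", b"POST ", b"HTTP/1.1")):
        return "HTTP"
    if payload.startswith(b"\x16\x03"):      # TLS handshake record
        return "TLS"
    return "other"

def response_time(request: Packet, response: Packet) -> float:
    """Seconds between the request leaving the client and the reply arriving."""
    return response.timestamp - request.timestamp

req = Packet(100.000, "10.1.1.5", "10.2.2.9", b"GET /report HTTP/1.1\r\n")
rsp = Packet(100.173, "10.2.2.9", "10.1.1.5", b"HTTP/1.1 200 OK\r\n")

print(f"{classify(req.payload)} transaction, "
      f"response time {response_time(req, rsp) * 1000:.0f} ms")
# Volumetric, latency, and application detail all come from the same capture,
# which is why placement of the instrumentation is the main cost question.
```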