By: Wesley Hicks
Service providers are seeing explosive growth in IP-based video services on their networks. Driven by rising demand for online content sources such as YouTube, HBO GO or the NFL Network, and by the emergence of 4K HD content, their networks are experiencing a three-fold growth in IP-based video traffic from 2014 to 2019. According to Cisco's Visual Networking Index, IP video, which consists of both IPTV services offered by the communications service provider (CSP) and over-the-top (OTT) video offered by third-party content providers, is now projected to make up as much as 84 percent of all consumer IP traffic by 2019, with OTT expected to account for around 83 percent of that figure.
With video services becoming an essential part of the CSP offering, and given the increasingly competitive market for service providers of all kinds, the question becomes: how do these providers ensure a high quality of experience (QoE) across the multiple IP video delivery methods carried on their network, so that they can retain and grow their customer base?
By now, everybody has experienced video delivered over the internet. Whether it was a YouTube clip, a TV show streamed on demand from a major network's website or a movie from Netflix, video delivered over the internet to your smart TV, computer or mobile device has become an essential part of the service bundle. But how is OTT video different from the more traditional Internet Protocol TV (IPTV) service offered by service providers?
Traditional IPTV is a carefully planned and engineered service in which the video distribution network is built specifically to address transport issues such as capacity, latency and jitter, and is scaled to meet anticipated demand. The service is broadly distributed throughout the provider's network and uses traditional Internet Group Management Protocol (IGMP) techniques to give customers access to their subscribed channel lineup. Since the network is multicast and engineered for quality of service (QoS), and since the video is a live, streamed service, it is acceptable to rely on UDP-based transport: packets lost or corrupted in transit are simply discarded. Careful network engineering keeps packet loss to a minimum; however, network issues such as equipment failures, maintenance activities, software upgrades or, in the case of hybrid or virtual networks, dynamic optimization of VNF instances and traffic routing will always exist, and all of them can lead to service quality issues.
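To make the IGMP/UDP model concrete, here is a minimal sketch of how a receiver, such as a set-top box, might tune to one channel: it joins the channel's multicast group (which causes the host to signal IGMP membership upstream) and then reads the raw UDP datagrams. The group address and port below are placeholders, not values from any real channel plan, and a real decoder pipeline is omitted.

```python
import socket
import struct

# Hypothetical multicast group and port for one channel in the lineup;
# real values come from the provider's channel plan.
MCAST_GROUP = "239.1.1.10"
MCAST_PORT = 5004

# Open a UDP socket and bind to the channel's port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Joining the group makes the host's IP stack send an IGMP membership
# report upstream; the network then forwards the channel's stream to us.
mreq = struct.pack("4s4s",
                   socket.inet_aton(MCAST_GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

try:
    while True:
        # UDP transport: a packet lost in the network is simply gone; there
        # is no retransmission, so the decoder must tolerate the gap.
        packet, _ = sock.recvfrom(2048)
        # hand `packet` (e.g. an MPEG-TS datagram) to the video decoder here
finally:
    # Dropping membership triggers an IGMP leave and the stream stops.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()
```

Changing channels in this model is simply a matter of leaving one multicast group and joining another, which is why the network, rather than the client, carries most of the engineering burden for quality.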
OTT video, on the other hand, does not use a purpose-built network; rather, it leverages the public internet to deliver the video content. The service relies on internet protocols to deliver video in much the same way as other web-based content. OTT video is delivered on demand as a unicast service over TCP-based transport, which means lost or corrupted packets are retransmitted. To keep retransmission from disrupting playback, OTT video is delivered as a segmented file, so that content arrives on the user's device in advance of viewing, commonly referred to as buffering. In this way, any lost or corrupted packets can be retransmitted before they are needed, ensuring smooth playback.
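The sketch below illustrates that segmented, buffered delivery. It assumes a list of segment URLs such as a player would parse from an HLS or DASH manifest (the URLs and timings here are invented for illustration), fetches each segment over HTTP/TCP, and only starts "playing" once a few segments are buffered ahead. A real player downloads and decodes in parallel; this sequential version just shows the buffering idea.

```python
import time
import urllib.request
from collections import deque

# Hypothetical segment URLs, e.g. parsed from an HLS or DASH manifest;
# each segment covers a few seconds of video.
SEGMENT_URLS = [f"https://cdn.example.com/show/seg{i}.ts" for i in range(100)]
TARGET_BUFFER = 3        # segments to hold downloaded ahead of playback
SEGMENT_DURATION = 4.0   # seconds of video per segment (assumed)

buffer = deque()

def fetch(url):
    # HTTP runs over TCP, so any lost or corrupted packets are retransmitted
    # by the transport layer before the segment is handed to the player.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

for url in SEGMENT_URLS:
    buffer.append(fetch(url))
    # Once the buffer is full, "play" one segment for each new one fetched,
    # so playback always stays TARGET_BUFFER segments behind the download.
    if len(buffer) >= TARGET_BUFFER:
        segment = buffer.popleft()
        # decode/render `segment` here; sleep stands in for playback time
        time.sleep(SEGMENT_DURATION)
```

Because playback trails the download by several segments, a retransmission that delays one segment by a second or two is invisible to the viewer, which is exactly why OTT can tolerate the unpredictability of the public internet.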
The other interesting challenge that OTT brings to the CSP is that, unlike a traditional IPTV service that is offered by the CSP and engineered for QoS, OTT is typically offered by third-party content providers and distributed through content distribution networks (CDNs) and across the CSP network. Not only does the third-party provider represent a loss of potential revenue to the CSP, but if the OTT service performs poorly, it is often the CSP that gets the blame, damaging its brand reputation even though the issue may well not have originated in its network.
As mentioned, OTT video relies heavily on content distribution networks (CDNs), such as Akamai or Amazon, to position high-value or popular content closer to end users. Leveraging CDNs allows the content provider to deliver better quality of service by shortening the delivery path, thereby minimizing transport-related latency, and by spreading the file-serving load across multiple servers, reducing the impact of high user demand.
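As a rough illustration of why a shorter delivery path matters, the sketch below picks the lowest-latency server from a set of candidate edges by timing TCP connection setup. The edge hostnames are hypothetical; in a real CDN this choice is made by the provider's DNS or request-routing layer rather than by the client, but the principle, that the nearest healthy edge yields the least latency per segment request, is the same.

```python
import socket
import time

# Hypothetical edge hostnames serving the same content; in practice the
# CDN's request-routing layer selects the edge on the client's behalf.
EDGE_HOSTS = [
    "edge-nyc.cdn.example.com",
    "edge-chi.cdn.example.com",
    "edge-lax.cdn.example.com",
]

def tcp_connect_time(host, port=443, timeout=2.0):
    """Measure TCP connection setup time as a rough proxy for path latency."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable edges are ruled out

# The edge with the shortest path costs the least latency per segment
# request, and spreading clients across edges distributes the serving load.
best = min(EDGE_HOSTS, key=tcp_connect_time)
print(f"fetching segments from {best}")
```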