Businesses today cannot afford network failures or lags. In fact, it can cost a company thousands of dollars for every hour that a customer relationship management (CRM) system is down. Even if the CRM
system just slows, a company can still lose money. Consumers are equally wedded to foolproof network performance, whether there’s an outage or the network simply slows. Just a few years ago, when
the Internet slowed or went down, consumers felt the inconvenience if they couldn’t play a video game or check email. Today, however, consumers are paying money for online services like Netflix,
Hulu or HBO Now, and they expect that service to work on demand.
QoE is a top-of-stack metric that by definition measures the impact of all lower-layer events and interactions; that is, how an
application network service uses orchestrated protocols such as HTTP, SIP, and video, which in turn drive transport-layer protocols such as TCP or UDP, which then place load on the network's bandwidth. Measuring QoE also helps mitigate negative economic impact. If there's poor quality, a user will call customer service or open a help ticket. Enough of those incidents, and an unhappy Netflix customer might cancel the
service. When we talk about measuring Quality of Experience, what we are really asking is “what is my experience with the service now, and how does my previous experience with it shape my perception?”
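One way to make that question concrete, offered here purely as a sketch rather than anything prescribed above, is to model a user's perception as a blend of the newest measurement with a weighted memory of past ones. The class name, the 0-to-1 score scale, and the memory weight are all illustrative assumptions.

```python
# Minimal sketch: a QoE "perception" score that blends the instantaneous
# measurement with a weighted memory of past experience.

class PerceivedQoE:
    def __init__(self, memory: float = 0.8):
        # memory controls how strongly history colors the current experience
        # (0.0 = only the newest sample matters; closer to 1.0 = history dominates)
        self.memory = memory
        self.perception = None  # no history yet

    def update(self, instantaneous_score: float) -> float:
        """Fold one instantaneous QoE sample (0.0-1.0) into the running
        perception and return the blended value."""
        if self.perception is None:
            self.perception = instantaneous_score
        else:
            self.perception = (self.memory * self.perception
                               + (1.0 - self.memory) * instantaneous_score)
        return self.perception

qoe = PerceivedQoE(memory=0.8)
for sample in [1.0, 1.0, 0.2, 1.0]:      # one poor experience lingers in perception
    print(round(qoe.update(sample), 3))  # 1.0, 1.0, 0.84, 0.872
```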
It is important to note that pre-Internet quality expectations have set the bar for QoE: the long-established 99.999% (“five nines”) uptime that network providers have often quoted as the reliability of their networks.
All new services are converging to those expectations. The web is a utility that should be always-on, and if a web page loads more slowly than expected, a user’s experience is diminished. Just a
matter of seconds can impact perception.
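To put that figure in concrete terms, the short calculation below (an illustrative Python snippet, not part of the original text) converts 99.999% uptime into the downtime budget it actually leaves.

```python
# Arithmetic behind the "five nines" (99.999%) uptime figure cited above:
# the downtime budget it leaves per year and per month.
uptime = 0.99999
minutes_per_year = 365.25 * 24 * 60
downtime_minutes_per_year = (1 - uptime) * minutes_per_year

print(f"Downtime budget per year:  {downtime_minutes_per_year:.2f} minutes")       # ~5.26 minutes
print(f"Downtime budget per month: {downtime_minutes_per_year / 12:.2f} minutes")  # ~0.44 minutes
```

Roughly five minutes of downtime per year is the standard that every new online service is now measured against.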
As such, modern networks must be built to deliver a high QoE and robust predictability. In practical terms, this means that prior to testing, traffic flows across the network need to be well
understood, and a minimum definition of acceptable quality must be established for each service. For example, a modern webpage will have around 200 URLs that together form a single page. In order for the instantaneous quality of experience for this page to be considered high, all 200 URLs need to render in one to two seconds. Further, the variance when loading the same page over time needs to be very small. Lastly, any hard failure, such as a broken link or page, immediately reduces the quality of experience to unacceptable. With this definition of service QoE, we now have a measuring stick to determine the maximum number of concurrent users, and the rate at which new users can be added, that a device under test can sustain.
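As a rough illustration of that measuring stick, the sketch below scores a single page load against the definition just given. The function, the 2-second render budget, the variance limit, and the input shapes are assumptions made for this example rather than part of any particular test tool.

```python
# Illustrative sketch: apply the service-QoE definition above to one page load.
from statistics import pstdev

RENDER_BUDGET_S = 2.0      # every object on the page must render within 1-2 seconds
MAX_LOAD_STDDEV_S = 0.25   # load time of the same page must stay consistent over time

def page_qoe(url_times_s, url_failures, repeat_load_times_s):
    """Return 'high' or 'unacceptable' for a single page.

    url_times_s         -- render time of each URL on the page (seconds)
    url_failures        -- number of broken links/objects (hard failures)
    repeat_load_times_s -- total load time of the same page on repeated visits
    """
    if url_failures > 0:                      # any hard failure is unacceptable
        return "unacceptable"
    if max(url_times_s) > RENDER_BUDGET_S:    # all ~200 URLs must meet the render budget
        return "unacceptable"
    if pstdev(repeat_load_times_s) > MAX_LOAD_STDDEV_S:  # variance over time must stay small
        return "unacceptable"
    return "high"

# Example: 200 objects, none failed, consistent repeat loads -> high QoE
print(page_qoe([0.8] * 199 + [1.6], url_failures=0,
               repeat_load_times_s=[1.6, 1.7, 1.65]))
```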
In the lab, test and measurement systems need to generate stateful services, such as stateful CRM traffic, against stateful devices such as firewalls, in order to more accurately and reliably measure provisioning scale and provisioning rate. Instead of determining only that a firewall can handle 100 gigabits of bandwidth, you'll be able to determine that the firewall can support 50,000 concurrent users and still deliver a minimum QoE.
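The sketch below, again illustrative rather than drawn from any specific product, shows the shape of that measurement: ramp the number of concurrent stateful users against the device under test and report the highest count that still meets the minimum QoE. The measure_qoe() hook and its simple load model are stand-ins for a real traffic generator and scoring method.

```python
MIN_ACCEPTABLE_QOE = 0.95   # assumed minimum acceptable QoE score on a 0.0-1.0 scale

def measure_qoe(concurrent_users: int) -> float:
    """Stand-in for a real test run: drive the stateful service mix through the
    device under test at this load and return the observed QoE score. The toy
    model here (QoE drops past 50,000 users) exists only so the sketch runs."""
    return 1.0 if concurrent_users <= 50_000 else 0.80

def max_users_at_min_qoe(start: int = 10_000, step: int = 10_000,
                         ceiling: int = 200_000) -> int:
    """Ramp the offered load in steps and return the highest concurrent-user
    count whose measured QoE stayed at or above the minimum."""
    best = 0
    for users in range(start, ceiling + 1, step):
        if measure_qoe(users) >= MIN_ACCEPTABLE_QOE:
            best = users
        else:
            break               # QoE fell below the bar; stop ramping
    return best

print(max_users_at_min_qoe())   # reports 50,000 with the stand-in model above
```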