By: James Kimery
Old hands in the enterprise software space know what happens when the sales and marketing promises reach the implementation crew: “But our product can’t do it!” As you might imagine, the technicians’ language is often more spirited and colorful, but the real point is that promises are easy and delivery often is not. With this in mind, we turn our attention to one of the hottest developments in the 5G space right now: Open Radio Access Network (Open RAN). The promises? A familiar litany of hyped benefits: increased competition and lower costs through open, standardized interfaces and software-defined architectures, along with new use cases and service models enabled by intelligent controls and flexible, disaggregated, and virtualized technology.
Delivering on these promises depends on comprehensive testing and on avoiding costly, time-consuming missteps along the way; only then can the hype become reality.
Speaking of hype, those familiar with Gartner’s Hype Cycle will appreciate its applicability to almost any emerging technology. The heady promises start with a “technology trigger” and quickly ramp up to the “peak of inflated expectations,” with the “trough of disillusionment” soon setting in. A heavy grind, usually taking far longer than expected, drives the technology up a “slope of enlightenment” and onto the “plateau of productivity.”
While Gartner apparently doesn’t publish a Hype Cycle for Open RAN, it’s arguable that the technology sits somewhere near the peak, either still ascending or perhaps already on the downward slope into the trough. It may have been a year ago, but John Strand and Alan Weissberger were onto something when they wrote, “While there’s a lot of talk about Open RAN, it’s still a technology that operators are testing—not deploying.”
This brief yet insightful comment homes in on probably the most crucial aspect of Open RAN today—the key word is “testing.”
While there are strong claims that Open RAN will deliver more flexible hardware powered by open, standardized interfaces and software-defined architectures, fundamental questions remain. Operators should proceed with caution and rely on real test data to design, develop, and deploy their networks. Open RAN breaks up all of the traditional RAN interfaces; therefore, traditional test approaches cannot be trusted. Many operators around the world are trialing Open RAN, but deployments lag behind. Lab results driven by piecemeal test solutions do not provide a reasonable facsimile of actual results in the field. More importantly, realizing a true multi-vendor network requires infrastructure that can consume and deploy updates to the various Open RAN components, which calls for a true continuous integration (CI) and continuous delivery (CD) pipeline spanning the lab, pre-production, and ultimately the production environment.
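To make the pipeline idea concrete, here is a minimal Python sketch of how an update to an Open RAN component might be gated through lab, pre-production, and production stages based on test results. The component names, stages, and pass criteria are hypothetical illustrations, not taken from any specific operator’s toolchain.

```python
# Minimal sketch of gating Open RAN component updates through a CI/CD
# pipeline. Stage names, components, and pass criteria are illustrative only;
# a real pipeline would invoke conformance, interoperability, and performance
# test suites at each stage.

from dataclasses import dataclass, field

STAGES = ["lab", "pre-production", "production"]

@dataclass
class ComponentUpdate:
    name: str                                     # e.g., "vendor-A O-DU" (hypothetical)
    version: str
    results: dict = field(default_factory=dict)   # stage -> pass/fail

def run_stage_tests(update: ComponentUpdate, stage: str) -> bool:
    """Placeholder for stage-specific test execution."""
    passed = True                                 # assume pass purely for illustration
    update.results[stage] = passed
    return passed

def promote(update: ComponentUpdate) -> str:
    """Advance an update stage by stage, stopping at the first failure."""
    for stage in STAGES:
        if not run_stage_tests(update, stage):
            return f"{update.name} {update.version} blocked at {stage}"
    return f"{update.name} {update.version} deployed to production"

if __name__ == "__main__":
    print(promote(ComponentUpdate("vendor-A O-DU", "1.2.0")))
```

The point of the structure is that no component version reaches production without passing the same gates in the lab and pre-production environments, which is what makes multi-vendor updates manageable.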
In the short term, many operators have decided to move to “partial” O-RAN solutions for three reasons:
Testing remains front and center in the discussion and presents a conundrum between the availability of focused, integrated test solutions and the availability of Open RAN options in the ecosystem. A plethora of test solutions is available on the market today, but many rely on standard 3GPP partitioning and do not account for the new partitioning mandated by the O-RAN Alliance.
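As a rough illustration of why the partitioning matters, the sketch below contrasts the interfaces visible on a traditional integrated gNB with those exposed by the O-RAN disaggregated architecture. The interface lists are simplified assumptions drawn from public 3GPP and O-RAN Alliance material, not an exhaustive specification, and the “legacy tool” is hypothetical.

```python
# Simplified comparison of the interfaces a test tool must exercise for a
# traditional integrated gNB versus an O-RAN disaggregated gNB. Lists are
# illustrative, not exhaustive; consult 3GPP and O-RAN ALLIANCE specifications.

TRADITIONAL_GNB = {"Uu", "NG", "Xn"}     # externally visible interfaces only
ORAN_DISAGGREGATED = TRADITIONAL_GNB | {
    "Open Fronthaul",                    # O-DU <-> O-RU (7-2x split)
    "F1", "E1",                          # exposed CU/DU and CU-CP/CU-UP splits
    "E2", "A1", "O1",                    # RIC and management interfaces
}

def coverage_gap(tool_interfaces: set) -> set:
    """Interfaces in the O-RAN partitioning that a given tool does not cover."""
    return ORAN_DISAGGREGATED - tool_interfaces

# A hypothetical legacy tool built only around the traditional 3GPP partitioning:
print(sorted(coverage_gap(TRADITIONAL_GNB)))
# ['A1', 'E1', 'E2', 'F1', 'O1', 'Open Fronthaul']
```

A tool that covers only the traditional interfaces leaves the newly exposed splits untested, which is exactly where multi-vendor Open RAN integration problems surface.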
Test systems must evolve along three important vectors. First, test solutions must offer emulation—real emulation—to be useful for Open RAN testing. For example, a company that provides only a distributed unit (DU) must have a test capability that emulates the core and central unit (CU), the radio unit (RU), the