
Loving Legacy and the 3 Vs of Big Data



The 3 Vs of Big Data

Tucked away in these legacy systems is incredibly important and relevant data, and the sheer volume of it is massive. Networks, systems, users, and machines produce an endless stream of relevant information, and organizations themselves generate enormous amounts of collaboration data in email conversations and on enterprise networking platforms. In some cases this data is being leveraged for fault and performance management; in others it is not, because there is no way to contextualize it and present it in a form that is useful to the organization's various departments.

The velocity of data in many cases requires near-real-time processing to keep up with the rate at which the data is being produced. Access to the most recent, relevant information is the linchpin of agility. It's not uncommon for a top-tier service provider today to take 25 million data measurements every five minutes. This velocity only increases as smarter, more “chatty” devices and new use cases such as residential and industrial IoT (Internet of Things) are brought online at an unprecedented pace. Tackling the increasing velocity becomes even more daunting when it is compounded by volume.
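To put that figure in perspective, a quick back-of-the-envelope calculation (assuming the 25 million measurements arrive evenly across each five-minute window) shows the sustained per-second rate a processing pipeline would need to keep up with:

```python
# Back-of-the-envelope ingest rate for the figure cited above:
# 25 million measurements every five minutes, assumed to arrive evenly.
measurements = 25_000_000
window_seconds = 5 * 60  # five minutes

rate_per_second = measurements / window_seconds
print(f"{rate_per_second:,.0f} measurements/second")  # ≈ 83,333 per second
```

Roughly 83,000 measurements per second, sustained around the clock — before accounting for bursts, new device types, or IoT growth.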

Each data source also produces its own unique variety of data. Some of it is structured, some unstructured, and each contains specifically relevant and different pieces of information that need to be tied together. For example, the device and network information being generated is distinctly different from call data, CRM information, and email.

All three factors – volume, variety, and velocity – must be addressed to provide actionable insights with which service providers can quickly solve customer issues, make better-informed decisions, and rapidly capitalize on new revenue opportunities.

A different approach: Big Data Playground

gen-E is no stranger to data mediation, and it has been working with top operators globally to solve this problem of aggregating data from legacy systems. Instead of removing legacy systems, gen-E has developed a consolidated console that taps into the rich data stored within them, then contextualizes and models that data in an intuitive user interface – opening access to critical data in a single system.

The gen-E solution includes many standards-based RESTful APIs to common platforms, allowing it to tap data from virtually any system. gen-E has also amassed over 2,000 key performance indicators (KPIs) and includes over 300 pre-defined KPI mappings out-of-the-box today. The solution presents the data in an easy-to-use console based on five different personas, each designed to quickly surface digestible information relevant to a particular user's unique needs. KPIs continue to be evaluated, added, and grouped into functional areas to populate these tailored personas for customer service, operations, finance, human resources, and more.
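The article does not describe gen-E's internal KPI format, but the general idea of mapping raw measurements from heterogeneous sources onto named, persona-grouped KPIs can be sketched roughly as follows. All KPI names, formulas, and persona groupings here are hypothetical illustrations, not gen-E's actual out-of-the-box mappings:

```python
# Hypothetical sketch of mapping raw source counters onto named KPIs.
# The KPI names, formulas, and persona groupings are illustrative
# assumptions only, not gen-E's actual pre-defined mappings.

def dropped_call_rate(metrics):
    """Dropped calls as a percentage of attempted calls."""
    return 100.0 * metrics["calls_dropped"] / metrics["calls_attempted"]

# Each KPI pairs a computation with the personas it is surfaced to.
KPI_MAPPINGS = {
    "dropped_call_rate_pct": (dropped_call_rate,
                              ["operations", "customer_service"]),
    "link_utilization_pct": (
        lambda m: 100.0 * m["octets_in"] / m["link_capacity_octets"],
        ["operations"],
    ),
}

def evaluate_kpis(raw_metrics):
    """Compute every KPI that this measurement sample can satisfy."""
    results = {}
    for name, (formula, personas) in KPI_MAPPINGS.items():
        try:
            results[name] = {"value": formula(raw_metrics),
                             "personas": personas}
        except KeyError:
            pass  # this source doesn't emit the counters this KPI needs
    return results

sample = {"calls_dropped": 12, "calls_attempted": 4800}
print(evaluate_kpis(sample))
```

The point of such a catalog is that a new data source only has to be mapped once; every persona's console then picks up the KPIs relevant to it automatically.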

gen-E has also simplified the visualization of data with customizable drag-and-drop widgets that digest the data and present it in informative tickers, charts, graphs, cascading event windows, and more, making the information easy to consume. The ramp-up time is minimal, with online training available for those who need it.

“We want to make the information as easy to model and digest as possible,” Thummalapalli commented. “So we made it intuitive, drag-and-drop, and easier to use than Excel. By having it all in one system, it provides immediate access to the most relevant information.”

gen-E is also using state-of-the-art data processing models for speed and scalability, and to reduce the dependency on third-party licenses. It is leveraging technology developed by some of the world's largest social networks and packaging it into one efficient solution that has been benchmarked writing up to 2 million events per second on three inexpensive machines.

gen-E has essentially created a scalable “Big Data Playground” in which organizations can connect any legacy or future data source, contextualize the data by stitching it together in a consolidated platform, present useful information departmentally by leveraging categorical KPIs, and put simple-to-use widgets into the hands of end-users to make the information highly relevant and immediately consumable. This is big.

Changing the data game

It may be too soon to tell how big an impact a solution like gen-E's will have, but its launch marks a monumental shift in how data is – and can be – leveraged by organizations. It breaks the dependence on expensive, third-party database licenses and empowers the organization and the end-user to maximize the value of the data being generated across all data sources, new and old. It tames the volume, variety, and velocity of data, and it negates the risk of a rip-and-replace approach: by unlocking the data that has been locked away in legacy systems, it enables service providers to keep leveraging essential systems while becoming more agile and innovative, fueling transformation.

Don’t hate your legacy, embrace it. It’s integral to agility and it can now be an asset, not an obstacle, to your organization’s future success.

