By: Jesse Cryderman
“The price of light is less than the cost of darkness.” Arthur C. Nielsen, the market research pioneer, is probably best known for illuminating trends related to radio and television broadcasts. In pursuit of this particular “light,” in 1948 his son, Arthur C. Nielsen Jr., convinced ACNielsen to invest $150,000 in the UNIVAC, the first commercially available business computer. Under Arthur Jr.’s leadership, the company subsequently grew from $4 million in annual revenues to nearly $700 million.
Today, companies around the world are investing like never before in tools that shed new light on their customers, operations, systems, and potential market opportunities. Most of these tools can be grouped under one heading: big data.
With so much buzz surrounding this concept, and so many solutions being re-labeled as “big data” solutions, it might be helpful to advance an easy-to-apply, scalable definition. Big data is simply a collection of data sets so large and complex that they strain the limits of legacy data management systems. In other words, when the size of the data becomes part of the problem, it’s a big data challenge. In the days of the UNIVAC, a single megabyte of data would have constituted a big data challenge. In 2013, Twitter and Facebook together generate roughly 30 terabytes of structured and unstructured data every 24 hours. (Try managing that with a traditional relational database!)
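To put that scale claim in perspective, here is a minimal back-of-envelope sketch in Python. The 30-terabyte figure comes from the paragraph above; the 30-day comparison is purely illustrative, not a measured benchmark.

    # Back-of-envelope sketch (illustrative only): the 30 TB/day figure
    # cited above, converted to a sustained ingest rate.

    DAILY_VOLUME_TB = 30                      # combined Twitter/Facebook figure
    SECONDS_PER_DAY = 24 * 60 * 60

    bytes_per_day = DAILY_VOLUME_TB * 10**12  # decimal terabytes to bytes
    ingest_rate_mb_s = bytes_per_day / SECONDS_PER_DAY / 10**6

    print(f"Sustained ingest: {ingest_rate_mb_s:,.0f} MB/s")  # ~347 MB/s, around the clock
    print(f"30 days of feed: {DAILY_VOLUME_TB * 30:,} TB")    # 900 TB per month

A sustained feed of roughly 347 megabytes every second, around the clock, is the kind of load that turns the size of the data itself into the problem, which is precisely the definition offered above.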
Chances are your company is already investigating big data, so now is a good time to ask: does your big data strategy have any blind spots?