By: Todd Coleman
According to MarketsandMarkets, the global Artificial Intelligence (AI) market is expected to reach approximately $191 billion by 2025, up from roughly $16 billion in 2017, growing at a compound annual growth rate (CAGR) of nearly 37 percent. These figures align with other surveys that suggest a bullish attitude among enterprise leaders toward strategic investments in AI.
Deloitte recently interviewed more than a thousand IT and line-of-business executives from U.S.-based companies on the current state of their AI adoption initiatives. Among the findings was that more than eight out of ten enterprises are already seeing positive returns from their production-level AI projects, with telecommunications, technology, media and entertainment companies earning an average 20 percent or greater return on investment (ROI). Meanwhile, across the border to the north, more than 30 percent of Canadian businesses will introduce AI-based technologies into their operations by the end of this year, according to the International Data Corporation (IDC).
AI is a term used to describe technologies that enable computers to perceive, learn, reason and assist in decision-making to solve problems in ways similar to how human beings do. Common use cases range from natural language processing that enables globally dispersed employees to communicate across languages and borders, to far more complex applications such as autonomous vehicles. Notably, more than half of the enterprise executives surveyed say their AI initiatives are needed either to edge ahead of their competitors or to widen their lead over them. Another 36 percent say AI is helping them keep pace with the competition or catch up.
AI research began in earnest in the 1950s, based on work by British mathematician and computer scientist Alan Turing during World War II. Over the past decade, however, we have witnessed more rapid advances in AI due to the confluence of cloud computing, a tremendous surge in data volumes, and significant breakthroughs in machine learning algorithms.
Machine Learning (ML) is a method of data analysis that automates analytical model-building. Using algorithms that learn iteratively from data, ML allows computers to find hidden insights without being explicitly programmed on where to look for them. While many ML algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to Big Data, rapidly, iteratively and repeatedly, is a recent development. Meanwhile, the sheer volume and complexity of the Big Data that researchers and businesses now have access to has increased the potential of ML and, increasingly, society's dependence on it.
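To make that idea concrete, here is a minimal sketch, assuming NumPy, of a model learning iteratively from data: no formula relating x and y is programmed in, and the parameters simply improve a little with each pass over the observations. The data, learning rate and iteration count are illustrative assumptions, not details from any particular product.

```python
# A minimal sketch (assuming NumPy) of iterative learning from data:
# no rule relating x and y is hard-coded; gradient descent simply
# adjusts the parameters a little on every pass over the observations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)  # hidden relationship: y ~ 3x + 2

w, b = 0.0, 0.0        # the model starts knowing nothing about that relationship
learning_rate = 0.01

for _ in range(2000):                             # each iteration nudges the fit closer
    error = (w * x + b) - y
    w -= learning_rate * 2 * np.mean(error * x)   # gradient of mean squared error w.r.t. w
    b -= learning_rate * 2 * np.mean(error)       # gradient w.r.t. b

print(f"learned: y ~ {w:.2f}x + {b:.2f}")  # ends up close to the hidden 3x + 2
```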
Search recommendations, speech recognition, and email filtering are all examples of AI that leverage ML. When you search for products while shopping online or browse Netflix or Hulu to plan your next binge-watch, the suggested results that follow you, even across platforms, are the work of ML algorithms. The voice recognition systems behind Siri and Cortana are built on ML. And when the ride-hailing app Uber provides an estimated time of arrival (ETA) for your trip, that's ML at work too.
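As a rough illustration of the recommendation case, the sketch below scores items by how similarly users have interacted with them, one simple flavor of the collaborative filtering behind such suggestions. The tiny interaction matrix is made up for the example, not data from any of the services named above.

```python
# A toy item-to-item recommender (assuming NumPy): items are scored by how
# similarly users have interacted with them. The interaction matrix is an
# illustrative assumption.
import numpy as np

# Rows are users, columns are items (1 = watched/bought, 0 = not).
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Score every item against item 0; highly similar items become suggestions
# for users who liked item 0.
scores = [cosine_similarity(interactions[:, 0], interactions[:, j])
          for j in range(interactions.shape[1])]
print(scores)  # item 1 scores highest among the alternatives: the same users watched it
```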
Deep learning, a subset of ML, is loosely modeled on the layered networks of neurons in the human brain, building knowledge from multiple layers of information processing. The present and near-future potential of deep learning applications is so immense that many experts believe it will soon become the dominant technology of the AI market.
In practice, deep learning works by acquiring knowledge from hierarchical layers of discovery. The system learns from each layer and carries that learning into the next, accumulating understanding layer by layer until it reaches a highly detailed representation of the data that amounts to a form of intelligent reasoning. In perceiving a picture of an object, for example, the machine will first detect a shape in a matrix of pixels; it might then identify the edges of that shape, then its contours, then the object itself, and so on, until it recognizes the image.
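A minimal sketch of that layered idea, assuming PyTorch, is shown below: the first convolutional stage picks up low-level patterns such as edges, the second combines them into larger contours and shapes, and the final linear layer maps the accumulated features to an object label. The layer sizes, input shape and 10-class output are illustrative assumptions, not details drawn from the article.

```python
# A minimal sketch (assuming PyTorch) of hierarchical feature learning:
# each stage builds on what the previous one learned.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # layer 1: edges from raw pixels
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # layer 2: contours and shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # final layer: features -> object label
)

# One forward pass over a dummy batch of four 28x28 grayscale "images".
dummy_images = torch.randn(4, 1, 28, 28)
logits = model(dummy_images)
print(logits.shape)  # torch.Size([4, 10])
```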