AI in the Real World

By: Milton Lopez

The dream of creating machines that can think and act like humans is nearly as old as machines themselves. Greek mythology gives us the tale of Talos, a giant bronze automaton forged by Hephaestus, god of the forge, to guard the island of Crete from foreign invaders. In the centuries since, innovators have worked to create machines that could solve critical problems, handle unappealing or dangerous jobs, and even play games, from the trivial (Pong) to the most sophisticated (chess and Go).

But mankind’s aspiration to create thinking machines has always outpaced the availability of technology that could serve as its foundation. This began to change in the mid-1950s, when a young mathematician named John McCarthy joined the faculty at Dartmouth College.

During brief tenures at Bell Labs and IBM, McCarthy had become deeply intrigued by the work of mathematicians Claude Shannon and Alan Turing, and he proposed a summer workshop to explore the ideas that would become the field of artificial intelligence (AI). In fact, the term was McCarthy’s invention, too, though he later confessed he was never entirely happy with the moniker because, after all, it was genuine intelligence they were seeking to create, not the artificial kind. That gathering, held at Dartmouth in 1956 on a grant-funded budget of $7,500, is widely regarded as the field’s founding event.

“Our ultimate objective is to make programs that learn from their experience as effectively as humans do.” —John McCarthy, organizer of the 1956 Dartmouth AI Conference

Fast forward a little over 65 years to a world in which AI is fast becoming a genuine force across a wide range of technological realms, despite the occasional headwinds it has faced along the way, including a tendency toward overhype and several periods of inconsistent funding. AI’s greatest difficulty, though, has not been the hype cycle, nor the imaginations of the small handful of people who have led the field’s development. Rather, until very recently, AI’s principal challenge has been that its ambitions so wildly outpaced the computer processing and data management technologies necessary to make those aspirations a reality. Only in recent years, with the advent of advanced sensors, big data, and petaflop-scale computing, has AI truly come into its own.

Make no mistake: the field of AI has indeed come a long way, not just in terms of academic research and pie-in-the-sky concepts, but in delivering genuine scientific and business innovation—the kind you can quantify with jobs created (and, yes, sometimes lost), patent applications, and dollar signs. You would now be hard-pressed to identify a field or industry in which AI has not made its impact felt. The examples described below are but a tiny fraction of the ways AI is not only delivering real business value and competitiveness but also touching the lives of everyday people, whether they realize it or not.

AI in the sky

Ever since the Wright brothers first flew at Kitty Hawk in 1903, aviation pioneers have sought ways to control aircraft remotely, eliminating the need to endanger a crew or to limit a craft’s capabilities to what human operators can physiologically endure. The first modern drone dates to 1935: the Queen Bee, a British design from the de Havilland company that served as a live aerial gunnery target (though unmanned craft date back to the 18th century, if you’re willing to count hot air balloons and gliders). But the field has truly taken off (if you will) in the past decade with the emergence of AI-enabled unmanned aerial vehicles (UAVs), whose applications now span so broad a range as to be nearly ubiquitous.

