Taming the AI Wild West

By: Scott St. John, Pipeline

In 1848, James W. Marshall allegedly exclaimed, “there is gold in ’dem hills!” when he spotted precious flakes of the metal while working on the water wheel of John Sutter’s mill on the American River, northeast of present-day Sacramento, California. The discovery sparked what is known today as the California Gold Rush, drawing an estimated 300,000 “forty-niners” who flocked to the West Coast between 1848 and 1855 hoping to strike it rich.

Yet despite the widespread excitement, very few actually struck it rich, or even found gold at all. Instead, most treasure hunters endured major hardships, and countless others died along the way. And as the masses flooded into the ill-prepared region, lawlessness ensued until proper social structures, governance, and regulations were established. Sutter himself was bankrupt by 1853, and in 1862 the famed American River flooded the city of Sacramento for three months, in large part because of the debris produced by hydraulic mining for gold.

Today, there is a similar rush to generative Artificial Intelligence (AI) after ChatGPT held out the promise of gold earlier this year. But much like the California Gold Rush, this AI frenzy lacks the crucial guardrails needed to navigate the hazards that lie ahead. For consumers simply playing with a large language model, the risk is relatively low. But for enterprises that need to protect their company, customers, employees, shareholders, and brand, the stakes are much higher.


When businesses use generative AI tools for real-world enterprise use cases, they need to be certain that the data set is pure and the output is accurate and unbiased. To do this, businesses need an enterprise-grade generative AI model that goes beyond the public data built for the masses, such as ChatGPT: one that incorporates a diverse range of specific data sources and models that have undergone rigorous filtering, quality controls, and certification, and that generates content that is both reliable and ethical. For example, such tools must avoid distributing product content that slants in favor of a specific gender, making decisions that reinforce a racial stereotype, letting a company spokesperson promote conspiracy theories, or allowing a virtual assistant to give a patient the wrong medical advice.

In addition to the liability risks, jumping on the generative AI bandwagon with the wrong model can be costly. ChatGPT, for example, is a generative AI engine that automatically produces text from written prompts in a remarkably advanced, creative, and conversational fashion. AI research lab OpenAI launched ChatGPT for public use on November 30, 2022. GPT stands for Generative Pre-trained Transformer and refers to a family of neural network-based large language models (LLMs) developed by OpenAI. ChatGPT is among the largest language models created to date, powered by a neural network with 175 billion parameters, which consumes massive amounts of computational resources and energy.

That makes it less than ideal for businesses being held accountable for ESG goals. From a cost perspective, if a company as large as DoorDash were to replace its prediction engine with this type of generative AI, it could cost as much as $600 million per day in token consumption, or a staggering $218 billion per year, as the rough sketch below illustrates. A cost model like that may be prohibitive for smaller companies adopting generative AI, causing them to fall back on other technologies, such as Robotic Process Automation (RPA), or even fallible human workers.

And, just as the hundreds of thousands of forty-niners came to realize, promise without protocols can become a real problem. Emerging regulations in
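To see how a number like that can be reached, here is a minimal back-of-the-envelope sketch in Python. Every input is a hypothetical assumption chosen purely for illustration; the request volume, tokens per call, and per-1,000-token price are not actual DoorDash or OpenAI figures. The point is only how per-token pricing compounds at enterprise scale.

```python
# Back-of-the-envelope token-cost model. All inputs are hypothetical,
# chosen only to show how a ~$600M/day figure could arise; they are
# not DoorDash's or OpenAI's actual numbers.

PRICE_PER_1K_TOKENS = 0.02        # USD; roughly GPT-3 "davinci"-era list pricing
CALLS_PER_DAY = 15_000_000_000    # assumed prediction-engine request volume
TOKENS_PER_CALL = 2_000           # assumed prompt + completion size

def daily_cost(calls: int, tokens_per_call: int, price_per_1k: float) -> float:
    """Total USD per day for `calls` requests of `tokens_per_call` tokens each."""
    return calls * tokens_per_call / 1_000 * price_per_1k

per_day = daily_cost(CALLS_PER_DAY, TOKENS_PER_CALL, PRICE_PER_1K_TOKENS)
per_year = per_day * 365

print(f"~${per_day / 1e6:,.0f}M per day, ~${per_year / 1e9:,.0f}B per year")
# -> ~$600M per day, ~$219B per year (the article rounds to $218 billion)
```

Under these assumed inputs, the arithmetic lands on roughly $600 million per day, or about $219 billion per year, in line with the article’s ballpark.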


