Fastly AI Accelerator Reaches GA

Fastly AI Accelerator Helps Developers Unleash the Power of Generative AI

Fastly expands support to include OpenAI ChatGPT and Microsoft Azure AI Foundry

Fastly announced the general availability of Fastly AI Accelerator, a semantic caching solution created to address the critical performance and cost challenges developers face when building generative AI applications on large language models (LLMs). Fastly AI Accelerator delivers an average of 9x faster response times. Initially released in beta with support for OpenAI ChatGPT, it is now also available with Microsoft Azure AI Foundry.

"AI is helping developers create so many new experiences, but too often at the expense of performance for end-users. Too often, today’s AI platforms make users wait,” said Kip Compton, Chief Product Officer at Fastly. “With Fastly AI Accelerator we’re already averaging 9x faster response times and we’re just getting started. We want everyone to join us in the quest to make AI faster and more efficient.”

Fastly AI Accelerator can be a game-changer for developers looking to optimize their LLM generative AI applications. To use its semantic caching, developers point their application at a new API endpoint, typically a one-line code change. With that change in place, instead of sending every individual call back to the AI provider, Fastly AI Accelerator uses the Fastly Edge Cloud Platform to serve a cached response for repeated queries. This approach improves performance, lowers costs, and ultimately delivers a better experience for developers.
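As a rough illustration, the snippet below sketches what that one-line change could look like using the OpenAI Python SDK, which accepts a custom base URL. The endpoint URL, API key, and model name here are placeholders for this example, not values from the announcement:

    from openai import OpenAI

    # Before: requests go directly to the AI provider.
    # client = OpenAI(api_key="YOUR_API_KEY")

    # After: point the client at a Fastly AI Accelerator endpoint.
    # The URL below is a placeholder; the real endpoint would come
    # from your Fastly configuration.
    client = OpenAI(
        api_key="YOUR_API_KEY",
        base_url="https://your-accelerator.example.com/v1",
    )

    # Calls are otherwise unchanged. Repeated (semantically similar)
    # queries can now be answered from Fastly's edge cache instead of
    # making a fresh round trip to the model.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is semantic caching?"}],
    )
    print(response.choices[0].message.content)

Because only the client's base URL changes, the rest of the application, including prompts and response handling, stays exactly as it was.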

"Fastly AI Accelerator is a significant step towards addressing the performance bottleneck accompanying the generative AI boom,” said Dave McCarthy, Research Vice President, Cloud and Edge Services at IDC. “This move solidifies Fastly's position as a key player in the fast-evolving edge cloud landscape. The unique approach of using semantic caching to reduce API calls and costs unlocks the true potential of LLM generative AI apps without compromising on speed or efficiency, allowing Fastly to enhance the user experience and empower developers."

Source: Fastly media announcement