Brilliant Labs Partners With Liquid AI to Bring Vision-Language Tech to Your Glasses

The next generation of Brilliant Labs AI glasses, Halo, will be powered by Liquid AI’s lightweight vision-language Liquid Foundation Models to bring fast, reliable, and fully private intelligence to wearers’ eyes.

Liquid AI and Brilliant Labs have announced an agreement to integrate Liquid’s vision-language foundation models into Brilliant’s products. Under the agreement, Brilliant will license Liquid’s current and upcoming multimodal foundation models to enhance the general scene-understanding capability of its AI glasses.

“At Liquid, we build efficient generative AI models that demonstrate the quality and reliability of models orders of magnitude larger. Our commitment to delivering the highest quality AI solutions with the lowest energy footprint truly unlocks high-stakes use cases on any device,” said Ramin Hasani, co-founder and CEO of Liquid AI. “I strongly believe in glasses as a viable form factor for the future of hyper-personalized human-AI interaction. Brilliant Labs has been on the verge of building this future with their AI glasses products. We’re excited to bring our best-in-class, private, and efficient on-device LFMs to their customers.”

Liquid’s models will be incorporated into Brilliant’s products, including the company’s Halo AI glasses. Brilliant has quickly become the go-to open-source glasses platform for builders and creatives around the world looking to advance the future of computing. Halo’s first-of-their-kind features, such as AI memory, real-time conversational AI, and Vibe Mode, all within an open and private platform, raise the bar in the smart-wearables space and unlock new capabilities for users.

“The future of computing must be open, private, and personal,” said Bobak Tavangar, CEO of Brilliant Labs. “These are core values we share with Liquid and their incredibly innovative foundation models are a perfect fit for Halo and Brilliant’s open source AI glasses platform. The speed and efficiency of LFM2-VL-450M enables us to build a whole new class of AI features atop our glasses hardware platform and we’re just getting started.”

LFM2-VL is Liquid’s first series of vision-language foundation models, supporting both text and image inputs at variable resolutions. The model pairs a tiny but powerful 86M-parameter vision encoder with a 350M-parameter LFM2 base model. It can produce detailed, accurate, and creative descriptions of scenes captured by a camera sensor with millisecond latency on CPUs and GPUs, delivering real-time intelligence on the end device.

Source: Liquid AI and Brilliant Labs media announcement
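For readers who want a feel for what a small vision-language model like LFM2-VL-450M does in practice, the sketch below shows one way to ask such a model for a scene description of a single camera frame. This is illustrative only: the Hugging Face repository id, the transformers classes, and the prompt format are assumptions not taken from the announcement, and Brilliant’s actual Halo integration runs the model on-device rather than through this Python API.

```python
# Minimal sketch: scene description with a small vision-language model.
# Assumptions (not stated in the announcement): the checkpoint is published on
# Hugging Face as "LiquidAI/LFM2-VL-450M" and works with a recent transformers
# release through the generic image-text-to-text interface.
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

MODEL_ID = "LiquidAI/LFM2-VL-450M"  # assumed repo id, for illustration

# The processor handles variable-resolution image preprocessing and chat templating.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")

# A single frame, e.g. saved from a glasses camera sensor.
frame = Image.open("frame.jpg")

# Pair the image with a short scene-description request in chat format.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": frame},
            {"type": "text", "text": "Describe the scene in front of me in one sentence."},
        ],
    }
]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Keep the generation short; models this size target low-latency, on-device use.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```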