NVIDIA Supports Gemma 3n on Jetson and RTX

NVIDIA announced that it now supports the general availability of Gemma 3n on NVIDIA RTX and Jetson. Gemma 3n, previewed by Google DeepMind at Google I/O last month, includes two new models optimized for multimodal on-device deployment. Gemma 3n adds audio to the text and vision capabilities introduced in Gemma 3. Each component integrates trusted research models: the Universal Speech Model for audio, MobileNet v4 for vision, and MatFormer for text.

The biggest usability advancement is an innovation called Per-Layer Embeddings (PLE), which significantly reduces the RAM that model parameters require. The Gemma 3n E4B model has a raw parameter count of 8B but can operate with a dynamic memory footprint comparable to that of a 4B model. This lets developers use a higher-quality model in resource-constrained environments (a back-of-the-envelope sketch later in this post illustrates the idea).

Powering robotics and edge AI with Jetson

The Gemma family of models works well on NVIDIA Jetson devices, which are geared toward powering edge applications such as next-generation robotics. The lightweight architecture and, now, dynamic memory usage are a good fit for resource-constrained environments.

Jetson developers can participate in the Gemma 3n Impact Challenge hosted on Kaggle. The aim is to use this technology to create meaningful, positive change in areas such as accessibility, education, healthcare, environmental sustainability, and crisis response. Several cash prizes, starting at $10,000, are available for overall placement and for submissions that use technologies suited to on-device deployment, such as Jetson.

To get started, check out the live text and image demo from the Gemma 3 Developer Day in April and the GitHub repository for deploying Gemma locally using Ollama.

NVIDIA RTX for Windows developers and AI enthusiasts

With NVIDIA RTX AI PCs, developers can easily deploy Gemma 3n models using Ollama. AI enthusiasts can use Gemma 3n models with RTX acceleration in their favorite apps, such as AnythingLLM and LM Studio.
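To make the memory advantage concrete, here is the back-of-the-envelope sketch of the Per-Layer Embeddings savings mentioned earlier. The even parameter split and fp16 weight size are illustrative assumptions, not official Gemma 3n figures.

```python
# Rough memory estimate for Per-Layer Embeddings (PLE).
# The 50/50 split between PLE and core parameters is a hypothetical
# illustration, not the official Gemma 3n architecture breakdown.

BYTES_PER_PARAM = 2  # fp16/bf16 weights

total_params = 8e9  # Gemma 3n E4B raw parameter count
ple_params = 4e9    # assumed share that PLE can keep off the accelerator

def footprint_gb(params: float) -> float:
    """Approximate accelerator memory needed to hold the weights."""
    return params * BYTES_PER_PARAM / 1e9

print(f"All weights resident: {footprint_gb(total_params):.1f} GB")               # 16.0 GB
print(f"With PLE offloaded:   {footprint_gb(total_params - ple_params):.1f} GB")  # 8.0 GB
```

With the embedding parameters kept off the accelerator in this way, the resident footprint is comparable to that of a 4B-parameter model, consistent with the E4B ("effective 4B") naming.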
Developers can deploy Gemma 3n locally to both RTX and Jetson devices with a few simple commands using the Ollama CLI:

```
# Download the Gemma 3n E4B model, then run a one-off prompt
ollama pull gemma3n:e4b
ollama run gemma3n:e4b "Summarize Shakespeare's Hamlet"
```

NVIDIA collaborates with Ollama to provide performance optimizations for NVIDIA RTX GPUs, accelerating the latest models such as Gemma 3n. For this model, Ollama leverages its own engine in the backend, which builds on the GGML library. Learn more about NVIDIA's contributions to the GGML library for maximum performance on NVIDIA RTX GPUs.

Customize Gemma for your data with the open NVIDIA NeMo Framework

Developers can use the Gemma 3n models from Hugging Face with the open source NVIDIA NeMo Framework, which provides a comprehensive framework for post-training Gemma models to achieve higher accuracy, specifically through fine-tuning with enterprise-specific data. The workflow within NeMo is designed to be end-to-end, covering data preparation, efficient fine-tuning, and model evaluation.
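NeMo's documented recipes are the supported path for that workflow; as a lightweight illustration of the same post-training idea, here is a minimal LoRA fine-tuning sketch using Hugging Face transformers, peft, and datasets. The checkpoint ID, dataset file, and attention-module names are placeholder assumptions, not values from the announcement.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Assumed checkpoint name; downloading requires accepting the Gemma
# license on Hugging Face.
MODEL_ID = "google/gemma-3n-E4B-it"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Wrap the base model with low-rank adapters; only these small matrices
# train. Target module names are assumed attention projections.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Placeholder dataset: one JSON object with a "text" field per line.
dataset = load_dataset("json", data_files="enterprise_data.jsonl",
                       split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True,
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma3n-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # Causal-LM collator pads batches and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gemma3n-lora")  # writes only the small adapter weights
```

After training, the saved adapter can be merged into the base weights or loaded alongside them at inference time; keeping the trainable parameter count small suits the same resource-constrained environments Gemma 3n targets.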
Advancing community models and collaboration

NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to open models such as Gemma, which promote AI transparency and let users broadly share work on AI safety and resilience.

Source: NVIDIA media announcement