Gemma 3, Google's latest open-weight multimodal AI model, is a groundbreaking tool capable of processing text, images, and short videos. Designed for accessibility, efficiency, and versatility, it is an excellent choice for developers and researchers.
This guide provides a detailed walkthrough on running Gemma 3 on Ubuntu, covering prerequisites, installation methods, and optimization tips.
Key Applications:
Gemma 3 is well suited to applications such as content creation, multilingual translation, medical image analysis, and autonomous systems.
Before installing Gemma 3 on Ubuntu, make sure your system meets the requirements for the model size you plan to run: the larger variants need a modern NVIDIA GPU with sufficient VRAM, while the smaller models can run on CPU.
There are two primary methods to run Gemma 3 on Ubuntu: using Ollama or Hugging Face Transformers. Both approaches are covered below.
Ollama simplifies running AI models locally. Follow these steps:
Update System Packages
sudo apt update && sudo apt upgrade -y
Install GPU Utilities Ensure your GPU is detected and properly configured:
sudo apt install pciutils lshw -y
Install Ollama Download and install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Start the Ollama Server Launch the server (on Linux the installer usually registers Ollama as a systemd service, so it may already be running):
ollama serve
Install a Gemma 3 Model Run the command matching the model size you want:
ollama run gemma3:1b
ollama run gemma3:4b
ollama run gemma3:12b
ollama run gemma3:27b
Verify Installation Check that the model is available locally:
ollama list
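Beyond the command line, the Ollama server also exposes a local REST API on port 11434, which is handy for scripting a quick sanity check. A minimal sketch using only the standard library (the model tag gemma3:4b is just an example; use whichever variant you pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="gemma3:4b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="gemma3:4b"):
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Explain quantization in one sentence."))
```

Setting "stream" to False returns the full completion in one JSON object instead of a stream of chunks, which keeps the client simple.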
Hugging Face provides flexibility for developers familiar with Python and machine learning.
Install Python Dependencies
pip install transformers torch torchvision accelerate
Download Pretrained Weights
from transformers import AutoModelForCausalLM

# Model IDs on the Hub are size-specific, e.g. "google/gemma-3-1b-it";
# pick the variant you want (access must be requested on Hugging Face first).
model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
Run Inference
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")
inputs = tokenizer("Your input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Fine-Tune the Model (Optional)
from peft import LoraConfig, get_peft_model

config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)
model = get_peft_model(model, config)
# Proceed with fine-tuning...
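The instruction-tuned Gemma variants expect prompts in a specific chat format. The tokenizer's apply_chat_template method handles this for you, but the format is also simple enough to reproduce by hand. A minimal sketch (the turn markers below follow Gemma's published prompt format; verify against the model card for the exact variant you use):

```python
def format_gemma_chat(messages):
    """Render a list of {"role", "content"} dicts into Gemma's turn format.

    Gemma uses <start_of_turn>/<end_of_turn> markers with the roles
    "user" and "model"; the prompt ends with an open model turn so the
    model continues from there.
    """
    parts = []
    for msg in messages:
        role = "model" if msg["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = format_gemma_chat([{"role": "user", "content": "Hello!"}])
print(prompt)
```

Feeding a plain string to an instruction-tuned model without this framing usually still works, but responses track the training distribution better when the turn markers are present.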
If running Gemma 3 on consumer-grade hardware:
Use a smaller model such as gemma3:1b or gemma3:4b.
Use quantized weights to reduce memory usage. Ollama's default model tags are typically already 4-bit quantized, and llama.cpp offers GGUF builds of Gemma at several precisions.
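The memory impact of quantization is easy to estimate: weight memory is roughly the parameter count times the bytes per parameter (activations and the KV cache add overhead on top of this). A back-of-the-envelope sketch:

```python
def weight_memory_gb(num_params_billion, bits_per_param):
    """Approximate memory for model weights alone, in gigabytes."""
    bytes_total = num_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# Compare precisions for a 4B-parameter model such as gemma3:4b.
for bits in (16, 8, 4):
    print(f"4B params at {bits}-bit: ~{weight_memory_gb(4, bits):.0f} GB")
```

At 16-bit precision a 4B-parameter model needs roughly 8 GB for weights alone, while 4-bit quantization brings that down to about 2 GB, which is why quantized variants fit comfortably on consumer GPUs.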
Ensure the NVIDIA driver and CUDA are installed and properly configured; nvidia-smi should report your GPU:
nvidia-smi
Running Gemma 3 on Ubuntu opens up a world of possibilities for developers and researchers. By following this guide, you can harness the power of this state-of-the-art AI model for applications ranging from content generation to advanced image analysis.
Need expert guidance? Connect with a top Codersera professional today!