Gemma 3 27B
by Google · gemma-3 family
27B
parameters
text-generation code-generation reasoning multilingual vision math creative-writing summarization
Gemma 3 27B is the flagship of the Gemma 3 family: a multimodal powerhouse that competes with models in the 65B+ class on many benchmarks. It supports a 128K-token context and handles text, images, and complex reasoning tasks with high quality. At Q4 quantization it needs about 20 GB of VRAM, fitting well on an RTX 3090/4090 or a Mac with 24 GB+ of unified memory. An excellent choice for users who want near-frontier performance from a single consumer GPU.
Quick Start with Ollama

```
ollama run gemma3:27b-it-q4_K_M
```

| Spec | Value |
|---|---|
| Creator | Google |
| Parameters | 27B |
| Architecture | transformer-decoder |
| Context | 128K tokens |
| Released | Mar 12, 2025 |
| License | Gemma Terms of Use |
| Ollama | gemma3:27b |
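Beyond the CLI, Ollama serves a local REST API (by default on `http://localhost:11434`). The sketch below is a minimal, standard-library-only client; it assumes a running Ollama server with the `gemma3:27b` model already pulled, and uses Ollama's documented `/api/generate` endpoint with `"stream": false` to get one complete JSON response.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    "stream": False asks the server for a single complete JSON object
    instead of a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "gemma3:27b") -> str:
    """Send a prompt to a locally running Ollama server and return the text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the generated text under "response".
        return json.loads(resp.read())["response"]
```

Usage is a single call once the server is up, e.g. `generate("Why is the sky blue?")`.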
Quantization Options
| Format | File Size | VRAM Required | Quality | Ollama Tag |
|---|---|---|---|---|
| Q4_K_M (recommended) | 17 GB | 20 GB | balanced | 27b-it-q4_K_M |
| Q8_0 | 30 GB | 34 GB | near-lossless | 27b-it-q8_0 |
| F16 | 55 GB | 60 GB | full precision | 27b-it-fp16 |
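The file sizes above follow roughly from bits per weight. The sketch below is a back-of-the-envelope estimate; the ~4.85 bits/weight figure for Q4_K_M is an assumed average for llama.cpp K-quants (which mix precisions across tensors), and real files run a few GB larger than the raw weight math because of embeddings and metadata.

```python
# Approximate average bits per weight for common llama.cpp quant formats.
# These are assumptions, not exact values: K-quants store different tensors
# at different precisions, and Q8_0 adds per-block scale factors.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # ~4-bit K-quant with some higher-precision tensors
    "Q8_0": 8.5,     # 8-bit weights plus per-block scales
    "F16": 16.0,     # full half precision
}

def file_size_gb(params_billion: float, fmt: str) -> float:
    """Estimated on-disk size in decimal GB: params * bits / 8 bits-per-byte."""
    return params_billion * 1e9 * BITS_PER_WEIGHT[fmt] / 8 / 1e9

for fmt in BITS_PER_WEIGHT:
    print(f"{fmt}: ~{file_size_gb(27, fmt):.0f} GB")
```

For 27B parameters this lands near the table's figures (F16 comes out at exactly 54 GB of raw weights); the VRAM column is larger still because the KV cache and activations sit on top of the weights.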
Compatible Hardware
At Q4_K_M the model needs about 20 GB of VRAM, so it fits on a 24 GB GPU such as an RTX 3090 or 4090, or on a Mac with 24 GB+ of unified memory. The Q8_0 and F16 variants exceed single consumer-GPU capacity and call for multi-GPU or high-memory workstation setups.
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU | 78.5 |