Qwen 3 32B
by Alibaba · qwen-3 family
32B parameters
text-generation · code-generation · reasoning · multilingual · math · tool-use · creative-writing · summarization
Qwen 3 32B is the largest dense model in the Qwen 3 family, delivering near-frontier performance across coding, math, reasoning, and creative writing. Its hybrid thinking mode lets it compete with 70B-class models on complex tasks. At Q4 it needs about 23 GB of VRAM, so it fits on an RTX 3090/5090 or a Mac with 24 GB+ of unified memory. An excellent choice for users with high-end hardware who want the best dense-model experience.
Quick Start with Ollama
```
ollama run qwen3:32b-q4_K_M
```

| Spec | Value |
|---|---|
| Creator | Alibaba |
| Parameters | 32B |
| Architecture | transformer-decoder |
| Context | 128K tokens |
| Released | Apr 29, 2025 |
| License | Apache 2.0 |
| Ollama | qwen3:32b |
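Beyond the CLI, Ollama also serves a local REST API (default port 11434), which is the usual way to call the model from code. A minimal sketch using only Python's standard library; the prompt text is illustrative, and the HTTP call is wrapped in a try/except so the script degrades gracefully when no Ollama server is running:

```python
import json
import urllib.request

# Build a request payload for Ollama's /api/generate endpoint.
payload = {
    "model": "qwen3:32b",  # tag from the table above
    "prompt": "Explain what a mutex is in one paragraph.",  # illustrative prompt
    "stream": False,  # ask for a single JSON object instead of a token stream
}
print(json.dumps(payload, indent=2))

# Attempt the call only if a local Ollama server is reachable.
try:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["response"])
except OSError:
    print("Ollama server not reachable; start it with `ollama serve`.")
```

Setting `"stream": True` instead returns newline-delimited JSON chunks as tokens are generated, which suits interactive UIs.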
Quantization Options
| Format | File Size | VRAM Required | Quality | Ollama Tag |
|---|---|---|---|---|
| Q4_K_M (recommended) | 20 GB | 23 GB | Good | 32b-q4_K_M |
| Q8_0 | 34.5 GB | 39 GB | Near-lossless | 32b-q8_0 |
| F16 | 65 GB | 70 GB | Full precision | 32b-fp16 |
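The file sizes in the table follow directly from bits per weight: a rough estimate is parameters × bits ÷ 8, with VRAM needs a few GB higher to cover the KV cache and runtime buffers. A sketch of the arithmetic, assuming ~4.85 effective bits per weight for Q4_K_M and ~8.5 for Q8_0 (commonly cited approximations, not official figures):

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk size of a quantized model in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assumed effective bits per weight for each format.
for name, bpw in [("Q4_K_M", 4.85), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{model_size_gb(32, bpw):.1f} GB")
# Q4_K_M ≈ 19.4 GB, Q8_0 ≈ 34.0 GB, F16 = 64.0 GB,
# close to the 20 / 34.5 / 65 GB file sizes listed above.
```

The small gap between these estimates and the listed sizes comes from metadata and the fact that k-quants mix bit widths across tensor types.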
Compatible Hardware
Q4_K_M requires about 23 GB of VRAM, so it fits on an RTX 3090/5090 or a Mac with 24 GB+ of unified memory.
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU | 84.0 |