AMD Radeon RX 7900 XT
AMD · 20GB GDDR6 · Can run 20 models
| Specification | Value |
|---|---|
| Manufacturer | AMD |
| VRAM | 20 GB |
| Memory Type | GDDR6 |
| Architecture | RDNA 3 |
| Stream Processors | 5,376 |
| TDP | 315W |
| MSRP | $899 |
| Released | Dec 13, 2022 |
AI Notes
The RX 7900 XT offers an unusual 20GB of VRAM, sitting between the common 16GB and 24GB tiers. It comfortably runs models up to roughly 14B parameters, fits ~27B models with 4-bit quantization (if tightly), and needs partial CPU offload beyond that. ROCm support continues to improve, making the card a viable alternative to NVIDIA for local AI inference.
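As a rough rule of thumb, the VRAM figures in the table below can be approximated from the parameter count and the bits per weight of the chosen quantization, plus a fixed allowance for the KV cache and runtime buffers. The Python sketch below is an illustration only; the bits-per-weight values and the 1.5 GB overhead are assumptions tuned to roughly match the table, not vendor-published figures.

```python
# Rough VRAM estimate for a quantized model: weights stored at the quant's
# bits per weight, plus a fixed allowance for KV cache and runtime buffers.
# QUANT_BITS and the 1.5 GB overhead are assumptions, not official figures.
QUANT_BITS = {"Q8_0": 8.5, "Q4_K_M": 4.8}

def estimate_vram_gb(params_billions: float, quant: str, overhead_gb: float = 1.5) -> float:
    weight_gb = params_billions * QUANT_BITS[quant] / 8  # GB for the weights alone
    return weight_gb + overhead_gb                        # plus cache/buffer overhead

if __name__ == "__main__":
    for name, params, quant in [
        ("Llama 3.1 8B", 8, "Q8_0"),
        ("Qwen 2.5 14B", 14, "Q4_K_M"),
        ("Qwen 2.5 32B", 32, "Q4_K_M"),
    ]:
        print(f"{name} ({quant}): ~{estimate_vram_gb(params, quant):.1f} GB")
```

Running this prints roughly 10.0, 9.9, and 20.7 GB, matching the corresponding rows in the table below.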
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit |
|---|---|---|---|---|
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs (tight) |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | CPU Offload |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | CPU Offload |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | CPU Offload |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | CPU Offload |
5 models are too large for this hardware.
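The Fit labels above appear to follow a simple threshold against the card's 20 GB of VRAM. The sketch below reproduces that labeling under assumed cutoffs (the exact rules used here are not published): up to about 85% of VRAM is "Runs", up to the full 20 GB is "Runs (tight)", and anything larger falls back to CPU offload.

```python
# Assumed fit thresholds for a 20 GB card. The 0.85 cutoff is chosen to
# reproduce the labels in the table above, not a documented rule.
VRAM_GB = 20.0

def fit_label(vram_used_gb: float, vram_total_gb: float = VRAM_GB) -> str:
    if vram_used_gb <= 0.85 * vram_total_gb:
        return "Runs"
    if vram_used_gb <= vram_total_gb:
        return "Runs (tight)"
    return "CPU Offload"  # spill some layers to system RAM at reduced speed

for name, used_gb in [("StarCoder2 15B", 17.0), ("Gemma 2 27B", 17.7), ("DeepSeek R1 32B", 20.7)]:
    print(f"{name}: {fit_label(used_gb)}")
```

With these cutoffs, StarCoder2 15B is labeled "Runs", Gemma 2 27B "Runs (tight)", and DeepSeek R1 32B "CPU Offload", in line with the table.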