NVIDIA GeForce RTX 4060 Ti 8GB
NVIDIA · 8GB GDDR6 · Can run 13 models
| Spec | Value |
|---|---|
| Manufacturer | NVIDIA |
| VRAM | 8 GB |
| Memory Type | GDDR6 |
| Architecture | Ada Lovelace |
| CUDA Cores | 4,352 |
| Tensor Cores | 136 |
| TDP | 160W |
| MSRP | $399 |
| Released | May 24, 2023 |
AI Notes
The RTX 4060 Ti 8GB is limited for AI workloads due to its 8GB VRAM constraint. Models up to roughly 4B parameters run fully in VRAM at 8-bit quantization; 7B-class and larger models need 4-bit quantization or partial CPU offload, which cuts throughput. For AI-focused use, the 16GB variant is strongly recommended over this model.
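A rough way to see why 8 GB is the cutoff is to estimate VRAM as the quantized weight size plus a flat allowance for the KV cache and runtime buffers. This is a back-of-the-envelope sketch, not a measured figure: the 1.5 GB overhead and the effective bits-per-weight values (Q8_0 stores ~8.5 bits/weight once block scales are included, Q4_K_M roughly 4.8) are assumptions.

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: quantized weights plus a flat
    allowance for KV cache and runtime buffers.
    The 1.5 GB overhead is an assumed rule of thumb."""
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb + overhead_gb, 1)

# 7B at Q8_0 (~8.5 effective bits/weight) lands near 9 GB -- over budget
print(estimate_vram_gb(7, 8.5))
# 14B at Q4_K_M (~4.8 effective bits/weight) lands near 9.9 GB
print(estimate_vram_gb(14, 4.8))
```

Both estimates land above the card's 8 GB, which is why every 7B+ row in the table below falls back to CPU offload.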
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit |
|---|---|---|---|---|
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | CPU Offload |
| Mistral 7B | 7B | Q8_0 | 9 GB | CPU Offload |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | CPU Offload |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | CPU Offload |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | CPU Offload |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | CPU Offload |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload |
12 models are too large for this hardware.
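The Fit column above reduces to a simple budget check: if the estimated VRAM fits in the card's 8 GB, the model runs on-GPU; otherwise layers spill to system RAM. A minimal sketch of that rule, using VRAM figures taken from the table (the threshold logic itself is an assumption about how the listing classifies models):

```python
VRAM_GB = 8  # RTX 4060 Ti 8GB

# (model @ quant, estimated VRAM needed in GB) -- figures from the table above
models = [
    ("Llama 3.2 3B @ Q8_0", 5.0),
    ("Phi-3 Mini 3.8B @ Q8_0", 5.8),
    ("Mistral 7B @ Q8_0", 9.0),
    ("Phi-4 14B @ Q4_K_M", 9.9),
]

for name, needed in models:
    # Fits entirely in VRAM -> "Runs"; otherwise layers spill to CPU
    fit = "Runs" if needed <= VRAM_GB else "CPU Offload"
    print(f"{name}: {fit}")
```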