
NVIDIA GeForce RTX 5070

NVIDIA · 12GB GDDR7 · Can run 48 models

Manufacturer: NVIDIA
VRAM: 12 GB
Memory Type: GDDR7
Architecture: Blackwell
CUDA Cores: 6,144
Tensor Cores: 192
Bandwidth: 672 GB/s
TDP: 250 W
MSRP: $549
Released: Mar 10, 2025

AI Notes

The RTX 5070 brings next-gen GDDR7 memory with 672 GB/s of bandwidth to the mid-range. Its 12 GB of VRAM runs 7B models at high speed and fits models up to roughly 14B with 4-bit quantization, though the largest of those sit tight against the memory limit. Combined with the Blackwell architecture's efficiency improvements, this makes it one of the best-value cards for local AI.
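The speed estimates in the compatibility table below are consistent with a simple memory-bandwidth roofline: during single-stream decoding, every generated token must stream the full set of weights from VRAM, so tokens/sec is roughly bandwidth divided by model size. A minimal sketch, assuming the table's "VRAM Used" figure approximates the streamed weight size:

```python
# Bandwidth-bound decode-speed estimate: each token streams all weights
# from VRAM, so throughput ~= memory bandwidth / model size. This is an
# upper bound; real speeds also depend on compute and KV-cache traffic.

BANDWIDTH_GB_S = 672  # RTX 5070 memory bandwidth


def est_tok_per_s(model_size_gb: float,
                  bandwidth_gb_s: float = BANDWIDTH_GB_S) -> float:
    """Rough upper-bound tokens/sec for a fully GPU-resident model."""
    return bandwidth_gb_s / model_size_gb


print(round(est_tok_per_s(9)))    # 7B at Q8_0 (~9 GB)      -> 75
print(round(est_tok_per_s(2.5)))  # 0.6B at Q4_K_M (2.5 GB) -> 269
```

These match the table's ~75 tok/s for 9 GB models and ~269 tok/s for the 2.5 GB entry, which suggests the listed speeds are bandwidth-derived estimates rather than benchmarks.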

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~269 tok/s |
| Qwen 3.5 0.8B | 800M | Q4_K_M | 1.5 GB | Runs | ~448 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~336 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~224 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~224 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~168 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~204 tok/s |
| Gemma 4 E2B | 2B | Q4_K_M | 4 GB | Runs | ~168 tok/s |
| Qwen 3.5 2B | 2B | Q4_K_M | 3 GB | Runs | ~224 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~134 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~116 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~149 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~134 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~149 tok/s |
| Gemma 4 E4B | 4B | Q4_K_M | 6 GB | Runs | ~112 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~149 tok/s |
| Qwen 3.5 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~149 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~75 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~99 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~75 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~75 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~75 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~96 tok/s |
| Aya Expanse 8B | 8B | Q4_K_M | 6.5 GB | Runs | ~103 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~90 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~90 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~67 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~90 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~90 tok/s |
| Qwen 3.5 9B | 9B | Q4_K_M | 7.5 GB | Runs | ~90 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~79 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~79 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~71 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~68 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~68 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~68 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs (tight) | ~61 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs (tight) | ~64 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs (tight) | ~61 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~17 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~17 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | CPU Offload | ~12 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | CPU Offload | ~14 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | CPU Offload | ~12 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | CPU Offload | ~12 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | CPU Offload | ~11 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | CPU Offload | ~11 tok/s |
| Qwen 3.5 35B A3B | 35B | Q4_K_M | 12 GB | CPU Offload | ~17 tok/s |
36 models are too large for this hardware.
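The table's "Fit" column appears to follow from comparing a model's memory footprint against the card's 12 GB of VRAM: models with spare headroom run comfortably, models that nearly fill VRAM run tight, and anything at or over the limit spills layers to system RAM (CPU offload), with a large speed penalty. A sketch of that classification, where the ~85% headroom threshold is an assumption inferred from the table rather than a documented rule:

```python
# Hedged sketch of a fit classifier for a 12 GB card. The headroom
# threshold (85% of VRAM, leaving room for KV cache and activations)
# is an assumption inferred from the table's Runs / Runs (tight) split.

VRAM_GB = 12  # RTX 5070


def fit_class(vram_used_gb: float, vram_gb: float = VRAM_GB) -> str:
    if vram_used_gb >= vram_gb:        # no room for context -> spill to RAM
        return "CPU Offload"
    if vram_used_gb > 0.85 * vram_gb:  # fits, but little headroom
        return "Runs (tight)"
    return "Runs"


print(fit_class(9))   # Mistral 7B Q8_0 (9 GB)        -> Runs
print(fit_class(11))  # Phi-4 Reasoning 14B (11 GB)   -> Runs (tight)
print(fit_class(12))  # Qwen 3 14B Q4_K_M (12 GB)     -> CPU Offload
```

Note the drop from ~61 tok/s (tight, fully GPU-resident) to ~17 tok/s (offloaded): once layers live in system RAM, decoding is throttled by the much lower CPU-to-GPU and DRAM bandwidth rather than the card's 672 GB/s.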