
MacBook Air M2 8GB

Apple · M2 · 8GB Unified Memory · Can run 43 models

Manufacturer: Apple
Unified Memory: 8 GB
Chip: M2
CPU Cores: 8
GPU Cores: 8
Neural Engine Cores: 16
Memory Bandwidth: 100 GB/s
MSRP: $1,099
Released: Jul 15, 2022

AI Notes

The MacBook Air M2 8GB brings a faster memory bus at 100 GB/s, up from the M1's 68 GB/s, but its 8 GB of unified memory still caps it at models of roughly 7B parameters. At those sizes the extra bandwidth translates into noticeably faster token generation. A capable ultrabook for light local AI work.
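The bandwidth claim above can be sanity-checked with the usual rule of thumb: decoding is memory-bandwidth bound, so the token rate is roughly how many times per second the weights can be streamed from memory. A minimal sketch (the 0.7 efficiency factor is an illustrative assumption, not a measured value):

```python
# Rule of thumb for memory-bandwidth-bound decoding:
# tokens/s ≈ efficiency × (memory bandwidth / model size in memory)
def est_tokens_per_sec(bandwidth_gb_s: float, model_gb: float,
                       efficiency: float = 0.7) -> float:
    """Upper-bound token rate, scaled by an assumed efficiency factor."""
    return efficiency * bandwidth_gb_s / model_gb

# M2 (100 GB/s) vs M1 (68 GB/s) on a ~4.5 GB Q4_K_M 7B model:
print(round(est_tokens_per_sec(100, 4.5), 1))  # ≈ 15.6
print(round(est_tokens_per_sec(68, 4.5), 1))   # ≈ 10.6
```

The M2 estimate lands near the ~15 tok/s the table below lists for 7B Q4_K_M models, which is why the bandwidth bump matters more than the core counts for this workload.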

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~40 tok/s |
| Qwen 3.5 0.8B | 800M | Q4_K_M | 1.5 GB | Runs | ~67 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~50 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~33 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~33 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~25 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~30 tok/s |
| Gemma 4 E2B | 2B | Q4_K_M | 4 GB | Runs | ~25 tok/s |
| Qwen 3.5 2B | 2B | Q4_K_M | 3 GB | Runs | ~33 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~20 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~17 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~22 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~20 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~22 tok/s |
| Gemma 4 E4B | 4B | Q4_K_M | 6 GB | Runs | ~17 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~22 tok/s |
| Qwen 3.5 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~22 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~15 tok/s |
| Aya Expanse 8B | 8B | Q4_K_M | 6.5 GB | Runs | ~15 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs (tight) | ~14 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~13 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~13 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~13 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~13 tok/s |
| Qwen 3.5 9B | 9B | Q4_K_M | 7.5 GB | Runs (tight) | ~13 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~3 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~3 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~3 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~3 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | CPU Offload | ~3 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | CPU Offload | ~3 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | CPU Offload | ~4 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | CPU Offload | ~4 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | CPU Offload | ~3 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | CPU Offload | ~3 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~3 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~3 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | CPU Offload | ~3 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~3 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~2 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~2 tok/s |
| StarCoder2 15B | 15B | Q4_K_M | 10.5 GB | CPU Offload | ~3 tok/s |
| Qwen 3.5 35B A3B | 35B | Q4_K_M | 12 GB | CPU Offload | ~2 tok/s |
41 models are too large for this hardware.
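The memory figures in the table can be ballparked from parameter count and quantization: weights take roughly params × bits-per-weight ÷ 8, plus an allowance for the KV cache and runtime buffers. A rough sketch (the ~4.8 effective bits for Q4_K_M, ~8.5 for Q8_0, and the 1 GB overhead default are illustrative assumptions; actual usage depends on context length and runtime):

```python
# Ballpark memory footprint of a quantized model:
# weights ≈ params (billions) × bits per weight / 8 → GB,
# plus a flat allowance for KV cache and runtime buffers.
def est_model_gb(params_b: float, bits_per_weight: float,
                 overhead_gb: float = 1.0) -> float:
    """Approximate in-memory size of a quantized model, in GB."""
    return params_b * bits_per_weight / 8 + overhead_gb

# 8B at Q4_K_M (~4.8 effective bits/weight):
print(round(est_model_gb(8, 4.8), 1))   # ≈ 5.8
# 8B at Q8_0 (~8.5 effective bits/weight):
print(round(est_model_gb(8, 8.5), 1))   # ≈ 9.5
```

This is why the table's 7B–9B entries fit only at Q4_K_M on an 8 GB machine, while the same models at Q8_0 spill over into CPU offload.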