MacBook Air M4 24GB
Apple · M4 · 24GB Unified Memory · Can run 20 models
| Spec | Value |
|---|---|
| Manufacturer | Apple |
| Unified Memory | 24 GB |
| Chip | M4 |
| CPU Cores | 10 |
| GPU Cores | 10 |
| Neural Engine Cores | 16 |
| Memory Bandwidth | 120 GB/s |
| MSRP | $1,299 |
| Released | Mar 12, 2025 |
AI Notes
The MacBook Air M4 24GB adds meaningful headroom for local AI over the 16GB model. With 24GB of unified memory, it can run models up to roughly 15B parameters at 8-bit precision (Q8_0) and 30B-class models with 4-bit quantization, though the largest of those leave little memory to spare. The fanless, portable form factor makes it an excellent choice for developers who want local AI on the go.
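As a rough rule of thumb, a model's memory footprint is its parameter count times the effective bits per weight of the chosen quantization, plus an allowance for the KV cache and runtime buffers. The minimal Python sketch below estimates the figures in the Compatible Models table; the bits-per-weight values and the flat 1.5 GB overhead are illustrative assumptions, not exact figures for any specific runtime.

```python
# Rough back-of-envelope estimate of unified-memory use for a quantized model.
# Bits-per-weight values and the flat overhead are illustrative assumptions.

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # approximate effective bits per weight
    "Q8_0": 8.5,
    "F16": 16.0,
}

def estimate_memory_gb(params_billions: float, quant: str, overhead_gb: float = 1.5) -> float:
    """Weights plus a flat allowance for KV cache and runtime buffers."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + overhead_gb, 1)

if __name__ == "__main__":
    for name, params, quant in [
        ("Llama 3.1 8B", 8, "Q8_0"),
        ("Qwen 2.5 14B", 14, "Q4_K_M"),
        ("Qwen 2.5 32B", 32, "Q4_K_M"),
    ]:
        print(f"{name}: ~{estimate_memory_gb(params, quant)} GB")
```

Because the GPU shares this memory with macOS and other applications, estimates that land within a few gigabytes of the 24 GB ceiling correspond to the "Runs (tight)" entries in the table below.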
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit |
|---|---|---|---|---|
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs (tight) |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | CPU Offload |
5 models are too large for this hardware.