DeepSeek R1 14B

by DeepSeek · deepseek-r1 family

14B parameters

Tags: text-generation · code-generation · reasoning · math

DeepSeek R1 14B is a distilled reasoning model based on the Qwen 2.5 14B architecture, trained to replicate the chain-of-thought reasoning capabilities of the full DeepSeek R1 671B model. It offers a substantial reasoning improvement over the 7B variant. This model is particularly strong at mathematical reasoning, competitive programming problems, and scientific analysis. It fits comfortably on GPUs with 16GB VRAM at Q4 quantization, making it one of the most accessible dedicated reasoning models available.

Quick Start with Ollama

ollama run deepseek-r1:14b
Creator: DeepSeek
Parameters: 14B
Architecture: transformer-decoder
Context Length: 128K tokens
License: MIT
Released: Jan 20, 2025
Ollama tag: deepseek-r1:14b
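Beyond the CLI, a pulled model can be queried programmatically through Ollama's local REST API. A minimal sketch, assuming a server running at the default `localhost:11434` endpoint (the prompt text is illustrative):

```python
import json
from urllib import request

# Ollama's local generate endpoint (default port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

# Request payload for a single, non-streaming completion;
# the model tag matches the Ollama tag on this card.
payload = {
    "model": "deepseek-r1:14b",
    "prompt": "What is the sum of the first 100 positive integers?",
    "stream": False,
}

def generate(url: str = OLLAMA_URL) -> str:
    """POST the payload to a running Ollama server and return the response text."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a server running, calling `generate()` returns the model's output; as with other R1-family distills, the text includes the model's chain-of-thought before the final answer.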

Quantization Options

| Format | File Size | VRAM Required | Ollama Tag |
| --- | --- | --- | --- |
| Q4_K_M (recommended) | 7.1 GB | 9.9 GB | `14b-q4_K_M` |
| Q5_K_M | 8.2 GB | 11.3 GB | `14b-q5_K_M` |
| Q8_0 | 12.6 GB | 16 GB | `14b-q8_0` |
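The file sizes follow from bits stored per weight. A quick sanity check of the effective bit-width implied by each size, using the figures from the table above and a nominal 14e9 parameter count (the true count is slightly higher, so these are approximations):

```python
PARAMS = 14e9  # nominal parameter count from the card; exact count is slightly higher

# (format, file size in GB) from the quantization table above
files = {"Q4_K_M": 7.1, "Q5_K_M": 8.2, "Q8_0": 12.6}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Effective bits stored per parameter: total file bits / parameter count."""
    return size_gb * 1e9 * 8 / params

for fmt, size in files.items():
    print(f"{fmt}: ~{bits_per_weight(size):.1f} bits/weight")
# → roughly 4.1, 4.7, and 7.2 bits/weight respectively
```

The sub-nominal figures (e.g. ~7.2 rather than 8.0 for Q8_0) are expected, since the nominal 14B understates the real parameter count.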

Compatible Hardware for Q4_K_M

Showing compatibility for the recommended quantization (Q4_K_M, 9.9 GB VRAM).
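The same VRAM figures can drive a simple fit check for any GPU. A sketch using the requirements from the quantization table above (note this ignores extra headroom for the KV cache and long contexts, which adds real overhead in practice):

```python
# VRAM required per quantization (GB), from the quantization table above
VRAM_REQUIRED = {"Q4_K_M": 9.9, "Q5_K_M": 11.3, "Q8_0": 16.0}

def usable_quants(gpu_vram_gb: float) -> list[str]:
    """Return the quantization formats whose VRAM requirement fits the given GPU."""
    return [q for q, need in VRAM_REQUIRED.items() if need <= gpu_vram_gb]

print(usable_quants(12.0))  # a 12 GB card fits Q4_K_M and Q5_K_M
print(usable_quants(16.0))  # a 16 GB card fits all three formats
```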

Benchmark Scores

MMLU: 79.7