🌟 GitHub | 📥 Model Download | 📄 Paper Link | 📄 arXiv Paper Link

# DeepSeek-OCR: Contexts Optical Compression

Explore the boundaries of visual-text compression.



## Usage

Inference using Hugging Face Transformers on NVIDIA GPUs. Requirements tested on Python 3.12.9 + CUDA 11.8:


```shell
torch==2.6.0
transformers==4.46.3
tokenizers==0.20.3
einops
addict
easydict
pip install flash-attn==2.7.3 --no-build-isolation
```
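Before loading the model, it can be worth confirming that the pinned versions above are what is actually installed. This is a small illustrative check, not part of the original instructions:

```python
# Minimal environment check (sketch): verify the pinned versions before
# loading DeepSeek-OCR; the expected values come from the list above.
import torch, transformers, tokenizers

assert torch.cuda.is_available(), "this inference path assumes an NVIDIA GPU"
print("torch", torch.__version__)                # expected 2.6.0
print("transformers", transformers.__version__)  # expected 4.46.3
print("tokenizers", tokenizers.__version__)      # expected 0.20.3
```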


```python
from transformers import AutoModel, AutoTokenizer
import torch
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0'

model_name = 'deepseek-ai/DeepSeek-OCR'

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

# infer(self, tokenizer, prompt='', image_file='', output_path=' ', base_size=1024, image_size=640, crop_mode=True, test_compress=False, save_results=False)

# Resolution presets:
# Tiny:   base_size = 512,  image_size = 512,  crop_mode = False
# Small:  base_size = 640,  image_size = 640,  crop_mode = False
# Base:   base_size = 1024, image_size = 1024, crop_mode = False
# Large:  base_size = 1280, image_size = 1280, crop_mode = False
# Gundam: base_size = 1024, image_size = 640,  crop_mode = True

res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path, base_size=1024, image_size=640, crop_mode=True, save_results=True, test_compress=True)
```
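Since the presets map directly onto `base_size`, `image_size`, and `crop_mode`, it can be convenient to keep them in a lookup table. Below is a minimal sketch; the `MODES` dict and `ocr_with_mode` helper are illustrative additions, not part of the model card:

```python
# Hypothetical convenience wrapper around model.infer using the presets above.
MODES = {
    "tiny":   dict(base_size=512,  image_size=512,  crop_mode=False),
    "small":  dict(base_size=640,  image_size=640,  crop_mode=False),
    "base":   dict(base_size=1024, image_size=1024, crop_mode=False),
    "large":  dict(base_size=1280, image_size=1280, crop_mode=False),
    "gundam": dict(base_size=1024, image_size=640,  crop_mode=True),
}

def ocr_with_mode(model, tokenizer, image_file, mode="gundam",
                  prompt="<image>\nFree OCR. ", output_path="your/output/dir"):
    """Run DeepSeek-OCR with one of the named resolution presets."""
    return model.infer(tokenizer, prompt=prompt, image_file=image_file,
                       output_path=output_path, save_results=True, **MODES[mode])
```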


## vLLM

Refer to 🌟 GitHub for guidance on model inference acceleration, PDF processing, etc.

[2025/10/23] 🚀🚀🚀 DeepSeek-OCR is now officially supported in upstream vLLM.
```shell
uv venv
source .venv/bin/activate
# Until the v0.11.1 release, you need to install vLLM from the nightly build
uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
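As a quick sanity check (an illustrative sketch, not part of the model card), you can confirm that the installed nightly wheel actually ships the DeepSeek-OCR integration before building the engine:

```python
# Minimal check: the nightly wheel should expose both the vLLM entry points
# and the DeepSeek-OCR n-gram logits processor used in the example below.
import vllm
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor  # noqa: F401

print("vLLM version:", vllm.__version__)
```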


```python
from vllm import LLM, SamplingParams
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor
from PIL import Image

# Create model instance
llm = LLM(
    model="deepseek-ai/DeepSeek-OCR",
    enable_prefix_caching=False,
    mm_processor_cache_gb=0,
    logits_processors=[NGramPerReqLogitsProcessor]
)

# Prepare batched input with your image files
image_1 = Image.open("path/to/your/image_1.png").convert("RGB")
image_2 = Image.open("path/to/your/image_2.png").convert("RGB")
prompt = "<image>\nFree OCR."

model_input = [
    {"prompt": prompt, "multi_modal_data": {"image": image_1}},
    {"prompt": prompt, "multi_modal_data": {"image": image_2}},
]

sampling_param = SamplingParams(
    temperature=0.0,
    max_tokens=8192,
    # ngram logit processor args
    extra_args=dict(
        ngram_size=30,
        window_size=90,
        whitelist_token_ids={128821, 128822},  # whitelist: <td>, </td>
    ),
    skip_special_tokens=False,
)

# Generate output
model_outputs = llm.generate(model_input, sampling_param)

# Print output
for output in model_outputs:
    print(output.outputs[0].text)
```
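Because `llm.generate` accepts a list of prompt/image pairs, batching a whole folder of scans is straightforward. The following is a minimal sketch reusing the `llm` and `sampling_param` objects above; the `ocr_directory` helper and the directory layout are assumptions, not part of the model card:

```python
from pathlib import Path
from PIL import Image

def ocr_directory(llm, sampling_param, image_dir, prompt="<image>\nFree OCR."):
    """Hypothetical helper: OCR every PNG/JPEG in image_dir in one batch."""
    paths = sorted(p for p in Path(image_dir).iterdir()
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
    batch = [{"prompt": prompt,
              "multi_modal_data": {"image": Image.open(p).convert("RGB")}}
             for p in paths]
    # vLLM returns outputs in input order, so we can zip them back to paths.
    outputs = llm.generate(batch, sampling_param)
    return {p.name: out.outputs[0].text for p, out in zip(paths, outputs)}

# Example: results = ocr_directory(llm, sampling_param, "path/to/scans")
```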


## Visualizations



## Acknowledgement



We would like to thank Vary, GOT-OCR2.0, MinerU, PaddleOCR, OneChart, and Slow Perception for their valuable models and ideas.

We also appreciate the benchmarks Fox and OmniDocBench.

## Citation

```bibtex
@article{wei2025deepseek,
  title={DeepSeek-OCR: Contexts Optical Compression},
  author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
  journal={arXiv preprint arXiv:2510.18234},
  year={2025}
}
```

## Files & Weights

| Filename | Size |
| --- | --- |
| model-00001-of-000001.safetensors | 6.21 GB |