PaddleOCR
Awesome multilingual OCR toolkits based on PaddlePaddle (a practical, ultra-lightweight OCR system that supports recognition of 80+ languages, provides data annotation and synthesis tools, and supports training and deployment across server, mobile, embedded, and IoT devices)
Top Related Projects
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Tesseract Open Source OCR Engine (main repository)
Ready-to-use OCR with 80+ supported languages and all popular writing scripts including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
Text recognition (optical character recognition) with deep learning methods, ICCV 2019
Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow
Quick Overview
PaddleOCR is an open-source Optical Character Recognition (OCR) toolkit developed by Baidu's PaddlePaddle team. It provides a comprehensive set of tools for text detection, recognition, and layout analysis, supporting multiple languages and offering both lightweight and accurate models for various OCR tasks.
Pros
- Comprehensive OCR solution with support for multiple languages and tasks
- Offers both lightweight models for mobile devices and high-accuracy models for server-side applications
- Active development and frequent updates from the PaddlePaddle team
- Extensive documentation and examples for easy integration
Cons
- Primarily based on the PaddlePaddle deep learning framework, which may have a steeper learning curve for those familiar with other frameworks
- Some advanced features may require more computational resources
- Documentation is sometimes not fully up-to-date with the latest features
- Limited community support compared to some other popular OCR libraries
Code Examples
- Basic text detection and recognition:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('image.jpg')
for line in result:
    print(line)
```
- Extracting text from a specific region of an image:
```python
import cv2
from paddleocr import PaddleOCR

image = cv2.imread('image.jpg')
roi = image[100:300, 200:400]  # Define region of interest
ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr(roi)
for line in result:
    print(line[1][0])  # Print recognized text
```
- Using a custom dictionary for text recognition:
```python
from paddleocr import PaddleOCR

custom_dict = 'path/to/custom_dict.txt'
ocr = PaddleOCR(use_angle_cls=True, lang='en', rec_char_dict_path=custom_dict)
result = ocr.ocr('image.jpg')
for line in result:
    print(line)
```
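The dictionary referenced by `rec_char_dict_path` is a plain-text file with one character per line (the same format as the dictionaries bundled with PaddleOCR), and it should match the character set the recognition model was trained on. A minimal sketch for generating such a file, with an illustrative character set:

```python
# Write a custom character dictionary: one character per line (charset is illustrative).
charset = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-"
with open("custom_dict.txt", "w", encoding="utf-8") as f:
    for ch in charset:
        f.write(ch + "\n")
```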
Getting Started
To get started with PaddleOCR:
- Install PaddleOCR:
```bash
pip install paddleocr
```
- Use PaddleOCR in your Python script:
```python
from paddleocr import PaddleOCR

# Initialize PaddleOCR
ocr = PaddleOCR(use_angle_cls=True, lang='en')

# Perform OCR on an image
result = ocr.ocr('path/to/your/image.jpg')

# Print the results
for line in result:
    print(line)
```
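To sanity-check the output visually, PaddleOCR also ships a `draw_ocr` helper that renders the detected boxes and recognized text onto the image. A minimal sketch follows; result indexing and the `font_path` argument can vary between PaddleOCR releases, so treat it as illustrative:

```python
from PIL import Image
from paddleocr import PaddleOCR, draw_ocr

ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('path/to/your/image.jpg')

# Recent PaddleOCR releases return one list of lines per input page/image.
lines = result[0]
boxes = [line[0] for line in lines]     # detected text boxes (4-point polygons)
texts = [line[1][0] for line in lines]  # recognized strings
scores = [line[1][1] for line in lines] # recognition confidences

image = Image.open('path/to/your/image.jpg').convert('RGB')
# font_path= may need to point to a local .ttf file, especially for non-Latin text.
annotated = draw_ocr(image, boxes, texts, scores)
Image.fromarray(annotated).save('ocr_result.jpg')
```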
For more advanced usage and customization options, refer to the official documentation on the PaddleOCR GitHub repository.
Competitor Comparisons
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Pros of UniLM
- Broader scope: Supports a wide range of natural language processing tasks beyond OCR
- More advanced language understanding: Utilizes large-scale pre-trained models for improved performance
- Active research focus: Regularly updated with cutting-edge NLP techniques and models
Cons of UniLM
- Less specialized for OCR: May not offer as many OCR-specific features and optimizations
- Potentially more complex to use: Broader scope may require more setup and configuration for OCR tasks
- Larger resource requirements: Pre-trained models can be computationally intensive
Code Comparison
PaddleOCR:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('image.jpg')
```
UniLM (using LayoutLM for OCR):
```python
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

model = LayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased")
tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
```
Note: The code snippets demonstrate basic setup and may not reflect the full complexity of using each library for OCR tasks.
Tesseract Open Source OCR Engine (main repository)
Pros of Tesseract
- Mature and widely adopted OCR engine with a long history
- Supports a wide range of languages and scripts
- Highly customizable with extensive documentation
Cons of Tesseract
- Generally slower performance compared to modern deep learning-based approaches
- May struggle with complex layouts or low-quality images
- Requires more manual configuration for optimal results
Code Comparison
Tesseract:
```python
import pytesseract
from PIL import Image

image = Image.open('image.png')
text = pytesseract.image_to_string(image)
print(text)
```
PaddleOCR:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('image.png', cls=True)
for line in result:
    print(line[1][0])
```
PaddleOCR offers a more streamlined API for OCR tasks, with built-in support for text detection, recognition, and angle classification. Tesseract, while powerful, often requires additional preprocessing steps for optimal results. PaddleOCR's deep learning approach generally provides better performance on complex layouts and low-quality images, but Tesseract's extensive language support and customization options make it a versatile choice for specific use cases.
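For example, a common Tesseract preprocessing step is to grayscale and binarize the image before recognition. A small sketch, where the threshold choice and page-segmentation mode are illustrative and depend on the input:

```python
import cv2
import pytesseract

# Grayscale + Otsu binarization often helps Tesseract on noisy or low-contrast scans.
image = cv2.imread('image.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# '--psm 6' assumes a single uniform block of text.
text = pytesseract.image_to_string(binary, config='--psm 6')
print(text)
```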
Ready-to-use OCR with 80+ supported languages and all popular writing scripts including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
Pros of EasyOCR
- Simpler installation process and easier to use for beginners
- Supports a wider range of languages (80+) out of the box
- Better documentation and examples for quick start
Cons of EasyOCR
- Generally slower inference speed compared to PaddleOCR
- Less flexibility and customization options for advanced users
- Smaller community and fewer pre-trained models available
Code Comparison
EasyOCR:
```python
import easyocr

reader = easyocr.Reader(['en'])
result = reader.readtext('image.jpg')
```
PaddleOCR:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('image.jpg')
```
Both libraries offer simple APIs for OCR tasks, but PaddleOCR provides more options for fine-tuning and optimization. EasyOCR's code is more straightforward, making it easier for beginners to get started quickly. PaddleOCR's approach allows for more advanced configurations, which can be beneficial for complex OCR tasks or when performance optimization is crucial.
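As an illustration of the extra tuning PaddleOCR exposes, its constructor accepts detection and recognition parameters such as the DB thresholds below. The parameter names follow the PaddleOCR 2.x Python API and the values are illustrative, so check the documentation of your installed version:

```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(
    lang='en',
    use_angle_cls=True,
    det_db_thresh=0.3,        # pixel-level binarization threshold for the DB text detector
    det_db_box_thresh=0.6,    # minimum score for keeping a detected text box
    det_db_unclip_ratio=1.6,  # how much detected boxes are expanded before cropping
    rec_batch_num=8,          # number of text crops recognized per batch
)
result = ocr.ocr('image.jpg')
```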
Text recognition (optical character recognition) with deep learning methods, ICCV 2019
Pros of deep-text-recognition-benchmark
- Focuses specifically on text recognition, providing a comprehensive benchmark for various models
- Implements multiple state-of-the-art architectures, allowing for easy comparison and experimentation
- Offers a modular design, making it easier to swap components and test different combinations
Cons of deep-text-recognition-benchmark
- Limited to text recognition, while PaddleOCR offers a more comprehensive OCR pipeline
- Less extensive documentation and fewer pre-trained models compared to PaddleOCR
- Smaller community and fewer updates, potentially leading to slower development and support
Code Comparison
deep-text-recognition-benchmark:
```python
import torch
from model import Model               # model.py in the deep-text-recognition-benchmark repo
from utils import AttnLabelConverter  # utils.py in the same repo

# `opt` and `device` come from the repo's argument parsing and CUDA setup.
model = Model(opt)
converter = AttnLabelConverter(opt.character)
criterion = torch.nn.CrossEntropyLoss(ignore_index=0).to(device)
```
PaddleOCR:
```python
# Builders from PaddleOCR's internal training code (module paths as of PaddleOCR 2.x).
from ppocr.modeling.architectures import build_model
from ppocr.losses import build_loss
from ppocr.optimizer import build_optimizer

model = build_model(config['Architecture'])
loss_class = build_loss(config['Loss'])
optimizer = build_optimizer(config['Optimizer'], model)
```
Both repositories use similar approaches for model initialization and loss function definition. However, PaddleOCR's code structure is more modular and configurable, allowing for easier customization of the OCR pipeline.
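For context, the `config` dictionary consumed by these builders mirrors PaddleOCR's YAML training configs. The sketch below only illustrates the overall shape; the nested fields are assumptions rather than the exact schema (see the YAML files under `configs/` in the PaddleOCR repository for real examples):

```python
# Illustrative shape of a PaddleOCR-style training config (nested fields are assumptions,
# not the exact schema); real configs ship as YAML in the repo's configs/ directory.
config = {
    "Architecture": {"model_type": "rec", "algorithm": "CRNN"},
    "Loss": {"name": "CTCLoss"},
    "Optimizer": {"name": "Adam", "lr": {"learning_rate": 0.001}},
}
```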
Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow
Pros of Mask_RCNN
- Specialized in instance segmentation, offering precise object detection and segmentation
- Well-documented with extensive tutorials and examples
- Supports both TensorFlow 1.x and 2.x
Cons of Mask_RCNN
- Limited to object detection and segmentation tasks
- Less frequent updates and maintenance compared to PaddleOCR
- Steeper learning curve for beginners
Code Comparison
Mask_RCNN:
```python
import mrcnn.model as modellib
from mrcnn import utils
import coco  # coco.py lives in samples/coco of the Mask_RCNN repo; add that directory to sys.path

class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

MODEL_DIR = "logs"  # directory for checkpoints and inference logs (adjust to your setup)
model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir=MODEL_DIR)
```
PaddleOCR:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('image.jpg', cls=True)
```
The code snippets highlight the difference in focus between the two repositories. Mask_RCNN requires more setup for object detection and segmentation, while PaddleOCR offers a simpler interface for OCR tasks.
README
English | 简体中文 | 繁體中文 | 日本語 | 한국어 | Français | Русский | Español | العربية
Introduction
Since its initial release, PaddleOCR has gained widespread acclaim across academia, industry, and research communities, thanks to its cutting-edge algorithms and proven performance in real-world applications. It's already powering popular open-source projects like Umi-OCR, OmniParser, MinerU, and RAGFlow, making it the go-to OCR toolkit for developers worldwide.
On May 20, 2025, the PaddlePaddle team unveiled PaddleOCR 3.0, fully compatible with the official release of the PaddlePaddle 3.0 framework. This update further boosts text-recognition accuracy, adds support for recognizing multiple text types and handwriting, and meets the growing demand from large-model applications for high-precision parsing of complex documents. When combined with ERNIE 4.5 Turbo, it significantly enhances key-information extraction accuracy. PaddleOCR 3.0 also introduces support for Chinese Heterogeneous AI Accelerators such as KUNLUNXIN and Ascend. For the complete usage documentation, please refer to the PaddleOCR 3.0 Documentation.
Three Major New Features in PaddleOCR 3.0:
- Universal-Scene Text Recognition Model PP-OCRv5: A single model that handles five different text types plus complex handwriting. Overall recognition accuracy has increased by 13 percentage points over the previous generation. Online Demo
- General Document-Parsing Solution PP-StructureV3: Delivers high-precision parsing of multi-layout, multi-scene PDFs, outperforming many open- and closed-source solutions on public benchmarks. Online Demo
- Intelligent Document-Understanding Solution PP-ChatOCRv4: Natively powered by ERNIE 4.5 Turbo, achieving 15 percentage points higher accuracy than its predecessor. Online Demo
In addition to providing an outstanding model library, PaddleOCR 3.0 also offers user-friendly tools covering model training, inference, and service deployment, so developers can rapidly bring AI applications to production.
📣 Recent updates
2025.06.29: Release of PaddleOCR 3.1.0, includes:
- Key Models and Pipelines:
  - Added PP-OCRv5 Multilingual Text Recognition Model, which supports the training and inference process for text recognition models in 37 languages, including French, Spanish, Portuguese, Russian, Korean, etc. Average accuracy improved by over 30%. Details
  - Upgraded the PP-Chart2Table model in PP-StructureV3, further enhancing the capability of converting charts to tables. On internal custom evaluation sets, the metric (RMS-F1) increased by 9.36 percentage points (71.24% -> 80.60%).
  - Newly launched document translation pipeline, PP-DocTranslation, based on PP-StructureV3 and ERNIE 4.5 Turbo, which supports the translation of Markdown format documents, various complex-layout PDF documents, and document images, with the results saved as Markdown format documents. Details
- New MCP server: Details
  - Supports both OCR and PP-StructureV3 pipelines.
  - Supports three working modes: local Python library, AIStudio Community Cloud Service, and self-hosted service.
  - Supports invoking local services via stdio and remote services via Streamable HTTP.
- Documentation Optimization: Improved the descriptions in some user guides for a smoother reading experience.
2025.06.26: Release of PaddleOCR 3.0.3, includes:
- Bug Fix: Resolved the issue where the `enable_mkldnn` parameter was not effective, restoring the default behavior of using MKL-DNN for CPU inference.
🔥🔥 2025.06.19: Release of PaddleOCR 3.0.2, includes:
- New Features:
  - The default download source has been changed from `BOS` to `HuggingFace`. Users can also change the environment variable `PADDLE_PDX_MODEL_SOURCE` to `BOS` to set the model download source back to Baidu Object Storage (BOS).
  - Added service invocation examples for six languages (C++, Java, Go, C#, Node.js, and PHP) for pipelines like PP-OCRv5, PP-StructureV3, and PP-ChatOCRv4.
  - Improved the layout partition sorting algorithm in the PP-StructureV3 pipeline, enhancing the sorting logic for complex vertical layouts to deliver better results.
  - Enhanced model selection logic: when a language is specified but a model version is not, the system will automatically select the latest model version supporting that language.
  - Set a default upper limit for MKL-DNN cache size to prevent unlimited growth, while also allowing users to configure cache capacity.
  - Updated default configurations for high-performance inference to support Paddle MKL-DNN acceleration and optimized the logic for automatic configuration selection for smarter choices.
  - Adjusted the logic for obtaining the default device to consider the actual support for computing devices by the installed Paddle framework, making program behavior more intuitive.
  - Added Android example for PP-OCRv5. Details.
- Bug Fixes:
  - Fixed an issue with some CLI parameters in PP-StructureV3 not taking effect.
  - Resolved an issue where `export_paddlex_config_to_yaml` would not function correctly in certain cases.
  - Corrected the discrepancy between the actual behavior of `save_path` and its documentation description.
  - Fixed potential multithreading errors when using MKL-DNN in basic service deployment.
  - Corrected channel order errors in image preprocessing for the Latex-OCR model.
  - Fixed channel order errors in saving visualized images within the text recognition module.
  - Resolved channel order errors in visualized table results within the PP-StructureV3 pipeline.
  - Fixed an overflow issue in the calculation of `overlap_ratio` under extremely special circumstances in the PP-StructureV3 pipeline.
- Documentation Improvements:
  - Updated the description of the `enable_mkldnn` parameter in the documentation to accurately reflect the program's actual behavior.
  - Fixed errors in the documentation regarding the `lang` and `ocr_version` parameters.
  - Added instructions for exporting production line configuration files via CLI.
  - Fixed missing columns in the performance data table for PP-OCRv5.
  - Refined benchmark metrics for PP-StructureV3 across different configurations.
- Others:
  - Relaxed version restrictions on dependencies like numpy and pandas, restoring support for Python 3.12.
History Log
🔥🔥 2025.06.05: Release of PaddleOCR 3.0.1, includes:
- Optimisation of certain models and model configurations:
  - Updated the default model configuration for PP-OCRv5, changing both detection and recognition from mobile to server models. To improve default performance in most scenarios, the parameter `limit_side_len` in the configuration has been changed from 736 to 64.
  - Added a new text line orientation classification model `PP-LCNet_x1_0_textline_ori` with an accuracy of 99.42%. The default text line orientation classifier for OCR, PP-StructureV3, and PP-ChatOCRv4 pipelines has been updated to this model.
  - Optimised the text line orientation classification model `PP-LCNet_x0_25_textline_ori`, improving accuracy by 3.3 percentage points to a current accuracy of 98.85%.
- Optimizations and fixes for some issues in version 3.0.0, details
🔥🔥 2025.05.20: Official Release of PaddleOCR v3.0, including:
- PP-OCRv5: High-Accuracy Text Recognition Model for All Scenarios - Instant Text from Images/PDFs.
  - Single-model support for five text types: seamlessly process Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English, and Japanese within a single model.
  - ✍️ Improved handwriting recognition: significantly better at complex cursive scripts and non-standard handwriting.
  - 🎯 13-point accuracy gain over PP-OCRv4, achieving state-of-the-art performance across a variety of real-world scenarios.
- PP-StructureV3: General-Purpose Document Parsing - Unleash SOTA Images/PDFs Parsing for Real-World Scenarios!
  - 🧮 High-accuracy multi-scene PDF parsing, leading both open- and closed-source solutions on the OmniDocBench benchmark.
  - 🧠 Specialized capabilities include seal recognition, chart-to-table conversion, table recognition with nested formulas/images, vertical text document parsing, and complex table structure analysis.
- PP-ChatOCRv4: Intelligent Document Understanding - Extract Key Information, not just text, from Images/PDFs.
  - 🔥 15-point accuracy gain in key-information extraction on PDF/PNG/JPG files over the previous generation.
  - 💻 Native support for ERNIE 4.5 Turbo, with compatibility for large-model deployments via PaddleNLP, Ollama, vLLM, and more.
  - 🤝 Integrated PP-DocBee2, enabling extraction and understanding of printed text, handwriting, seals, tables, charts, and other common elements in complex documents.
⚡ Quick Start
1. Run online demo
2. Installation
Install PaddlePaddle by following the Installation Guide, and then install the PaddleOCR toolkit.
```bash
# Install paddleocr
pip install paddleocr
```
3. Run inference by CLI
```bash
# Run PP-OCRv5 inference
paddleocr ocr -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --use_doc_orientation_classify False --use_doc_unwarping False --use_textline_orientation False

# Run PP-StructureV3 inference
paddleocr pp_structurev3 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png --use_doc_orientation_classify False --use_doc_unwarping False

# Get the Qianfan API Key at first, and then run PP-ChatOCRv4 inference
# ("驾驶室准乘人数" is the key to extract: the number of passengers permitted in the cab)
paddleocr pp_chatocrv4_doc -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png -k 驾驶室准乘人数 --qianfan_api_key your_api_key --use_doc_orientation_classify False --use_doc_unwarping False

# Get more information about "paddleocr ocr"
paddleocr ocr --help
```
4. Run inference by API
4.1 PP-OCRv5 Example
```python
# Initialize PaddleOCR instance
from paddleocr import PaddleOCR

ocr = PaddleOCR(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False)

# Run OCR inference on a sample image
result = ocr.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")

# Visualize the results and save the JSON results
for res in result:
    res.print()
    res.save_to_img("output")
    res.save_to_json("output")
```
4.2 PP-StructureV3 Example
```python
from pathlib import Path
from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False
)

# For Image
output = pipeline.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png",
)

# Visualize the results and save the JSON results
for res in output:
    res.print()
    res.save_to_json(save_path="output")
    res.save_to_markdown(save_path="output")
```
4.3 PP-ChatOCRv4 Example
```python
from paddleocr import PPChatOCRv4Doc

chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "ernie-3.5-8k",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

retriever_config = {
    "module_name": "retriever",
    "model_name": "embedding-v1",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "qianfan",
    "api_key": "api_key",  # your api_key
}

pipeline = PPChatOCRv4Doc(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False
)

visual_predict_res = pipeline.visual_predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
    use_common_ocr=True,
    use_seal_recognition=True,
    use_table_recognition=True,
)

mllm_predict_info = None
use_mllm = False
# If a multimodal large model is used, the local mllm service needs to be started first. Refer to the documentation
# https://github.com/PaddlePaddle/PaddleX/blob/release/3.0/docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.en.md
# to perform the deployment and update the mllm_chat_bot_config configuration.
if use_mllm:
    mllm_chat_bot_config = {
        "module_name": "chat_bot",
        "model_name": "PP-DocBee",
        "base_url": "http://127.0.0.1:8080/",  # your local mllm service url
        "api_type": "openai",
        "api_key": "api_key",  # your api_key
    }

    mllm_predict_res = pipeline.mllm_pred(
        input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
        key_list=["驾驶室准乘人数"],  # "number of passengers permitted in the cab"
        mllm_chat_bot_config=mllm_chat_bot_config,
    )
    mllm_predict_info = mllm_predict_res["mllm_res"]

visual_info_list = []
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]

vector_info = pipeline.build_vector(
    visual_info_list, flag_save_bytes_vector=True, retriever_config=retriever_config
)
chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],  # "number of passengers permitted in the cab"
    visual_info=visual_info_list,
    vector_info=vector_info,
    mllm_predict_info=mllm_predict_info,
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)
```
5. Chinese Heterogeneous AI Accelerators
⛰️ Advanced Tutorials
Quick Overview of Execution Results
👩‍👩‍👧‍👦 Community
PaddlePaddle WeChat official account | Join the tech discussion group
---|---
(QR code) | (QR code)
Awesome Projects Leveraging PaddleOCR
PaddleOCR wouldn't be where it is today without its incredible community! A massive thank you to all our longtime partners, new collaborators, and everyone who's poured their passion into PaddleOCR, whether we've named you or not. Your support fuels our fire!
Project Name | Description |
---|---|
RAGFlow | RAG engine based on deep document understanding. |
MinerU | Multi-type Document to Markdown Conversion Tool |
Umi-OCR | Free, Open-source, Batch Offline OCR Software. |
OmniParser | OmniParser: Screen Parsing tool for Pure Vision Based GUI Agent. |
QAnything | Question and Answer based on Anything. |
PDF-Extract-Kit | A powerful open-source toolkit designed to efficiently extract high-quality content from complex and diverse PDF documents. |
Dango-Translator | Recognize text on the screen, translate it and show the translation results in real time. |
Learn more projects | More projects based on PaddleOCR |
👩‍👩‍👧‍👦 Contributors
Star
License
This project is released under the Apache 2.0 license.
Citation
```bibtex
@misc{paddleocr2020,
    title={PaddleOCR, Awesome multilingual OCR toolkits based on PaddlePaddle.},
    author={PaddlePaddle Authors},
    howpublished = {\url{https://github.com/PaddlePaddle/PaddleOCR}},
    year={2020}
}
```