
JaidedAI / EasyOCR

Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.


Top Related Projects

  • PaddleOCR: Awesome multilingual OCR toolkit based on PaddlePaddle (practical ultra-lightweight OCR system; supports recognition for 80+ languages; provides data annotation and synthesis tools; supports training and deployment on server, mobile, embedded, and IoT devices)

  • Tesseract: Open Source OCR Engine (main repository)

  • Mask_RCNN: Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

  • deep-text-recognition-benchmark: Text recognition (optical character recognition) with deep learning methods, ICCV 2019

  • docTR (Document Text Recognition): a seamless, high-performing & accessible library for OCR-related tasks powered by Deep Learning

Quick Overview

EasyOCR is a Python package that allows for easy Optical Character Recognition (OCR) in over 80 languages. It uses deep learning models to detect and recognize text in images, making it a powerful tool for various applications, from document digitization to automated data extraction from visual content.

Pros

  • Supports a wide range of languages (80+), including those with non-Latin scripts
  • Easy to use with a simple API, requiring minimal setup and configuration
  • Provides both text detection and recognition in a single package
  • Offers GPU acceleration for improved performance

Cons

  • May have lower accuracy compared to some commercial OCR solutions
  • Can be slower on CPU-only systems, especially for large images or complex scripts
  • Requires significant disk space due to the size of language models
  • Limited customization options for fine-tuning the OCR process

Code Examples

  1. Basic usage for reading text from an image:
import easyocr

reader = easyocr.Reader(['en'])  # Initialize with English
result = reader.readtext('path/to/your/image.jpg')
for (bbox, text, prob) in result:
    print(f"Text: {text}, Probability: {prob}")
  2. Reading text in multiple languages:
reader = easyocr.Reader(['en', 'fr', 'de'])  # English, French, German
result = reader.readtext('path/to/multilingual/image.png')
for (bbox, text, prob) in result:
    print(f"Detected Text: {text}")
  3. Using GPU acceleration:
reader = easyocr.Reader(['ja'], gpu=True)  # Japanese with GPU
result = reader.readtext('path/to/japanese/image.jpg')
for (bbox, text, prob) in result:
    print(f"Japanese Text: {text}")

Getting Started

To get started with EasyOCR, follow these steps:

  1. Install EasyOCR using pip:

    pip install easyocr
    
  2. Import the library and create a reader:

    import easyocr
    reader = easyocr.Reader(['en'])  # For English
    
  3. Use the reader to extract text from an image:

    result = reader.readtext('path/to/your/image.jpg')
    for (bbox, text, prob) in result:
        print(f"Detected Text: {text}")
    

That's it! You're now ready to use EasyOCR for your OCR tasks.

Competitor Comparisons

PaddleOCR

Awesome multilingual OCR toolkit based on PaddlePaddle (practical ultra-lightweight OCR system; supports recognition for 80+ languages; provides data annotation and synthesis tools; supports training and deployment on server, mobile, embedded, and IoT devices)

Pros of PaddleOCR

  • More comprehensive OCR toolkit with detection, recognition, and layout analysis
  • Supports a wider range of languages (80+) out of the box
  • Better performance on complex document layouts and handwritten text

Cons of PaddleOCR

  • Steeper learning curve due to more complex architecture
  • Requires more setup and configuration compared to EasyOCR
  • Less straightforward integration for simple OCR tasks

Code Comparison

EasyOCR:

import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('image.jpg')

PaddleOCR:

from paddleocr import PaddleOCR
ocr = PaddleOCR(use_angle_cls=True, lang='en')
result = ocr.ocr('image.jpg', cls=True)

Both libraries offer simple APIs for basic OCR tasks, but PaddleOCR provides more options for fine-tuning and advanced use cases. EasyOCR's API is more straightforward for quick implementation, while PaddleOCR offers greater flexibility and control over the OCR process.

PaddleOCR is better suited for complex OCR tasks and large-scale applications, whereas EasyOCR excels in simplicity and ease of use for basic OCR needs. The choice between the two depends on the specific requirements of your project and the level of OCR complexity you need to handle.

Tesseract

Tesseract Open Source OCR Engine (main repository)

Pros of Tesseract

  • Mature and well-established project with extensive documentation
  • Supports a wide range of languages and scripts
  • Highly customizable with various training options

Cons of Tesseract

  • Steeper learning curve for beginners
  • May require more preprocessing for optimal results
  • Performance can be slower compared to newer OCR libraries

Code Comparison

EasyOCR:

import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('image.jpg')

Tesseract:

import pytesseract
from PIL import Image
text = pytesseract.image_to_string(Image.open('image.jpg'))

Both libraries offer straightforward ways to perform OCR, but EasyOCR provides a more streamlined approach with fewer lines of code. Tesseract requires additional setup and may need more preprocessing steps for optimal results.

EasyOCR is designed to be user-friendly and works well out-of-the-box for many common scenarios. It supports multiple languages in a single model and offers good accuracy without extensive configuration.

Tesseract, being more established, offers greater flexibility and customization options. It's particularly useful for complex OCR tasks and scenarios requiring fine-tuned control over the recognition process.
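As an illustration of that flexibility, here is a minimal sketch of passing Tesseract configuration flags through pytesseract (assuming Tesseract and its English language data are installed; the flag values are only examples, not recommendations):

import pytesseract
from PIL import Image

# --oem selects the OCR engine and --psm the page segmentation mode;
# these are standard Tesseract flags passed straight through by pytesseract.
custom_config = '--oem 1 --psm 6'
text = pytesseract.image_to_string(Image.open('image.jpg'), lang='eng', config=custom_config)
print(text)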

Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

Pros of Mask_RCNN

  • Specialized in instance segmentation, offering precise object detection and segmentation
  • Highly customizable architecture for various computer vision tasks
  • Extensive documentation and community support

Cons of Mask_RCNN

  • Steeper learning curve compared to EasyOCR
  • Requires more computational resources and training data
  • Not specifically designed for OCR tasks

Code Comparison

Mask_RCNN:

import mrcnn.model as modellib
model = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)
model.load_weights(COCO_MODEL_PATH, by_name=True)
results = model.detect([image], verbose=1)

EasyOCR:

import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('image.jpg')

Mask_RCNN is more complex and requires additional setup, while EasyOCR offers a simpler, more straightforward approach for OCR tasks. Mask_RCNN is better suited for advanced computer vision projects, whereas EasyOCR is optimized for quick and easy text recognition.

deep-text-recognition-benchmark

Text recognition (optical character recognition) with deep learning methods, ICCV 2019

Pros of deep-text-recognition-benchmark

  • More comprehensive and flexible framework for text recognition research
  • Provides a benchmark for comparing different models and approaches
  • Includes multiple model architectures and training options

Cons of deep-text-recognition-benchmark

  • Steeper learning curve and more complex setup
  • Primarily designed for research purposes, less user-friendly for quick implementation
  • Requires more manual configuration and fine-tuning

Code Comparison

EasyOCR:

import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('image.jpg')

deep-text-recognition-benchmark:

from model import Model
from dataset import RawDataset, AlignCollate
model = Model(opt)
AlignCollate_demo = AlignCollate(imgH=opt.imgH, imgW=opt.imgW, keep_ratio_with_pad=opt.PAD)
demo_data = RawDataset(root='demo_image/', opt=opt)
demo_loader = torch.utils.data.DataLoader(
    demo_data, batch_size=opt.batch_size,
    shuffle=False,
    num_workers=int(opt.workers),
    collate_fn=AlignCollate_demo, pin_memory=True)

The code comparison shows that EasyOCR is more straightforward to use, requiring fewer lines of code for basic text recognition tasks. In contrast, deep-text-recognition-benchmark offers more flexibility and control but requires more setup and configuration.

docTR

docTR (Document Text Recognition) - a seamless, high-performing & accessible library for OCR-related tasks powered by Deep Learning.

Pros of doctr

  • More focused on document analysis and processing
  • Offers pre-trained models for specific document types
  • Provides a comprehensive document processing pipeline

Cons of doctr

  • Less language support compared to EasyOCR
  • May require more setup and configuration
  • Steeper learning curve for beginners

Code Comparison

EasyOCR:

import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('image.jpg')

doctr:

from doctr.io import DocumentFile
from doctr.models import ocr_predictor
model = ocr_predictor(pretrained=True)
doc = DocumentFile.from_images('image.jpg')
result = model(doc)

Both libraries offer straightforward ways to perform OCR, but doctr's approach is more geared towards document analysis. EasyOCR provides a simpler interface for quick text extraction, while doctr offers more advanced features for document processing.
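Building on the doctr snippet above, here is a hedged sketch of pulling a structured page/block/line/word hierarchy out of the result (this assumes the export() method and dictionary layout of doctr's Document object; check the doctr documentation for the exact schema):

# Assumed: result.export() returns a nested dict of pages -> blocks -> lines -> words.
structured = result.export()
for page in structured['pages']:
    for block in page['blocks']:
        for line in block['lines']:
            print(' '.join(word['value'] for word in line['words']))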

EasyOCR excels in multi-language support and ease of use, making it ideal for simple OCR tasks. doctr, on the other hand, is better suited for complex document analysis scenarios, offering more control and specialized models for different document types.

Choose EasyOCR for quick, multi-language OCR tasks, and doctr for more comprehensive document processing and analysis workflows.


README

EasyOCR


Ready-to-use OCR with 80+ supported languages and all popular writing scripts including: Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.

Try Demo on our website

Integrated into Huggingface Spaces 🤗 using Gradio. Try out the Web Demo: Hugging Face Spaces

What's new

  • 4 September 2023 - Version 1.7.1

    • Fix several compatibility issues
  • 25 May 2023 - Version 1.7.0

  • 15 September 2022 - Version 1.6.2

    • Add CPU support for DBnet
    • DBnet will only be compiled when users initialize the DBnet detector.
  • 1 September 2022 - Version 1.6.1

    • Fix DBnet path bug for Windows
    • Add new built-in model cyrillic_g2. This model is a new default for Cyrillic script.
  • 24 August 2022 - Version 1.6.0

    • Restructure code to support alternative text detectors.
    • Add detector DBnet (see paper). It can be used by initializing the reader like this: reader = easyocr.Reader(['en'], detect_network='dbnet18').
  • 2 June 2022 - Version 1.5.0

    • Add trainer for the CRAFT detection model (thanks @gmuffiness, see PR)
  • Read all release notes

What's coming next

  • Handwritten text support

Examples

[Example result images]

Installation

Install using pip

For the latest stable release:

pip install easyocr

For the latest development release:

pip install git+https://github.com/JaidedAI/EasyOCR.git

Note 1: For Windows, please install torch and torchvision first by following the official instructions at https://pytorch.org. On the PyTorch website, be sure to select the CUDA version you have. If you intend to run in CPU-only mode, select CUDA = None.

Note 2: We also provide a Dockerfile here.

Usage

import easyocr
reader = easyocr.Reader(['ch_sim','en']) # this needs to run only once to load the model into memory
result = reader.readtext('chinese.jpg')

The output will be in a list format; each item represents a bounding box, the detected text, and the confidence level, respectively.

[([[189, 75], [469, 75], [469, 165], [189, 165]], '愚园路', 0.3754989504814148),
 ([[86, 80], [134, 80], [134, 128], [86, 128]], '西', 0.40452659130096436),
 ([[517, 81], [565, 81], [565, 123], [517, 123]], '东', 0.9989598989486694),
 ([[78, 126], [136, 126], [136, 156], [78, 156]], '315', 0.8125889301300049),
 ([[514, 126], [574, 126], [574, 156], [514, 156]], '309', 0.4971577227115631),
 ([[226, 170], [414, 170], [414, 220], [226, 220]], 'Yuyuan Rd.', 0.8261902332305908),
 ([[79, 173], [125, 173], [125, 213], [79, 213]], 'W', 0.9848111271858215),
 ([[529, 173], [569, 173], [569, 213], [529, 213]], 'E', 0.8405593633651733)]

Note 1: ['ch_sim','en'] is the list of languages you want to read. You can pass several languages at once but not all languages can be used together. English is compatible with every language and languages that share common characters are usually compatible with each other.

Note 2: Instead of the filepath chinese.jpg, you can also pass an OpenCV image object (numpy array) or an image file as bytes. A URL to a raw image is also acceptable.
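For example, a minimal sketch of these alternative inputs (assumes OpenCV is installed and reuses the reader created above):

import cv2

# Pass an OpenCV image (numpy array) instead of a file path
img = cv2.imread('chinese.jpg')
result = reader.readtext(img)

# Raw image bytes are also accepted
with open('chinese.jpg', 'rb') as f:
    result_from_bytes = reader.readtext(f.read())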

Note 3: The line reader = easyocr.Reader(['ch_sim','en']) is for loading a model into memory. It takes some time but it needs to be run only once.

You can also set detail=0 for simpler output.

reader.readtext('chinese.jpg', detail = 0)

Result:

['愚园路', '西', '东', '315', '309', 'Yuyuan Rd.', 'W', 'E']

Model weights for the chosen language will be automatically downloaded, or you can download them manually from the model hub and put them in the '~/.EasyOCR/model' folder.
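If you need to keep the weights somewhere else or work fully offline, here is a minimal sketch assuming the Reader constructor's model_storage_directory and download_enabled parameters (see the API documentation for their exact behaviour):

# Assumes the model files have already been placed in ./easyocr_models;
# download_enabled=False prevents any automatic download attempt.
reader = easyocr.Reader(['ch_sim', 'en'],
                        model_storage_directory='./easyocr_models',
                        download_enabled=False)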

In case you do not have a GPU, or your GPU has low memory, you can run the model in CPU-only mode by adding gpu=False.

reader = easyocr.Reader(['ch_sim','en'], gpu=False)

For more information, read the tutorial and API Documentation.

Run on command line

$ easyocr -l ch_sim en -f chinese.jpg --detail=1 --gpu=True

Train/use your own model

For the recognition model, read here.

For the detection model (CRAFT), read here.

Implementation Roadmap

  • Handwritten support
  • Restructure code to support swappable detection and recognition algorithms. The API should be as easy as:
reader = easyocr.Reader(['en'], detection='DB', recognition = 'Transformer')

The idea is to be able to plug any state-of-the-art model into EasyOCR. There are a lot of geniuses trying to make better detection/recognition models, but we are not trying to be geniuses here. We just want to make their work quickly accessible to the public ... for free. (Well, we believe most geniuses want their work to create a positive impact as fast and as widely as possible.) The pipeline should be something like the diagram below. Grey slots are placeholders for changeable light-blue modules.

[Pipeline diagram]

Acknowledgement and References

This project is based on research and code from several papers and open-source repositories.

All deep learning execution is based on Pytorch. :heart:

Detection execution uses the CRAFT algorithm from this official repository and their paper (Thanks @YoungminBaek from @clovaai). We also use their pretrained model. Training script is provided by @gmuffiness.

The recognition model is a CRNN (paper). It is composed of 3 main components: feature extraction (currently using ResNet and VGG), sequence labeling (LSTM), and decoding (CTC). The training pipeline for recognition is a modified version of the deep-text-recognition-benchmark framework. (Thanks @ku21fan from @clovaai.) This repository is a gem that deserves more recognition.
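As an illustration only (this is not EasyOCR's actual model code), the three stages line up roughly like this in PyTorch:

import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    """Sketch of a CRNN: CNN features -> BiLSTM sequence labeling -> per-step logits for CTC."""
    def __init__(self, num_classes, img_height=32):
        super().__init__()
        # 1) Feature extraction (a small CNN standing in for ResNet/VGG)
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_h = img_height // 4
        # 2) Sequence labeling over the width dimension with a bidirectional LSTM
        self.lstm = nn.LSTM(128 * feat_h, 256, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):                                  # x: (batch, 1, H, W)
        f = self.features(x)                               # (batch, 128, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one time step per image column
        out, _ = self.lstm(seq)
        logits = self.classifier(out)                      # (batch, W/4, num_classes)
        # 3) Decoding: per-step log-probabilities feed a CTC loss / greedy or beam-search decoder
        return logits.log_softmax(-1)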

Beam search code is based on this repository and his blog. (Thanks @githubharald)

Data synthesis is based on TextRecognitionDataGenerator. (Thanks @Belval)

And a good read about CTC from distill.pub here.

Want To Contribute?

Let's advance humanity together by making AI available to everyone!

3 ways to contribute:

Coder: Please send a PR for small bugs/improvements. For bigger ones, discuss with us by opening an issue first. There is a list of possible bug/improvement issues tagged with 'PR WELCOME'.

User: Tell us how EasyOCR benefits you/your organization to encourage further development. Also, post failure cases in the Issue section to help improve future models.

Tech leader/Guru: If you found this library useful, please spread the word! (See Yann LeCun's post about EasyOCR.)

Guideline for new language request

To request a new language, we need you to send a PR with the following 2 files:

  1. In the folder easyocr/character, we need 'yourlanguagecode_char.txt' containing a list of all characters. Please see the format examples in the other files in that folder.
  2. In the folder easyocr/dict, we need 'yourlanguagecode.txt' containing a list of words in your language. On average, we have ~30,000 words per language, with more than 50,000 for the more popular ones. More is better in this file.

If your language has unique elements (for example, 1. Arabic: characters change form when attached to each other, and text is written from right to left; 2. Thai: some characters need to sit above the line and some below), please educate us to the best of your ability and/or give useful links. It is important to take care of the details to achieve a system that really works.

Lastly, please understand that our priority will have to go to popular languages or sets of languages that share large portions of their characters with each other (also tell us if this is the case for your language). It takes us at least a week to develop a new model, so you may have to wait a while for the new model to be released.

See List of languages in development

Github Issues

Due to limited resources, an issue older than 6 months will be automatically closed. Please open an issue again if it is critical.

Business Inquiries

For enterprise support, Jaided AI offers a full service for custom OCR/AI systems, from implementation and training/fine-tuning to deployment. Click here to contact us.