VikParuchuri / marker

Convert PDF to markdown + JSON quickly with high accuracy


Top Related Projects

  • meta-llama/llama: Inference code for Llama models
  • openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision
  • ggerganov/whisper.cpp: Port of OpenAI's Whisper model in C/C++
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
  • karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs.
  • microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Quick Overview

Marker is an open-source Python library for converting PDFs and images to markdown, JSON, and HTML. It uses a pipeline of deep learning models to accurately extract and format text, tables, and images from documents, making it easier to work with content from a wide range of sources.

Pros

  • High accuracy in text and layout extraction
  • Supports both PDF and image input formats
  • Preserves formatting, including tables and images
  • Easy to use with a simple Python API

Cons

  • Requires significant computational resources for processing
  • May struggle with highly complex or non-standard document layouts
  • Limited support for handwritten text or unusual fonts
  • Dependency on external libraries and models

Code Examples

  1. Basic usage to convert a PDF to markdown, using the PdfConverter API documented in the README below:
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("input.pdf")
text, _, images = text_from_rendered(rendered)
print(text)
  2. Converting an image to markdown (marker accepts images as well as PDFs; reusing the converter from above):
rendered = converter("input.jpg")
text, _, images = text_from_rendered(rendered)
print(text)
  3. Customizing output options with the ConfigParser:
from marker.config.parser import ConfigParser

config_parser = ConfigParser({"output_format": "json"})
converter = PdfConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
)
rendered = converter("input.pdf")

Getting Started

To get started with Marker, follow these steps:

  1. Install Marker using pip:

    pip install marker-pdf
    
  2. Import and use Marker in your Python script:

    from marker.converters.pdf import PdfConverter
    from marker.models import create_model_dict
    from marker.output import text_from_rendered
    
    converter = PdfConverter(artifact_dict=create_model_dict())
    rendered = converter("your_document.pdf")
    text, _, images = text_from_rendered(rendered)
    
    # Save the markdown to a file
    with open("output.md", "w") as f:
        f.write(text)
    

This will convert your PDF or image file to markdown and save it as "output.md" in the current directory.

Competitor Comparisons

Llama: Inference code for Llama models

Pros of Llama

  • Developed by Meta AI, benefiting from extensive resources and research
  • Supports multiple languages and tasks beyond text generation
  • Offers various model sizes for different computational requirements

Cons of Llama

  • Requires more computational resources to run effectively
  • Less focused on specific document processing tasks
  • May have stricter licensing and usage restrictions

Code Comparison

Marker:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")
text, _, images = text_from_rendered(rendered)

Llama:

from llama import Llama

# Llama 2 reference-implementation style
generator = Llama.build(
    ckpt_dir="llama-2-7b/",
    tokenizer_path="tokenizer.model",
    max_seq_len=128,
    max_batch_size=4,
)
print(generator.text_completion(["Your prompt here"], max_gen_len=100))

Key Differences

Marker is specifically designed for document conversion (PDFs and images to markdown, JSON, and HTML), while Llama is a general-purpose language model. Marker focuses on extracting structured text from documents, whereas Llama can be used for a much wider range of natural language processing tasks.

Marker is likely easier to set up and use for document-specific tasks, while Llama offers more flexibility but may require more expertise to implement effectively. The choice between the two depends on the specific use case and available resources.

Whisper: Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • More extensive language support (80+ languages)
  • Highly accurate transcription, especially for English
  • Robust to background noise and accents

Cons of Whisper

  • Larger model size, requiring more computational resources
  • Slower processing speed, especially for longer audio files
  • Less flexible for fine-tuning on specific domains or accents

Code Comparison

Whisper:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Marker:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")
text, _, images = text_from_rendered(rendered)

Key Differences

  • Whisper transcribes audio, while Marker converts documents (PDFs and images) to markdown
  • The two tools target different modalities and are complementary rather than direct competitors
  • Whisper has extensive research backing and is widely adopted in the industry
  • Marker focuses on document layout analysis and OCR accuracy rather than speech

Both projects turn hard-to-process inputs into usable text: Whisper excels at multilingual speech transcription, while Marker extracts structured text from documents. The choice between them depends on whether your source material is audio or documents.

whisper.cpp: Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Highly optimized C++ implementation, offering faster performance
  • Supports various platforms and architectures, including mobile devices
  • Provides real-time audio processing capabilities

Cons of whisper.cpp

  • Solves a different problem (speech transcription) than marker's document conversion
  • Requires more manual setup and configuration compared to marker's Python API
  • Less convenient for scripting and high-level customization than a Python library

Code Comparison

whisper.cpp:

// Initialize whisper context
struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");

// Process audio with default (greedy) parameters
struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, wparams, pcmf32.data(), pcmf32.size());

// Print result
const int n_segments = whisper_full_n_segments(ctx);
for (int i = 0; i < n_segments; ++i) {
    const char * text = whisper_full_get_segment_text(ctx, i);
    printf("%s", text);
}

marker:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

# Load models and convert a document
converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")

# Print result
text, _, images = text_from_rendered(rendered)
print(text)

The code comparison demonstrates the simplicity of marker's Python interface compared to the low-level C++ implementation of whisper.cpp. The two address different problems: whisper.cpp offers fine-grained control and strong performance for speech transcription, while marker provides a user-friendly, Pythonic pipeline for document conversion.

transformers: 🤗 State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX

Pros of transformers

  • Extensive library with support for numerous pre-trained models and architectures
  • Well-documented and actively maintained by a large community
  • Seamless integration with other Hugging Face tools and datasets

Cons of transformers

  • Steeper learning curve due to its comprehensive nature
  • Can be resource-intensive for smaller projects or limited hardware
  • May include unnecessary features for specific use cases

Code comparison

transformers:

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

marker:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")
text, _, images = text_from_rendered(rendered)
print(text)

Key differences

  • transformers focuses on NLP tasks and model implementations
  • marker specializes in document processing and OCR
  • transformers offers a wider range of pre-trained models and tasks
  • marker provides specific tools for PDF conversion and image processing

nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs.

Pros of nanoGPT

  • Simpler implementation, focusing on core GPT architecture
  • Excellent educational resource for understanding transformer models
  • Highly optimized for performance on single GPU setups

Cons of nanoGPT

  • Limited features compared to Marker's more comprehensive toolkit
  • Less focus on practical applications and fine-tuning for specific tasks
  • Requires more expertise to adapt for real-world use cases

Code Comparison

nanoGPT:

class Head(nn.Module):
    def __init__(self, head_size):
        super().__init__()
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        self.register_buffer('tril', torch.tril(torch.ones(block_size, block_size)))

Marker:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict

# marker exposes a high-level pipeline rather than raw model code
converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")

DeepSpeed: a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Highly optimized for large-scale distributed training of deep learning models
  • Supports a wide range of AI models and frameworks (PyTorch, TensorFlow, etc.)
  • Offers advanced features like ZeRO optimizer and 3D parallelism for efficient training

Cons of DeepSpeed

  • Steeper learning curve due to its complexity and advanced features
  • Primarily focused on training, while Marker is designed for inference
  • May be overkill for smaller projects or single-GPU setups

Code Comparison

DeepSpeed (model initialization):

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params
)

Marker (model loading):

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict

converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")

Summary

DeepSpeed is a powerful library for distributed training of large AI models, offering advanced optimization techniques. Marker, on the other hand, focuses on efficient inference for document conversion. DeepSpeed suits large-scale training projects and research, while Marker provides a simpler interface for a specific document-processing application.


README

Marker

Marker converts documents to markdown, JSON, and HTML quickly and accurately.

  • Converts PDF, image, PPTX, DOCX, XLSX, HTML, EPUB files in all languages
  • Formats tables, forms, equations, inline math, links, references, and code blocks
  • Extracts and saves images
  • Removes headers/footers/other artifacts
  • Extensible with your own formatting and logic
  • Optionally boost accuracy with LLMs
  • Works on GPU, CPU, or MPS

Performance

Marker benchmarks favorably compared to cloud services like Llamaparse and Mathpix, as well as other open source tools.

These results come from running single PDF pages serially. Marker is significantly faster when running in batch mode, with a projected throughput of 122 pages/second on an H100 (0.18 seconds per page across 22 processes).

See below for detailed speed and accuracy benchmarks, and instructions on how to run your own benchmarks.

Hybrid Mode

For the highest accuracy, pass the --use_llm flag to use an LLM alongside marker. This will do things like merge tables across pages, handle inline math, format tables properly, and extract values from forms. It can use any Gemini or Ollama model. By default, it uses gemini-2.0-flash. See below for details.
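
The same hybrid mode can be enabled from Python. A minimal sketch, using the ConfigParser pattern documented under Custom configuration below (the use_llm key mirrors the CLI flag; GOOGLE_API_KEY must be set for the default Gemini service):

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser

# "use_llm" mirrors the --use_llm CLI flag; the default service is Gemini,
# so GOOGLE_API_KEY must be set in the environment.
config_parser = ConfigParser({"use_llm": True})
converter = PdfConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
    llm_service=config_parser.get_llm_service(),
)
rendered = converter("FILEPATH")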

A table benchmark comparing marker, gemini flash alone, and marker with use_llm appears in the Benchmarks section below; the use_llm mode offers higher accuracy than marker or gemini alone.

Examples

| PDF | File type | Markdown | JSON |
|-----|-----------|----------|------|
| Think Python | Textbook | View | View |
| Switch Transformers | arXiv paper | View | View |
| Multi-column CNN | arXiv paper | View | View |

Commercial usage

I want marker to be as widely accessible as possible, while still funding my development/training costs. Research and personal usage is always okay, but there are some restrictions on commercial usage.

The weights for the models are licensed cc-by-nc-sa-4.0, but I will waive that for any organization under $5M USD in gross revenue in the most recent 12-month period AND under $5M in lifetime VC/angel funding raised. You also must not be competitive with the Datalab API. If you want to remove the GPL license requirements (dual-license) and/or use the weights commercially over the revenue limit, check out the options here.

Hosted API

There's a hosted API for marker available here:

  • Supports PDFs, word documents, and powerpoints
  • 1/4th the price of leading cloud-based competitors
  • High uptime (99.99%), quality, and speed (around 15 seconds to convert a 250 page PDF)

Community

Discord is where we discuss future development.

Installation

You'll need python 3.10+ and PyTorch. You may need to install the CPU version of torch first if you're not using a Mac or a GPU machine. See here for more details.

Install with:

pip install marker-pdf

If you want to use marker on documents other than PDFs, you will need to install additional dependencies with:

pip install marker-pdf[full]

Usage

First, some configuration:

  • Your torch device will be automatically detected, but you can override it, for example with TORCH_DEVICE=cuda.
  • Some PDFs, even digital ones, have bad text in them. Set the force_ocr flag to run your PDF through OCR, or the strip_existing_ocr flag to keep all digital text and strip out any existing OCR text.

Interactive App

I've included a streamlit app that lets you interactively try marker with some basic options. Run it with:

pip install streamlit
marker_gui

Convert a single file

marker_single /path/to/file.pdf

You can pass in PDFs or images.

Options:

  • --output_dir PATH: Directory where output files will be saved. Defaults to the value specified in settings.OUTPUT_DIR.
  • --output_format [markdown|json|html]: Specify the format for the output results.
  • --paginate_output: Paginates the output, using \n\n{PAGE_NUMBER} followed by - * 48, then \n\n
  • --use_llm: Uses an LLM to improve accuracy. You must set your Gemini API key using the GOOGLE_API_KEY env var.
  • --redo_inline_math: If you want the highest quality inline math conversion, use this along with --use_llm.
  • --disable_image_extraction: Don't extract images from the PDF. If you also specify --use_llm, then images will be replaced with a description.
  • --page_range TEXT: Specify which pages to process. Accepts comma-separated page numbers and ranges. Example: --page_range "0,5-10,20" will process pages 0, 5 through 10, and page 20.
  • --force_ocr: Force OCR processing on the entire document, even for pages that might contain extractable text.
  • --strip_existing_ocr: Remove all existing OCR text in the document and re-OCR with surya.
  • --debug: Enable debug mode for additional logging and diagnostic information.
  • --processors TEXT: Override the default processors by providing their full module paths, separated by commas. Example: --processors "module1.processor1,module2.processor2"
  • --config_json PATH: Path to a JSON configuration file containing additional settings.
  • --languages TEXT: Optionally specify which languages to use for OCR processing. Accepts a comma-separated list. Example: --languages "en,fr,de" for English, French, and German.
  • config --help: List all available builders, processors, and converters, and their associated configuration. These values can be used to build a JSON configuration file for additional tweaking of marker defaults.
  • --converter_cls: One of marker.converters.pdf.PdfConverter (default) or marker.converters.table.TableConverter. The PdfConverter will convert the whole PDF, the TableConverter will only extract and convert tables.
  • --llm_service: Which llm service to use if --use_llm is passed. This defaults to marker.services.gemini.GoogleGeminiService.
  • --help: See all of the flags that can be passed into marker. (It supports many more options than are listed above.)

The list of supported languages for surya OCR is here. If you don't need OCR, marker can work with any language.

Convert multiple files

marker /path/to/input/folder --workers 4
  • marker supports all the same options from marker_single above.
  • --workers is the number of conversion workers to run simultaneously. This is set to 5 by default, but you can increase it to increase throughput, at the cost of more CPU/GPU usage. Marker will use 5GB of VRAM per worker at the peak, and 3.5GB average.
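
As a rough, illustrative heuristic (not part of marker), you could size the worker count from available GPU memory using the ~5GB peak figure above:

import torch

# Illustrative only: suggest a --workers value from total GPU memory,
# assuming ~5GB peak VRAM per worker as quoted above.
if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    workers = max(1, int(total_gb // 5))
else:
    workers = 2  # arbitrary CPU fallback
print(f"marker /path/to/input/folder --workers {workers}")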

Convert multiple files on multiple GPUs

NUM_DEVICES=4 NUM_WORKERS=15 marker_chunk_convert ../pdf_in ../md_out
  • NUM_DEVICES is the number of GPUs to use. Should be 2 or greater.
  • NUM_WORKERS is the number of parallel processes to run on each GPU.

Use from python

See the PdfConverter class at marker/converters/pdf.py for additional arguments that can be passed.

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = PdfConverter(
    artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")
text, _, images = text_from_rendered(rendered)

rendered will be a pydantic BaseModel with different properties depending on the output type requested. With markdown output (the default), you'll have the properties markdown, metadata, and images. For json output, you'll have children, block_type, and metadata.
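
For example, with the default markdown output you can inspect those properties directly. A minimal sketch continuing the example above (the metadata fields are documented under Metadata below):

# Continuing the example above, with the default markdown output:
print(rendered.markdown[:500])          # the converted markdown text
print(rendered.metadata["page_stats"])  # per-page stats (see Metadata below)
print(list(rendered.images))            # extracted images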

Custom configuration

You can pass configuration using the ConfigParser. To see all available options, do marker_single --help.

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser

config = {
    "output_format": "json",
    "ADDITIONAL_KEY": "VALUE"
}
config_parser = ConfigParser(config)

converter = PdfConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
    processor_list=config_parser.get_processors(),
    renderer=config_parser.get_renderer(),
    llm_service=config_parser.get_llm_service()
)
rendered = converter("FILEPATH")

Extract blocks

Each document consists of one or more pages. Pages contain blocks, which can themselves contain other blocks. It's possible to programmatically manipulate these blocks.

Here's an example of extracting all forms from a document:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.schema import BlockTypes

converter = PdfConverter(
    artifact_dict=create_model_dict(),
)
document = converter.build_document("FILEPATH")
forms = document.contained_blocks((BlockTypes.Form,))

Look at the processors for more examples of extracting and manipulating blocks.
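
In the same way, you can pull several block types at once. A small sketch continuing the example above (the id and block_type attributes are assumptions based on the JSON schema described under Output Formats):

# Assumed attributes: blocks expose id and block_type, matching the JSON schema.
blocks = document.contained_blocks((BlockTypes.Table, BlockTypes.Figure))
for block in blocks:
    print(block.id, block.block_type)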

Other converters

You can also use other converters that define different conversion pipelines:

Extract tables

The TableConverter will only convert and extract tables:

from marker.converters.table import TableConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

converter = TableConverter(
    artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")
text, _, images = text_from_rendered(rendered)

This takes all the same configuration as the PdfConverter. You can specify the configuration force_layout_block=Table to avoid layout detection and instead assume every page is a table. Set output_format=json to also get cell bounding boxes.
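
For instance, a sketch passing those options through the ConfigParser (mirroring the CLI command below):

from marker.converters.table import TableConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser

# force_layout_block and output_format are the configuration keys described above.
config_parser = ConfigParser({
    "force_layout_block": "Table",
    "output_format": "json",
})
converter = TableConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
)
rendered = converter("FILEPATH")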

You can also run this via the CLI with

marker_single FILENAME --use_llm --force_layout_block Table --converter_cls marker.converters.table.TableConverter --output_format json

Output Formats

Markdown

Markdown output will include:

  • Image links (images will be saved in the same folder)
  • Formatted tables
  • Embedded LaTeX equations (fenced with $$)
  • Code fenced with triple backticks
  • Superscripts for footnotes
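
For illustration, a converted page might look roughly like this (hypothetical output; exact image names and spacing will vary):

# Results

| Method | Score |
|--------|-------|
| ours   | 0.91  |

$$E = mc^2$$

![](_page_0_Figure_1.jpeg)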

HTML

HTML output is similar to markdown output:

  • Images are included via img tags
  • Equations are fenced with <math> tags
  • Code is placed in pre tags

JSON

JSON output will be organized in a tree-like structure, with the leaf nodes being blocks. Examples of leaf nodes are a single list item, a paragraph of text, or an image.

The output will be a list, with each list item representing a page. Each page is considered a block in the internal marker schema. There are different types of blocks to represent different elements.

Pages have the keys:

  • id - unique id for the block.
  • block_type - the type of block. The possible block types can be seen in marker/schema/__init__.py. As of this writing, they are ["Line", "Span", "FigureGroup", "TableGroup", "ListGroup", "PictureGroup", "Page", "Caption", "Code", "Figure", "Footnote", "Form", "Equation", "Handwriting", "TextInlineMath", "ListItem", "PageFooter", "PageHeader", "Picture", "SectionHeader", "Table", "Text", "TableOfContents", "Document"]
  • html - the HTML for the page. Note that this will have recursive references to children. The content-ref tags must be replaced with the child content if you want the full html. You can see an example of this at marker/output.py:json_to_html. That function will take in a single block from the json output, and turn it into HTML.
  • polygon - the 4-corner polygon of the page, in (x1,y1), (x2,y2), (x3, y3), (x4, y4) format. (x1,y1) is the top left, and coordinates go clockwise.
  • children - the child blocks.

The child blocks have two additional keys:

  • section_hierarchy - indicates the sections that the block is part of. 1 indicates an h1 tag, 2 an h2, and so on.
  • images - base64 encoded images. The key will be the block id, and the data will be the encoded image.

Note that child blocks of pages can have their own children as well (a tree structure).

{
  "id": "/page/10/Page/366",
  "block_type": "Page",
  "html": "<content-ref src='/page/10/SectionHeader/0'></content-ref><content-ref src='/page/10/SectionHeader/1'></content-ref><content-ref src='/page/10/Text/2'></content-ref><content-ref src='/page/10/Text/3'></content-ref><content-ref src='/page/10/Figure/4'></content-ref><content-ref src='/page/10/SectionHeader/5'></content-ref><content-ref src='/page/10/SectionHeader/6'></content-ref><content-ref src='/page/10/TextInlineMath/7'></content-ref><content-ref src='/page/10/TextInlineMath/8'></content-ref><content-ref src='/page/10/Table/9'></content-ref><content-ref src='/page/10/SectionHeader/10'></content-ref><content-ref src='/page/10/Text/11'></content-ref>",
  "polygon": [[0.0, 0.0], [612.0, 0.0], [612.0, 792.0], [0.0, 792.0]],
  "children": [
    {
      "id": "/page/10/SectionHeader/0",
      "block_type": "SectionHeader",
      "html": "<h1>Supplementary Material for <i>Subspace Adversarial Training</i> </h1>",
      "polygon": [
        [217.845703125, 80.630859375], [374.73046875, 80.630859375],
        [374.73046875, 107.0], [217.845703125, 107.0]
      ],
      "children": null,
      "section_hierarchy": {
        "1": "/page/10/SectionHeader/1"
      },
      "images": {}
    },
    ...
  ]
}
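
A small sketch for traversing this tree, assuming the JSON output has been saved to output.json (it handles either a list of page blocks or a single root block):

import json
from collections import Counter

def count_block_types(block, counts):
    # Recursively tally block_type across the tree; children may be null.
    counts[block["block_type"]] += 1
    for child in block.get("children") or []:
        count_block_types(child, counts)

with open("output.json") as f:
    data = json.load(f)

pages = data if isinstance(data, list) else [data]
counts = Counter()
for page in pages:
    count_block_types(page, counts)
print(counts.most_common(10))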


Metadata

All output formats will return a metadata dictionary, with the following fields:

{
    "table_of_contents": [
      {
        "title": "Introduction",
        "heading_level": 1,
        "page_id": 0,
        "polygon": [...]
      }
    ], // computed PDF table of contents
    "page_stats": [
      {
        "page_id":  0, 
        "text_extraction_method": "pdftext",
        "block_counts": [("Span", 200), ...]
      },
      ...
    ]
}
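
As a small sketch, the page_stats entries can be summarized like this (assuming rendered.metadata from the Python examples above; block_counts entries are (block_type, count) pairs):

# Summarize per-page stats from the metadata dictionary described above.
for page in rendered.metadata["page_stats"]:
    spans = dict(page["block_counts"]).get("Span", 0)
    print(f"page {page['page_id']}: {page['text_extraction_method']}, {spans} spans")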

LLM Services

When running with the --use_llm flag, you have a choice of services you can use:

  • Gemini - this will use the Gemini developer API by default. You'll need to pass --gemini_api_key in your configuration.
  • Google Vertex - this will use Vertex AI, which can be more reliable. You'll need to pass --vertex_project_id. To use it, set --llm_service=marker.services.vertex.GoogleVertexService.
  • Ollama - this will use local models. You can configure --ollama_base_url and --ollama_model. To use it, set --llm_service=marker.services.ollama.OllamaService.
  • Claude - this will use the Anthropic API. You can configure --claude_api_key and --claude_model_name. To use it, set --llm_service=marker.services.claude.ClaudeService.

These services may have additional optional configuration as well - you can see it by viewing the classes.
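
For example, a hedged sketch of selecting the Ollama service from Python (option names taken from the flags above; the model name is only illustrative):

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.config.parser import ConfigParser

# Option keys mirror the CLI flags listed above; "llama3.1" is an example model.
config_parser = ConfigParser({
    "use_llm": True,
    "llm_service": "marker.services.ollama.OllamaService",
    "ollama_base_url": "http://localhost:11434",
    "ollama_model": "llama3.1",
})
converter = PdfConverter(
    config=config_parser.generate_config_dict(),
    artifact_dict=create_model_dict(),
    llm_service=config_parser.get_llm_service(),
)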

Internals

Marker is easy to extend. The core units of marker are:

  • Providers, at marker/providers. These provide information from a source file, like a PDF.
  • Builders, at marker/builders. These generate the initial document blocks and fill in text, using info from the providers.
  • Processors, at marker/processors. These process specific blocks, for example the table formatter is a processor.
  • Renderers, at marker/renderers. These use the blocks to render output.
  • Schema, at marker/schema. The classes for all the block types.
  • Converters, at marker/converters. They run the whole end to end pipeline.

To customize processing behavior, override the processors. To add new output formats, write a new renderer. For additional input formats, write a new provider.

Processors and renderers can be directly passed into the base PdfConverter, so you can specify your own custom processing easily.
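
As a loose sketch only (check the exact interface against the base class under marker/processors), a custom processor might look like:

from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.schema import BlockTypes

class FormCounter:
    # Hypothetical processor: assumes processors are callables that receive
    # the built document, as the built-ins under marker/processors do.
    def __call__(self, document):
        forms = document.contained_blocks((BlockTypes.Form,))
        print(f"found {len(forms)} form blocks")

converter = PdfConverter(
    artifact_dict=create_model_dict(),
    processor_list=[FormCounter()],  # assumption: instances are accepted here
)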

API server

There is a very simple API server you can run like this:

pip install -U uvicorn fastapi python-multipart
marker_server --port 8001

This will start a fastapi server that you can access at localhost:8001. You can go to localhost:8001/docs to see the endpoint options.

You can send requests like this:

import requests
import json

post_data = {
    'filepath': 'FILEPATH',
    # Add other params here
}

requests.post("http://localhost:8001/marker", data=json.dumps(post_data)).json()

Note that this is not a very robust API, and is only intended for small-scale use. If you want to use this server, but want a more robust conversion option, you can use the hosted Datalab API.

Troubleshooting

There are some settings that you may find useful if things aren't working the way you expect:

  • If you have issues with accuracy, try setting --use_llm to use an LLM to improve quality. You must set GOOGLE_API_KEY to a Gemini API key for this to work.
  • Make sure to set force_ocr if you see garbled text - this will re-OCR the document.
  • TORCH_DEVICE - set this to force marker to use a given torch device for inference.
  • If you're getting out of memory errors, decrease worker count. You can also try splitting up long PDFs into multiple files.

Debugging

Pass the debug option to activate debug mode. This will save images of each page with detected layout and text, as well as output a json file with additional bounding box information.

Benchmarks

Overall PDF Conversion

We created a benchmark set by extracting single PDF pages from common crawl. We scored based on a heuristic that aligns text with ground truth text segments, and an LLM as a judge scoring method.

| Method | Avg Time | Heuristic Score | LLM Score |
|--------|----------|-----------------|-----------|
| marker | 2.83837 | 95.6709 | 4.23916 |
| llamaparse | 23.348 | 84.2442 | 3.97619 |
| mathpix | 6.36223 | 86.4281 | 4.15626 |
| docling | 3.69949 | 86.7073 | 3.70429 |

Benchmarks were run on an H100 for marker and docling - llamaparse and mathpix used their cloud services. We can also look at it by document type:

| Document Type | Marker heuristic | Marker LLM | Llamaparse heuristic | Llamaparse LLM | Mathpix heuristic | Mathpix LLM | Docling heuristic | Docling LLM |
|---------------|------------------|------------|----------------------|----------------|-------------------|-------------|-------------------|-------------|
| Scientific paper | 96.6737 | 4.34899 | 87.1651 | 3.96421 | 91.2267 | 4.46861 | 92.135 | 3.72422 |
| Book page | 97.1846 | 4.16168 | 90.9532 | 4.07186 | 93.8886 | 4.35329 | 90.0556 | 3.64671 |
| Other | 95.1632 | 4.25076 | 81.1385 | 4.01835 | 79.6231 | 4.00306 | 83.8223 | 3.76147 |
| Form | 88.0147 | 3.84663 | 66.3081 | 3.68712 | 64.7512 | 3.33129 | 68.3857 | 3.40491 |
| Presentation | 95.1562 | 4.13669 | 81.2261 | 4 | 83.6737 | 3.95683 | 84.8405 | 3.86331 |
| Financial document | 95.3697 | 4.39106 | 82.5812 | 4.16111 | 81.3115 | 4.05556 | 86.3882 | 3.8 |
| Letter | 98.4021 | 4.5 | 93.4477 | 4.28125 | 96.0383 | 4.45312 | 92.0952 | 4.09375 |
| Engineering document | 93.9244 | 4.04412 | 77.4854 | 3.72059 | 80.3319 | 3.88235 | 79.6807 | 3.42647 |
| Legal document | 96.689 | 4.27759 | 86.9769 | 3.87584 | 91.601 | 4.20805 | 87.8383 | 3.65552 |
| Newspaper page | 98.8733 | 4.25806 | 84.7492 | 3.90323 | 96.9963 | 4.45161 | 92.6496 | 3.51613 |
| Magazine page | 98.2145 | 4.38776 | 87.2902 | 3.97959 | 93.5934 | 4.16327 | 93.0892 | 4.02041 |

Throughput

We benchmarked throughput using a single long PDF.

| Method | Time per page | Time per document | VRAM used |
|--------|---------------|-------------------|-----------|
| marker | 0.18 | 43.42 | 3.17GB |

The projected throughput is 122 pages per second on an H100: given the VRAM used, we can run 22 individual processes, and 22 ÷ 0.18 seconds per page ≈ 122 pages per second.

Table Conversion

Marker can extract tables from PDFs using marker.converters.table.TableConverter. The table extraction performance is measured by comparing the extracted HTML representation of tables against the original HTML representations using the test split of FinTabNet. The HTML representations are compared using a tree edit distance based metric to judge both structure and content. Marker detects and identifies the structure of all tables in a PDF page and achieves these scores:

| Method | Avg score | Total tables |
|--------|-----------|--------------|
| marker | 0.816 | 99 |
| marker w/use_llm | 0.907 | 99 |
| gemini | 0.829 | 99 |

The --use_llm flag can significantly improve table recognition performance, as you can see.

We filter out tables that we cannot align with the ground truth, since fintabnet and our layout model have slightly different detection methods (this results in some tables being split/merged).

Running your own benchmarks

You can benchmark the performance of marker on your machine. Install marker manually with:

git clone https://github.com/VikParuchuri/marker.git
cd marker
poetry install

Overall PDF Conversion

Download the benchmark data here and unzip. Then run the overall benchmark like this:

python benchmarks/overall.py --methods marker --scores heuristic,llm

Options:

  • --use_llm use an llm to improve the marker results.
  • --max_rows how many rows to process for the benchmark.
  • --methods can be llamaparse, mathpix, docling, marker. Comma separated.
  • --scores which scoring functions to use, can be llm, heuristic. Comma separated.

Table Conversion

The processed FinTabNet dataset is hosted here and is automatically downloaded. Run the benchmark with:

python benchmarks/table/table.py --max_rows 100

Options:

  • --use_llm uses an llm with marker to improve accuracy.
  • --use_gemini also benchmarks gemini 2.0 flash.

How it works

Marker is a pipeline of deep learning models:

  • Extract text, OCR if necessary (heuristics, surya)
  • Detect page layout and find reading order (surya)
  • Clean and format each block (heuristics, texify, surya)
  • Optionally use an LLM to improve quality
  • Combine blocks and postprocess complete text

It only uses models where necessary, which improves speed and accuracy.

Limitations

PDF is a tricky format, so marker will not always work perfectly. Here are some known limitations that are on the roadmap to address:

  • Very complex layouts, with nested tables and forms, may not work
  • Forms may not be rendered well

Note: Passing the --use_llm flag will mostly solve these issues.

Thanks

This work would not have been possible without amazing open source models and datasets, including (but not limited to):

  • Surya
  • Texify
  • Pypdfium2/pdfium
  • DocLayNet from IBM

Thank you to the authors of these models and datasets for making them available to the community!