
Conversation


@mapo80 mapo80 commented Aug 13, 2025

Add optional page/line/word bounding boxes for PDF & image inputs (--emit-bbox) + evaluation notes

Summary

This PR adds an opt-in capability to emit page-anchored bounding boxes for text extracted from PDFs and images. When enabled, MarkItDown produces a sidecar JSON file that contains:

  • page geometry
  • line-level boxes and their span offsets into the generated Markdown
  • word-level boxes with optional OCR confidence values

The feature is off by default and does not change any existing outputs. It is designed for downstream tasks that need spatial grounding—e.g., overlaying selections, table cell alignment checks, redaction previews, or training data generation for doc-layout models.

The fork also documents performance measurements on the Docling test set and a delta analysis vs Docling’s outputs, to help frame accuracy/robustness and cost. ([GitHub]1)


CLI / API

New CLI flags

  • --emit-bbox
    When present, MarkItDown writes a sidecar JSON file next to the Markdown output (e.g., for sample.pdf, sample.bbox.json). Applies to PDF and image inputs. For image-only or scanned PDFs (no text layer), bounding boxes are obtained via OCR. ([GitHub]1)

  • --ocr-lang <lang-codes> (optional)
    Controls OCR language(s) for cases where OCR is used. Mirrors MARKITDOWN_OCR_LANG (see Env Vars). ([GitHub]1)

Environment variables

  • MARKITDOWN_OCR_LANG – default OCR language(s) when --emit-bbox triggers OCR.
  • TESSDATA_PREFIX – path to custom tessdata if needed.
    (Both only matter when OCR is used, i.e., scanned PDFs / images.) ([GitHub]1)

Upstream currently does not expose per-line / per-token coordinates, which is a common user ask. This PR proposes a minimal, backwards-compatible way to add that capability. ([GitHub]2)


Output format (sidecar JSON)

When --emit-bbox is set, MarkItDown writes <basename>.bbox.json with the following structure:

{
  "version": "1.0",
  "source": "sample.pdf",
  "pages": [
    { "page": 1, "width": 612, "height": 792 }
  ],
  "lines": [
    {
      "page": 1,
      "text": "Hello",
      "bbox_norm": [0, 0, 0, 0],
      "bbox_abs":  [0, 0, 0, 0],
      "confidence": null,
      "md_span": { "start": null, "end": null }
    }
  ],
  "words": [
    {
      "page": 1,
      "text": "Hello",
      "bbox_norm": [0, 0, 0, 0],
      "bbox_abs":  [0, 0, 0, 0],
      "confidence": null,
      "line_id": 0
    }
  ]
}

Semantics

  • bbox_abs is in pixel units of the page/image, top-left origin.
  • bbox_norm is normalized [x0, y0, x1, y1] in [0,1] relative to page width/height.
  • md_span links a line back into result.text_content via character offsets (start, end), enabling exact highlighting in the Markdown string.
  • line_id associates word items with their parent line.
  • confidence is null when unavailable (e.g., embedded text), or a numeric value when the OCR engine returns one. ([GitHub]1)
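To make the md_span linkage concrete, here is a minimal consumer sketch (the function name and sample data are hypothetical, not part of this PR) that recovers each line's text from the generated Markdown via the sidecar's character offsets:

```python
def line_snippets(markdown: str, bbox_doc: dict) -> list:
    """Return the Markdown substring each line's md_span points at."""
    out = []
    for line in bbox_doc["lines"]:
        span = line["md_span"]
        # Offsets may be null when a line has no mapping into the Markdown.
        if span["start"] is None or span["end"] is None:
            continue
        out.append(markdown[span["start"]:span["end"]])
    return out

# Hypothetical sidecar content for a two-word document.
doc = {
    "lines": [
        {"page": 1, "text": "Hello", "md_span": {"start": 0, "end": 5}},
        {"page": 1, "text": "world", "md_span": {"start": 6, "end": 11}},
    ]
}
print(line_snippets("Hello world", doc))  # ['Hello', 'world']
```

If the invariant in the "Testing & metrics" section holds, each recovered snippet equals the line's own text field, which is what makes exact highlighting possible.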

How it works (high level)

  • PDF with text layer: Extract the text and geometry from the PDF’s text layer, build line- and word-level boxes, and record their corresponding spans in the produced Markdown.
  • Scanned PDFs / Images: When no text layer is present, run OCR (Tesseract) for text + geometry; languages can be set via --ocr-lang / MARKITDOWN_OCR_LANG. TESSDATA_PREFIX is respected for custom language packs. ([GitHub]1)

This design keeps the default UX unchanged and only introduces extra work when explicitly requested.
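As an illustration of the OCR-path geometry (a sketch assuming Tesseract-style per-row fields left/top/width/height/conf in pixels with a top-left origin; the helper name is hypothetical, not this PR's code), one OCR data row maps onto the sidecar's box fields like this:

```python
def tesseract_row_to_boxes(row: dict, page_w: int, page_h: int):
    """Turn one OCR data row (pixel units, top-left origin) into the
    sidecar's bbox_abs / bbox_norm pair plus an optional confidence."""
    x1, y1 = row["left"], row["top"]
    x2, y2 = x1 + row["width"], y1 + row["height"]
    # Tesseract reports -1 for rows that carry no confidence value.
    conf = float(row["conf"]) if row["conf"] != -1 else None
    bbox_abs = [x1, y1, x2, y2]
    bbox_norm = [x1 / page_w, y1 / page_h, x2 / page_w, y2 / page_h]
    return bbox_abs, bbox_norm, conf

abs_box, norm_box, conf = tesseract_row_to_boxes(
    {"left": 61, "top": 79, "width": 490, "height": 158, "conf": 96},
    page_w=612, page_h=792,
)
print(abs_box)  # [61, 79, 551, 237]
```

Note that both bbox_abs and bbox_norm use the [x1, y1, x2, y2] convention, so the normalized box divides the second corner by the page size rather than dividing the width/height.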


Performance notes (Docling test data)

To help reviewers understand the runtime impact, the fork includes a small timing study on the Docling test dataset (12 documents across PDF/PNG/TIFF). Highlights:

  • Average Markdown conversion time: 3.18 s

  • Average bbox generation time (with --emit-bbox): 5.10 s

  • By type (avg MD / avg BBox):

    • PDF: 3.29 s / 5.14 s
    • PNG: 2.51 s / 5.56 s
    • TIFF: 2.57 s / 4.19 s

Full per-file table is included in the README section of the fork. ([GitHub]1)


Accuracy / quality observations

On the same Docling test set, a simple delta analysis compares MarkItDown outputs to Docling’s ground truth:

  • Markdown content delta: ~45% average difference
  • Bounding-box coordinate deviation: ~18% average

Bigger discrepancies were seen on right-to-left pages and scanned forms; these are called out for future iteration. (Context and numbers are documented in the fork’s README.) ([GitHub]1)

The fork also includes comparison notes against FUNSD (see funsd_bbox_comparison.md) to illustrate layout alignment behavior on form-like documents. ([GitHub]1)


Why this belongs in MarkItDown

  • Spatial grounding is increasingly required for RAG, redaction, annotation tools, and evaluation pipelines.
  • The feature is opt-in and unobtrusive; it does not alter existing Markdown outputs or require new mandatory dependencies.
  • It fills a well-documented gap—upstream does not currently ship span/box geometry. ([GitHub]2)

Backwards compatibility

  • Default behavior unchanged.
  • Sidecar JSON is emitted only when --emit-bbox is provided.
  • OCR is invoked only when needed (images or PDFs lacking a text layer). ([GitHub]1)

Documentation added

The fork’s README adds a “Bounding Boxes” section with:

  • the CLI usage (--emit-bbox)
  • schema explanation (absolute vs normalized boxes, top-left origin, md_span)
  • OCR language/env configuration
  • links to the Docling comparison docs and timing table

These doc bits can be ported verbatim or adapted into the upstream README. ([GitHub]1)


Testing & metrics (what reviewers can look at)

  • Schema sanity checks (implicit in the docs):

    • coordinates clamped to page, x0 ≤ x1, y0 ≤ y1;
    • normalized boxes in [0,1];
    • md_span ranges map back to the same text content as the corresponding line.
  • Benchmarks included in README: per-file timings + averages; delta analysis vs Docling ground truth (content diff and coord deviation). ([GitHub]1)
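The coordinate invariants above can be expressed as a small validator; a sketch (not the PR's actual test code) that could gate sidecar files in CI:

```python
def validate_bbox_doc(doc: dict) -> None:
    """Assert the sidecar schema invariants on one document."""
    pages = {p["page"]: (p["width"], p["height"]) for p in doc["pages"]}
    for item in doc.get("lines", []) + doc.get("words", []):
        w, h = pages[item["page"]]
        x1, y1, x2, y2 = item["bbox_abs"]
        # Coordinates clamped to the page, with x1 <= x2 and y1 <= y2.
        assert 0 <= x1 <= x2 <= w and 0 <= y1 <= y2 <= h
        nx1, ny1, nx2, ny2 = item["bbox_norm"]
        # Normalized boxes stay inside [0, 1] and keep the same ordering.
        assert all(0.0 <= v <= 1.0 for v in (nx1, ny1, nx2, ny2))
        assert nx1 <= nx2 and ny1 <= ny2

sample = {
    "pages": [{"page": 1, "width": 612, "height": 792}],
    "lines": [{"page": 1, "bbox_abs": [61, 79, 551, 237],
               "bbox_norm": [0.0997, 0.0997, 0.9003, 0.2992]}],
    "words": [],
}
validate_bbox_doc(sample)  # passes silently
```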

References
• Fork README “Bounding Boxes”, Docling comparisons, and timing table. ([GitHub]1)
• Docling project (test dataset used for comparisons). ([GitHub]3)
• Upstream issue commentary noting lack of coordinate exposure. ([GitHub]2)


Limitations & next steps (proposed follow-ups)

  • Improve robustness on RTL and scanned forms (noted as the main sources of deviation). ([GitHub]1)
  • Consider emitting token-to-Markdown mapping for words (currently md_span is provided at line-level), enabling finer-grained highlight sync.
  • Optional polygon support for non-axis-aligned text (e.g., rotated scans).
  • Expand test corpus beyond Docling to include more invoices/forms (e.g., FUNSD) with IoU-based metrics in CI.
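For the IoU-based CI metrics suggested above, the standard computation on axis-aligned boxes (assuming the sidecar's [x1, y1, x2, y2] convention, as in bbox_abs) is:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero so disjoint boxes contribute no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(round(iou([0, 0, 10, 10], [5, 0, 15, 10]), 3))  # 0.333
```

A CI gate could then assert, say, mean IoU against ground-truth boxes above a chosen threshold per document type.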

Checklist

  • Feature is opt-in (--emit-bbox), default behavior unchanged
  • Sidecar schema documented (abs/normalized coords, md spans, confidence)
  • OCR language and tessdata path can be configured via flag/env vars
  • Performance and accuracy notes included (Docling test set)
  • No breaking API changes

Screenshots / Examples

Run:

markitdown sample.pdf --emit-bbox
# Produces: sample.md  and  sample.bbox.json

Excerpt from the emitted JSON is shown above; see the fork README for full details and per-file timings. ([GitHub]1)


Thanks for reviewing! Happy to split this into a docs-only PR + feature PR if you prefer, or to iterate on schema details (e.g., add per-word md_span, include page DPI, or expose rotation angles).

Copilot AI review requested due to automatic review settings · August 13, 2025 07:40

mapo80 commented Aug 13, 2025

@microsoft-github-policy-service agree


mapo80 commented Aug 13, 2025

@mapo80 please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"

Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”), and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your contributions to Microsoft open source projects. This Agreement is effective as of the latest signature date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

@microsoft-github-policy-service agree


Copilot AI left a comment


Pull Request Overview

This PR adds an optional bounding box emission feature to MarkItDown for PDF and image inputs, along with comprehensive performance and accuracy evaluations against the Docling test dataset. When enabled via the --emit-bbox flag, the feature produces a sidecar JSON file containing page geometries, line/word-level bounding boxes, and OCR confidence values without affecting existing Markdown outputs.

Key changes:

  • New opt-in --emit-bbox CLI flag and corresponding API parameter for spatial grounding capabilities
  • Implementation of OCR fallback for scanned PDFs and images using Tesseract when no text layer exists
  • Comprehensive evaluation documentation comparing outputs against Docling ground truth data

Reviewed Changes

Copilot reviewed 20 out of 20 changed files in this pull request and generated 9 comments.

Changed files (per-file summary):

  • packages/markitdown/src/markitdown/bbox.py: New data structures for bounding box information (BBoxDoc, BBoxPage, BBoxLine, BBoxWord)
  • packages/markitdown/src/markitdown/_base_converter.py: Added bbox parameter to DocumentConverterResult
  • packages/markitdown/src/markitdown/_markitdown.py: Updated all convert methods to support emit_bbox and ocr_lang parameters
  • packages/markitdown/src/markitdown/__main__.py: Added CLI flags --emit-bbox and --ocr-lang with sidecar file output logic
  • packages/markitdown/src/markitdown/converters/_pdf_converter.py: Implemented bbox extraction for PDFs with OCR fallback for scanned documents
  • packages/markitdown/src/markitdown/converters/_image_converter.py: Added OCR-based bbox extraction for image inputs
  • packages/markitdown/tests/bbox/: Comprehensive test suite covering schema validation, PDF/image processing, and CLI integration
  • packages/markitdown/pyproject.toml: Added bbox optional dependency group with pdfplumber, pytesseract, Pillow, jsonschema


x1 / width,
y1 / height,
w / width,
h / height,

Copilot AI Aug 13, 2025


The bbox_norm calculation is incorrect. It should be normalized coordinates [x1/width, y1/height, x2/width, y2/height] but is currently calculating [x1/width, y1/height, w/width, h/height]. The third and fourth values should be x2/width and y2/height respectively.

Suggested change
- h / height,
+ x2 / width,
+ y2 / height,


x1, y1, x2, y2 = left, top, left + w, top + h
conf = float(row.conf) if row.conf != -1 else None
bbox_abs = [x1, y1, x2, y2]
bbox_norm = [x1 / width, y1 / height, w / width, h / height]

Copilot AI Aug 13, 2025


The bbox_norm calculation is inconsistent with the absolute coordinates. The bbox_abs uses [x1, y1, x2, y2] format but bbox_norm uses [x1, y1, width, height] format. This should be [x1/width, y1/height, x2/width, y2/height] to match the absolute coordinate format.

Suggested change
- bbox_norm = [x1 / width, y1 / height, w / width, h / height]
+ bbox_norm = [x1 / width, y1 / height, x2 / width, y2 / height]


x1 / width,
y1 / height,
(x2 - x1) / width,
(y2 - y1) / height,

Copilot AI Aug 13, 2025


Same bbox_norm calculation error as line 141. The normalized coordinates should be [x1/width, y1/height, x2/width, y2/height] but are currently [x1/width, y1/height, (x2-x1)/width, (y2-y1)/height].

Suggested change
- (y2 - y1) / height,
+ x2 / width,
+ y2 / height,


x0 / width,
top / height,
(x1 - x0) / width,
(bottom - top) / height,

Copilot AI Aug 13, 2025


Another instance of the same bbox_norm calculation error. Should be [x0/width, top/height, x1/width, bottom/height] instead of [x0/width, top/height, (x1-x0)/width, (bottom-top)/height].

Suggested change
- (bottom - top) / height,
+ x1 / width,
+ bottom / height,


x1 / width,
y1 / height,
(x2 - x1) / width,
(y2 - y1) / height,

Copilot AI Aug 13, 2025


Same bbox_norm calculation error. The format is inconsistent between absolute coordinates (x1, y1, x2, y2) and normalized coordinates (x1, y1, width, height).

Suggested change
- (y2 - y1) / height,
+ x2 / width,
+ y2 / height,


x1 / width,
y1 / height,
(x2 - x1) / width,
(y2 - y1) / height,

Copilot AI Aug 13, 2025


Same bbox_norm calculation inconsistency as previous instances. Should use [x1/width, y1/height, x2/width, y2/height] format to match the absolute coordinate format.

Suggested change
- (y2 - y1) / height,
+ x2 / width,
+ y2 / height,


@@ -0,0 +1,34 @@
import io
from pathlib import Path
import io

Copilot AI Aug 13, 2025


Duplicate import statement. The 'io' module is imported twice (lines 1 and 4).

Suggested change
- import io


import io
import json
from pathlib import Path
import io

Copilot AI Aug 13, 2025


Duplicate import statement. The 'io' module is imported twice (lines 1 and 4).

Suggested change
- import io


emit_bbox=emit_bbox,
ocr_lang=ocr_lang,
**kwargs,
)

def _convert(
self, *, file_stream: BinaryIO, stream_info_guesses: List[StreamInfo], **kwargs

Copilot AI Aug 13, 2025


The _convert method signature needs to be updated to accept emit_bbox and ocr_lang parameters to properly pass them to converters, but the current implementation only passes them via **kwargs which could lead to inconsistent behavior.

Suggested change
- self, *, file_stream: BinaryIO, stream_info_guesses: List[StreamInfo], **kwargs
+ self,
+ *,
+ file_stream: BinaryIO,
+ stream_info_guesses: List[StreamInfo],
+ emit_bbox: bool = False,
+ ocr_lang: Optional[str] = None,
+ **kwargs

