Omniparse v2 #112

Open · wants to merge 4 commits into base: main

3 changes: 3 additions & 0 deletions .gitignore
@@ -161,3 +161,6 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
poetry.lock

# VS Code
.vscode/
1 change: 1 addition & 0 deletions .python-version
@@ -0,0 +1 @@
3.10
4 changes: 2 additions & 2 deletions Dockerfile
@@ -46,7 +46,7 @@ RUN CHROMEDRIVER_VERSION=$(curl -sS chromedriver.storage.googleapis.com/LATEST_R
# COPY --from=builder /usr/local/bin/chromedriver /usr/local/bin/chromedriver

# Install PyTorch and related packages
RUN pip3 install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
RUN pip3 install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# Set up working directory and copy application code
COPY . /app
@@ -55,7 +55,7 @@ WORKDIR /app
# Install Python package (assuming it has a setup.py)
RUN pip3 install --no-cache-dir -e .

RUN pip install transformers==4.41.2
RUN pip install transformers==4.50.3

# Set environment variables
ENV CHROME_BIN=/usr/bin/google-chrome \
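
As a quick sanity check that the rebuilt image actually picked up the cu124 wheels, something like the following can be run inside the built container (a suggested check, not part of this diff):

```python
# Suggested check (not in this PR): confirm the CUDA 12.4 PyTorch build inside the container.
import torch

print(torch.__version__)         # expect a "+cu124" suffix on the version string
print(torch.version.cuda)        # expect "12.4"
print(torch.cuda.is_available()) # True when the container sees a GPU (--gpus all)
```
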
23 changes: 9 additions & 14 deletions README.md
@@ -10,7 +10,7 @@

> [!IMPORTANT]
>
>OmniParse is a platform that ingests and parses any unstructured data into structured, actionable data optimized for GenAI (LLM) applications. Whether you are working with documents, tables, images, videos, audio files, or web pages, OmniParse prepares your data to be clean, structured, and ready for AI applications such as RAG, fine-tuning, and more
> OmniParse is a platform that ingests and parses any unstructured data into structured, actionable data optimized for GenAI (LLM) applications. Whether you are working with documents, tables, images, videos, audio files, or web pages, OmniParse prepares your data to be clean, structured, and ready for AI applications such as RAG, fine-tuning, and more

## Try it out
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adithya-s-k/omniparse/blob/main/examples/OmniParse_GoogleColab.ipynb)
@@ -50,13 +50,15 @@ conda activate omniparse-venv
Install Dependencies:

```bash
poetry install
uv sync
# or
pip install -e .
# or
pip install -r pyproject.toml
```

> **⚠️ Review comment (potential issue): fix the incorrect pip install command**
>
> The command `pip install -r pyproject.toml` is incorrect: the `-r` flag expects a `requirements.txt` file, not a `pyproject.toml` file. Apply this diff to fix the installation command:
>
> ```diff
> -pip install -r pyproject.toml
> +pip install .
> ```
>
> Alternatively, if you want to list all installation methods:
>
> ```diff
> -uv sync
> -# or
> -pip install -e .
> -# or
> -pip install -r pyproject.toml
> +uv sync              # using the uv package manager
> +# or
> +pip install -e .     # editable install with pip
> +# or
> +pip install .        # regular install with pip
> ```

If using `uv`, activate the venv created by `uv sync`.

### 🛳️ Docker

To use OmniParse with Docker, execute the following commands:
@@ -91,7 +93,7 @@ Run the Server:
python server.py --host 0.0.0.0 --port 8000 --documents --media --web
```

- `--documents`: Load in all the models that help you parse and ingest documents (Surya OCR series of models and Florence-2).
- `--documents`: Load in all the models that help you parse and ingest documents (Docling models and Florence-2).
- `--media`: Load in Whisper model to transcribe audio and video files.
- `--web`: Set up selenium crawler.
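
For context, once the server is up with `--documents`, a client can post a file to it. The sketch below uses Python's `requests`; the `/parse_document` endpoint path and port are taken from the project's existing usage docs, not from this diff:

```python
# Hedged client sketch: assumes the server above is running locally on port 8000
# and exposes the /parse_document endpoint described in the OmniParse docs.
import requests

with open("sample.pdf", "rb") as f:
    response = requests.post(
        "http://localhost:8000/parse_document",
        files={"file": ("sample.pdf", f, "application/pdf")},
    )

response.raise_for_status()
print(response.json())  # structured output ready for RAG / fine-tuning pipelines
```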

@@ -102,7 +104,7 @@ If you want to download the models before starting the server
python download.py --documents --media --web
```

- `--documents`: Load in all the models that help you parse and ingest documents (Surya OCR series of models and Florence-2).
- `--documents`: Load in all the models that help you parse and ingest documents (Docling models and Florence-2).
- `--media`: Load in Whisper model to transcribe audio and video files.
- `--web`: Set up selenium crawler.

@@ -283,7 +285,6 @@ Arguments:
🛠️ One magic API: just feed in your file prompt what you want, and we will take care of the rest
🔧 Dynamic model selection and support for external APIs
📄 Batch processing for handling multiple files at once
📦 New open-source model to replace Surya OCR and Marker

**Final goal**: replace all the different models currently being used with a single MultiModel Model to parse any type of data and get the data you need.

@@ -294,7 +295,7 @@ There is a need for a GPU with 8~10 GB minimum VRAM as we are using deep learnin

Document Parsing Limitations
\
- [Marker](https://github.com/VikParuchuri/marker) which is the underlying PDF parser will not convert 100% of equations to LaTeX because it has to detect and then convert them.
- [Docling](https://github.com/docling-project/docling) is the underlying PDF parser and its limitations will apply.
- It is good at parsing english but might struggle for languages such as Chinese
- Tables are not always formatted 100% correctly; text can be in the wrong column.
- Whitespace and indentations are not always respected.
@@ -304,19 +305,13 @@ Document Parsing Limitations

## License
OmniParse is licensed under the GPL-3.0 license. See `LICENSE` for more information.
The project uses Marker under the hood, which has a commercial license that needs to be followed. Here are the details:

### Commercial Usage
Marker and Surya OCR Models are designed to be as widely accessible as possible while still funding development and training costs. Research and personal usage are always allowed, but there are some restrictions on commercial usage.
The weights for the models are licensed under cc-by-nc-sa-4.0. However, this restriction is waived for any organization with less than $5M USD in gross revenue in the most recent 12-month period AND less than $5M in lifetime VC/angel funding raised. To remove the GPL license requirements (dual-license) and/or use the weights commercially over the revenue limit, check out the options provided.
Please refer to [Marker](https://github.com/VikParuchuri/marker) for more Information about the License of the Model weights

## Acknowledgements

This project builds upon the remarkable [Marker](https://github.com/VikParuchuri/marker) project created by [Vik Paruchuri](https://twitter.com/VikParuchuri). We express our gratitude for the inspiration and foundation provided by this project. Special thanks to [Surya-OCR](https://github.com/VikParuchuri/surya) and [Texify](https://github.com/VikParuchuri/texify) for the OCR models extensively used in this project, and to [Crawl4AI](https://github.com/unclecode/crawl4ai) for their contributions.
This project builds upon the remarkable [Docling](https://github.com/docling-project/docling) project. We express our gratitude for the inspiration and foundation provided by this project. Special thanks to [Crawl4AI](https://github.com/unclecode/crawl4ai) for their contributions.

Models being used:
- Surya OCR, Detect, Layout, Order, and Texify
- Docling IBM models
- Florence-2 base
- Whisper Small
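
To illustrate what the Whisper Small entry above amounts to at the library level, a standalone transcription call looks roughly like this (a sketch of the underlying `openai-whisper` API, not OmniParse's exact code path):

```python
# Sketch of the transcription step behind --media, using openai-whisper directly.
import whisper

model = whisper.load_model("small")       # the Whisper Small checkpoint listed above
result = model.transcribe("meeting.mp3")  # any audio or video file path
print(result["text"])                     # plain-text transcript
```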

11 changes: 5 additions & 6 deletions docs/README.md
@@ -105,7 +105,7 @@ Run the Server:
python server.py --host 0.0.0.0 --port 8000 --documents --media --web
```

* `--documents`: Load in all the models that help you parse and ingest documents (Surya OCR series of models and Florence-2).
* `--documents`: Load in all the models that help you parse and ingest documents (Docling models and Florence-2).
* `--media`: Load in Whisper model to transcribe audio and video files.
* `--web`: Set up selenium crawler.

@@ -287,7 +287,6 @@ Arguments:
🛠️ One magic API: just feed in your file prompt what you want, and we will take care of the rest\
🔧 Dynamic model selection and support for external APIs\
📄 Batch processing for handling multiple files at once\
📦 New open-source model to replace Surya OCR and Marker

**Final goal**: replace all the different models currently being used with a single MultiModel Model to parse any type of data and get the data you need.

@@ -297,13 +296,13 @@ OmniParse is licensed under the GPL-3.0 license. See `LICENSE` for more informat

### Acknowledgements

This project builds upon the remarkable [Marker](https://github.com/VikParuchuri/marker) project created by [Vik Paruchuri](https://twitter.com/VikParuchuri). We express our gratitude for the inspiration and foundation provided by this project. Special thanks to [Surya-OCR](https://github.com/VikParuchuri/surya) and [Texify](https://github.com/VikParuchuri/texify) for the OCR models extensively used in this project, and to [Crawl4AI](https://github.com/unclecode/crawl4ai) for their contributions.
This project builds upon the remarkable [Docling](https://github.com/docling-project/docling) project. We express our gratitude for the inspiration and foundation provided by this project. Special thanks to [Crawl4AI](https://github.com/unclecode/crawl4ai) for their contributions.

Models being used:

* Surya OCR, Detect, Layout, Order, and Texify
* Florence-2 base
* Whisper Small
- Docling IBM models
- Florence-2 base
- Whisper Small

Thank you to the authors for their contributions to these models.

14 changes: 1 addition & 13 deletions docs/deployment.md
@@ -28,16 +28,4 @@ docker run --gpus all -p 8000:8000 omniparse
# else
docker run -p 8000:8000 omniparse

```

## ✈️ Skypilot(coming soon)

SkyPilot is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution. To deploy Marker API using Skypilot on any cloud provider, execute the following command:

```bash
pip install skypilot-nightly[all]

# setup skypilot with the cloud provider our your

sky launch skypilot.yaml
```
```
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -32,6 +32,6 @@ Run the Server:
python server.py --host 0.0.0.0 --port 8000 --documents --media --web
```

* `--documents`: Load in all the models that help you parse and ingest documents (Surya OCR series of models and Florence-2).
* `--documents`: Load in all the models that help you parse and ingest documents (Docling models and Florence-2).
* `--media`: Load in Whisper model to transcribe audio and video files.
* `--web`: Set up selenium crawler.
22 changes: 15 additions & 7 deletions examples/OmniParse_GoogleColab.ipynb
@@ -32,7 +32,6 @@
"🛠️ One magic API: just feed in your file prompt what you want, and we will take care of the rest \n",
"🔧 Dynamic model selection and support for external APIs \n",
"📄 Batch processing for handling multiple files at once \n",
"🦙 New open-source model to replace Surya OCR and Marker \n",
"\n",
"**Final goal** - replace all the different models currently being used with a single MultiModel Model to parse any type of data and get the data you need\n",
"\n",
@@ -44,6 +43,15 @@
"| [![Original PDF](https://github.com/adithya-s-k/marker-api/raw/master/data/images/original\\_pdf.png)](https://github.com/adithya-s-k/marker-api/blob/master/data/images/original\\_pdf.png) | [![OmniParse-API](https://github.com/adithya-s-k/marker-api/raw/master/data/images/marker\\_api.png)](https://github.com/adithya-s-k/marker-api/blob/master/data/images/marker\\_api.png) | [![PyPDF](https://github.com/adithya-s-k/marker-api/raw/master/data/images/pypdf.png)](https://github.com/adithya-s-k/marker-api/blob/master/data/images/pypdf.png) |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install uv"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -73,7 +81,7 @@
"## Install dependencies\n",
"## if you get a restart session warning you can ignore it\n",
"\n",
"%pip install -e ."
"!uv -q sync"
]
},
{
@@ -82,7 +90,9 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install transformers==4.41.2"
"## Matplotlib backend error might occur, so we install matplotlib-inline\n",
"\n",
"!uv add matplotlib-inline"
]
},
{
@@ -169,7 +179,6 @@
"import threading\n",
"import time\n",
"import socket\n",
"import urllib.request\n",
"\n",
"def iframe_thread(port):\n",
" while True:\n",
@@ -186,12 +195,11 @@
" l = line.decode()\n",
" if \"trycloudflare.com \" in l:\n",
" print(\"This is the URL to access OmniPrase:\", l[l.find(\"http\"):], end='')\n",
" #print(l, end='')\n",
"\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(8000,)).start()\n",
"\n",
"!python server.py --host 127.0.0.1 --port 8000 --documents --media --web"
"!uv run server.py --host 127.0.0.1 --port 8000 --documents --media --web"
]
},
{
@@ -239,7 +247,7 @@
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(8000,)).start()\n",
"\n",
"!python server.py --host 127.0.0.1 --port 8000 --documents --media --web"
"!uv run server.py --host 127.0.0.1 --port 8000 --documents --media --web"
]
}
],
43 changes: 28 additions & 15 deletions omniparse/__init__.py
@@ -1,20 +1,17 @@
"""
Title: OmniPrase
Author: Adithya S Kolavi
Date: 2024-07-02
Date: 2025-05-23

This code includes portions of code from the marker repository by VikParuchuri.
Original repository: https://github.com/VikParuchuri/marker
This code includes portions of code from the Docling repository.
Original repository: https://github.com/docling-project/docling

Original Author: VikParuchuri
Original Date: 2024-01-15

License: GNU General Public License (GPL) Version 3
URL: https://github.com/VikParuchuri/marker/blob/master/LICENSE
License: MIT
URL: https://github.com/docling-project/docling/blob/main/LICENSE

Description:
This section of the code was adapted from the marker repository to load all the OCR, layout and reading order detection models.
All credits for the original implementation go to VikParuchuri.
This section of the code was adapted from the Docling repository to enhance text pdf/word/ppt parsing.
All credits for the original implementation go to Docling.
"""

import torch
@@ -24,12 +21,17 @@
import whisper
from omniparse.utils import print_omniparse_text_art
from omniparse.web.web_crawler import WebCrawler
from marker.models import load_all_models
# from omniparse.documents.models import load_all_models
from docling.utils.model_downloader import download_models
from docling.document_converter import (
DocumentConverter,
PdfFormatOption,
)
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions


class SharedState(BaseModel):
model_list: Any = None
docling_converter: Any = None
vision_model: Any = None
vision_processor: Any = None
whisper_model: Any = None
@@ -38,18 +40,29 @@ class SharedState(BaseModel):

shared_state = SharedState()

IMAGE_RESOLUTION_SCALE = 2.0
pipeline_options = PdfPipelineOptions()
pipeline_options.images_scale = IMAGE_RESOLUTION_SCALE
pipeline_options.generate_page_images = True
pipeline_options.generate_picture_images = True


def load_omnimodel(load_documents: bool, load_media: bool, load_web: bool):
global shared_state
print_omniparse_text_art()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if load_documents:
print("[LOG] ✅ Loading OCR Model")
shared_state.model_list = load_all_models()
download_models()
shared_state.docling_converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
print("[LOG] ✅ Loading Vision Model")
# if device == "cuda":
shared_state.vision_model = AutoModelForCausalLM.from_pretrained(
"microsoft/Florence-2-base", trust_remote_code=True
"microsoft/Florence-2-base", torch_dtype=torch.float32, trust_remote_code=True
).to(device)
shared_state.vision_processor = AutoProcessor.from_pretrained(
"microsoft/Florence-2-base", trust_remote_code=True
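
To make the new wiring concrete, here is a minimal downstream sketch of how the `docling_converter` stored on `shared_state` could be used; the file path is hypothetical and the conversion calls follow Docling's documented API rather than code in this PR:

```python
# Minimal sketch: convert a PDF with the Docling converter configured above.
from omniparse import load_omnimodel, shared_state

load_omnimodel(load_documents=True, load_media=False, load_web=False)

result = shared_state.docling_converter.convert("example.pdf")  # hypothetical input path
markdown = result.document.export_to_markdown()                 # structured text output
print(markdown[:500])
```
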
2 changes: 1 addition & 1 deletion omniparse/demo.py
@@ -133,7 +133,7 @@
python server.py --host 0.0.0.0 --port 8000 --documents --media --web
```
 
- `--documents`: Load in all the models that help you parse and ingest documents (Surya OCR series of models and Florence-2).
- `--documents`: Load in all the models that help you parse and ingest documents (Docling models and Florence-2).
- `--media`: Load in Whisper model to transcribe audio and video files.
- `--web`: Set up selenium crawler.
