3 changes: 3 additions & 0 deletions README.md
@@ -146,6 +146,9 @@ ansible-playbook -i inventory.ini swarm.yml --extra-vars "VAULT_ADDR='http://127
```
ansible-playbook swarm.yml -i inventory.ini
```

### Docker GPU Support
```to do```
## Contributing
Contributions to AI Toolchain are welcome! To contribute, please follow these guidelines:

Binary file added src/chunking/MPNet/local/testing/NCF2023.pdf
Binary file not shown.
720 changes: 720 additions & 0 deletions src/chunking/MPNet/local/testing/customers.xml

Large diffs are not rendered by default.

27 changes: 27 additions & 0 deletions src/chunking/MPNet/local/testing/img.py
@@ -0,0 +1,27 @@
import os

import fitz  # PyMuPDF

def extract_images_from_pdf(pdf_path, output_folder):
    # Make sure the destination exists before writing into it
    os.makedirs(output_folder, exist_ok=True)

    pdf_document = fitz.open(pdf_path)

    for page_num in range(pdf_document.page_count):
        page = pdf_document[page_num]
        images = page.get_images(full=True)

        for img_index, img in enumerate(images):
            xref = img[0]  # cross-reference number of the embedded image
            base_image = pdf_document.extract_image(xref)
            image = base_image["image"]  # raw image bytes

            image_file_extension = base_image["ext"]
            image_filename = f"{output_folder}/image_page_{page_num + 1}_img_{img_index + 1}.{image_file_extension}"

            with open(image_filename, "wb") as image_file:
                image_file.write(image)

    pdf_document.close()

if __name__ == "__main__":
    pdf_path = "ICT_India_Working_Paper_50.pdf"  # Change this to your PDF file's path
    output_folder = "output_images"  # Change this to the desired output folder

    extract_images_from_pdf(pdf_path, output_folder)
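The filename scheme used in img.py can be factored into a small, testable helper. A minimal stdlib sketch (the function `build_image_filename` is hypothetical, not part of this PR) reproduces the same 1-based page/image numbering and creates the output folder up front so the later `open(..., "wb")` cannot fail on a missing directory:

```python
from pathlib import Path

def build_image_filename(output_folder, page_num, img_index, ext):
    # Hypothetical helper mirroring img.py's naming scheme:
    # 0-based loop counters become 1-based names in the filename.
    folder = Path(output_folder)
    folder.mkdir(parents=True, exist_ok=True)  # avoid FileNotFoundError on write
    return folder / f"image_page_{page_num + 1}_img_{img_index + 1}.{ext}"

# First image on the first page, saved as PNG
print(build_image_filename("output_images", 0, 0, "png"))
```

Keeping the path logic separate from the PyMuPDF calls makes it easy to unit-test without opening a real PDF.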
1 change: 1 addition & 0 deletions src/chunking/MPNet/local/testing/output.txt

Large diffs are not rendered by default.

30 changes: 30 additions & 0 deletions src/chunking/MPNet/local/testing/parse.py
@@ -0,0 +1,30 @@
import csv
import xml.etree.ElementTree as ET

def extract_text_from_pages(xml_file, output_csv):
    tree = ET.parse(xml_file)
    root = tree.getroot()

    with open(output_csv, 'w', newline='', encoding='utf-8') as csv_file:
        csv_writer = csv.writer(csv_file)
        csv_writer.writerow(['text', 'page_number', 'index'])  # CSV header

        recent_index = None

        for page in root.findall('.//LTPage'):
            for element in page.iter():
                if element.tag.startswith('LTText'):
                    text = element.text
                    if text is not None:
                        text = text.strip()
                        if text:
                            page_number = page.get('page_index')
                            # Carry the most recent explicit index forward so
                            # index-less text elements inherit it
                            if 'index' in element.attrib:
                                recent_index = element.get('index')
                            csv_writer.writerow([text, page_number, recent_index])

if __name__ == '__main__':
    input_xml_file = 'customers.xml'  # Replace with your input XML file path
    output_csv_file = 'output.csv'  # Replace with your desired output CSV file path

    extract_text_from_pages(input_xml_file, output_csv_file)
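The traversal in parse.py can be sanity-checked without a full pdfminer dump by running the same logic over a tiny in-memory `LTPage` fragment (the XML below is a hand-made stand-in, not real pdfquery output):

```python
import xml.etree.ElementTree as ET

# Hand-made stand-in for a pdfquery layout dump: one page, one real
# text line and one whitespace-only line that should be skipped.
sample = """
<root>
  <LTPage page_index="0">
    <LTTextLineHorizontal index="7">Hello world</LTTextLineHorizontal>
    <LTTextLineHorizontal>   </LTTextLineHorizontal>
  </LTPage>
</root>
"""

root = ET.fromstring(sample)
rows = []
for page in root.findall('.//LTPage'):
    for element in page.iter():
        if element.tag.startswith('LTText'):
            text = element.text
            if text is not None:
                text = text.strip()
                if text:  # drop whitespace-only runs
                    rows.append((text, page.get('page_index')))

print(rows)  # one row for "Hello world" on page 0
```

Note that `page.iter()` also yields the `LTPage` element itself, which is harmless here because its tag does not start with `LTText`.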
6 changes: 6 additions & 0 deletions src/chunking/MPNet/local/testing/pfquery.py
@@ -0,0 +1,6 @@
import pdfquery

# Dump the PDF's layout tree to XML for downstream parsing
pdf = pdfquery.PDFQuery('ICT_India_Working_Paper_50.pdf')
pdf.load()

pdf.tree.write('customers.xml', pretty_print=True)
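Once customers.xml exists, it helps to survey which layout element types pdfquery emitted before writing a parser. A small stdlib sketch (the inline XML is a hypothetical mini layout tree standing in for a real customers.xml) tallies the tags:

```python
import collections
import xml.etree.ElementTree as ET

# Hypothetical mini layout tree; a real customers.xml would be
# loaded with ET.parse('customers.xml').getroot() instead.
xml_text = (
    "<LTPage page_index='0'>"
    "<LTTextBoxHorizontal>A</LTTextBoxHorizontal>"
    "<LTTextBoxHorizontal>B</LTTextBoxHorizontal>"
    "<LTFigure/>"
    "</LTPage>"
)
root = ET.fromstring(xml_text)

# Count each element type in the tree, including the root page
counts = collections.Counter(el.tag for el in root.iter())
print(counts)
```

The tally makes it obvious which tags (text boxes, figures, lines) the extraction script needs to handle.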