
Conversation

HarshVaragiya

Fixed the requests to be async and concurrent using aiohttp, so batch completions now work correctly.

Pull Request

Description

  • Modified LLMClient._vllm_batch_completion in synthetic-data-kit/models/llm_client.py to fix requests not being sent concurrently. Requests are now dispatched in parallel via an aiohttp client, so the vLLM backend can process them in parallel (see the sketch below).

  • Modified the __init__ method of the LLMClient class to expose an http_request_timeout configuration key that sets the HTTP timeout for all vLLM requests.

Fixes #67
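
Below is a minimal sketch of the pattern this change describes. The class and method names mirror the PR, but the endpoint path, payload shape, and 180-second default timeout are assumptions; the actual llm_client.py may differ in the details.

```python
# Minimal sketch of the concurrent batch pattern; the endpoint path,
# payload shape, and 180 s default are assumptions, not the PR's code.
import asyncio
import aiohttp


class LLMClient:
    def __init__(self, api_base: str, model: str, http_request_timeout: float = 180.0):
        self.api_base = api_base
        self.model = model
        # http_request_timeout is surfaced via the vllm.http_request_timeout
        # config key; it bounds every HTTP request to the vLLM server.
        self.timeout = aiohttp.ClientTimeout(total=http_request_timeout)

    async def _vllm_completion(self, session: aiohttp.ClientSession, messages: list) -> str:
        payload = {"model": self.model, "messages": messages}
        async with session.post(f"{self.api_base}/chat/completions", json=payload) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return data["choices"][0]["message"]["content"]

    def _vllm_batch_completion(self, message_batches: list) -> list:
        async def _run():
            # One shared session; all requests are issued concurrently so
            # the vLLM backend can batch them server-side.
            async with aiohttp.ClientSession(timeout=self.timeout) as session:
                tasks = [self._vllm_completion(session, m) for m in message_batches]
                return await asyncio.gather(*tasks)

        return asyncio.run(_run())
```

The key point is that all requests in a batch share one aiohttp.ClientSession and are awaited together with asyncio.gather, so vLLM receives them at roughly the same time instead of one after another.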

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change
  • Documentation update
  • Configuration update (added the key vllm.http_request_timeout with the same default as before)


meta-cla bot commented Sep 5, 2025

Hi @HarshVaragiya!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!


meta-cla bot commented Sep 5, 2025

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@meta-cla meta-cla bot added the CLA Signed label (managed by the Meta Open Source bot) on Sep 5, 2025
@HarshVaragiya
Author

Just a thought here: a request queue (something similar to Go channels) that keeps the vLLM server fed with requests at all times would help improve overall throughput. Even with the batching fixed, there are instances where the GPU is left slowly processing one or two long requests after the other requests in the batch are done, while synthetic-data-kit has not yet sent the next batch. A rough sketch of the idea is below.
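
A rough sketch of that idea, using an asyncio.Queue with a fixed worker pool. The function name and worker count are hypothetical and not part of this PR:

```python
# Rough sketch of the queue idea (not part of this PR): an asyncio.Queue
# feeds a fixed pool of workers, so a new request is dispatched the moment
# a slot frees up instead of waiting for the whole batch to drain.
import asyncio


async def run_with_queue(prompts: list, send_one, num_workers: int = 8) -> list:
    """send_one is an async callable that issues a single vLLM request."""
    queue: asyncio.Queue = asyncio.Queue()
    for item in enumerate(prompts):
        queue.put_nowait(item)  # (index, prompt) pairs

    results = [None] * len(prompts)

    async def worker():
        while True:
            try:
                i, prompt = queue.get_nowait()
            except asyncio.QueueEmpty:
                return  # no work left; let this worker exit
            results[i] = await send_one(prompt)

    await asyncio.gather(*(worker() for _ in range(num_workers)))
    return results
```

With this shape, a worker picks up the next prompt as soon as its previous request finishes, so the server stays saturated even when one request in a batch runs long.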

Development

Successfully merging this pull request may close these issues.

[Bug]: requests are not truly concurrent with LLMClient._vllm_batch_completion