AzureChatOpenAI() creates a new httpx.AsyncClient/Client on each instantiation instead of reusing a default client
Labels
bug (Related to a bug, vulnerability, unexpected error with an existing feature), help wanted (Good issue for contributors), open-swe-max-auto (for langchain-ai/open-swe - let agent decide tasks (opus 4))
Description
Checked other resources
- [x] This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- [x] I added a clear and descriptive title that summarizes this issue.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Example Code
Code Example

```python
import asyncio

import httpx
from langchain_openai import AzureChatOpenAI

# Track httpx client creations by wrapping AsyncClient.__init__
original_init = httpx.AsyncClient.__init__
counter = {"count": 0}

def counting_init(self, *args, **kwargs):
    counter["count"] += 1
    print(f"New AsyncClient created: {counter['count']}")
    original_init(self, *args, **kwargs)

httpx.AsyncClient.__init__ = counting_init

def create_llm():
    return AzureChatOpenAI(
        azure_endpoint="https://<REDACTED-ENDPOINT>/",
        deployment_name="<REDACTED-DEPLOYMENT-NAME>",
        temperature=0.7,
    )

async def main():
    # Five instances with identical configuration should be able to share
    # one cached client, yet the counter reports five new AsyncClients.
    llms = [create_llm() for _ in range(5)]
    print(f"Total AsyncClient instances: {counter['count']}")

asyncio.run(main())
```
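In the meantime, the per-instance clients can be avoided by constructing one shared httpx client and passing it in explicitly; `AzureChatOpenAI` accepts `http_client` and `http_async_client` for this. A minimal sketch, keeping the placeholder endpoint and deployment values from the repro:

```python
import httpx
from langchain_openai import AzureChatOpenAI

# A single async client shared by every LLM instance.
shared_async_client = httpx.AsyncClient()

def create_llm_shared() -> AzureChatOpenAI:
    # Supplying http_async_client stops the constructor from building
    # its own httpx.AsyncClient for this instance. (Pass http_client
    # too if the sync code path is also used.)
    return AzureChatOpenAI(
        azure_endpoint="https://<REDACTED-ENDPOINT>/",  # placeholder
        deployment_name="<REDACTED-DEPLOYMENT-NAME>",  # placeholder
        temperature=0.7,
        http_async_client=shared_async_client,
    )
```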
Error Message and Stack Trace (if applicable)
No response
Description
Every call to `AzureChatOpenAI()` constructs a fresh `httpx.AsyncClient`/`Client` instead of reusing a cached default client, as the repro above shows.

langchain-core==0.3.65
langchain-openai==0.3.23
System Info
backports.tarfile==1.2.0
certifi==2025.8.3
charset-normalizer==3.4.2
docutils==0.22
id==1.5.0
idna==3.10
importlib_metadata==8.7.0
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.2.1
keyring==25.6.0
markdown-it-py==3.0.0
mdurl==0.1.2
more-itertools==10.7.0
nh3==0.3.0
packaging==25.0
Pygments==2.19.2
readme_renderer==44.0
requests==2.32.4
requests-toolbelt==1.0.0
rfc3986==2.0.0
rich==14.1.0
twine==6.1.0
urllib3==2.5.0
zipp==3.23.0
Agent Context

```json
{ "tasks": [ { "id": "9622b662-dbd4-4d1c-a8f7-a364beca43d7", "taskIndex": 0, "request": "[original issue]\n**AzureChatOpenAI() creates a new httpx.AsyncClient/Client on each instantiation instead of reusing a default client.**\n### Checked other resources\n\n- [x] This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).\n- [x] I added a clear and descriptive title that summarizes this issue.\n- [x] I used the GitHub search to find a similar question and didn't find it.\n- [x] I am sure that this is a bug in LangChain rather than my code.\n- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).\n- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).\n- [x] I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.\n\n### Example Code\n\nCode Example\n\n``` python\nimport asyncio\nimport httpx\nfrom langchain_openai import AzureChatOpenAI\n\n# Track httpx client creations\noriginal_init = httpx.AsyncClient.__init__\ncounter = {\"count\": 0}\n\ndef counting_init(self, *args, **kwargs):\n counter[\"count\"] += 1\n print(f\"New AsyncClient created: {counter['count']}\")\n original_init(self, *args, **kwargs)\n\nhttpx.AsyncClient.__init__ = counting_init\n\ndef create_llm():\n return AzureChatOpenAI(\n azure_endpoint=\"https:///\",\n deployment_name=\"\",\n temperature=0.7\n )\n\nasync def main():\n llms = [create_llm() for _ in range(5)]\n print(f\"Total AsyncClient instances: {counter['count']}\")\n\nasyncio.run(main())\n```\n\n### Error Message and Stack Trace (if applicable)\n\n_No response_\n\n### Description\n\nlangchain-core==0.3.65\nlangchain-openai==0.3.23\n\n### System Info\n\nbackports.tarfile==1.2.0\ncertifi==2025.8.3\ncharset-normalizer==3.4.2\ndocutils==0.22\nid==1.5.0\nidna==3.10\nimportlib_metadata==8.7.0\njaraco.classes==3.4.0\njaraco.context==6.0.1\njaraco.functools==4.2.1\nkeyring==25.6.0\nmarkdown-it-py==3.0.0\nmdurl==0.1.2\nmore-itertools==10.7.0\nnh3==0.3.0\npackaging==25.0\nPygments==2.19.2\nreadme_renderer==44.0\nrequests==2.32.4\nrequests-toolbelt==1.0.0\nrfc3986==2.0.0\nrich==14.1.0\ntwine==6.1.0\nurllib3==2.5.0\nzipp==3.23.0", "title": "Fix AzureChatOpenAI to reuse httpx clients instead of creating new ones on each instantiation", "createdAt": 1755097740206, "completed": true, "planRevisions": [ { "revisionIndex": 0, "plans": [ { "index": 0, "plan": "**Fix the AzureChatOpenAI client initialization to use cached httpx clients**\n\nModify `/home/daytona/langchain/libs/partners/openai/langchain_openai/chat_models/azure.py` to:\n- Import the caching helper functions `_get_default_httpx_client` and `_get_default_async_httpx_client` from `langchain_openai.chat_models._client_utils` (add after line 21)\n- Update the client initialization logic in lines 664-680 to use these cached client functions when `self.http_client` or `self.http_async_client` are not explicitly provided, following the same pattern as the base class in `/home/daytona/langchain/libs/partners/openai/langchain_openai/chat_models/base.py` lines 786-787 and 804-806\n- Specifically, replace line 665 `sync_specific = {\"http_client\": self.http_client}` with `sync_specific = {\"http_client\": self.http_client or _get_default_httpx_client(self.azure_endpoint, self.request_timeout)}`\n- Replace line 669 `async_specific = {\"http_client\": self.http_async_client}` with `async_specific = 
{\"http_client\": self.http_async_client or _get_default_async_httpx_client(self.azure_endpoint, self.request_timeout)}`", "completed": true, "summary": "Successfully fixed the AzureChatOpenAI client initialization to use cached httpx clients.\n\n**Changes made to `/home/daytona/langchain/libs/partners/openai/langchain_openai/chat_models/azure.py`:**\n\n1. **Added imports (lines 22-25):** Imported the caching helper functions `_get_default_httpx_client` and `_get_default_async_httpx_client` from `langchain_openai.chat_models._client_utils`\n\n2. **Updated sync client initialization (lines 669-672):** Modified the sync client creation to use cached clients when `self.http_client` is not explicitly provided:\n - Changed from: `sync_specific = {\"http_client\": self.http_client}`\n - Changed to: `sync_specific = {\"http_client\": self.http_client or _get_default_httpx_client(self.azure_endpoint, self.request_timeout)}`\n\n3. **Updated async client initialization (lines 676-679):** Modified the async client creation to use cached clients when `self.http_async_client` is not explicitly provided:\n - Changed from: `async_specific = {\"http_client\": self.http_async_client}`\n - Changed to: `async_specific = {\"http_client\": self.http_async_client or _get_default_async_httpx_client(self.azure_endpoint, self.request_timeout)}`\n\nThese changes ensure that AzureChatOpenAI now reuses httpx clients across multiple instantiations with the same configuration, following the same pattern as the base ChatOpenAI class. This fixes the issue where new httpx clients were being created on each instantiation, reducing unnecessary overhead and improving memory efficiency." }, { "index": 1, "plan": "**Add a unit test to verify AzureChatOpenAI client caching works correctly**\n\nAdd a new test function `test_azure_client_caching` to `/home/daytona/langchain/libs/partners/openai/tests/unit_tests/chat_models/test_azure.py` that:\n- Creates multiple `AzureChatOpenAI` instances with the same configuration (azure_endpoint, deployment_name, api_version)\n- Verifies that `llm1.root_client._client is llm2.root_client._client` returns `True` for instances with identical configurations\n- Verifies that instances with different configurations (e.g., different endpoints or timeouts) get different clients\n- Follow the same pattern as `test_openai_client_caching` in `/home/daytona/langchain/libs/partners/openai/tests/unit_tests/chat_models/test_base.py` lines 83-105", "completed": true, "summary": "Successfully added a comprehensive unit test to verify AzureChatOpenAI client caching works correctly.\n\n**Changes made to `/home/daytona/langchain/libs/partners/openai/tests/unit_tests/chat_models/test_azure.py`:**\n\n1. **Added import (line 6):** Imported `httpx` module to use `httpx.Timeout` in test cases\n\n2. 
**Added test function `test_azure_client_caching` (lines 88-144):** Created a comprehensive test that verifies client caching behavior:\n - Tests that instances with identical configuration (same azure_endpoint, deployment_name, api_version) share the same httpx client\n - Tests that instances with different endpoints create different clients\n - Tests that instances with `timeout=None` reuse the same client as instances with default timeout\n - Tests that instances with different timeout values create different clients\n - Tests that instances with `httpx.Timeout` objects create different clients\n - Tests that instances with tuple timeout values create different clients\n\nThe test follows the same pattern as `test_openai_client_caching` in test_base.py and covers all the important caching scenarios to ensure the fix works correctly. The test verifies that the caching mechanism properly reuses httpx clients when configurations match and creates new clients when configurations differ." }, { "index": 2, "plan": "**Run tests and code quality checks**\n\nExecute the following commands from `/home/daytona/langchain/libs/partners/openai`:\n- Run the new test with `AZURE_OPENAI_API_KEY=test-key uv run --group test pytest -xvs tests/unit_tests/chat_models/test_azure.py::test_azure_client_caching` to verify the fix works\n- Run existing Azure tests with `AZURE_OPENAI_API_KEY=test-key uv run --group test pytest tests/unit_tests/chat_models/test_azure.py` to ensure no regressions\n- Run linting with `make lint` to check code style\n- Run formatting with `make format` to ensure consistent code formatting", "completed": true, "summary": "Successfully ran all tests and code quality checks to verify the AzureChatOpenAI client caching fix.\n\n**Test Results:**\n\n1. **New test `test_azure_client_caching`:** ✅ PASSED\n - Verified that AzureChatOpenAI instances with identical configurations share the same httpx client\n - Verified that instances with different configurations create different clients\n - Test execution time: 0.40s\n\n2. **All Azure unit tests:** ✅ ALL 6 TESTS PASSED\n - `test_initialize_azure_openai` - PASSED\n - `test_initialize_more` - PASSED\n - `test_initialize_azure_openai_with_openai_api_base_set` - PASSED\n - `test_structured_output_old_model` - PASSED\n - `test_azure_client_caching` - PASSED\n - `test_max_completion_tokens_in_payload` - PASSED\n - No regressions detected\n\n3. **Code formatting (`make format`):** ✅ COMPLETED\n - Fixed import sorting in azure.py\n - Fixed line length issue (split long line across multiple lines)\n - 2 files reformatted\n\n4. **Linting (`make lint`):** ✅ ALL CHECKS PASSED\n - Ruff check: All checks passed\n - Ruff format: 62 files already formatted\n - MyPy: Success - no issues found in 62 source files\n\nThe fix has been successfully implemented, tested, and validated. The AzureChatOpenAI class now properly reuses httpx clients across instances with the same configuration, matching the behavior of the base ChatOpenAI class." 
} ], "createdAt": 1755097740206, "createdBy": "agent" } ], "activeRevisionIndex": 0, "pullRequestNumber": 32531, "completedAt": 1755098266332, "summary": "- Changed langchain_openai/chat_models/azure.py: imported _get_default_httpx_client and _get_default_async_httpx_client and updated sync/async client init to use cached httpx clients (use self.http_client or _get_default_*(...)).\n- Added test test_azure_client_caching in tests/unit_tests/chat_models/test_azure.py to verify client reuse and differences by endpoint/timeout.\n- Verified changes: new test passed, all Azure unit tests passed, make format ran, make lint and mypy checks passed — no regressions." } ], "activeTaskIndex": 0 }Metadata
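For reference, the caching pattern the plan above describes (default-client helpers keyed on endpoint and timeout) boils down to memoizing client construction, so identical configurations share one httpx client. A minimal sketch of the idea, not the library's exact code; the real helpers also have to handle non-hashable timeout values such as `httpx.Timeout`:

```python
from functools import lru_cache

import httpx

@lru_cache  # identical (base_url, timeout) arguments return the same client
def get_default_async_httpx_client(
    base_url: str, timeout: float | None
) -> httpx.AsyncClient:
    return httpx.AsyncClient(base_url=base_url, timeout=timeout)

a = get_default_async_httpx_client("https://example.openai.azure.com/", 60.0)
b = get_default_async_httpx_client("https://example.openai.azure.com/", 60.0)
assert a is b  # same configuration -> one shared client

c = get_default_async_httpx_client("https://example.openai.azure.com/", 30.0)
assert c is not a  # different timeout -> a distinct client
```

The unit test described in the plan verifies the same identity property across model instances, checking that `llm1.root_client._client is llm2.root_client._client` holds when two `AzureChatOpenAI` objects are created with matching configuration.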