Commit b220215

Merge branch 'wip-v0.4' into typing-dict

2 parents 82a3f22 + 281488a commit b220215
120 files changed: +4019 −2730 lines changed

.github/workflows/_release.yml

Lines changed: 4 additions & 4 deletions

@@ -220,7 +220,7 @@ jobs:
         with:
           python-version: ${{ env.PYTHON_VERSION }}

-      - uses: actions/download-artifact@v4
+      - uses: actions/download-artifact@v5
         with:
           name: dist
           path: ${{ inputs.working-directory }}/dist/
@@ -379,7 +379,7 @@ jobs:
         with:
           python-version: ${{ env.PYTHON_VERSION }}

-      - uses: actions/download-artifact@v4
+      - uses: actions/download-artifact@v5
        if: startsWith(inputs.working-directory, 'libs/core')
        with:
          name: dist
@@ -447,7 +447,7 @@ jobs:
         with:
           python-version: ${{ env.PYTHON_VERSION }}

-      - uses: actions/download-artifact@v4
+      - uses: actions/download-artifact@v5
         with:
           name: dist
           path: ${{ inputs.working-directory }}/dist/
@@ -486,7 +486,7 @@ jobs:
         with:
           python-version: ${{ env.PYTHON_VERSION }}

-      - uses: actions/download-artifact@v4
+      - uses: actions/download-artifact@v5
         with:
           name: dist
           path: ${{ inputs.working-directory }}/dist/

.github/workflows/_test_release.yml

Lines changed: 1 addition & 1 deletion

@@ -85,7 +85,7 @@ jobs:
     steps:
       - uses: actions/checkout@v4

-      - uses: actions/download-artifact@v4
+      - uses: actions/download-artifact@v5
        with:
          name: test-dist
          path: ${{ inputs.working-directory }}/dist/

docs/docs/concepts/async.mdx

Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# Async programming with langchain
+# Async programming with LangChain

 :::info Prerequisites
 * [Runnable interface](/docs/concepts/runnables)
@@ -12,7 +12,7 @@ You are expected to be familiar with asynchronous programming in Python before r
 This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
 :::

-## Langchain asynchronous APIs
+## LangChain asynchronous APIs

 Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.

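The renamed async guide describes LangChain's convention of pairing each synchronous method with an `a`-prefixed async counterpart (`invoke`/`ainvoke`). A toy sketch of that pattern — a stand-in for illustration, not LangChain's actual `Runnable` implementation:

```python
import asyncio


# Minimal illustration of the sync/async method pairing; EchoRunnable is a
# made-up class, not part of LangChain.
class EchoRunnable:
    def invoke(self, text: str) -> str:
        return text.upper()

    async def ainvoke(self, text: str) -> str:
        # A real runnable would await network I/O here; we just yield control.
        await asyncio.sleep(0)
        return self.invoke(text)


async def main() -> None:
    runnable = EchoRunnable()
    # ainvoke lets several calls run concurrently under one event loop.
    results = await asyncio.gather(
        runnable.ainvoke("hello"), runnable.ainvoke("world")
    )
    print(results)


asyncio.run(main())
```

The point of the convention is that callers in async code can await `ainvoke` without blocking the event loop, while sync callers keep using `invoke`.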
docs/docs/concepts/tools.mdx

Lines changed: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ The key attributes that correspond to the tool's **schema**:
 The key methods to execute the function associated with the **tool**:

 - **invoke**: Invokes the tool with the given arguments.
-- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with Langchain](/docs/concepts/async).
+- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with LangChain](/docs/concepts/async).

 ## Create tools using the `@tool` decorator

docs/docs/how_to/index.mdx

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ These are the core building blocks you can use when building applications.
 [Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
 See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.

-- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
+- [How to: initialize any model in one line](/docs/how_to/chat_models_universal_init/)
 - [How to: work with local models](/docs/how_to/local_llms)
 - [How to: do function/tool calling](/docs/how_to/tool_calling)
 - [How to: get models to return structured output](/docs/how_to/structured_output)

docs/docs/how_to/query_high_cardinality.ipynb

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
   "id": "f2195672-0cab-4967-ba8a-c6544635547d",
   "metadata": {},
   "source": [
-   "# How deal with high cardinality categoricals when doing query analysis\n",
+   "# How to deal with high-cardinality categoricals when doing query analysis\n",
    "\n",
    "You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easy with prompting when there are only a few values that are valid. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.\n",
    "\n",

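The retitled notebook's problem — too many valid categorical values to show the LLM at once — is often mitigated by shortlisting near-matches before prompting, so the model only has to choose among a handful of exact values. A minimal standard-library sketch of that idea (the author names here are made up for illustration):

```python
import difflib


def nearest_valid_values(guess: str, valid_values: list[str], n: int = 3) -> list[str]:
    """Return the n valid categorical values most similar to the model's guess."""
    return difflib.get_close_matches(guess, valid_values, n=n, cutoff=0.0)


# Hypothetical categorical column of author names.
authors = ["Harrison Chase", "Ankush Gola", "Jesse Zhang"]

# A misspelled guess still surfaces the EXACT valid value to put in the prompt.
print(nearest_valid_values("harison chase", authors, n=1))  # ['Harrison Chase']
```

In practice the shortlist would be built with embeddings or a retriever rather than edit distance, but the flow — narrow first, then ask the LLM to pick an exact value — is the same.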
docs/docs/how_to/structured_output.ipynb

Lines changed: 2 additions & 1 deletion

@@ -614,6 +614,7 @@
    "        HumanMessage(\"Now about caterpillars\", name=\"example_user\"),\n",
    "        AIMessage(\n",
    "            \"\",\n",
+   "            name=\"example_assistant\",\n",
    "            tool_calls=[\n",
    "                {\n",
    "                    \"name\": \"joke\",\n",
@@ -909,7 +910,7 @@
    "        ),\n",
    "        (\"human\", \"{query}\"),\n",
    "    ]\n",
-   ").partial(schema=People.schema())\n",
+   ").partial(schema=People.model_json_schema())\n",
    "\n",
    "\n",
    "# Custom parser\n",

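The second hunk tracks the Pydantic v1 → v2 API rename: the v1 `People.schema()` call becomes `model_json_schema()` in v2. A minimal sketch of the rename, assuming Pydantic v2 is installed (`Person` is a made-up model, not the notebook's `People` class):

```python
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


# Pydantic v2 spelling; the v1-era Person.schema() is deprecated in v2.
schema = Person.model_json_schema()
print(sorted(schema["properties"]))  # ['age', 'name']
```

Both calls return the same JSON Schema dict on v2; switching to the new name just silences the deprecation warning and keeps the docs v2-clean.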
docs/docs/integrations/chat/openai.ipynb

Lines changed: 157 additions & 0 deletions

@@ -447,6 +447,163 @@
    ")"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "id": "c5d9d19d-8ab1-4d9d-b3a0-56ee4e89c528",
+  "metadata": {},
+  "source": [
+   "### Custom tools\n",
+   "\n",
+   ":::info Requires ``langchain-openai>=0.3.29``\n",
+   "\n",
+   ":::\n",
+   "\n",
+   "[Custom tools](https://platform.openai.com/docs/guides/function-calling#custom-tools) support tools with arbitrary string inputs. They can be particularly useful when you expect your string arguments to be long or complex."
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": 1,
+  "id": "a47c809b-852f-46bd-8b9e-d9534c17213d",
+  "metadata": {},
+  "outputs": [
+   {
+    "name": "stdout",
+    "output_type": "stream",
+    "text": [
+     "================================\u001b[1m Human Message \u001b[0m=================================\n",
+     "\n",
+     "Use the tool to calculate 3^3.\n",
+     "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
+     "\n",
+     "[{'id': 'rs_6894ff5747c0819d9b02fc5645b0be9c000169fd9fb68d99', 'summary': [], 'type': 'reasoning'}, {'call_id': 'call_7SYwMSQPbbEqFcKlKOpXeEux', 'input': 'print(3**3)', 'name': 'execute_code', 'type': 'custom_tool_call', 'id': 'ctc_6894ff5b9f54819d8155a63638d34103000169fd9fb68d99', 'status': 'completed'}]\n",
+     "Tool Calls:\n",
+     "  execute_code (call_7SYwMSQPbbEqFcKlKOpXeEux)\n",
+     "  Call ID: call_7SYwMSQPbbEqFcKlKOpXeEux\n",
+     "  Args:\n",
+     "    __arg1: print(3**3)\n",
+     "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
+     "Name: execute_code\n",
+     "\n",
+     "[{'type': 'custom_tool_call_output', 'output': '27'}]\n",
+     "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
+     "\n",
+     "[{'type': 'text', 'text': '27', 'annotations': [], 'id': 'msg_6894ff5db3b8819d9159b3a370a25843000169fd9fb68d99'}]\n"
+    ]
+   }
+  ],
+  "source": [
+   "from langchain_openai import ChatOpenAI, custom_tool\n",
+   "from langgraph.prebuilt import create_react_agent\n",
+   "\n",
+   "\n",
+   "@custom_tool\n",
+   "def execute_code(code: str) -> str:\n",
+   "    \"\"\"Execute python code.\"\"\"\n",
+   "    return \"27\"\n",
+   "\n",
+   "\n",
+   "llm = ChatOpenAI(model=\"gpt-5\", output_version=\"responses/v1\")\n",
+   "\n",
+   "agent = create_react_agent(llm, [execute_code])\n",
+   "\n",
+   "input_message = {\"role\": \"user\", \"content\": \"Use the tool to calculate 3^3.\"}\n",
+   "for step in agent.stream(\n",
+   "    {\"messages\": [input_message]},\n",
+   "    stream_mode=\"values\",\n",
+   "):\n",
+   "    step[\"messages\"][-1].pretty_print()"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "id": "5ef93be6-6d4c-4eea-acfd-248774074082",
+  "metadata": {},
+  "source": [
+   "<details>\n",
+   "<summary>Context-free grammars</summary>\n",
+   "\n",
+   "OpenAI supports the specification of a [context-free grammar](https://platform.openai.com/docs/guides/function-calling#context-free-grammars) for custom tool inputs in `lark` or `regex` format. See [OpenAI docs](https://platform.openai.com/docs/guides/function-calling#context-free-grammars) for details. The `format` parameter can be passed into `@custom_tool` as shown below:"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": 3,
+  "id": "2ae04586-be33-49c6-8947-7867801d868f",
+  "metadata": {},
+  "outputs": [
+   {
+    "name": "stdout",
+    "output_type": "stream",
+    "text": [
+     "================================\u001b[1m Human Message \u001b[0m=================================\n",
+     "\n",
+     "Use the tool to calculate 3^3.\n",
+     "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
+     "\n",
+     "[{'id': 'rs_689500828a8481a297ff0f98e328689c0681550c89797f43', 'summary': [], 'type': 'reasoning'}, {'call_id': 'call_jzH01RVhu6EFz7yUrOFXX55s', 'input': '3 * 3 * 3', 'name': 'do_math', 'type': 'custom_tool_call', 'id': 'ctc_6895008d57bc81a2b84d0993517a66b90681550c89797f43', 'status': 'completed'}]\n",
+     "Tool Calls:\n",
+     "  do_math (call_jzH01RVhu6EFz7yUrOFXX55s)\n",
+     "  Call ID: call_jzH01RVhu6EFz7yUrOFXX55s\n",
+     "  Args:\n",
+     "    __arg1: 3 * 3 * 3\n",
+     "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
+     "Name: do_math\n",
+     "\n",
+     "[{'type': 'custom_tool_call_output', 'output': '27'}]\n",
+     "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
+     "\n",
+     "[{'type': 'text', 'text': '27', 'annotations': [], 'id': 'msg_6895009776b881a2a25f0be8507d08f20681550c89797f43'}]\n"
+    ]
+   }
+  ],
+  "source": [
+   "from langchain_openai import ChatOpenAI, custom_tool\n",
+   "from langgraph.prebuilt import create_react_agent\n",
+   "\n",
+   "grammar = \"\"\"\n",
+   "start: expr\n",
+   "expr: term (SP ADD SP term)* -> add\n",
+   "| term\n",
+   "term: factor (SP MUL SP factor)* -> mul\n",
+   "| factor\n",
+   "factor: INT\n",
+   "SP: \" \"\n",
+   "ADD: \"+\"\n",
+   "MUL: \"*\"\n",
+   "%import common.INT\n",
+   "\"\"\"\n",
+   "\n",
+   "format_ = {\"type\": \"grammar\", \"syntax\": \"lark\", \"definition\": grammar}\n",
+   "\n",
+   "\n",
+   "# highlight-next-line\n",
+   "@custom_tool(format=format_)\n",
+   "def do_math(input_string: str) -> str:\n",
+   "    \"\"\"Do a mathematical operation.\"\"\"\n",
+   "    return \"27\"\n",
+   "\n",
+   "\n",
+   "llm = ChatOpenAI(model=\"gpt-5\", output_version=\"responses/v1\")\n",
+   "\n",
+   "agent = create_react_agent(llm, [do_math])\n",
+   "\n",
+   "input_message = {\"role\": \"user\", \"content\": \"Use the tool to calculate 3^3.\"}\n",
+   "for step in agent.stream(\n",
+   "    {\"messages\": [input_message]},\n",
+   "    stream_mode=\"values\",\n",
+   "):\n",
+   "    step[\"messages\"][-1].pretty_print()"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "id": "c63430c9-c7b0-4e92-a491-3f165dddeb8f",
+  "metadata": {},
+  "source": [
+   "</details>"
+  ]
+ },
 {
  "cell_type": "markdown",
  "id": "84833dd0-17e9-4269-82ed-550639d65751",

docs/docs/integrations/providers/gradientai.mdx

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-# ChatGradient
+# DigitalOcean Gradient

 This will help you getting started with DigitalOcean Gradient [chat models](/docs/concepts/chat_models).

docs/scripts/packages_yml_get_downloads.py

Lines changed: 13 additions & 4 deletions

@@ -1,5 +1,6 @@
 from datetime import datetime, timedelta, timezone
 from pathlib import Path
+import re

 import requests
 from ruamel.yaml import YAML
@@ -11,10 +12,18 @@

 def _get_downloads(p: dict) -> int:
-    url = f"https://pypistats.org/api/packages/{p['name']}/recent?period=month"
-    r = requests.get(url)
-    r.raise_for_status()
-    return r.json()["data"]["last_month"]
+    url = f"https://pepy.tech/badge/{p['name']}/month"
+    svg = requests.get(url, timeout=10).text
+    texts = re.findall(r"<text[^>]*>([^<]+)</text>", svg)
+    latest = texts[-1].strip() if texts else "0"
+
+    # parse "1.2k", "3.4M", "12,345" -> int
+    latest = latest.replace(",", "")
+    if latest.endswith(("k", "K")):
+        return int(float(latest[:-1]) * 1_000)
+    if latest.endswith(("m", "M")):
+        return int(float(latest[:-1]) * 1_000_000)
+    return int(float(latest) if "." in latest else int(latest))


 current_datetime = datetime.now(timezone.utc)

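The new `_get_downloads` scrapes pepy.tech's badge SVG, so it has to parse human-formatted counts like `5k`, `2M`, or `12,345`. That suffix logic can be sketched as a standalone helper (mirroring, not reproducing, the script's code):

```python
def parse_count(text: str) -> int:
    """Parse badge-style counts such as '5k', '2M', or '12,345' into integers."""
    text = text.strip().replace(",", "")
    if text.endswith(("k", "K")):
        return int(float(text[:-1]) * 1_000)
    if text.endswith(("m", "M")):
        return int(float(text[:-1]) * 1_000_000)
    # Plain numbers: going through float handles both "12345" and "1234.5".
    return int(float(text))


print(parse_count("5k"))      # 5000
print(parse_count("12,345"))  # 12345
```

Factoring the parsing out of the HTTP call also makes it unit-testable without network access, which the inline version in the diff is not.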