Changes from all commits (89 commits)
7090060  select changes from wip-v0.4/core (ccurme, Aug 11, 2025)
54a3c5f  x (ccurme, Aug 11, 2025)
f8244b9  type required on tool_call_chunk; keep messages.tool.ToolCallChunk (ccurme, Aug 11, 2025)
1b9ec25  update init on aimessage (ccurme, Aug 11, 2025)
8426db4  update init on HumanMessage, SystemMessage, ToolMessage (ccurme, Aug 11, 2025)
91b2bb3  Merge branch 'wip-v1.0' into cc/1.0/standard_content (ccurme, Aug 12, 2025)
0ddab9f  start on duplicate content (ccurme, Aug 12, 2025)
98d5f46  Revert "start on duplicate content" (ccurme, Aug 12, 2025)
6eaa172  implement output_version on BaseChatModel (ccurme, Aug 12, 2025)
3ae7535  openai: pull in _compat from 0.4 branch (ccurme, Aug 12, 2025)
c1d65a7  x (ccurme, Aug 12, 2025)
c0e4361  core: populate tool_calls when initializing AIMessage via content_blocks (ccurme, Aug 12, 2025)
5c961ca  update test_base (ccurme, Aug 12, 2025)
0c7294f  openai: pull in responses api integration tests from 0.4 branch (ccurme, Aug 13, 2025)
3ae37b5  openai: integration tests pass (ccurme, Aug 13, 2025)
2f604eb  openai: carry over refusals fix (ccurme, Aug 13, 2025)
803d19f  Merge branch 'wip-v1.0' into cc/1.0/standard_content (ccurme, Aug 13, 2025)
153db48  openai: misc fixes for computer calls and custom tools (ccurme, Aug 13, 2025)
0aac20e  openai: tool calls in progress (ccurme, Aug 14, 2025)
624300c  core: populate tool_call_chunks in content_blocks (ccurme, Aug 14, 2025)
d111965  Merge branch 'wip-v1.0' into cc/1.0/standard_content (mdrxy, Aug 15, 2025)
7e39cd1  feat: allow kwargs on content block factories (#32568) (mdrxy, Aug 15, 2025)
601fa7d  Merge branch 'wip-v1.0' into cc/1.0/standard_content (mdrxy, Aug 15, 2025)
c9e847f  chore: format `output_version` docstring (mdrxy, Aug 15, 2025)
8d11059  chore: more content block docstring formatting (mdrxy, Aug 15, 2025)
3db8c60  chore: more content block formatting (mdrxy, Aug 15, 2025)
301a425  snapshot (mdrxy, Aug 15, 2025)
a3b20b0  clean up id test (mdrxy, Aug 15, 2025)
8fc1973  test: add note about for tuple conversion in ToolMessage (mdrxy, Aug 15, 2025)
86252d2  refactor: move ID prefixes (mdrxy, Aug 15, 2025)
f691dc3  refactor: make `ensure_id` public (mdrxy, Aug 15, 2025)
7a8c639  clarify: meaning of provider (mdrxy, Aug 15, 2025)
987031f  fix: `_LC_ID_PREFIX` back (mdrxy, Aug 15, 2025)
08cd5bb  clarify intent of `extras` under data blocks (mdrxy, Aug 15, 2025)
7f9727e  refactor: `is_data_content_block` (mdrxy, Aug 15, 2025)
00345c4  tests: add more data content block tests (mdrxy, Aug 15, 2025)
0199b56  rfc `test_utils` to make clearer what was existing before and after, … (mdrxy, Aug 15, 2025)
2375c3a  add note (mdrxy, Aug 15, 2025)
aca7c1f  fix(core): temporarily fix tests (#32589) (ccurme, Aug 18, 2025)
aeea0e3  fix(langchain): fix tests on standard content branch (#32590) (ccurme, Aug 18, 2025)
4790c72  feat(core): lazy-load standard content (#32570) (ccurme, Aug 18, 2025)
8ee0cbb  refactor(core): prefixes (#32597) (mdrxy, Aug 18, 2025)
0e6c172  refactor(core): prefixes, again (#32599) (mdrxy, Aug 18, 2025)
313ed7b  Merge branch 'wip-v1.0' into cc/1.0/standard_content (mdrxy, Aug 19, 2025)
27d81cf  test(openai): address some type issues in tests (#32601) (mdrxy, Aug 19, 2025)
43b9d3d  feat(core): implement dynamic translator registration for model provi… (mdrxy, Aug 19, 2025)
e41693a  Merge branch 'wip-v1.0' into cc/1.0/standard_content (ccurme, Aug 19, 2025)
0444e26  refactor: convert message content inside `BaseChatModel` (#32606) (ccurme, Aug 19, 2025)
3c8edbe  Merge branch 'wip-v1.0' into cc/1.0/standard_content (ccurme, Aug 21, 2025)
5bcf7d0  refactor(core): data block handling, normalize message formats, strip… (mdrxy, Aug 21, 2025)
26833f2  feat(anthropic): v1 support (#32623) (ccurme, Aug 22, 2025)
62d746e  feat(core): (v1) restore separate type for AIMessage.tool_calls (#32668) (ccurme, Aug 25, 2025)
2d9fe70  Merge branch 'wip-v1.0' into cc/1.0/standard_content (ccurme, Aug 25, 2025)
4e0fd33  fix: update `content_blocks` property docstring (mdrxy, Aug 25, 2025)
5ef18e8  feat(core): add `.text` property, introduce `TextAccessor` for backwa… (mdrxy, Aug 25, 2025)
97bd2cf  fix(core): (v1) fix PDF input translation for openai chat completions… (ccurme, Aug 25, 2025)
fe9599f  feat(core): parse `tool_call_chunks` in content in aggregated stream … (ccurme, Aug 25, 2025)
c63c3ea  feat(core): (v1) add sentinel value to `output_version` (#32692) (ccurme, Aug 26, 2025)
518f4df  . (mdrxy, Aug 26, 2025)
f1b676c  . (mdrxy, Aug 26, 2025)
706ea1b  . (mdrxy, Aug 26, 2025)
df3db47  Merge branch 'cc/1.0/standard_content' of github.com:langchain-ai/lan… (mdrxy, Aug 26, 2025)
659d282  standard tests: update multimodal tests (ccurme, Aug 26, 2025)
19d3a73  . (mdrxy, Aug 26, 2025)
72b2436  ss (mdrxy, Aug 26, 2025)
5222d51  . (mdrxy, Aug 26, 2025)
720d08e  . (mdrxy, Aug 26, 2025)
620779f  . (mdrxy, Aug 26, 2025)
dc5ac66  . (mdrxy, Aug 26, 2025)
2bbd034  fix(core): (v1) invoke callback prior to yielding final chunk (#32695) (ccurme, Aug 26, 2025)
3955157  Merge branch 'wip-v1.0' into cc/1.0/standard_content (mdrxy, Aug 26, 2025)
fa5d49f  Merge branch 'cc/1.0/standard_content' of github.com:langchain-ai/lan… (mdrxy, Aug 26, 2025)
447db13  feat(openai): (v1) support `content_blocks` on legacy v0 responses AP… (ccurme, Aug 26, 2025)
18d1cf2  . (mdrxy, Aug 26, 2025)
04bcccf  Merge branch 'cc/1.0/standard_content' of github.com:langchain-ai/lan… (mdrxy, Aug 26, 2025)
e49156e  chore: rfc to use `.text` instead of `.text()` (#32699) (mdrxy, Aug 26, 2025)
2d450d4  fix(core): (v1) finish test (#32701) (ccurme, Aug 26, 2025)
8a14148  . (mdrxy, Aug 26, 2025)
32941d6  . (mdrxy, Aug 27, 2025)
313f5f2  . (mdrxy, Aug 27, 2025)
8f3674c  Merge branch 'cc/1.0/standard_content' into mdrxy/call-version (mdrxy, Aug 27, 2025)
419450d  . (mdrxy, Aug 27, 2025)
624afb1  ss (mdrxy, Aug 27, 2025)
7f218e2  . (mdrxy, Aug 27, 2025)
e1add11  fix: openai (mdrxy, Aug 27, 2025)
19e5e96  fix: anthropic (mdrxy, Aug 27, 2025)
007a3a4  Merge branch 'wip-v1.0' into mdrxy/call-version (mdrxy, Aug 27, 2025)
4fab33a  Merge branch 'wip-v1.0' into mdrxy/call-version (mdrxy, Sep 11, 2025)
91d6c05  Merge branch 'wip-v1.0' into mdrxy/call-version (mdrxy, Sep 15, 2025)
309 changes: 273 additions & 36 deletions libs/core/langchain_core/language_models/chat_models.py

Large diffs are not rendered by default.
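The bulk of this PR lives in this unrendered diff. Per the commit log (6eaa172 "implement output_version on BaseChatModel", 0444e26 "refactor: convert message content inside `BaseChatModel`"), `BaseChatModel` gains an `output_version` switch that controls whether message content comes back in the legacy v0 shape or as standard v1 content blocks. A minimal sketch of the caller-side effect, assuming `output_version` is a constructor field as the commits suggest; this illustrates the idea and is not taken from the diff itself:

from langchain_core.language_models.fake_chat_models import GenericFakeChatModel
from langchain_core.messages import AIMessage

# v0 (the default): content keeps its legacy string shape.
legacy = GenericFakeChatModel(messages=iter([AIMessage("hello")]))
print(legacy.invoke("hi").content)  # -> "hello"

# v1 (assumed constructor field, per commit 6eaa172): content is
# normalized into standard content blocks.
v1 = GenericFakeChatModel(
    messages=iter([AIMessage("hello")]),
    output_version="v1",
)
print(v1.invoke("hi").content)  # -> [{"type": "text", "text": "hello"}]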

66 changes: 62 additions & 4 deletions libs/core/langchain_core/language_models/fake_chat_models.py
@@ -12,6 +12,9 @@
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain_core.language_models._utils import (
_update_message_content_to_blocks,
)
from langchain_core.language_models.chat_models import BaseChatModel, SimpleChatModel
from langchain_core.messages import AIMessage, AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
@@ -248,10 +251,32 @@ def _generate(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> ChatResult:
message = next(self.messages)
message_ = AIMessage(content=message) if isinstance(message, str) else message

if output_version == "v1":
message_ = _update_message_content_to_blocks(message_, "v1")

# Only set in response metadata if output_version is explicitly provided
# (If output_version is "v0" and self.output_version is None, it's the default)
output_version_explicit = not (
output_version == "v0" and getattr(self, "output_version", None) is None
)
if output_version_explicit:
if hasattr(message_, "response_metadata"):
message_.response_metadata = {"output_version": output_version}
else:
message_ = AIMessage(
content=message_.content,
additional_kwargs=message_.additional_kwargs,
response_metadata={"output_version": output_version},
id=message_.id,
)

generation = ChatGeneration(message=message_)
return ChatResult(generations=[generation])

@@ -260,10 +285,16 @@ def _stream(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
chat_result = self._generate(
messages, stop=stop, run_manager=run_manager, **kwargs
messages,
stop=stop,
run_manager=run_manager,
output_version="v0", # Always call with v0 to get original string content
**kwargs,
)
if not isinstance(chat_result, ChatResult):
msg = (
@@ -302,6 +333,21 @@ def _stream(
and not message.additional_kwargs
):
chunk.message.chunk_position = "last"

if output_version == "v1":
chunk.message = _update_message_content_to_blocks(
chunk.message, "v1"
)

output_version_explicit = not (
output_version == "v0"
and getattr(self, "output_version", None) is None
)
if output_version_explicit:
chunk.message.response_metadata = {"output_version": output_version}
else:
chunk.message.response_metadata = {}

if run_manager:
run_manager.on_llm_new_token(token, chunk=chunk)
yield chunk
@@ -321,7 +367,7 @@ def _stream(
id=message.id,
content="",
additional_kwargs={
"function_call": {fkey: fvalue_chunk}
"function_call": {fkey: fvalue_chunk},
},
)
)
@@ -336,7 +382,9 @@ def _stream(
message=AIMessageChunk(
id=message.id,
content="",
additional_kwargs={"function_call": {fkey: fvalue}},
additional_kwargs={
"function_call": {fkey: fvalue},
},
)
)
if run_manager:
@@ -348,7 +396,9 @@ def _stream(
else:
chunk = ChatGenerationChunk(
message=AIMessageChunk(
id=message.id, content="", additional_kwargs={key: value}
id=message.id,
content="",
additional_kwargs={key: value},
)
)
if run_manager:
@@ -358,6 +408,14 @@ def _stream(
)
yield chunk

# Add a final chunk with chunk_position="last" after all additional_kwargs
final_chunk = ChatGenerationChunk(
message=AIMessageChunk(id=message.id, content="", chunk_position="last")
)
if run_manager:
run_manager.on_llm_new_token("", chunk=final_chunk)
yield final_chunk

@property
def _llm_type(self) -> str:
return "generic-fake-chat-model"
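With the trailing chunk added above, every stream from `GenericFakeChatModel` now ends in a chunk whose `chunk_position` is "last", even when the message carried only `additional_kwargs`. A small consumer sketch; the loop body is assumed typical usage, not part of the diff:

from langchain_core.language_models.fake_chat_models import GenericFakeChatModel
from langchain_core.messages import AIMessage

model = GenericFakeChatModel(messages=iter([AIMessage(content="hello world")]))

for chunk in model.stream("hi"):
    # Intermediate chunks have chunk_position=None; the final chunk is
    # explicitly marked, so a consumer can finalize state without waiting
    # for the iterator to be exhausted.
    if chunk.chunk_position == "last":
        print("stream complete")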
13 changes: 11 additions & 2 deletions libs/core/langchain_core/utils/_merge.py
@@ -44,6 +44,16 @@ def merge_dicts(left: dict[str, Any], *others: dict[str, Any]) -> dict[str, Any]
)
raise TypeError(msg)
elif isinstance(merged[right_k], str):
if right_k == "output_version":
if merged[right_k] == right_v:
continue
msg = (
"Unable to merge. Two different values seen for "
f"'output_version': {merged[right_k]} and {right_v}. "
"'output_version' should have the same value across "
"all chunks in a generation."
)
raise ValueError(msg)
Review comment (Collaborator):
BaseChatModel is inserting the output version, right? how would we hit this ValueError?

If a user receives this ValueError, are they just stuck? What can they do?

I think in most other cases in the merging logic we just accept the left value.

# TODO: Add below special handling for 'type' key in 0.3 and remove
# merge_lists 'type' logic.
#
@@ -58,8 +68,7 @@ def merge_dicts(left: dict[str, Any], *others: dict[str, Any]) -> dict[str, Any]
# "all dicts."
# )
if (right_k == "index" and merged[right_k].startswith("lc_")) or (
right_k in ("id", "output_version", "model_provider")
and merged[right_k] == right_v
right_k in ("id", "model_provider") and merged[right_k] == right_v
):
continue
merged[right_k] += right_v
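To make the new `output_version` handling concrete, here is a sketch against the patched `merge_dicts`; the function and module path are real, the inputs are invented:

from langchain_core.utils._merge import merge_dicts

# Identical values across chunks are deduplicated, not concatenated:
assert merge_dicts(
    {"output_version": "v1"}, {"output_version": "v1"}
) == {"output_version": "v1"}

# Mismatched values now raise instead of silently concatenating to "v0v1":
try:
    merge_dicts({"output_version": "v0"}, {"output_version": "v1"})
except ValueError as err:
    print(err)  # Unable to merge. Two different values seen for 'output_version'...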
Changes to a core chat model unit test file (filename not rendered):
@@ -7,7 +7,10 @@
import pytest
from typing_extensions import override

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.callbacks import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain_core.language_models import (
BaseChatModel,
FakeListChatModel,
@@ -185,6 +188,8 @@ def _generate(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> ChatResult:
"""Top Level call."""
@@ -218,6 +223,8 @@ def _generate(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> ChatResult:
"""Top Level call."""
@@ -229,6 +236,8 @@ def _stream(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
"""Stream the output of the model."""
@@ -244,19 +253,21 @@ def _llm_type(self) -> str:
model = ModelWithSyncStream()
chunks = list(model.stream("anything"))
assert chunks == [
_any_id_ai_message_chunk(content="a"),
_any_id_ai_message_chunk(
content="a",
content="b",
chunk_position="last",
),
_any_id_ai_message_chunk(content="b", chunk_position="last"),
]
assert len({chunk.id for chunk in chunks}) == 1
assert type(model)._astream == BaseChatModel._astream
astream_chunks = [chunk async for chunk in model.astream("anything")]
assert astream_chunks == [
_any_id_ai_message_chunk(content="a"),
_any_id_ai_message_chunk(
content="a",
content="b",
chunk_position="last",
),
_any_id_ai_message_chunk(content="b", chunk_position="last"),
]
assert len({chunk.id for chunk in astream_chunks}) == 1

@@ -270,6 +281,8 @@ def _generate(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> ChatResult:
"""Top Level call."""
@@ -280,7 +293,9 @@ async def _astream(
self,
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None, # type: ignore[override]
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
*,
output_version: Optional[str] = "v0",
**kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
"""Stream the output of the model."""
@@ -296,10 +311,11 @@ def _llm_type(self) -> str:
model = ModelWithAsyncStream()
chunks = [chunk async for chunk in model.astream("anything")]
assert chunks == [
_any_id_ai_message_chunk(content="a"),
_any_id_ai_message_chunk(
content="a",
content="b",
chunk_position="last",
),
_any_id_ai_message_chunk(content="b", chunk_position="last"),
]
assert len({chunk.id for chunk in chunks}) == 1

@@ -351,6 +367,8 @@ def _generate(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> ChatResult:
return ChatResult(generations=[ChatGeneration(message=AIMessage("invoke"))])
Expand All @@ -367,6 +385,8 @@ def _stream(
messages: list[BaseMessage],
stop: Optional[list[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
*,
output_version: str = "v0",
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
yield ChatGenerationChunk(message=AIMessageChunk(content="stream"))
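Taken together, these test models spell out the updated override contract: `_generate`, `_stream`, and `_astream` now accept a keyword-only `output_version` argument. A minimal custom model following that contract; the class and its echo behavior are invented for illustration:

from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class EchoChatModel(BaseChatModel):
    """Toy model illustrating the updated `_generate` signature."""

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        *,
        output_version: str = "v0",  # new keyword-only parameter from this PR
        **kwargs: Any,
    ) -> ChatResult:
        # Echo the last input message back as the AI reply.
        reply = AIMessage(content=messages[-1].content)
        return ChatResult(generations=[ChatGeneration(message=reply)])

    @property
    def _llm_type(self) -> str:
        return "echo-chat-model"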