Using openai.with_structured_output and tools together #29760
Replies: 2 comments 3 replies
Hello, @ThePrimeJnr! I'm here to help you with any bugs, questions, or contributions you have. Let me know how I can assist you while you wait for a human maintainer. You cannot directly combine `with_structured_output` and tool binding on the same LLM instance in LangChain.
I think the need to combine tools and structured outputs on the same LLM instance is legitimate. Imagine a graph where the LLM is called in a loop until it produces no more tool-call requests. The resulting structured output then goes to a validation node, which falls back to the LLM loop on any error, or returns the output to the caller once it satisfies all Pydantic constraints. This allows GenAI to be integrated as a reliable component in a programmatic workflow, with built-in error handling.

This simple structure can be extended with reasoning, planning, or other patterns as needed, yet the tool-calling + structured-output LLM remains the central building block. LangChain's design choice to disallow this is unfortunate, especially since e.g. OpenAI's API explicitly allows specifying both tool definitions and a structured-output schema in the same request (and the model makes use of one or the other in each turn). The workarounds that come to mind suffer from limitations:
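The loop-plus-validation pattern described above can be sketched in plain Python, with no LangChain dependency. Everything here is invented for illustration (`ModelTurn`, the stub validation, the message format); a real implementation would use LangGraph nodes and Pydantic models instead.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ModelTurn:
    """One model response: pending tool calls plus a candidate final output."""
    tool_calls: list   # empty when the model is done calling tools
    output: dict       # candidate structured output


def validate(output: dict) -> list:
    """Stand-in for Pydantic validation: return a list of error messages."""
    errors = []
    if not isinstance(output.get("answer"), str) or not output["answer"]:
        errors.append("answer: must be a non-empty string")
    return errors


def run_agent(model: Callable, messages: list) -> dict:
    while True:
        turn = model(messages)
        # Tool-calling loop: keep invoking the model until no tool calls remain.
        while turn.tool_calls:
            for call in turn.tool_calls:
                messages.append({"role": "tool", "content": f"result of {call}"})
            turn = model(messages)
        # Validation node: return on success, otherwise loop back with the errors.
        errors = validate(turn.output)
        if not errors:
            return turn.output
        messages.append({"role": "user", "content": "Fix errors: " + "; ".join(errors)})
```

The validation node is the only exit point, so any schema violation is fed back into the loop rather than surfacing to the caller.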
@dosu any thoughts on the above?
How can I use `openai.with_structured_output` and tools together?
I can do this with `PydanticOutputParser`, but I would like to use `with_structured_output` instead, since it gives better results.
```python
from typing import Any, List

from langchain_core.exceptions import OutputParserException
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode

from app.schemas.chat import AssistantResponse
from app.services.prompt import (
    BASE_PROMPT,
    CONTACT_PROMPT,
    SERVICE_PROMPT,
)
from app.services.tools import get_matching_services, search_for_contact
from app.services.utils import openai


def create_workflow(tools: List[Any], system_message: str):
    """Create a workflow using LCEL (LangChain Expression Language)."""
    parser = PydanticOutputParser(pydantic_object=AssistantResponse)
    # ... (rest of the function body omitted in the original post)


service_workflow = create_workflow(
    tools=[get_matching_services], system_message=SERVICE_PROMPT
)
contact_workflow = create_workflow(
    tools=[search_for_contact], system_message=CONTACT_PROMPT
)
general_workflow = create_workflow(tools=[], system_message=BASE_PROMPT)


def create_agent_thread(workflow):
    """Create a new conversation thread."""
    checkpointer = MemorySaver()
    thread_id = id(checkpointer)
    # ... (rest of the function body omitted in the original post)
```
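One commonly discussed workaround is to expose the structured-output schema as one more "tool" alongside the real tools, and to treat a call to that pseudo-tool as the final answer. The sketch below is library-free and every name in it (`RESPOND_TOOL`, `extract_final_answer`) is invented for illustration; it only shows the shape of the trick, not LangChain API.

```python
import json
from typing import Optional

# A pseudo-tool whose JSON-schema parameters mirror the desired output model.
RESPOND_TOOL = {
    "name": "respond",
    "description": "Return the final structured answer to the user.",
    "parameters": {
        "type": "object",
        "properties": {"answer": {"type": "string"}},
        "required": ["answer"],
    },
}


def extract_final_answer(tool_call: dict) -> Optional[dict]:
    """If the model called the `respond` pseudo-tool, parse its arguments;
    otherwise return None so the caller dispatches the real tool instead."""
    if tool_call.get("name") != "respond":
        return None
    return json.loads(tool_call["arguments"])
```

The limitation, as noted above, is that this relies on the model reliably choosing the pseudo-tool for its final turn, and it bypasses the stricter guarantees that native structured-output modes provide.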