UTCP Agent is the easiest way to build custom, ready-to-use agents with intelligent tool-calling capabilities that can connect to any native endpoint. The agent automatically discovers, searches, and executes UTCP tools based on user queries.
The Universal Tool Calling Protocol (UTCP) is an open standard that enables AI agents to discover and directly call tools across various communication protocols, eliminating the need for wrapper servers and reducing latency.
| Feature | Description |
|---|---|
| Intelligent Tool Discovery | Automatically searches and selects relevant UTCP tools based on user queries. |
| Multi-LLM Support | Compatible with OpenAI, Anthropic, and other LangChain-supported language models. |
| LangGraph Workflow | Uses LangGraph for structured agent execution with proper state management. |
| Streaming Support | Optional streaming of workflow execution steps for real-time feedback. |
| Conversation Memory | Built-in conversation history and checkpointing for continuous conversations. |
| Flexible Configuration | Easily configurable through UTCP client config and agent config. |
```bash
pip install utcp-agent langchain-openai
```
Set your API key:
```bash
export OPENAI_API_KEY=your_api_key_here
```
```python
import asyncio
import os

from langchain_openai import ChatOpenAI
from utcp_agent import UtcpAgent


async def main():
    # Configure the LLM; reads your OpenAI API key from the environment
    llm = ChatOpenAI(
        model="gpt-4o-mini",
        api_key=os.getenv("OPENAI_API_KEY")
    )

    # Create an agent with book search capability via the Open Library OpenAPI spec
    agent = await UtcpAgent.create(
        llm=llm,
        utcp_config={
            "manual_call_templates": [{
                "name": "openlibrary",
                "call_template_type": "http",
                "http_method": "GET",
                "url": "https://openlibrary.org/static/openapi.json",
                "content_type": "application/json"
            }]
        }
    )

    # Chat with the agent
    response = await agent.chat("Can you search for books by George Orwell?")
    print(f"Agent: {response}")


if __name__ == "__main__":
    asyncio.run(main())
```
```python
from utcp_agent import UtcpAgent, UtcpAgentConfig
from langgraph.checkpoint.memory import MemorySaver

# llm and utcp_config are defined as in the Quick Start example above
agent_config = UtcpAgentConfig(
    max_tools_per_search=10,
    checkpointer=MemorySaver(),
    system_prompt="You are a helpful AI assistant with access to various tools through UTCP."
)

agent = await UtcpAgent.create(
    llm=llm,
    utcp_config=utcp_config,
    agent_config=agent_config
)

# Use thread_id for conversation continuity
response = await agent.chat("Find me a science fiction book", thread_id="user_1")
```
```python
from pathlib import Path

utcp_config = {
    "load_variables_from": [{
        "variable_loader_type": "dotenv",
        "env_file_path": str(Path(__file__).parent / ".env")
    }],
    "manual_call_templates": [{
        "name": "openlibrary",
        "call_template_type": "http",
        "http_method": "GET",
        "url": "https://openlibrary.org/static/openapi.json",
        "content_type": "application/json"
    }]
}
```
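The referenced `.env` file holds plain `KEY=value` pairs that the dotenv loader turns into UTCP variables; for example (the variable name here is purely illustrative):

```bash
# .env — loaded by the dotenv variable loader above
MY_SERVICE_TOKEN=your_secret_token_here
```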
```python
async for step in agent.stream("Search for AI books"):
    print(f"Step: {step}")
```
The agent follows a structured workflow using LangGraph, a library for building stateful, multi-actor applications with LLMs.
- Analyze Task: Understands the user's query and formulates the current task.
- Search Tools: Uses UTCP to find relevant tools for the task.
- Decide Action: Determines whether to call tools or respond directly.
- Execute Tools: Calls the selected tools with appropriate arguments.
- Respond: Formats and returns the final response to the user.
```mermaid
graph TD
    A[User Input] --> B[Analyze Task]
    B --> C[Search Tools]
    C --> D[Decide Action]
    D --> E{Action Type}
    E -->|Call Tool| F[Execute Tools]
    E -->|Respond| G[Generate Response]
    F --> G
    G --> H[End]
```
See the `examples/` directory for comprehensive examples:

- `basic_openai.py`: Using GPT models with book search.
- `basic_anthropic.py`: Using Claude models.
- `streaming_example.py`: Real-time workflow monitoring.
- `config_file_example.py`: Loading UTCP configuration from files.
- `memory_conversation.py`: Multi-turn conversations with memory.
| Option | Description |
|---|---|
| `max_iterations` | Maximum workflow iterations (default: 3). |
| `max_tools_per_search` | Maximum tools to retrieve per search (default: 10). |
| `system_prompt` | Custom system prompt for the agent. |
| `checkpointer` | LangGraph checkpointer for conversation memory. |
| `callbacks` | LangChain callbacks for observability. |
| `summarize_threshold` | Token count threshold for context summarization (default: 80000). |
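For reference, a sketch of an agent config that sets every option above; the values are illustrative, and `MemorySaver` / `StdOutCallbackHandler` stand in for any LangGraph checkpointer and LangChain callback:

```python
from langchain_core.callbacks import StdOutCallbackHandler
from langgraph.checkpoint.memory import MemorySaver
from utcp_agent import UtcpAgentConfig

agent_config = UtcpAgentConfig(
    max_iterations=3,               # stop after 3 workflow loops (default)
    max_tools_per_search=10,        # tools retrieved per UTCP search (default)
    system_prompt="You are a helpful AI assistant with access to various tools through UTCP.",
    checkpointer=MemorySaver(),     # in-memory conversation checkpoints
    callbacks=[StdOutCallbackHandler()],  # LangChain observability hook
    summarize_threshold=80000       # summarize context past this many tokens (default)
)
```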
The agent accepts a standard UTCP client configuration, which can include:
- Variable definitions and loading
- Manual call templates
- Tool provider configurations
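For illustration, a sketch of a config touching all three areas; the inline `variables` map is an assumption about the standard UTCP client config schema, while the loader and call-template keys match the examples above:

```python
utcp_config = {
    # Assumed: inline variable definitions available to call templates.
    "variables": {"SERVICE_TOKEN": "replace-me"},
    # Load further variables from a .env file.
    "load_variables_from": [{
        "variable_loader_type": "dotenv",
        "env_file_path": ".env"
    }],
    # Manually registered tool providers.
    "manual_call_templates": [{
        "name": "openlibrary",
        "call_template_type": "http",
        "http_method": "GET",
        "url": "https://openlibrary.org/static/openapi.json",
        "content_type": "application/json"
    }]
}
```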
- `create(llm, utcp_config=None, agent_config=None, root_dir=None)`: Creates and initializes a `UtcpAgent` with an automatic UTCP client.
- `chat(user_input: str, thread_id: Optional[str] = None) -> str`: Processes user input and returns the agent's response. Use `thread_id` to maintain conversational continuity.
- `stream(user_input: str, thread_id: Optional[str] = None)`: Streams the workflow execution steps.
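Tying the three methods together, a minimal sketch that reuses the `llm` and `utcp_config` objects from the earlier examples:

```python
agent = await UtcpAgent.create(llm=llm, utcp_config=utcp_config)

# chat() returns the final response string.
answer = await agent.chat("Search for books by Ursula K. Le Guin", thread_id="demo")

# stream() yields workflow steps as they execute, on the same thread.
async for step in agent.stream("Which of those are science fiction?", thread_id="demo"):
    print(step)
```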
The agent includes comprehensive error handling for:

- Tool execution failures
- JSON parsing errors in LLM responses
- UTCP client errors

Fallback responses ensure the agent always provides a reply.
Enable logging to monitor the agent's behavior:
```python
import logging

logging.basicConfig(level=logging.INFO)

# Quiet the UTCP library's logging for cleaner output
logging.getLogger("utcp").setLevel(logging.WARNING)
```
- Follow the existing code style and patterns
- Add tests for new functionality
- Update documentation for API changes
- Ensure compatibility with UTCP core library
See the LICENSE file for details.