🧭 Epic
Title: Sample Agent – LangChain Integration (OpenAI & A2A Endpoints)
Goal: Deliver a ready‑to‑fork reference agent built with LangChain that can:
- Connect to an MCP Gateway instance via REST & WebSocket.
- Dynamically retrieve the current tool catalog or accept a user‑supplied CSV list of tool IDs.
- Expose both an OpenAI‑compatible /v1/chat/completions endpoint and an A2A JSON‑RPC endpoint so existing clients and other gateways can invoke it.
Why now: Teams spinning up PoCs need a concrete, well‑documented example to understand “the happy path.” A drop‑in agent accelerates integrations, showcases best practices, and feeds future tutorials, demos, and benchmarking suites.
🧭 Type of Feature
- Developer tooling / sample code
- Integration & connectivity
🙋‍♂️ User Story 1 — Dynamic Tool Discovery
As a: Gateway app developer
I want: the agent to call GET /tools on startup (or accept a comma‑separated TOOLS= env/CLI arg)
So that: the available tool list stays in sync without code edits.
✅ Acceptance Criteria
Scenario: Auto‑discover tools at launch
Given the gateway URL is "https://gw.local"
And TOOLS env var is *unset*
When the agent starts
Then it MUST call GET https://gw.local/tools
And cache the returned tool schema for routing
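A minimal sketch of this discovery path (illustrative only; it assumes httpx, that GET /tools returns a JSON array of tool descriptors, and that the TOOLS override from User Story 3 short‑circuits the call):

```python
# Sketch of startup tool discovery (hypothetical gateway_client.py helpers).
# Assumes GET {gateway_url}/tools returns a JSON array of tool descriptors.
import os

import httpx


def discover_tools(gateway_url: str, auth_token: str | None = None) -> list[dict]:
    """Fetch the current tool catalog from the MCP Gateway."""
    headers = {"Authorization": f"Bearer {auth_token}"} if auth_token else {}
    response = httpx.get(f"{gateway_url}/tools", headers=headers, timeout=10.0)
    response.raise_for_status()
    return response.json()


def load_tool_catalog(gateway_url: str) -> list[dict]:
    """Prefer the TOOLS env var allow-list; fall back to auto-discovery when unset."""
    csv_override = os.getenv("TOOLS")
    if csv_override:
        # Allow-list mode: only the listed IDs are used and /tools is never called.
        return [{"id": tool_id.strip()} for tool_id in csv_override.split(",") if tool_id.strip()]
    return discover_tools(gateway_url)
```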
🙋‍♂️ User Story 2 — Dual Endpoint Exposure
As a: Client integrator
I want: the agent to serve both /v1/chat/completions (OpenAI spec) and /a2a (JSON‑RPC)
So that: existing OpenAI SDKs and A2A‑aware gateways can call the same business logic.
✅ Acceptance Criteria
Scenario: OpenAI endpoint responds
Given the agent is running on :8000
When I POST valid OpenAI chat JSON to /v1/chat/completions
Then I receive a streaming "choices" response with tool calls resolved via MCP
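A rough FastAPI sketch of the OpenAI‑compatible surface, streaming a single chat.completion.chunk as server‑sent events; run_agent is a placeholder for the LangChain AgentExecutor wiring, not an existing function:

```python
# Hypothetical openai_adapter.py: mimics POST /v1/chat/completions with SSE streaming.
import json
import time
import uuid

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()


async def run_agent(messages: list[dict]) -> str:
    """Placeholder for the LangChain AgentExecutor call that resolves MCP tool calls."""
    return "stub response"


@app.post("/v1/chat/completions")
async def chat_completions(request: Request) -> StreamingResponse:
    body = await request.json()

    async def stream():
        answer = await run_agent(body.get("messages", []))
        chunk = {
            "id": f"chatcmpl-{uuid.uuid4().hex}",
            "object": "chat.completion.chunk",
            "created": int(time.time()),
            "model": body.get("model", "sample-agent"),
            "choices": [{"index": 0, "delta": {"content": answer}, "finish_reason": "stop"}],
        }
        yield f"data: {json.dumps(chunk)}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(stream(), media_type="text/event-stream")
```

Because the route shape matches the OpenAI spec, existing OpenAI SDKs only need a custom base URL to call the agent.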
🙋‍♂️ User Story 3 — Parameterised Tool Allow‑List
As a: DevOps engineer
I want: to restrict the agent to an allow‑list via --tools "weather.get,math.calculate"
So that: only approved tools are callable in production.
✅ Acceptance Criteria
Scenario: CSV override takes precedence
Given CLI flag --tools="weather.get,math.calculate"
When the agent starts
Then it MUST NOT call /tools autodiscovery
And only map those two tools into the LangChain agent
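One possible shape for the precedence rule in cli.py (argument names mirror the flag above; the start subcommand is omitted for brevity):

```python
# Hypothetical cli.py sketch: the --tools CSV flag takes precedence over GET /tools.
import argparse


def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(prog="sample-agent")
    parser.add_argument("--gateway-url", required=True)
    parser.add_argument("--tools", help="Comma-separated allow-list of tool IDs")
    return parser.parse_args(argv)


def resolve_tool_ids(args: argparse.Namespace, discover) -> list[str]:
    """If --tools is set, use it verbatim and skip auto-discovery entirely."""
    if args.tools:
        return [t.strip() for t in args.tools.split(",") if t.strip()]
    return [tool["id"] for tool in discover(args.gateway_url)]
```

Keeping the precedence decision in one small function makes the “MUST NOT call /tools” criterion easy to unit‑test.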
🙋‍♂️ User Story 4 — Tool Schema Introspection
As a: Tool author
I want: the agent to inspect the JSON schema of each tool
So that: it can auto‑generate structured‑output prompts and argument validation.
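A sketch of how each tool's JSON schema could be turned into a validated LangChain StructuredTool (import paths vary across LangChain versions; the descriptor shape and the call_gateway_tool helper are assumptions, not gateway API):

```python
# Sketch: turn a gateway tool descriptor (name, description, JSON schema for args)
# into a LangChain StructuredTool so prompting and argument validation come for free.
from langchain_core.tools import StructuredTool
from pydantic import create_model

JSON_TO_PY = {"string": str, "integer": int, "number": float, "boolean": bool}


def build_tool(descriptor: dict, call_gateway_tool) -> StructuredTool:
    schema = descriptor.get("input_schema", {})
    required = set(schema.get("required", []))
    fields = {
        name: (JSON_TO_PY.get(prop.get("type", "string"), str),
               ... if name in required else None)
        for name, prop in schema.get("properties", {}).items()
    }
    args_model = create_model(f"{descriptor['name']}Args", **fields)

    def _invoke(**kwargs):
        # kwargs have already been parsed against args_model by the tool machinery.
        return call_gateway_tool(descriptor["name"], kwargs)

    return StructuredTool.from_function(
        func=_invoke,
        name=descriptor["name"],
        description=descriptor.get("description", ""),
        args_schema=args_model,
    )
```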
📐 Design Sketch
flowchart TD
subgraph Startup
A[Parse CLI / ENV] --> B{Tool list?}
B -- CSV given --> C[Load CSV list]
B -- none --> D[GET /tools]
C --> E[Build LangChain tool wrappers]
D --> E
end
E --> F[ChatAgent - OpenAI API]
E --> G[A2A RPC handler]
F --> H[MCP Gateway REST / WS]
G --> H
| Component | Change | Detail |
|---|---|---|
| agent_langchain.py | NEW | Core agent; wraps LangChain AgentExecutor with tool router |
| gateway_client.py | UPDATE | Add /tools fetch & auth token support |
| cli.py | NEW | sample-agent start --gateway-url --tools CSV |
| openai_adapter.py | NEW | Flask / FastAPI blueprint that mimics /v1/chat/completions |
| a2a_rpc.py | NEW | JSON‑RPC handler exposing agent.invoke |
| docker/Dockerfile | NEW | Multi‑arch image for quick deploy |
| Docs | ADD | Quick‑start, env vars, curl examples, helm chart snippet |
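For the a2a_rpc.py handler, a minimal JSON‑RPC 2.0 sketch exposing agent.invoke could look like this (the envelope follows JSON‑RPC 2.0; agent_invoke is a stand‑in for the shared agent logic):

```python
# Hypothetical a2a_rpc.py: minimal JSON-RPC 2.0 endpoint exposing agent.invoke at /a2a.
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()


async def agent_invoke(params: dict) -> dict:
    """Placeholder: delegate to the same LangChain agent behind /v1/chat/completions."""
    return {"output": "stub response", "echo": params}


@app.post("/a2a")
async def a2a(request: Request) -> JSONResponse:
    req = await request.json()
    if req.get("method") != "agent.invoke":
        return JSONResponse({
            "jsonrpc": "2.0",
            "id": req.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        })
    result = await agent_invoke(req.get("params", {}))
    return JSONResponse({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
```

Both adapters can share one agent instance so the OpenAI and A2A surfaces never drift apart.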
🔄 Roll‑out Plan
- Phase 0: Skeleton repo + CI + lint.
- Phase 1: Implement tool discovery & CSV override.
- Phase 2: Wire OpenAI endpoint; add a handful of unit tests (pytest).
- Phase 3: Wire A2A RPC; integration tests against local gateway.
- Phase 4: Docker/Helm artefacts; publish to ghcr.io.
📣 Next Steps
- Scaffold repo under agent_runtimes/langchain_agent.
- Add a tutorial in docs.