
[Feature Request]: Sample Agent - LangChain Integration (OpenAI & A2A Endpoints) #262

@crivetimihai

Description


🧭 Epic

Title: Sample Agent – LangChain Integration (OpenAI & A2A Endpoints)

Goal: Deliver a ready‑to‑fork reference agent built with LangChain that can:

  • Connect to an MCP Gateway instance via REST & WebSocket.
  • Dynamically retrieve the current tool catalog or accept a user‑supplied CSV list of tool IDs.
  • Expose both OpenAI‑compatible /v1/chat/completions and A2A JSON‑RPC endpoints so existing clients and other gateways can invoke it.

Why now: Teams spinning up PoCs need a concrete, well‑documented example to understand "the happy path." A drop‑in agent accelerates integrations, showcases best practices, and feeds future tutorials, demos, and benchmarking suites.


🧭 Type of Feature

  • Developer tooling / sample code
  • Integration & connectivity

🙋‍♂️ User Story 1 — Dynamic Tool Discovery

As a: Gateway app developer
I want: the agent to call GET /tools on startup (or accept a comma‑separated TOOLS= env/CLI arg)
So that: the available tool list stays in sync without code edits.

✅ Acceptance Criteria

Scenario: Auto‑discover tools at launch
Given the gateway URL is "https://gw.local"
And TOOLS env var is *unset*
When the agent starts
Then it MUST call GET https://gw.local/tools
And cache the returned tool schema for routing
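The discovery step above can be sketched as a small catalog class that fetches `GET /tools` once and caches the result for routing. This is a minimal sketch: the `ToolCatalog` name, the injectable `opener` parameter, and the assumption that `/tools` returns a JSON array of tool schemas are all illustrative, not part of the gateway's finalized contract.

```python
import json
import urllib.request


class ToolCatalog:
    """Fetches the gateway's tool schemas once at startup and caches them.

    Sketch only: the /tools path and the JSON-array response shape are
    assumptions; a real client would also send an auth token.
    """

    def __init__(self, gateway_url, opener=urllib.request.urlopen):
        self.gateway_url = gateway_url.rstrip("/")
        self._opener = opener  # injectable for testing / auth wrapping
        self._cache = None

    def tools(self):
        # Only hit the gateway on the first call; reuse the cache afterwards.
        if self._cache is None:
            with self._opener(f"{self.gateway_url}/tools") as resp:
                self._cache = json.loads(resp.read())
        return self._cache
```

Injecting the opener keeps the class testable without a live gateway and leaves room to layer in the auth-token support planned for `gateway_client.py`.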

🙋‍♂️ User Story 2 — Dual Endpoint Exposure

As a: Client integrator
I want: the agent to serve both /v1/chat/completions (OpenAI spec) and /a2a JSON‑RPC
So that: existing OpenAI SDKs and A2A‑aware gateways can call the same business logic.

✅ Acceptance Criteria

Scenario: OpenAI endpoint responds
Given the agent is running on :8000
When I POST valid OpenAI chat JSON to /v1/chat/completions
Then I receive a streaming "choices" response with tool calls resolved via MCP
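The adapter's core job is reshaping the agent's answer into the OpenAI response envelope. A minimal non-streaming sketch is below; field names follow the OpenAI chat-completion spec, while `agent_reply` is a hypothetical stand-in for the LangChain `AgentExecutor` call (the real endpoint would also stream chunks and resolve tool calls via MCP first).

```python
def chat_completion(body, agent_reply):
    """Wrap an agent answer in an OpenAI-style chat.completion response.

    body        -- parsed request JSON with an OpenAI "messages" list
    agent_reply -- callable(messages) -> str; placeholder for the
                   LangChain executor (assumption, not the real API)
    """
    answer = agent_reply(body["messages"])
    return {
        "object": "chat.completion",
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": answer},
                "finish_reason": "stop",
            }
        ],
    }
```

Keeping this translation pure (no framework objects in or out) means the same function can back both a Flask blueprint and a FastAPI router.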

🙋‍♂️ User Story 3 — Parameterised Tool Allow‑List

As a: DevOps engineer
I want: to restrict the agent to --tools "weather.get,math.calculate"
So that: only approved tools are callable in production.

✅ Acceptance Criteria

Scenario: CSV override takes precedence
Given CLI flag --tools="weather.get,math.calculate"
When the agent starts
Then it MUST NOT call /tools autodiscovery
And only map those two tools into the LangChain agent
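The precedence rule can be captured in one small function: a non-empty CSV wins outright, and discovery runs only when no override is given. The `select_tools` name and `discover` callback are illustrative; the behavior (CSV present means no `/tools` call) is the acceptance criterion above.

```python
def select_tools(cli_tools, discover):
    """Return the allow-listed tool IDs.

    cli_tools -- value of --tools / TOOLS, e.g. "weather.get,math.calculate",
                 or None/empty when unset
    discover  -- zero-arg callable performing GET /tools; must NOT be
                 invoked when an override is present
    """
    if cli_tools:
        # Explicit allow-list: skip autodiscovery entirely.
        return [t.strip() for t in cli_tools.split(",") if t.strip()]
    return discover()
```

Only the returned IDs would then be mapped into LangChain tool wrappers, so production deployments can pin exactly two approved tools.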

🙋‍♂️ User Story 4 — Tool Schema Introspection

As a: Tool author
I want: the agent to inspect the JSON schema of each tool
So that: it can auto‑generate structured‑output prompts and argument validation.
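A first cut of that introspection is a shallow validator over each tool's JSON-schema input: check `required` keys and basic types before invoking the tool. This is a sketch of the idea only; a production agent would use a full validator (e.g. the `jsonschema` package) rather than this hand-rolled subset.

```python
# Mapping from JSON-schema type names to Python types (subset; note that
# a full validator also handles formats, enums, and nested objects).
_JSON_TYPES = {
    "string": str,
    "number": (int, float),
    "integer": int,
    "boolean": bool,
    "array": list,
    "object": dict,
}


def validate_args(schema, args):
    """Return a list of error strings for args vs. a tool's input schema."""
    errors = []
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    for name, value in args.items():
        expected = props.get(name, {}).get("type")
        if expected and not isinstance(value, _JSON_TYPES[expected]):
            errors.append(f"{name}: expected {expected}")
    return errors
```

The same schema walk can feed prompt generation: listing each property's name, type, and description gives the model a structured-output template for free.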


📐 Design Sketch

```mermaid
flowchart TD
    subgraph Startup
        A[Parse CLI / ENV] --> B{Tool list?}
        B -- CSV given --> C[Load CSV list]
        B -- none --> D[GET /tools]
        C --> E[Build LangChain tool wrappers]
        D --> E
    end
    E --> F[ChatAgent - OpenAI API]
    E --> G[A2A RPC handler]
    F --> H[MCP Gateway REST / WS]
    G --> H
```
| Component | Change | Detail |
| --- | --- | --- |
| `agent_langchain.py` | NEW | Core agent; wraps LangChain AgentExecutor with tool router |
| `gateway_client.py` | UPDATE | Add /tools fetch & auth-token support |
| `cli.py` | NEW | `sample-agent start --gateway-url --tools CSV` |
| `openai_adapter.py` | NEW | Flask / FastAPI blueprint that mimics /v1/chat/completions |
| `a2a_rpc.py` | NEW | JSON‑RPC handler exposing agent.invoke |
| `docker/Dockerfile` | NEW | Multi‑arch image for quick deploy |
| Docs | ADD | Quick‑start, env vars, curl examples, Helm chart snippet |
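For the `a2a_rpc.py` piece, a minimal JSON-RPC 2.0 shim shows the intended shape. The `agent.invoke` method name comes from the table above; the error code follows the JSON-RPC 2.0 spec, while the `handle_a2a` function and injectable `invoke` callable are assumptions for illustration.

```python
import json


def handle_a2a(raw, invoke):
    """Dispatch one JSON-RPC 2.0 request to the agent (sketch).

    raw    -- request body as a JSON string
    invoke -- callable(params) -> result; stand-in for the LangChain
              agent's invoke path (assumption, not the final API)
    """
    req = json.loads(raw)
    if req.get("jsonrpc") != "2.0" or req.get("method") != "agent.invoke":
        # -32601 is the spec-defined "Method not found" error code.
        return {
            "jsonrpc": "2.0",
            "id": req.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        }
    return {
        "jsonrpc": "2.0",
        "id": req.get("id"),
        "result": invoke(req.get("params", {})),
    }
```

Because the handler takes a string in and returns a plain dict, the same function can sit behind either framework chosen for the OpenAI adapter.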

🔄 Roll‑out Plan

  1. Phase 0: Skeleton repo + CI + lint.
  2. Phase 1: Implement tool discovery & CSV override.
  3. Phase 2: Wire OpenAI endpoint; add handful of unit tests (pytest).
  4. Phase 3: Wire A2A RPC; integration tests against local gateway.
  5. Phase 4: Docker/Helm artefacts; publish to ghcr.io.

📣 Next Steps

  • Scaffold repo under agent_runtimes/langchain_agent.
  • Add a tutorial in docs.

Labels

agents (Agent Samples) · enhancement (New feature or request) · python (Python / backend development, FastAPI) · triage (Issues / Features awaiting triage)