Prototype OpenTelemetry Traces in MCP Server #274
Conversation
✅ Docs preview ready — The preview is ready to be viewed. File changes: 0 new, 5 changed, 0 removed.
Build ID: 567da6c89ab9259c96ee61f9
URL: https://www.apollographql.com/docs/deploy-preview/567da6c89ab9259c96ee61f9
❌ Changeset file missing for PR — All changes should include an associated changeset file.
Pull Request Overview
This PR introduces OpenTelemetry tracing support to the Apollo MCP Server as a prototype feature. The changes enable distributed tracing by integrating OTel SDKs and instrumenting key operations with tracing capabilities.
- Adds OpenTelemetry tracing infrastructure with proper initialization and shutdown handling
- Instruments HTTP requests and GraphQL operations with trace spans
- Refactors logging system to work alongside OpenTelemetry layers
Reviewed Changes
Copilot reviewed 8 out of 9 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| Cargo.toml | Adds OpenTelemetry-related dependencies and enables tower-http trace features |
| src/main.rs | Replaces standalone logging setup with integrated tracing subscriber initialization |
| src/runtime.rs | Exposes new trace module |
| src/runtime/trace.rs | Implements OpenTelemetry tracer and meter provider initialization with proper cleanup |
| src/runtime/logging.rs | Refactors logging to return components for integration with OpenTelemetry layers |
| src/server/states/starting.rs | Adds Axum and Tower HTTP tracing middleware to the router |
| src/server/states/running.rs | Instruments ServerHandler methods with tracing macros |
| src/graphql.rs | Adds OpenTelemetry instrumentation to GraphQL client requests |
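The init/shutdown flow the trace module implements follows the usual OpenTelemetry lifecycle: build the providers at startup, hold onto the handles, and flush both on shutdown. A minimal, stdlib-only sketch of that shape — the `Provider` and `Telemetry` types below are stand-ins for illustration, not the real `opentelemetry_sdk` types:

```rust
/// Stand-in for an SDK provider: pending data is flushed on shutdown.
struct Provider {
    pending: usize,
}

impl Provider {
    fn new() -> Self {
        Provider { pending: 0 }
    }
    fn record(&mut self) {
        self.pending += 1;
    }
    fn shutdown(&mut self) -> Result<(), String> {
        self.pending = 0; // flush everything that is still buffered
        Ok(())
    }
}

/// Mirrors the PR's pattern: initialize at startup, keep the handles,
/// and shut both providers down when the server stops.
struct Telemetry {
    tracer: Provider,
    meter: Provider,
}

impl Telemetry {
    fn init() -> Self {
        Telemetry {
            tracer: Provider::new(),
            meter: Provider::new(),
        }
    }
    fn shutdown(&mut self) {
        // Log shutdown errors rather than propagating them, so a failure
        // in one provider doesn't skip the other provider's cleanup.
        if let Err(err) = self.tracer.shutdown() {
            eprintln!("{err:?}");
        }
        if let Err(err) = self.meter.shutdown() {
            eprintln!("{err:?}");
        }
    }
}

fn main() {
    let mut telemetry = Telemetry::init();
    telemetry.tracer.record();
    telemetry.shutdown();
    assert_eq!(telemetry.tracer.pending, 0);
    println!("providers flushed");
}
```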
```rust
let tracer_provider = init_tracer_provider()?;
let meter_provider = init_meter_provider()?;
let env_filter = Logging::env_filter(config)?;
let (logging_layer, logging_guard) = Logging::logging_layer(config)?;
```
The logging setup logic is split between this module and the Logging module, creating tight coupling. Consider consolidating all tracing setup logic in one place or making the dependency more explicit through better separation of concerns.
```diff
-let (logging_layer, logging_guard) = Logging::logging_layer(config)?;
+let env_filter = build_env_filter(config)?;
+let (logging_layer, logging_guard) = build_logging_layer(config)?;
```
```rust
if let Err(err) = self.meter_provider.shutdown() {
    eprintln!("{err:?}");
}
self.logging_guard.take();
```
Taking the logging_guard without dropping it explicitly may not properly clean up the logging worker. The guard should be allowed to drop naturally or explicitly dropped to ensure proper cleanup.
```diff
-self.logging_guard.take();
+drop(self.logging_guard.take());
```
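To illustrate the pattern the suggestion asks for: `Option::take()` moves the guard out of the struct, and wrapping the result in `drop()` runs its `Drop` impl at that exact statement, making the cleanup point explicit. A stdlib-only sketch, with a hypothetical `Guard` standing in for `tracing-appender`'s `WorkerGuard` (which flushes the logging worker on drop):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the guard's Drop impl has run.
static DROPPED: AtomicBool = AtomicBool::new(false);

/// Hypothetical stand-in for a WorkerGuard: cleanup happens in Drop.
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

/// Mirrors the suggested `drop(self.logging_guard.take())`:
/// the guard is moved out of the Option and dropped immediately,
/// so the cleanup runs here rather than at some later point.
fn shutdown(slot: &mut Option<Guard>) -> bool {
    drop(slot.take());
    DROPPED.load(Ordering::SeqCst)
}

fn main() {
    let mut slot = Some(Guard);
    assert!(shutdown(&mut slot)); // Drop ran during shutdown
    assert!(slot.is_none());      // the guard is gone from the struct
    println!("guard cleanup ran");
}
```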
```diff
@@ -180,6 +183,7 @@ impl ServerHandler for Running {
         Ok(self.get_info())
     }

+    #[tracing::instrument(skip(self, context), fields(tool_name = request.name.as_ref(), request_id = %context.id.clone()))]
```
The `context.id.clone()` in the tracing fields creates an unnecessary clone on every function call. Consider using a reference or moving the field extraction into the span creation to avoid the clone overhead.
```diff
-#[tracing::instrument(skip(self, context), fields(tool_name = request.name.as_ref(), request_id = %context.id.clone()))]
+#[tracing::instrument(skip(self, context), fields(tool_name = request.name.as_ref(), request_id = %context.id))]
```
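For context on why the clone is unnecessary: the `%` sigil in `tracing::instrument` fields records the value via its `Display` implementation, and `Display::fmt` takes `&self`, so a shared borrow is all that's needed. A stdlib-only illustration, with a hypothetical `RequestId` type standing in for the real context id:

```rust
use std::fmt;

/// Hypothetical stand-in for the request context's id type;
/// like the real one, it implements Display.
#[derive(Debug, Clone, PartialEq)]
struct RequestId(String);

impl fmt::Display for RequestId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(&self.0)
    }
}

/// Formats the id the way `%field` does in a tracing span:
/// through Display, via a shared borrow — no clone required.
fn record_request_id(id: &RequestId) -> String {
    format!("{id}")
}

fn main() {
    let id = RequestId("req-42".to_string());
    let recorded = record_request_id(&id);
    assert_eq!(recorded, "req-42");
    // `id` is still owned and usable afterwards; nothing was cloned.
    assert_eq!(id, RequestId("req-42".to_string()));
    println!("recorded request_id = {recorded}");
}
```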
Co-authored-by: Dale Seo <[email protected]>
Thanks for addressing my feedback! 🙇
This change pulls in new crates and SDKs for instrumenting the Apollo MCP Server with OpenTelemetry Traces. There is still plenty to work on, so this PR is set up to merge into the long-standing feature branch `feature/telemetry`.

Changes
Testing
Currently there is no unit testing, since most tracing happens through macros and there is not yet any configuration beyond the environment variables used within the SDKs.
I ran the all-in-one Jaeger container and started the MCP server with the following env vars.
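The exact variables used for this test run aren't captured above. For reference, the standard settings that OpenTelemetry SDKs read for the OTLP exporter look like the following — the values here are illustrative only (Jaeger all-in-one exposes OTLP gRPC on port 4317):

```sh
# Illustrative values only — the PR's actual settings are not shown above.
export OTEL_SERVICE_NAME=apollo-mcp-server
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```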
This was able to produce traces such as...
