Ingest internal telemetry from the OTel Collector when it is running #9928
base: main
Conversation
This pull request does not have a backport label. Could you fix it @faec? 🙏
```
@@ -486,6 +512,47 @@ func (b *BeatsMonitor) monitoringNamespace() string {
	return defaultMonitoringNamespace
}

func (b *BeatsMonitor) getCollectorTelemetryEndpoint() string {
	type metricsReaderConfig struct {
```
This was the least verbose way I could find to extract the prometheus endpoint configuration (the mapstructure annotations don't seem to accept dotted field names), but I'd be happy to learn if there's a better one.
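As a concrete illustration of the workaround described above, the sketch below shows the general nested-struct pattern for pulling a prometheus endpoint out of a collector telemetry reader entry with mapstructure tags instead of dotted field names. The struct layout, field names, default port, and import path (the project may use a fork of mapstructure) are assumptions for illustration, not the exact code added in this PR.

```go
package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// metricsReaderConfig mirrors one entry of the collector's
// service::telemetry::metrics::readers list. Nested anonymous structs are
// used because mapstructure tags cannot express dotted paths such as
// "pull.exporter.prometheus.host". Field names here are assumptions.
type metricsReaderConfig struct {
	Pull struct {
		Exporter struct {
			Prometheus struct {
				Host string `mapstructure:"host"`
				Port int    `mapstructure:"port"`
			} `mapstructure:"prometheus"`
		} `mapstructure:"exporter"`
	} `mapstructure:"pull"`
}

func main() {
	// A reader entry as it might look after the collector config has been
	// unpacked into a generic map.
	raw := map[string]interface{}{
		"pull": map[string]interface{}{
			"exporter": map[string]interface{}{
				"prometheus": map[string]interface{}{
					"host": "localhost",
					"port": 8888,
				},
			},
		},
	}

	var cfg metricsReaderConfig
	if err := mapstructure.Decode(raw, &cfg); err != nil {
		panic(err)
	}

	// Build the scrape URL for the monitoring component.
	fmt.Printf("http://%s:%d/metrics\n",
		cfg.Pull.Exporter.Prometheus.Host,
		cfg.Pull.Exporter.Prometheus.Port)
}
```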
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
Add a monitoring component to ingest telemetry values from the collector's prometheus endpoint into the elastic_agent.collector dataset.

Checklist
- I have added an entry in ./changelog/fragments using the changelog tool

How to test this PR locally
Enable agent monitoring while at least one component (including monitoring itself) uses the otel runtime. Prometheus metrics from the collector should be ingested into the elastic_agent.collector dataset.
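One way to set that up is sketched below with a minimal standalone agent policy. This is only a sketch under assumptions: the output credentials are placeholders, and the per-input _runtime_experimental key is the mechanism I believe opts a component into the otel runtime, but its name and availability may differ by agent version.

```yaml
# elastic-agent.yml -- illustrative sketch only; key names (especially
# _runtime_experimental) are assumptions and may vary by agent version.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://localhost:9200"]
    api_key: "<api-key>"

agent.monitoring:
  enabled: true
  logs: true
  metrics: true

inputs:
  - id: system-metrics
    type: system/metrics
    use_output: default
    # Opt this component into the otel runtime so the collector (and its
    # internal telemetry endpoint) is actually running.
    _runtime_experimental: otel
    streams:
      - metricsets: ["cpu"]
        data_stream.dataset: system.cpu
```

With monitoring enabled and the collector running, the monitoring output should then contain documents whose data_stream.dataset is elastic_agent.collector.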
Related issues