Reduce apm internal metrics overhead (CSI-1279) #42063
Conversation
git-thuerk-done left a comment:
Hi @ajgajg1134, just a small suggestion, but approved!
dustmop left a comment:
LGTM, with one suggestion
Regression Detector Results
Metrics dashboard
Baseline: 18cb39c
Optimization Goals: ❌ Regression(s) detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | docker_containers_cpu | % cpu utilization | +32.20 | [+30.07, +34.34] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | docker_containers_cpu | % cpu utilization | +32.20 | [+30.07, +34.34] | 1 | Logs |
| ❌ | docker_containers_memory | memory utilization | +5.98 | [+5.53, +6.43] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +1.78 | [-1.04, +4.61] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics | memory utilization | +0.74 | [+0.57, +0.91] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.34 | [+0.28, +0.40] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | +0.28 | [+0.15, +0.41] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.21 | [+0.17, +0.25] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.19 | [-0.01, +0.39] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.18 | [+0.12, +0.24] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.09 | [-0.52, +0.70] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.07 | [-0.53, +0.66] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.00 | [-0.61, +0.61] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.25, +0.23] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.02 | [-0.62, +0.59] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.10 | [-0.23, +0.02] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.14 | [-0.28, +0.01] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.20 | [-0.24, -0.16] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.35 | [-0.42, -0.28] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.48 | [-0.52, -0.43] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_metrics | memory utilization | -0.84 | [-0.97, -0.71] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | -1.77 | [-1.95, -1.58] | 1 | Logs bounds checks dashboard |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a code sketch of this rule follows the list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
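For illustration only, here is a minimal Go sketch of that decision rule, assuming hypothetical names; it mirrors the criteria above, not the Regression Detector's actual implementation:

```go
package main

import "fmt"

// Experiment holds one row of the change-detection table above.
// The struct and field names are invented for this sketch.
type Experiment struct {
	Name      string
	DeltaMean float64 // Δ mean %
	CILow     float64 // lower bound of the 90.00% CI on Δ mean %
	CIHigh    float64 // upper bound of the 90.00% CI on Δ mean %
	Erratic   bool    // whether configuration marks the experiment "erratic"
}

// isRegression applies the three criteria: |Δ mean %| at or above the
// 5.00% tolerance, a CI that excludes zero, and no "erratic" marking.
func isRegression(e Experiment) bool {
	const tolerance = 5.00
	bigEnough := e.DeltaMean >= tolerance || e.DeltaMean <= -tolerance
	ciExcludesZero := e.CILow > 0 || e.CIHigh < 0
	return bigEnough && ciExcludesZero && !e.Erratic
}

func main() {
	// The docker_containers_cpu row from the table above: +32.20 [+30.07, +34.34].
	e := Experiment{"docker_containers_cpu", 32.20, 30.07, 34.34, false}
	fmt.Println(e.Name, "regression:", isRegression(e)) // prints: ... regression: true
}
```

By the same rule, quality_gate_logs (+1.78, CI [-1.04, +4.61]) is not flagged: its effect size is under the tolerance and its CI contains zero.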
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Static quality checks
✅ Please find below the results from static quality gates
Successful checks
Co-authored-by: Alicia Thuerk <[email protected]>
Serverless Benchmark Results
tl;dr: Use these benchmarks as an insight tool during development.
What is this benchmarking?
The benchmark is run using a large variety of lambda request payloads. In the charts below, there is one row for each event payload type.
How do I interpret these charts?
The charts below come from benchstat output. The benchstat docs explain how to interpret these charts.
I need more help
First off, do not worry if the benchmarks are failing. They are not tests. The intention is for them to be a tool for you to use during development. If you would like a hand interpreting the results, come chat with us.
Benchmark stats
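As a concrete, hypothetical example of what such a benchmark looks like, here is a minimal Go benchmark of the kind benchstat compares; handleEvent and the payload are invented stand-ins, not code from this repository:

```go
// handler_test.go
package handler

import "testing"

// handleEvent is a placeholder for the lambda event handler under
// test; the real benchmarks exercise the actual handler with many
// event payload types.
func handleEvent(payload []byte) int {
	total := 0
	for _, b := range payload {
		total += int(b)
	}
	return total
}

// BenchmarkHandleEvent produces one result series; benchstat compares
// such series between a baseline run and a candidate run.
func BenchmarkHandleEvent(b *testing.B) {
	payload := []byte(`{"source":"aws.s3","detail":{}}`)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		handleEvent(payload)
	}
}
```

Running `go test -bench . -count 10 > old.txt` on the baseline, repeating on the branch into `new.txt`, and then `benchstat old.txt new.txt` yields delta tables like the ones summarized here.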
/merge
View all feedback in Devflow UI.
The expected merge time in
What does this PR do?
Skip zero-value metrics that are rarely used. These metrics provide some debugging value, but carry no signal when they are 0, so we can stop sending them and reduce the overhead here by a small amount. DM me for exact cost calculations.
Motivation
Reduce overhead
Describe how you validated your changes
Ran locally and observed that these metrics no longer appear while the other ones still do. Also covered by unit tests.
Additional Notes
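To make the change concrete, a minimal sketch of the zero-skip idea, assuming a hypothetical sendMetric submission function and metric name; this illustrates the pattern, not the actual agent code:

```go
package main

import "fmt"

// sendMetric stands in for the agent's internal telemetry submission;
// the name and signature are hypothetical.
func sendMetric(name string, value float64, tags []string) {
	fmt.Printf("sent %s=%v %v\n", name, value, tags)
}

// reportRarelyUsedMetric skips submission when the value is zero: for
// these debugging-oriented metrics a 0 carries no signal, so dropping
// it saves a small amount of serialization and network overhead.
func reportRarelyUsedMetric(name string, value float64, tags []string) {
	if value == 0 {
		return // nothing to report; skip the send entirely
	}
	sendMetric(name, value, tags)
}

func main() {
	// Hypothetical metric name, for illustration only.
	reportRarelyUsedMetric("trace_agent.internal.debug_count", 0, nil) // skipped
	reportRarelyUsedMetric("trace_agent.internal.debug_count", 3, nil) // sent
}
```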