Hi everyone, I have a use case where I'm trying to track the use of a feature that isn't often used, and I've decided to use a counter.
To give you some stats: at the moment this counter is incremented about 50 times per 24h on average. The feature is implemented in a service deployed and replicated across 10 to 20 pods (k8s infra), with metrics scraped at a regular interval (30s). We have a label on the metric to identify the pod and avoid collisions, so the metric evolves very little and is spread over a number of time series. Here's a small example of the "flat" shape of this metric:

[image: Capture d'écran 2024-12-20 à 09.26.03.png]

The first problem we had to solve was losing the 0 to 1 transition. We tested the beta created-timestamps-zero-injection feature (https://prometheus.io/docs/prometheus/latest/feature_flags/#created-timestamps-zero-injection), but it generated significant CPU overhead, so we didn't enable it.

So we went with a query like this:

    clamp_min(
      sum(max_over_time(import_processed_total{}[1m]) or vector(0))
      - sum(max_over_time(import_processed_total{}[1m] offset 1m) or vector(0)),
    0)

and I set the "Min interval" query option in Grafana to 1m. It's still imperfect at the end of a time series, but it gives a result close to reality when I analyze it over 24/48-hour windows. However, it becomes unusable if I use this approach over 30 days.

My questions are the following:
- Is there a different approach (PromQL query) to exploit this metric without losing precision?
- Is Prometheus suitable for this kind of use case?
- Could an "adaptive metrics" approach (https://grafana.com/blog/2023/05/09/adaptive-metrics-grafana-cloud-announcement/) be a solution for cleaning up this metric and generating a synthetic daily version, which could then be analyzed over 30 days?

Thanks for reading, and for any future answers.
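For reference, here is a minimal Python sketch of what the query above computes at one evaluation step: sum the per-pod maxima over the current 1m window, subtract the same sum over the previous 1m window, and clamp at 0. The data and function names are hypothetical, just to illustrate the differencing logic; this is not real Prometheus behavior in every edge case.

```python
# Sketch of the differencing logic in the PromQL query above, using
# hypothetical per-pod counter samples as (timestamp, value) pairs.

def summed_max(samples_by_pod, start, end):
    """sum(max_over_time(...)): sum over pods of the max sample in (start, end].
    A pod with no samples in the window contributes 0 (the 'or vector(0)')."""
    total = 0
    for samples in samples_by_pod.values():
        in_window = [v for (t, v) in samples if start < t <= end]
        if in_window:
            total += max(in_window)
    return total

def per_minute_increase(samples_by_pod, t):
    """clamp_min(current 1m sum - previous 1m sum, 0)."""
    cur = summed_max(samples_by_pod, t - 60, t)
    prev = summed_max(samples_by_pod, t - 120, t - 60)
    return max(cur - prev, 0)

# Two pods; pod "a" increments its counter once between t=60 and t=120.
data = {
    "a": [(30, 0), (90, 1)],
    "b": [(30, 2), (90, 2)],
}
print(per_minute_increase(data, 120))  # cur = 1+2 = 3, prev = 0+2 = 2 -> 1
```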

