Hi everyone,

I have Prometheus deployed as a StatefulSet in a Kubernetes cluster, 
running in HA mode with two instances. I use a StatefulSet so that remote 
write, which I use to store my metrics, can retry on failure. However, 
this setup has a drawback: it prevents autoscaling when I scrape a large 
volume of metrics.

Is there a way to configure Prometheus purely as a collector, one that 
performs no relabeling or buffering and only scrapes metrics and forwards 
them to a remote write endpoint, without losing metrics during the scrape? 
I understand there may be some metric loss once the buffer is removed.
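To make the idea concrete, what I have in mind is roughly a minimal config like the one below. The remote write URL, the job name, and the service discovery role are just placeholders for illustration:

```yaml
# Sketch of a "collector-only" Prometheus configuration:
# scrape targets and forward everything via remote write,
# with no recording rules and no relabeling.
# The URL and job name are placeholders, not my real setup.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod

remote_write:
  - url: https://remote-storage.example.com/api/v1/write
```

I'm aware of the agent mode feature flag (`--enable-feature=agent`), which disables local querying, alerting, and rule evaluation, but as far as I understand it still buffers samples in the WAL, so it doesn't fully remove the buffering I described above.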

I considered using a DaemonSet, but that approach is costly, and there is 
a risk of losing metrics during a pod rollout on the node. Alternatively, 
using a Deployment would require handling deduplication.

Has anyone implemented a similar configuration?


-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/39590679-6209-4fbb-ae21-75ceed36b7bcn%40googlegroups.com.