We are using **CloudWatch exporter** to scrape CloudWatch metrics, adding an
exporter job in **Prometheus** to scrape those metrics from the exporter,
and then forwarding them from Prometheus to Mimir **to enable alerting on
metrics.**
There is no consistency in the **keys** of tags on the AWS side, due to which
we are getting **blank tag values in alert emails**: many of them **do not
match the keys in the alert template code**, since we can use **only one
pattern (uppercase or lowercase)** in the template code.
So we are trying to **standardise** the incoming tags from all AWS services
**to one fixed pattern (lowercase)** before forwarding them to Mimir, so that
this issue is resolved and all tags can be printed in the alert email.
Please suggest how we can achieve this with the Prometheus scrape job config
we have added below:
```yaml
- job_name: 'cloudwatch-exporter'
  kubernetes_sd_configs:
    - role: service
  scrape_interval: 1m
  metrics_path: /metrics
  relabel_configs:
    - target_label: __address__
      replacement: XXX-prom-cloudwatch-exp-XXX.XXX.svc.cluster.local:9106
```
The tags come through the following metric:

**aws_resource_info**{job="aws_ec2",instance="",**tag_Name**="XXX",**tag_Owner**="XXX",tag_businessunit="XXX",**tag_environment**="dev"} 1.0
So, here we want to rename **tag_Name** to *tag_name* and **tag_Owner** to
*tag_owner*.
Please suggest.
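One approach we could take (a sketch, assuming the mixed-case tag keys are known in advance) is to handle each label explicitly in `metric_relabel_configs`: Prometheus relabeling has a `lowercase` action for label *values* only, not label *names*, so each mixed-case label is copied to its lowercase name and the original is then dropped:

```yaml
    metric_relabel_configs:
      # Copy each known mixed-case tag label to a lowercase label name.
      - source_labels: [tag_Name]
        target_label: tag_name
      - source_labels: [tag_Owner]
        target_label: tag_owner
      # Drop the original mixed-case labels so only the lowercase ones remain.
      - regex: 'tag_(Name|Owner)'
        action: labeldrop
```

The caveat is that every mixed-case key has to be enumerated; relabeling cannot lowercase arbitrary label names generically, so a fully generic fix would need to happen at the exporter or a processing layer in front of Mimir.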
--
You received this message because you are subscribed to the Google Groups
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/prometheus-users/875e994f-1554-4c6a-b2c4-e320b61c53cdn%40googlegroups.com.