Thanks for your response.

What would you recommend in a situation with several hundred or even thousands 
of servers or systems within a Kubernetes cluster, all of which should have 
node_exporter installed?
My idea was to install node_exporter plus a Prometheus agent on each node. The 
agent scrapes the local node_exporter and then remote-writes the results to a 
central Prometheus server, or to a load balancer which distributes them across 
several Prometheus servers.
My idea was to use the same config for all node_exporter + Prometheus agent 
pairs. For that reason they would all have the same job name, which should be OK.
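To illustrate the setup I have in mind (a sketch only; the hostname and URL are placeholders, not a tested config), the shared agent configuration would look roughly like this:

```yaml
# Prometheus started in agent mode: --enable-feature=agent
scrape_configs:
  - job_name: node_exporter          # identical job name on every node
    static_configs:
      - targets: ["127.0.0.1:9100"]  # local node_exporter

remote_write:
  # central server (or a load balancer in front of several servers);
  # the receiver must have remote-write receiving enabled,
  # e.g. --web.enable-remote-write-receiver
  - url: "http://prometheus-central.example.com:9090/api/v1/write"
```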

However, I think I will have a problem: if I use "127.0.0.1:9100" as the 
target to scrape, then all instances end up with the same instance label.

Is there any possibility to use a variable in the scrape_config which reflects 
an environment variable from the Linux system, or any other mechanism to make 
the instance label unique?
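One mechanism that might fit (again a sketch; I have not verified it in agent mode) is the `expand-external-labels` feature flag, which expands environment variables inside `external_labels` when the config is loaded. Note that external labels do not override labels already set by the scrape, so this would add a new unique label (here `node`) rather than replacing the identical `instance` label:

```yaml
# start prometheus with:
#   --enable-feature=agent,expand-external-labels
global:
  external_labels:
    # ${HOSTNAME} is read from the environment at startup, so every
    # node remote-writes its samples with a unique "node" label
    node: "${HOSTNAME}"
```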


Brian Candler schrieb am Donnerstag, 14. März 2024 um 13:04:07 UTC+1:

> As long as all the time series have distinct label sets (in particular, 
> different "instance" labels), and you're not mixing scraping with 
> remote-writing for the same targets, then I don't see any problem with all 
> the agents using the same "job" label when remote-writing.
>
> On Tuesday 12 March 2024 at 22:30:22 UTC Alexander Wilke wrote:
>
>> At the moment I am running the job with name
>> "node_exporter" which has 20 different targets. (instances)
>> With this configuration there should not be any conflict.
>>
>> my idea is to install the prometheus agent on the nodes itself.
>> technically it looks like it works if I use the same job_name on the agent 
>> and the central Prometheus, as long as the targets/instances are different.
>>
>> In general I avoid conflicting job_names but in this situation it may be 
>> ok from my point of view.
>>
>> what do you think, recommend in this specific scenario ?
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/706133c4-aa70-4d60-b1a0-dc0d85bcd5een%40googlegroups.com.
