(I mean /a, /b, etc. of course, not a, b, ...)

On Sat, Jan 30, 2021 at 10:49 PM Julius Volz <[email protected]>
wrote:

> On Fri, Jan 29, 2021 at 3:50 PM Vitaly Shupak <[email protected]>
> wrote:
>
>> This is possible to do using relabel_configs:
>>
>> scrape_configs:
>>   - job_name: 'job'
>>     static_configs:
>>         - targets:
>>             - 'example.com/a'
>>             - 'example.com/b'
>>             - 'example.com/c'
>>             - 'example.com/d'
>>     relabel_configs:
>>         - source_labels: [__address__]
>>           target_label: __metrics_path__
>>           regex: '(.*)/(.*)'
>>           replacement: '/$2'
>>         - source_labels: [__address__]
>>           target_label: instance
>>         - source_labels: [__address__]
>>           regex: '(.*)/(.*)'
>>           replacement: '$1'
>>           target_label: __address__
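>>
>> For a target like 'example.com/a', these three rules (the greedy regex
>> '(.*)/(.*)' splits on the last '/') would leave the scrape with:
>>
>>     __address__      = example.com      (what Prometheus connects to)
>>     __metrics_path__ = /a               (the HTTP path scraped)
>>     instance         = example.com/a    (the visible instance label,
>>                                          copied before __address__ is
>>                                          rewritten)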
>>
>
> This does not require relabeling, as you can just attach a
> "__metrics_path__" label directly to each target, e.g.:
>
> scrape_configs:
>   - job_name: 'job'
>     static_configs:
>         - targets:
>             - 'example.com'
>           labels:
>             __metrics_path__: /a
>         - targets:
>             - 'example.com'
>           labels:
>             __metrics_path__: /b
>         - targets:
>             - 'example.com'
>           labels:
>             __metrics_path__: /c
>         - targets:
>             - 'example.com'
>           labels:
>             __metrics_path__: /d
>
> I'd probably still just split them into different scrape configs, though:
> you will get different metrics from each of the endpoints, and thus you'll
> likely want a different job name for each (although you could also
> override the "job" label in a single scrape config, as above).
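>
> That variant might look roughly like this (job names are just
> illustrative):
>
> scrape_configs:
>   - job_name: 'job-a'
>     metrics_path: /a
>     static_configs:
>       - targets: ['example.com']
>   - job_name: 'job-b'
>     metrics_path: /b
>     static_configs:
>       - targets: ['example.com']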
>
>
>>
>> On Monday, December 14, 2020 at 1:32:23 PM UTC-5 [email protected]
>> wrote:
>>
>>> There is probably some nuance in arguing when this is a good
>>> idea and when it is not.
>>>
>>> But in fact, the well-known
>>> https://github.com/kubernetes/kube-state-metrics is doing it. It uses
>>> different ports rather than different paths, but that's quite
>>> similar.
>>>
>>> On the Prometheus side, however, you need separate scrape
>>> targets. There is currently no way of "iterating" through multiple
>>> ports or paths of a target. From Prometheus's perspective, a
>>> different port, a different path, or a different host all equally
>>> define a different target. (And that probably won't change
>>> anytime soon.)
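>>>
>>> Concretely, scraping two ports of one host just means listing two
>>> targets (host and port numbers purely illustrative):
>>>
>>> scrape_configs:
>>>   - job_name: 'ksm-metrics'
>>>     static_configs:
>>>       - targets: ['ksm.example:8080']
>>>   - job_name: 'ksm-telemetry'
>>>     static_configs:
>>>       - targets: ['ksm.example:8081']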
>>>
>>> --
>>> Björn Rabenstein
>>> [PGP-ID] 0x851C3DA17D748D03
>>> [email] [email protected]
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Prometheus Developers" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/prometheus-developers/c604b921-828a-427b-8d10-471ad7ccdae9n%40googlegroups.com
>> <https://groups.google.com/d/msgid/prometheus-developers/c604b921-828a-427b-8d10-471ad7ccdae9n%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>
>
> --
> Julius Volz
> PromLabs - promlabs.com
>


-- 
Julius Volz
PromLabs - promlabs.com
