Hi MR,
I will definitely try to make a PR for this as soon as I can.

Esau.

On Friday, April 9, 2021 at 1:12:43 PM UTC+1 [email protected] wrote:

> Would you mind making a PR to improve the documentation? As the expert, 
> it is easy to write documentation that is *technically correct* but not 
> helpful to an unsuspecting user; you are now in the best position to make 
> this clear to the next person reading it.
>
> Best,
> MR
>
> On Wed, Apr 7, 2021 at 6:14 PM Esau Rodriguez <[email protected]> wrote:
>
>> Hi Chris,
>> Thanks a lot for your response. I definitely didn't understand that 
>> line of the documentation.
>>
>> I just tested the solution on the simple test and it works as expected.
>>
>> Thanks a lot,
>> Esau.
>> On Wednesday, April 7, 2021 at 5:00:11 PM UTC+1 [email protected] 
>> wrote:
>>
>>> Hello,
>>>
>>> It appears that there is a subtle bug/misunderstanding in the linked 
>>> code, though that is possibly due to the multiprocess documentation 
>>> not being clear enough. When the code specifies a registry for each metric 
>>> (example 
>>> <https://github.com/amitsaha/python-prometheus-demo/blob/master/flask_app_prometheus_multiprocessing/src/helpers/middleware.py#L17>), 
>>> this causes both a process-local metric and the multiprocess metrics to 
>>> be registered, so depending on which process handles the request you will 
>>> get different responses for the metric with "Request latency" in the HELP 
>>> text. In the multiprocess documentation 
>>> <https://github.com/prometheus/client_python/#multiprocess-mode-gunicorn> 
>>> this is what is meant by "Registries can not be used as normal, all 
>>> instantiated metrics are exported". If you remove the registry=registry 
>>> lines in the example you will see just the multiprocess output, as expected.
>>>
>>> You could also move the registry and MultiProcessCollector code into the 
>>> request handler to make it clear that the registry used by the 
>>> MultiProcessCollector should not have anything registered to it, as seen in 
>>> the example in the multiprocess documentation I linked above.
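[Editor's note: for reference, a minimal runnable sketch of the pattern Chris describes, assuming the metric and label names from the linked demo. In a real gunicorn deployment the multiprocess directory is set in the environment before workers start; the temp directory here is only so the sketch runs standalone.]

```python
import os
import tempfile

# Assumption for a standalone run: prometheus_client checks this env var at
# import time to enable multiprocess mode. Normally gunicorn's environment
# sets it; the temp dir here is purely illustrative.
os.environ.setdefault("PROMETHEUS_MULTIPROC_DIR", tempfile.mkdtemp())
os.environ.setdefault("prometheus_multiproc_dir",
                      os.environ["PROMETHEUS_MULTIPROC_DIR"])  # older client versions

from prometheus_client import CollectorRegistry, Histogram, generate_latest
from prometheus_client import multiprocess

# No registry=... argument: the metric lands in the default process-local
# registry, which the /metrics handler below never exports, so no duplicate
# family with a different HELP text appears in the scrape output.
REQUEST_LATENCY = Histogram(
    "request_latency_seconds", "Request latency", ["app_name", "endpoint"]
)

def metrics_response() -> bytes:
    # A fresh, otherwise-empty registry per scrape, with only the
    # MultiProcessCollector attached, as in the multiprocess docs.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return generate_latest(registry)

REQUEST_LATENCY.labels("webapp", "/").observe(0.002)
print(metrics_response().decode())
```

In a Flask app, `metrics_response()` would be the body of the `/metrics` view, returned with the `CONTENT_TYPE_LATEST` mimetype.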
>>>
>>> Let me know if that was unclear or you have more questions,
>>> Chris
>>>
>>> On Wed, Apr 7, 2021 at 8:12 AM Esau Rodriguez <[email protected]> wrote:
>>>
>>>> Hi all, 
>>>> I'm seeing a behaviour with the Python client in multiprocess mode, 
>>>> using gunicorn and Flask, and I'm not sure whether I'm missing 
>>>> something or there's a bug.
>>>>
>>>> When I hit the endpoint producing the Prometheus text to be scraped, 
>>>> I see two versions of the same metrics with different help texts. I 
>>>> would expect to see only one metric (the multiprocess one).
>>>>
>>>> I thought I had something wrong in my setup, so I tried it with a 
>>>> pretty simple project that I found here: 
>>>> https://github.com/amitsaha/python-prometheus-demo/tree/master/flask_app_prometheus_multiprocessing 
>>>> (not my code).
>>>>
>>>> I hit a random URL and then the `/metrics` endpoint.
>>>>
>>>> You can see in the raw response below that there are two entries for 
>>>> each metric, with different `type` and `help` texts. In this example 
>>>> there weren't really multiple processes, but in the real setup in prod 
>>>> we have several processes, and the Prometheus scraper picks a 
>>>> different value depending on the order of the response.
>>>>
>>>> Am I missing something, or is there a bug here? 
>>>>
>>>> The raw response was:
>>>>
>>>> <pre>
>>>> % curl --location --request GET 'http://localhost:5000/metrics'
>>>> # HELP request_latency_seconds Multiprocess metric
>>>> # TYPE request_latency_seconds histogram
>>>> request_latency_seconds_sum{app_name="webapp",endpoint="/metrics"} 0.00040912628173828125
>>>> request_latency_seconds_sum{app_name="webapp",endpoint="/"} 0.0001652240753173828
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.005"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.01"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.025"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.05"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.075"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.1"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.25"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.75"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="1.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="2.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="5.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="7.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="10.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="+Inf"} 1.0
>>>> request_latency_seconds_count{app_name="webapp",endpoint="/metrics"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.005"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.01"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.025"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.05"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.075"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.1"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.25"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.75"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="1.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="2.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="5.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="7.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="10.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="+Inf"} 1.0
>>>> request_latency_seconds_count{app_name="webapp",endpoint="/"} 1.0
>>>> # HELP request_count_total Multiprocess metric
>>>> # TYPE request_count_total counter
>>>> request_count_total{app_name="webapp",endpoint="/metrics",http_status="200",method="GET"} 1.0
>>>> request_count_total{app_name="webapp",endpoint="/",http_status="404",method="GET"} 1.0
>>>> # HELP request_count_total App Request Count
>>>> # TYPE request_count_total counter
>>>> request_count_total{app_name="webapp",endpoint="/metrics",http_status="200",method="GET"} 1.0
>>>> request_count_total{app_name="webapp",endpoint="/",http_status="404",method="GET"} 1.0
>>>> # HELP request_count_created App Request Count
>>>> # TYPE request_count_created gauge
>>>> request_count_created{app_name="webapp",endpoint="/metrics",http_status="200",method="GET"} 1.617798968564061e+09
>>>> request_count_created{app_name="webapp",endpoint="/",http_status="404",method="GET"} 1.61779898142748e+09
>>>> # HELP request_latency_seconds Request latency
>>>> # TYPE request_latency_seconds histogram
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.005"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.01"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.025"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.05"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.075"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.1"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.25"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.75"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="1.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="2.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="5.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="7.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="10.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="+Inf"} 1.0
>>>> request_latency_seconds_count{app_name="webapp",endpoint="/metrics"} 1.0
>>>> request_latency_seconds_sum{app_name="webapp",endpoint="/metrics"} 0.00040912628173828125
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.005"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.01"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.025"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.05"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.075"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.1"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.25"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.75"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="1.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="2.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="5.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="7.5"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="10.0"} 1.0
>>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="+Inf"} 1.0
>>>> request_latency_seconds_count{app_name="webapp",endpoint="/"} 1.0
>>>> request_latency_seconds_sum{app_name="webapp",endpoint="/"} 0.0001652240753173828
>>>> # HELP request_latency_seconds_created Request latency
>>>> # TYPE request_latency_seconds_created gauge
>>>> request_latency_seconds_created{app_name="webapp",endpoint="/metrics"} 1.617798968520208e+09
>>>> request_latency_seconds_created{app_name="webapp",endpoint="/"} 1.617798981426993e+09
>>>> </pre>
>>>>
>>>> Kind regards,
>>>> Esau.
>>>>
>>>> -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Prometheus Developers" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to [email protected].
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/prometheus-developers/dd6d0ea6-ef26-4aaf-b926-9490a109c15cn%40googlegroups.com
>>>>  
>>>> <https://groups.google.com/d/msgid/prometheus-developers/dd6d0ea6-ef26-4aaf-b926-9490a109c15cn%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>> .
>>>>
