Hello,

It appears there is a subtle bug (or misunderstanding) in the linked code, though that may be due to the multiprocess documentation not being clear enough. When the code passes a registry to each metric (for example <https://github.com/amitsaha/python-prometheus-demo/blob/master/flask_app_prometheus_multiprocessing/src/helpers/middleware.py#L17>), both a process-local metric and the multiprocess metrics get registered, so depending on which process handles the request you will get different responses for the metric with "Request latency" in the HELP text. This is what the multiprocess documentation <https://github.com/prometheus/client_python/#multiprocess-mode-gunicorn> means by "Registries can not be used as normal, all instantiated metrics are exported". If you remove the registry=registry lines in the example, you will see just the multiprocess output, as expected.
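For concreteness, here is a minimal sketch of that fix in the example's middleware. The variable names are illustrative; the metric names, HELP texts, and labels are taken from the raw output quoted below:

<pre>
from prometheus_client import Counter, Histogram

# Buggy: passing the exporter's registry also registers a process-local copy
# of the metric, which is what produces the duplicate scrape output:
#   REQUEST_LATENCY = Histogram('request_latency_seconds', 'Request latency',
#                               ['app_name', 'endpoint'], registry=registry)

# Fixed: no registry argument. In multiprocess mode the samples are still
# written to the shared files that the MultiProcessCollector reads.
REQUEST_COUNT = Counter('request_count', 'App Request Count',
                        ['app_name', 'method', 'endpoint', 'http_status'])
REQUEST_LATENCY = Histogram('request_latency_seconds', 'Request latency',
                            ['app_name', 'endpoint'])
</pre>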
You could also move the registry and MultiProcessCollector code into the request handler itself, which makes it clear that the registry used by the MultiProcessCollector should not have anything else registered to it, as in the example in the multiprocess documentation linked above; a sketch follows.
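Roughly, following the documentation's example but adapted to a Flask handler (the app object and the route name are assumptions based on the demo):

<pre>
from flask import Response
from prometheus_client import (CONTENT_TYPE_LATEST, CollectorRegistry,
                               generate_latest, multiprocess)

@app.route('/metrics')
def metrics():
    # Build a fresh, empty registry per request; the MultiProcessCollector is
    # its only collector, so no process-local metrics can leak into the scrape.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return Response(generate_latest(registry), mimetype=CONTENT_TYPE_LATEST)
</pre>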
Let me know if that was unclear or if you have more questions,

Chris

On Wed, Apr 7, 2021 at 8:12 AM Esau Rodriguez <[email protected]> wrote:

> Hi all,
>
> I'm not sure if I'm missing something, or whether there's a bug, but I'm
> seeing an odd behaviour with the Python client in multiprocess mode using
> gunicorn and flask.
>
> When I hit the endpoint producing the Prometheus text to be scraped, I see
> two versions of the same metrics with different HELP texts. I would expect
> to see only one metric (the multiprocess one).
>
> I thought I had something wrong in my setup, so I tried it with a pretty
> simple project that I found here
> https://github.com/amitsaha/python-prometheus-demo/tree/master/flask_app_prometheus_multiprocessing
> (not my code).
>
> I hit a random URL and then the `/metrics` endpoint.
>
> You can see in the raw response below that we have two entries for each
> metric, with different TYPE and HELP texts. In this example there weren't
> really multiple processes, but in the real setup in prod we have several
> processes, and the Prometheus scraper picks a different value depending on
> the order of the response.
>
> Am I missing something or is there a bug there?
>
> The raw response was:
>
> <pre>
> % curl --location --request GET 'http://localhost:5000/metrics'
> # HELP request_latency_seconds Multiprocess metric
> # TYPE request_latency_seconds histogram
> request_latency_seconds_sum{app_name="webapp",endpoint="/metrics"} 0.00040912628173828125
> request_latency_seconds_sum{app_name="webapp",endpoint="/"} 0.0001652240753173828
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.005"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.01"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.025"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.05"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.075"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.1"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.25"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.75"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="1.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="2.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="5.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="7.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="10.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="+Inf"} 1.0
> request_latency_seconds_count{app_name="webapp",endpoint="/metrics"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.005"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.01"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.025"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.05"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.075"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.1"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.25"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.75"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="1.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="2.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="5.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="7.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="10.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="+Inf"} 1.0
> request_latency_seconds_count{app_name="webapp",endpoint="/"} 1.0
> # HELP request_count_total Multiprocess metric
> # TYPE request_count_total counter
> request_count_total{app_name="webapp",endpoint="/metrics",http_status="200",method="GET"} 1.0
> request_count_total{app_name="webapp",endpoint="/",http_status="404",method="GET"} 1.0
> # HELP request_count_total App Request Count
> # TYPE request_count_total counter
> request_count_total{app_name="webapp",endpoint="/metrics",http_status="200",method="GET"} 1.0
> request_count_total{app_name="webapp",endpoint="/",http_status="404",method="GET"} 1.0
> # HELP request_count_created App Request Count
> # TYPE request_count_created gauge
> request_count_created{app_name="webapp",endpoint="/metrics",http_status="200",method="GET"} 1.617798968564061e+09
> request_count_created{app_name="webapp",endpoint="/",http_status="404",method="GET"} 1.61779898142748e+09
> # HELP request_latency_seconds Request latency
> # TYPE request_latency_seconds histogram
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.005"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.01"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.025"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.05"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.075"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.1"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.25"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.75"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="1.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="2.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="5.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="7.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="10.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="+Inf"} 1.0
> request_latency_seconds_count{app_name="webapp",endpoint="/metrics"} 1.0
> request_latency_seconds_sum{app_name="webapp",endpoint="/metrics"} 0.00040912628173828125
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.005"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.01"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.025"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.05"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.075"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.1"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.25"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="0.75"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="1.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="2.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="5.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="7.5"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="10.0"} 1.0
> request_latency_seconds_bucket{app_name="webapp",endpoint="/",le="+Inf"} 1.0
> request_latency_seconds_count{app_name="webapp",endpoint="/"} 1.0
> request_latency_seconds_sum{app_name="webapp",endpoint="/"} 0.0001652240753173828
> # HELP request_latency_seconds_created Request latency
> # TYPE request_latency_seconds_created gauge
> request_latency_seconds_created{app_name="webapp",endpoint="/metrics"} 1.617798968520208e+09
> request_latency_seconds_created{app_name="webapp",endpoint="/"} 1.617798981426993e+09
> </pre>
>
> Kind regards,
> Esau.

