Hi team,

We ran into an issue when running Python processes across multiple Docker 
containers that share a single MultiProcessCollector w/ Flask. We have a 
draft PR open <https://github.com/prometheus/client_python/pull/572> but 
are still working through a proposed solution.

Let's assume there are 4 containers - 1 container running Flask w/ 
MultiProcessCollector and 3 containers which are replicas of one process, 
for this example each writing Summary metrics. These four containers are 
sharing the "prometheus_multiproc_dir" via a volume mount across the 
containers. Each of the single processes in the 3 replica containers will 
write summary_1.db to the prometheus_multiproc_dir and overwrite each 
other's file, because each replica runs in its own container and therefore 
sees the same pid. There are a couple of ways we are looking at solving 
this:

(1) Define our own "process_identifier()" 
<https://github.com/prometheus/client_python/blob/5b0a476489fc6a26c177683b539142c91005de06/prometheus_client/values.py#L31>.
We would also have to update the cleanup in mark_process_dead() 
<https://github.com/prometheus/client_python/blob/5b0a476489fc6a26c177683b539142c91005de06/prometheus_client/multiprocess.py#L152>
to account for the new file name convention.
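To make (1) concrete, here is a minimal sketch of what such an identifier 
could look like. It only shows the idea of combining the container hostname 
(unique per container) with the pid; the resulting "summary_..." file name 
below is illustrative, not the library's actual naming code:

```python
import os
import socket

def process_identifier():
    # Hostname is unique per Docker container, so host + pid is
    # unique across replicas even when the pids collide.
    return '{}_{}'.format(socket.gethostname(), os.getpid())

# Illustrative: a metric file name built from the identifier.
filename = 'summary_{}.db'.format(process_identifier())
```

This function could then be passed wherever the client takes a process 
identifier, so every replica writes to its own .db file.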

(2) Change the way the .db files are named: essentially, add $HOST to the 
metric file names by default. The metric files become 
<metric>_{host}_{pid}.db, which ensures unique file names across Docker 
containers.
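Either way, cleanup would need to match on host as well as pid. A rough 
sketch, assuming the upstream mark_process_dead() behavior of deleting a 
dead process's live-gauge files (the signature and file names here are 
illustrative assumptions, not the current API):

```python
import glob
import os

def mark_process_dead(host, pid, path):
    # Hypothetical cleanup for <metric>_{host}_{pid}.db file names:
    # remove the live-gauge files belonging to the dead process,
    # matching on host as well as pid.
    pattern = 'gauge_live*_{}_{}.db'.format(host, pid)
    for f in glob.glob(os.path.join(path, pattern)):
        os.remove(f)
```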

If (2) is not a good idea to explore, we could just add some documentation 
to the client_python README based on (1).

Adam
