[issue31092] Potential multiprocessing.Manager() race condition

2017-07-31 Thread Prof Plum

New submission from Prof Plum:

So I was writing code that had multiple write-thread and read-thread "groups"
in a single pool (in a group, a few write threads write to a queue that one read
thread reads), and I ran into what I think is a race condition in the
multiprocessing.Manager() class. It looks like managed queues are returned from
Manager() before they are actually initialized and safe to use, but this is only
noticeable when creating many managed queues in quick succession. I've attached a
simple demo script to reproduce the bug. The reason I believe this is a race
condition is that when the sleep(0.5) line is commented out Python crashes,
but when it isn't, it doesn't.

Also, I'm on Windows 10 and using 64-bit Python 3.5.2.
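
For reference, here is a minimal sketch of the kind of reproduction described
above; it is not the attached bug_demo.py (that file is not reproduced here), and
the details are assumptions based on the description. Each "group" gets a fresh
managed queue, and rebinding the queue variable on the next iteration drops the
parent's reference to the previous proxy, which can race with the worker's
increment-reference request:

import multiprocessing as mp
import time  # only needed for the commented-out workaround below


def writer(q, value):
    q.put(value)


def reader(q, n):
    return [q.get() for _ in range(n)]


if __name__ == "__main__":
    manager = mp.Manager()
    with mp.Pool(4) as pool:
        results = []
        for group in range(20):
            q = manager.Queue()                  # new managed queue per group
            for i in range(3):                   # a few writers per group
                pool.apply_async(writer, (q, i))
            results.append(pool.apply_async(reader, (q, 3)))
            # time.sleep(0.5)                    # uncommenting this masks the failure
        for r in results:
            print(r.get())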

--
files: bug_demo.py
messages: 299582
nosy: Prof Plum
priority: normal
severity: normal
status: open
title: Potential multiprocessing.Manager() race condition
type: crash
versions: Python 3.5
Added file: http://bugs.python.org/file47054/bug_demo.py

___
Python tracker <http://bugs.python.org/issue31092>
___

[issue31092] multiprocessing.Manager() race condition

2017-09-21 Thread Prof Plum

Changes by Prof Plum:


--
title: Potential multiprocessing.Manager() race condition -> 
multiprocessing.Manager() race condition


[issue31092] multiprocessing.Manager() race condition

2017-10-05 Thread Prof Plum

Prof Plum added the comment:

Oh, I see. I thought getting an error that caused the Python code execution to
terminate was considered a "crash".

On the note of whether you should fix this, I think the answer is yes. When I
call pool.apply_async() I expect it to return only once the worker process has
been started and has finished its initialization (i.e. sending the
incr-ref request). That said, I could see people wanting the minor performance
gain of having the worker start AND run asynchronously, so I think that option
should be available via a boolean arg to apply_async(), but it should be off by
default, because the synchronous start is the safer and more intuitive behavior
of apply_async().
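
The proposed semantics can be approximated today with a user-level helper; the
sketch below is only an illustration of the intent (the apply_async_safe name
and the Event-based handshake are assumptions, not anything that exists in
multiprocessing). The call returns only after the worker has actually begun
running the task, by which point its proxy arguments have been unpickled and
their incr-ref requests sent:

import multiprocessing as mp


def _notify_then_call(started, func, args):
    # Runs in the worker.  By the time this executes, the proxy objects in args
    # have been unpickled, so their increment-reference requests have been sent.
    started.set()
    return func(*args)


def apply_async_safe(pool, manager, func, args=()):
    """Like pool.apply_async(func, args), but return only once the worker has
    started running the task (hypothetical helper, not a multiprocessing API)."""
    started = manager.Event()
    result = pool.apply_async(_notify_then_call, (started, func, tuple(args)))
    started.wait()
    return result


def _echo(q):
    q.put("worker started")


if __name__ == "__main__":
    manager = mp.Manager()
    with mp.Pool(2) as pool:
        q = manager.Queue()
        res = apply_async_safe(pool, manager, _echo, (q,))
        print(q.get())
        res.wait()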

--


[issue31092] delicate behaviour of shared (managed) multiprocessing Queues

2017-10-08 Thread Prof Plum

Prof Plum added the comment:

@Antoine Pitrou

>Well... it's called *async* for a reason, so I'm not sure why the behaviour 
>would be partially synchronous.

To avoid a race condition.

>I'm not sure how.  In mp.Pool we don't want to keep references to input 
>objects longer than necessary.

Like I said, you could just add some sort of "safe" flag to the apply_async()
call: safe=True would mean the initialization of the worker is done
synchronously, and safe=False would be the normal behavior. Even if you decide
it's the user's responsibility not to delete the queue, when the user's code is
exiting a function that would basically amount to them calling sleep() for some
guessed amount of time to keep the queue alive. With a safe flag they wouldn't
have to guess the time or call sleep(), which is kinda ugly IMO. Also, if someone
sees that apply_async() has a safe flag, they are more likely to look up what it
does than they are to read the full docs for apply_async().
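
For what it's worth, the guessed sleep can already be replaced by an explicit
handshake in the submission loop; this is only a sketch of that idea (the
per-task Event is an assumption, not part of the attached script or of
multiprocessing's own behaviour):

import multiprocessing as mp


def writer(q, started, value):
    started.set()           # proxy args are unpickled (and incref'd) by this point
    q.put(value)


if __name__ == "__main__":
    manager = mp.Manager()
    with mp.Pool(4) as pool:
        for group in range(20):
            q = manager.Queue()
            started = manager.Event()
            res = pool.apply_async(writer, (q, started, group))
            started.wait()  # replaces guessing a time.sleep(0.5) duration
            print(q.get())
            res.wait()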

--
