[issue39535] multiprocessing.Process file descriptor resource leak
New submission from Robert Pierce:

multiprocessing.Process opens a FIFO to the child. This FIFO is not documented in the Process class API and its purpose is not clear from the documentation. It is a minor documentation bug that the class creates non-transparent resource utilization. The primary behavioral bug is that incorrect handling of this FIFO creates a resource leak: the file descriptor is not closed on join(), or even when the parent Process object goes out of scope. The effect of this bug is that programs creating large numbers of Process objects will hit the system limit on open file descriptors.

----------
assignee: docs@python
components: Documentation, Library (Lib)
files: proc_test.py
messages: 361273
nosy: Robert Pierce, docs@python
priority: normal
severity: normal
status: open
title: multiprocessing.Process file descriptor resource leak
type: resource usage
versions: Python 3.6
Added file: https://bugs.python.org/file48878/proc_test.py

_______________________________________
Python tracker <https://bugs.python.org/issue39535>
_______________________________________
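A minimal reproduction sketch along these lines (this is not the attached proc_test.py; the helper count_open_fds and the /proc/self/fd probe are assumptions and Linux-only):

import os
import multiprocessing

def count_open_fds():
    # Linux-specific: each entry in /proc/self/fd is an open descriptor.
    return len(os.listdir('/proc/self/fd'))

def child():
    pass

if __name__ == '__main__':
    procs = []
    for i in range(100):
        p = multiprocessing.Process(target=child)
        p.start()
        p.join()          # child is reaped here...
        procs.append(p)   # ...but the Process object is kept alive
        if i % 20 == 0:
            # On an affected interpreter the count keeps growing, because
            # each joined Process still holds its sentinel descriptor open.
            print(i, count_open_fds())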
[issue39535] multiprocessing.Process file descriptor resource leak
Robert Pierce added the comment:

It appears that the problem is the sentinel FIFO opened by (for example) multiprocessing.popen_fork.Popen._launch(). It registers a finalizer to close the sentinel at garbage collection. Instead, the sentinel should be closed in poll() or wait(), once the child process has been reaped and is known to be dead. The sentinel serves no purpose after the child is reaped, and waiting for garbage collection means that programs forking large numbers of processes cannot control their file descriptor usage.

----------

_______________________________________
Python tracker <https://bugs.python.org/issue39535>
_______________________________________
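A caller-level sketch of how the leak can be limited until the sentinel is closed inside poll()/wait() as proposed above (the helper name run_and_release is made up for illustration; Process.close(), which releases the sentinel, exists only from Python 3.7 onward):

import sys
import multiprocessing

def child():
    pass

def run_and_release(target):
    # Workaround sketch, not the fix proposed in the comment above.
    p = multiprocessing.Process(target=target)
    p.start()
    p.join()
    if sys.version_info >= (3, 7):
        # Process.close() (added in 3.7) releases the sentinel and other
        # resources once the process has been joined.
        p.close()
    else:
        # On 3.6 the descriptor is only closed by the finalizer that runs
        # when the Popen helper is collected, so drop the reference and
        # rely on CPython's reference counting to run it promptly.
        del p

if __name__ == '__main__':
    for _ in range(1000):
        run_and_release(child)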
[issue27880] cPickle fails on large objects (still - 2011 and counting)
New submission from Robert Pierce:

cPickle fails on large objects, raising a cryptic SystemError. The issue was fixed for pickle in Python 3 back in 2011 (http://bugs.python.org/issue11564), but was never addressed in 2.7. It appears to be a recurring complaint (e.g., http://bugs.python.org/issue11872), yet the reports always seem to be closed without a fix or an explanation of why one is not possible.

The test case from 2011 still fails:

>>> import cPickle; cPickle.dumps('a' * (2 ** 31), -1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SystemError: error return without exception set

----------
components: Library (Lib)
messages: 273795
nosy: rob...@smithpierce.net
priority: normal
severity: normal
status: open
title: cPickle fails on large objects (still - 2011 and counting)
type: behavior
versions: Python 2.7

_______________________________________
Python tracker <https://bugs.python.org/issue27880>
_______________________________________
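Until the underlying bug is fixed, a Python 2.7 sketch that at least turns the cryptic SystemError into a meaningful error (the wrapper dumps_checked and the 2 ** 31 threshold are assumptions based on the failing test case, not part of cPickle):

import cPickle

TWO_GIB = 2 ** 31

def dumps_checked(obj, protocol=-1):
    try:
        return cPickle.dumps(obj, protocol)
    except SystemError:
        # cPickle raises a bare SystemError when its internal size counter
        # overflows; re-raise with a message pointing at the likely cause.
        raise ValueError(
            "cPickle.dumps failed, probably because the pickled data "
            "exceeds %d bytes; see https://bugs.python.org/issue27880"
            % TWO_GIB)

if __name__ == '__main__':
    # Reproduces the report: a 2 GiB string triggers the failure.
    dumps_checked('a' * TWO_GIB, -1)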