spresse1 added the comment:
Oooh, thanks. I'll use that.
> But really, this sounds rather fragile.
Absolutely. I concur that there is no good way to do this.
--
___
Python tracker
<http://bugs.python.org
spresse1 added the comment:
> I don't see how using os.fork() would make things any easier. In either
> case you need to prepare a list of fds which the child process should
> close before it starts, or alternatively a list of fds *not* to close.
With fork() I control where
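To illustrate the point about fork(): a minimal POSIX-only sketch (the function name is mine, not from the attached examples) in which each side closes the pipe end it does not use, so the reader actually sees EOF:

```python
import os

def pipe_with_fork():
    """Send one message parent -> child, closing unused pipe ends."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: close the inherited write end, or read() below
        # would never see EOF once the parent is done writing.
        os.close(w)
        received = b""
        while True:
            chunk = os.read(r, 1024)
            if not chunk:
                break
            received += chunk
        os.close(r)
        # Report the byte count via the exit status for the parent.
        os._exit(len(received))
    # Parent: close the unused read end, write, then reap the child.
    os.close(r)
    os.write(w, b"hello")
    os.close(w)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Because the child runs code immediately after fork(), it can clean up descriptors it knows about before doing any real work.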
spresse1 added the comment:
I'm actually a *nix programmer by trade, so I'm pretty familiar with that
behavior =p However, I'm also used to inheriting some way to refer to these
fds, so that I can close them. Perhaps I've just missed somewhere a call to
ask the process fo
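For reference, on Linux a process can enumerate its own open descriptors through /proc/self/fd. This sketch and its caveat are mine, not from the thread, and it is Linux-specific:

```python
import os

def open_fds():
    """Return the set of fds currently open in this process (Linux-only)."""
    # Hold the /proc/self/fd directory open explicitly so its own
    # descriptor can be excluded from the snapshot.
    dir_fd = os.open("/proc/self/fd", os.O_RDONLY)
    try:
        return {int(name) for name in os.listdir(dir_fd)} - {dir_fd}
    finally:
        os.close(dir_fd)
```

With such a snapshot taken before fork(), the child could close everything it did not explicitly want to keep, e.g. via os.close() or os.closerange().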
spresse1 added the comment:
>> So you're telling me that when I spawn a new child process, I have to
>> deal with the entirety of my parent process's memory staying around
>> forever?
>
> With a copy-on-write implementation of fork() this quite likely to
spresse1 added the comment:
So you're telling me that when I spawn a new child process, I have to deal with
the entirety of my parent process's memory staying around forever? I would
have expected this to call fork(), which gives the child plenty of chance to
clean up, then
spresse1 added the comment:
The difference is that nonfunctional.py does not pass the write end of the
parent's pipe to the child. functional.py does, and closes it immediately
after breaking into a new process. This is what you mentioned to me as a
workaround. Corrected code
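The workaround being described can be sketched as follows, assuming the fork start method on a POSIX system (helper names are mine): the parent passes its write end to the child explicitly, and the child closes its inherited copy immediately, so only the parent holds that descriptor:

```python
import multiprocessing
import sys

def drain(reader, writer):
    # Workaround from the thread: the child closes its inherited copy of
    # the parent's write end first thing, so only the parent holds it.
    writer.close()
    count = 0
    while True:
        try:
            reader.recv()
            count += 1
        except EOFError:  # delivered once every write end is closed
            break
    sys.exit(count)       # expose the message count via the exit code

def demo():
    ctx = multiprocessing.get_context("fork")  # POSIX-only
    reader, writer = ctx.Pipe(duplex=False)
    p = ctx.Process(target=drain, args=(reader, writer))
    p.start()
    reader.close()        # the parent only writes
    writer.send("hello")
    writer.close()        # the child now sees EOF and exits cleanly
    p.join(timeout=10)    # without the workaround, this would hang
    return p.exitcode
```

Without passing `writer` into the child, the child would have no Python object through which to close its inherited copy of the descriptor, and `recv()` would never raise EOFError.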
spresse1 added the comment:
Now also tested with source-built Python 3.3.2. The issue still exists, with the
same example files.
New submission from spresse1:
[Code demonstrating issue attached]
When subclassing multiprocessing.Process and using pipes, a reference to a pipe
spawned in the parent is not properly garbage collected in the child. This
causes the write end of the pipe to be held open with no reference to
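A minimal sketch of the failure mode (mine, not the attached example files), assuming the fork start method on a POSIX system: the write end is *not* passed to the child, so the child's inherited copy of that descriptor can never be closed and the reader blocks forever:

```python
import multiprocessing

def reader_loop(reader):
    # fork() gave this process an inherited copy of the parent's write-end
    # descriptor, but there is no Python object here referring to it, so
    # it cannot be closed. EOF is never delivered and recv() blocks.
    try:
        while True:
            reader.recv()
    except EOFError:
        pass

def demo_hang():
    ctx = multiprocessing.get_context("fork")  # POSIX-only
    reader, writer = ctx.Pipe(duplex=False)
    p = ctx.Process(target=reader_loop, args=(reader,))
    p.start()
    reader.close()
    writer.send("hello")
    writer.close()        # child's inherited write fd keeps the pipe open
    p.join(timeout=2)
    still_blocked = p.is_alive()
    p.terminate()         # clean up the stuck child
    p.join()
    return still_blocked
```

Here `demo_hang()` returns True, showing the child is still blocked in `recv()` even though every write-end Connection object has been closed.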