2012/12/11 Jean-Michel Pichavant <[email protected]>:
> ----- Original Message -----
>> So I implemented a simple decorator to run a function in a forked
>> process, as below.
>>
>> It works well, but the problem is that the children end up as zombies
>> on one machine, while strangely I can't reproduce the same on mine.
>>
>> I know that this is not the perfect method to spawn a daemon, but I
>> also wanted to keep the code as simple as possible, since other people
>> will maintain it.
>>
>> What is the easiest solution to avoid the creation of zombies and
>> maintain this functionality?
>> thanks
>>
>>
>> def on_forked_process(func):
>>     """Decorator that forks the process, runs the function and gives
>>     back control to the main process.
>>     """
>>     from os import fork, _exit
>>
>>     def _on_forked_process(*args, **kwargs):
>>         pid = fork()
>>         if pid == 0:
>>             # Child: run the function, then exit immediately so we
>>             # don't fall back into the caller's code.
>>             func(*args, **kwargs)
>>             _exit(0)
>>         else:
>>             # Parent: hand the child's pid back to the caller.
>>             return pid
>>
>>     return _on_forked_process
>>
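
For what it's worth, the zombies appear because the parent never calls
wait() on its forked children. A minimal sketch of one fix, assuming
POSIX SIGCHLD semantics and that the parent never needs the children's
exit statuses:

import signal

# Ignoring SIGCHLD tells the kernel to reap exited children
# automatically, so they never linger as zombies (their exit
# statuses are discarded).
signal.signal(signal.SIGCHLD, signal.SIG_IGN)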
>
> Ever thought about using the 'multiprocessing' module? It offers a slightly
> higher-level API, and I don't have issues with zombie processes.
> You can combine this with a multiprocessing log listener so that all logs
> are sent to the main process.
>
> See Vinay Sajip's code about multiprocessing and logging,
> http://plumberjack.blogspot.fr/2010/09/using-logging-with-multiprocessing.html
>
> I still had to write some cleanup code before leaving the main process, but
> once terminate is called on all remaining subprocesses, I'm not left with
> zombie processes.
> Here's the cleanup:
>
> for proc in multiprocessing.active_children():
>     proc.terminate()
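
A rough end-to-end sketch of that pattern; run_worker and its body are
illustrative assumptions, not code from this thread:

import time
import multiprocessing

def run_worker(seconds):
    # Stand-in for real work.
    time.sleep(seconds)

if __name__ == '__main__':
    # Spawn a few workers; multiprocessing joins them on exit, so they
    # do not turn into zombies.
    for seconds in (1, 2, 3):
        multiprocessing.Process(target=run_worker, args=(seconds,)).start()

    # ... main process does its own work here ...

    # Cleanup before leaving the main process: terminate whatever is
    # still running so nothing is left behind.
    for proc in multiprocessing.active_children():
        proc.terminate()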
>
> JM
>
Yes, I thought about that, but I want to be able to kill the parent
without killing the children, because they can run for a long time (as
far as I can tell, multiprocessing ties the children's lifetimes to the
parent's: non-daemon children are joined when the parent exits, and
daemonic ones are terminated).
Anyway, I got something working now with this:

import os
import sys

def daemonize(func):
    def _daemonize(*args, **kwargs):
        # Perform first fork.
        try:
            pid = os.fork()
            if pid > 0:
                # Exit first parent; the first child carries on as the
                # foreground process and will return the daemon's pid.
                sys.exit(0)
        except OSError as e:
            sys.stderr.write("fork #1 failed: (%d) %s\n"
                             % (e.errno, e.strerror))
            sys.exit(1)
        # Decouple from parent environment.
        # Check whether decoupling here makes sense in our case:
        # os.chdir("/")
        # os.umask(0)
        # os.setsid()
        # Perform second fork.
        try:
            pid = os.fork()
            if pid > 0:
                # Parent of the second fork: return the daemon's pid to
                # the caller instead of exiting.
                return pid
        except OSError as e:
            sys.stderr.write("fork #2 failed: (%d) %s\n"
                             % (e.errno, e.strerror))
            sys.exit(1)
        # The process is now daemonized; flush the standard streams and
        # run the payload.
        sys.stdout.flush()
        sys.stderr.flush()
        func(*args, **kwargs)
        # Exit the daemon once the payload returns, so it does not fall
        # back into the caller's code.
        os._exit(0)
    return _daemonize

from time import sleep

@daemonize
def long_smarter_process():
    while True:
        sleep(2)
        print("Hello how are you?")

And it works exactly as before, but more correctly.
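
One caveat: the process that gets the pid back from the second fork is
the daemon's direct parent, so if it outlives the daemon without ever
reaping it, the zombie problem can come back. A sketch of a non-blocking
check, assuming POSIX os.waitpid semantics:

pid = long_smarter_process()

# os.waitpid with WNOHANG returns (0, 0) while the daemon is still
# running, and reaps it (preventing a zombie) once it has exited.
reaped_pid, status = os.waitpid(pid, os.WNOHANG)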