[issue22066] subprocess.communicate() does not receive full output from the called process.

2014-07-25 Thread juj

New submission from juj:

When Python 2.7 executes a Node.js application that prints to stdout and 
subsequently exits, Python does not capture the full output printed by that 
application.

Steps to repro:
1. Download and unzip http://clb.demon.fi/bugs/python_proc_bug.zip
2. Run run_test.bat

Observed result: The .bat script prints:

Executing 'node jsfile.js' directly from command line. The js file outputs:
Line 1
Line 2

Executing 'jsfile.js' via a python script that calls 'node jsfile.js'. Now the 
js file outputs:
Line 1

Expected result: The second run, invoked via Python, should also print 
"Line 2".

Tested on Python v2.7.8 64-bit and Node v0.10.28 on Windows 7 64-bit.
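For reference, a minimal, self-contained sketch (not the original repro, which requires Node.js) of the behavior communicate() is expected to provide: it should return the child's complete output once the process exits.

```python
import subprocess
import sys

# Spawn a child that prints two lines and exits; communicate() should
# return everything the child wrote before exiting.
proc = subprocess.Popen(
    [sys.executable, '-c', "print('Line 1'); print('Line 2')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out.decode())
```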

--
components: Library (Lib)
messages: 223950
nosy: juj
priority: normal
severity: normal
status: open
title: subprocess.communicate() does not receive full output from the called process.
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue22066>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22066] subprocess.communicate() does not receive full output from the called process.

2014-07-27 Thread juj

juj added the comment:

Further testing suggests that this is not a Python issue, but instead an issue 
in node.js, reported earlier here: 
https://github.com/joyent/node/issues/1669

Closing this as invalid.

--

[issue22066] subprocess.communicate() does not receive full output from the called process.

2014-07-27 Thread juj

Changes by juj :


--
resolution:  -> not a bug

[issue22442] subprocess.check_call hangs on large PIPEd data.

2014-09-19 Thread juj

New submission from juj:

On Windows, write

a.py:

import subprocess

def ccall(cmdline, stdout, stderr):
  proc = subprocess.Popen(['python', 'b.py'], stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)
  proc.communicate()
  if proc.returncode != 0:
    raise subprocess.CalledProcessError(proc.returncode, cmdline)
  return 0

# To fix subprocess.check_call, uncomment the following, which is
# functionally equivalent:
# subprocess.check_call = ccall

subprocess.check_call(['python', 'b.py'], stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE)
print 'Finished!'

Then write b.py:

import sys

s = 'aaa'
for i in range(0, 16): s = s + s  # ~192 KB string, larger than a pipe buffer
for i in range(0, 2): print >> sys.stderr, s
for i in range(0, 2): print s

Finally, run 'python a.py'. The application will hang. Uncomment the specified 
line to fix the execution.

This is a documented failure mode on the Python subprocess page, but why not 
just fix it directly in Python itself?

One might argue that redirecting stdout or stderr is not the intended use of 
subprocess.check_call, but Python certainly should not hang because of it.
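For what it's worth, a hedged workaround sketch: if the intent is only to silence the child's output, redirecting to os.devnull instead of PIPE avoids the full-pipe deadlock entirely, since nothing accumulates in a buffer.

```python
import os
import subprocess
import sys

# Child writes far more than a typical pipe buffer (~64 KiB) can hold;
# with devnull as the sink the writes never block and call() returns.
child = "import sys; sys.stdout.write('x' * 200000); sys.stderr.write('y' * 200000)"
with open(os.devnull, 'wb') as devnull:
    rc = subprocess.call([sys.executable, '-c', child],
                         stdout=devnull, stderr=devnull)
print(rc)
```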

--
components: Library (Lib)
messages: 227095
nosy: juj
priority: normal
severity: normal
status: open
title: subprocess.check_call hangs on large PIPEd data.
versions: Python 2.7

[issue22442] subprocess.check_call hangs on large PIPEd data.

2014-09-19 Thread juj

juj added the comment:

The same observation applies to subprocess.call() as well.

--

[issue22442] subprocess.check_call hangs on large PIPEd data.

2014-09-20 Thread juj

juj added the comment:

Very good question, akira. In one codebase where I have fixed this kind of bug, 
see

https://github.com/kripken/emscripten/commit/1b2badd84bc6f54a3125a494fa38a51f9dbb5877
https://github.com/kripken/emscripten/commit/2f048a4e452f5bacdb8fa31481c55487fd64d92a

the intended usage by the original author had certainly been to throw in a PIPE 
just to mute both stdout and stderr output; there was no intent to capture the 
results. I think passing PIPE to these functions is meaningless, since they 
effectively behave as "throw the results away": the pipe contents are never 
returned.

Throwing an exception might be nice, but perhaps that would break existing 
codebases and is therefore not a good addition(?). I think the best course of 
action would be to do what the developer behaviorally intends: "treat stdout 
and stderr as if they had been captured to a pipe, then throw those pipes 
away, since they are not returned." So your third option, while inconsistent 
with direct Popen(), sounds most correct in practice. What do you think?
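A hedged sketch of that third option, under the assumption that a PIPE passed to a call-style helper means "drain and discard": the helper consumes the pipes via communicate(), so the child can never block on a full pipe buffer (safe_call is an illustrative name, not a proposed API).

```python
import subprocess
import sys

def safe_call(args, **kwargs):
    # Like subprocess.call, but any PIPE passed in is drained and
    # discarded rather than left to fill up and deadlock the child.
    proc = subprocess.Popen(args, **kwargs)
    proc.communicate()  # drains any PIPEs, then waits for exit
    return proc.returncode

# Child writes more than a pipe buffer's worth of output; safe_call
# still returns because the pipe is drained.
rc = safe_call(
    [sys.executable, '-c', "import sys; sys.stdout.write('x' * 200000)"],
    stdout=subprocess.PIPE)
print(rc)
```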

I am not currently aware of other such cases, although it would be useful to go 
through the docs and recheck the commit history of when that documentation note 
was added, to see whether more related discussion occurred.

--

[issue22442] subprocess.check_call hangs on large PIPEd data.

2014-09-21 Thread juj

juj added the comment:

Hmm, that patch handles stdout=PIPE in subprocess.call only? It could equally 
apply to stderr=PIPE in subprocess.call, and also to both stdout=PIPE and 
stderr=PIPE in subprocess.check_call?

--


[issue23489] atexit handlers are not executed when using multiprocessing.Pool.map.

2015-02-20 Thread juj

New submission from juj:

When multiprocessing.Pool.map is used by a script that registers atexit 
handlers, the atexit handlers are not executed when the pool's worker processes quit.

STR:

1. Run attached file in Python 2.7 with 'python task_spawn.py'
2. Observe the printed output.

Observed:

Console prints:

CREATED TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_qef8r_
CREATED TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_axi9tt
CREATED TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_vx6fmu
task1
task2
ATEXIT: REMOVING TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_qef8r_

Expected:

Console should print:

CREATED TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_qef8r_
CREATED TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_axi9tt
CREATED TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_vx6fmu
task1
task2
ATEXIT: REMOVING TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_vx6fmu
ATEXIT: REMOVING TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_axi9tt
ATEXIT: REMOVING TEMP DIRECTORY c:\users\clb\appdata\local\temp\temp_qef8r_

--
components: Library (Lib)
files: task_spawn.py
messages: 236273
nosy: juj
priority: normal
severity: normal
status: open
title: atexit handlers are not executed when using multiprocessing.Pool.map.
type: behavior
versions: Python 2.7
Added file: http://bugs.python.org/file38185/task_spawn.py

[issue23489] atexit handlers are not executed when using multiprocessing.Pool.map.

2015-02-20 Thread juj

juj added the comment:

This was tested on Python 2.7.9 64-bit on Windows 8.1; however, I believe it 
occurs equally on OS X and Linux, since servers I run on those OSes also 
exhibit temp-file leaking issues (although I did not specifically confirm 
whether the root cause is the same as this).

--


[issue23489] atexit handlers are not executed when using multiprocessing.Pool.map.

2015-02-20 Thread juj

juj added the comment:

While the test case can be 'fixed' by changing the code to use "if __name__ == 
'__main__'", and I am okay with doing that in my code to work around the 
problem, I would argue the following:

1) Calling this not a bug (or solving it only at the documentation level) does 
not feel like an accurate reflection of the situation, since the provided test 
case silently fails and does the unexpected. If atexit() does not work at all 
when invoked as a result of being imported by multiprocessing.Pool.map(), then 
at minimum calling atexit() in such a scenario should throw a "not available" 
exception, rather than silently discarding the operation.

2) Why couldn't the atexit handlers be executed on Windows when the 
multiprocessing processes quit, even if special code is required in the 
multiprocessing library to handle it? The explanation you are giving sounds 
like a lazy excuse. There should not be any technical obstacle to tracking 
and honoring the cleanup handlers here.

3) The claim that the (existing) documentation implies this behavior is not at 
all obvious to the reader. I could not find it documented that processes that 
exit from multiprocessing are somehow special, and the note you pasted is not 
in any way obviously connected to this case, since a) I was not using signals, 
b) there was no internal error occurring, and c) I was not calling os._exit(). 
The documentation does not state that it is undefined whether atexit() handlers 
are executed when multiprocessing is used.

4) I would even argue that the cross-platform difference in observable behavior 
between multiprocessing and script importing is itself a bug, but that is 
probably a different topic.

Overall, leaving this as a silent failure, instead of raising an exception or 
implementing the support on Windows, does not feel mature, since it leaves a 
hole of C/C++-style undefined behavior in the libraries. For maturity, I would 
recommend one of the following, in descending order of preference:

I) Fix multiprocessing importing on Windows so that it is not a special case 
compared to other OSes.

II) If that is not possible, fix the atexit() handlers so that they are 
executed when the processes quit on Windows.

III) If that is not possible either, make atexit() raise an exception when 
invoked from a script that was spawned by multiprocessing, if it is known at 
atexit() call time that the handlers will never be run.

If none of those are possible for real technical reasons, then as a last 
resort, explicitly document in both the atexit() docs and the multiprocessing 
docs that atexit() handlers are not executed on Windows when the two are used 
in conjunction.

Dismissing this kind of silent-failure behavior, especially where 
cross-platform consistency is involved, with a shrug and a NotABug label is 
not good practice!

--


[issue22442] subprocess.check_call hangs on large PIPEd data.

2015-05-19 Thread juj

juj added the comment:

This issue still shows as open, but there has been no activity in a long time. 
May I ask what the latest status is?

Also, is there any chance this will be fixed in Python 2.x?

--
