[Python-Dev] Re: Small subprocess patch
On Tue, 30 Nov 2004, Peter Åstrand wrote:

> 1) Is it OK to commit changes like this on the 2.4 branch, in addition to
> trunk?

I'm also wondering if patches 1071755 and 1071764 should go into release24-maint:

* 1071755 makes subprocess raise TypeError if Popen is called with a bufsize that is not an integer.
* 1071764 adds a new, small utility function.

/Peter Åstrand <[EMAIL PROTECTED]>

___
Python-Dev mailing list
[EMAIL PROTECTED]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
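For context, the validation in patch 1071755 amounts to an early type check on the bufsize argument. A minimal sketch of the idea (the helper name is hypothetical, not the actual patch code):

```python
def check_bufsize(bufsize):
    # Hypothetical sketch of the check patch 1071755 adds: reject
    # non-integer bufsize values up front with a clear TypeError,
    # instead of failing obscurely later when the buffer size is used.
    if not isinstance(bufsize, int):
        raise TypeError("bufsize must be an integer")
    return bufsize

print(check_bufsize(0))      # 0 (unbuffered is fine)
# check_bufsize("1") would raise TypeError
```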
[Python-Dev] [ python-Bugs-1124637 ] test_subprocess is far too slow (fwd)
I'd like to have your opinion on this bug. Personally, I'd prefer to keep test_no_leaking as it is, but if you think otherwise... One thing that can actually justify test_subprocess taking 20% of the overall time is that it is a good generic Python stress test: it might catch some other startup race condition, for example.

Regards,
Åstrand

-- Forwarded message --
Date: Thu, 17 Feb 2005 04:09:33 -0800
From: SourceForge.net <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: [ python-Bugs-1124637 ] test_subprocess is far too slow

Bugs item #1124637, was opened at 2005-02-17 11:10
Message generated for change (Comment added) made by mwh
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124637&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Michael Hudson (mwh)
Assigned to: Peter Åstrand (astrand)
Summary: test_subprocess is far too slow

Initial Comment:
test_subprocess takes multiple minutes. I'm pretty sure it's "test_no_leaking". It should either be sped up or only tested when some -u argument is passed to regrtest.

--
>Comment By: Michael Hudson (mwh)
Date: 2005-02-17 12:09

Message:
Logged In: YES user_id=6656

Bog-standard Linux PC -- P3 933, 384 megs of RAM. "$ time ./python ../Lib/test/regrtest.py test_subprocess" reports 2 minutes 7 seconds. This is a debug build; a release build might be quicker. A run of the entire test suite takes a hair over nine minutes, so 20-odd % of the time seems to be test_subprocess. It also takes ages on my old-ish iBook (600 MHz G3, also 384 megs of RAM), but that's at home and I can't time it.

--
Comment By: Peter Åstrand (astrand)
Date: 2005-02-17 11:50

Message:
Logged In: YES user_id=344921

Tell me a bit about your type of OS and hardware. On my machine (P4 2.66 GHz with Linux), the test takes 28 seconds.
Re: [Python-Dev] [ python-Bugs-1124637 ] test_subprocess is far too slow (fwd)
On Thu, 17 Feb 2005, Guido van Rossum wrote:

> > I'd like to have your opinion on this bug. Personally, I'd prefer to keep
> > test_no_leaking as it is, but if you think otherwise...
>
> A suite of unit tests is a precious thing. We want to test as much as
> we can, and as thoroughly as possible; but at the same time we want
> the test to run reasonably fast. If the test takes too long, human
> nature being what it is, this will actually cause less thorough
> testing because developers don't feel like running the test suite
> after each small change, and then we get frequent problems where

Good point.

> The Python test suite already has a way (the -u flag) to distinguish
> between "regular" broad-coverage testing and deep coverage for
> specific (or all) areas. Let's keep the really long-running tests out
> of the regular test suite.

I'm convinced. Is this easy to implement? Anyone interested in doing this?

> There used to be a farm of machines that did nothing but run the test
> suite ("snake-farm"). This seems to have stopped (it was run by
> volunteers at a Swedish university). Maybe we should revive such an
> effort, and make sure it runs with -u all.

Yes, Snake Farm is/was a project at "Lysator", an academic computer society located at Linköping University. As you can tell from my mail address, I'm a member as well. I haven't been involved in the Snake Farm project, though.

/Peter Åstrand <[EMAIL PROTECTED]>
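The -u mechanism Guido refers to works by gating expensive tests behind named "resources" that a regular run leaves disabled. A toy sketch of the idea (the names here are illustrative, not the actual regrtest internals):

```python
# Sketch of the gating idea behind regrtest's -u flag: a slow test
# only runs when its resource name has been enabled explicitly.
enabled_resources = set()          # would be filled from e.g. "-u subprocess"

def resource_enabled(resource):
    """Return True if a test guarded by `resource` should run."""
    return resource in enabled_resources

def test_no_leaking():
    if not resource_enabled("subprocess"):
        return "skipped"           # regular broad-coverage runs skip it
    return "ran"                   # deep-coverage runs (-u all) execute it

print(test_no_leaking())           # skipped: nothing enabled yet
enabled_resources.add("subprocess")
print(test_no_leaking())           # ran
```

This is essentially what the stdlib test machinery does with its resource checks; the guarded test costs nothing in a default run but is still exercised by dedicated deep-coverage runs.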
Re: [Python-Dev] Adding any() and all()
On Fri, 11 Mar 2005, Paul Moore wrote:

> > Not sure this is pertinent, but anyway: "any" and "all" are often used
> > as variable names, "all" especially often, and then almost always as a
> > list of something. It would not be good to add "all" to the list of
> > words to watch out for. Also, "all" is usually thought of as a list of
>
> Using "any" and "all" as variables hides the builtins, but doesn't
> disallow their use elsewhere. Personally, though, I wouldn't use "any"
> or "all" as variable names, so that's a style issue.

Even though you can use them as variables (and shadow the builtins), you will still get warnings from "pychecker". The code will also be harder to read: when you see "all" in the middle of some code, you don't know whether it refers to the builtin or a variable. Personally, I think Python has too many builtins already.

/Peter Åstrand <[EMAIL PROTECTED]>
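The readability concern is easy to demonstrate. In this sketch, assigning to `all` shadows the builtin for the rest of the scope:

```python
# The builtin works as expected here:
print(all([1, 2, 3]))        # True

def collect(items):
    # A common idiom of the era: "all" as a name for "all the results".
    all = [x * 2 for x in items]
    # From here on, "all" is the local list, not the builtin -- a later
    # reader can't tell at a glance which one is meant, and calling
    # all(...) in this scope would raise TypeError.
    return all

print(collect([1, 2]))       # [2, 4]
```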
Re: [Python-Dev] Draft PEP to make file objects support non-blocking mode.
On Mon, 21 Mar 2005, Donovan Baarda wrote:

> > I don't agree with that. There's no need to use non-blocking
> > I/O when using select(), and in fact things are less confusing
> > if you don't.
>
> You would think that... and the fact that select, popen2 etc all use
> file objects encourages you to think that. However, this is a trap that
> can catch you out badly. Check the attached Python scripts that
> demonstrate the problem.

This is no "trap". When select() indicates that you can write or read, it means that you can write or read at least one byte. The .read() and .write() file methods, however, always read and write *everything*. They work, basically, just like fread()/fwrite().

> The only ways to ensure that a select process does not block like this,
> without using non-blocking mode, are:
>
> 1) use a buffer size of 1 in the select process.
>
> 2) understand the child process's read/write behaviour and adjust the
> selector process accordingly... ie by making the buffer sizes just right
> for the child process,

3) Use os.read / os.write.

> > > The read method's current behaviour needs to be documented, so its actual
> > > behaviour can be used to differentiate between an empty non-blocking read
> > > and EOF. This means recording that IOError(EAGAIN) is raised for an empty
> > > non-blocking read.
> >
> > Isn't that Unix-specific? The file object is supposed to
> > provide a more or less platform-independent interface, I
> > thought.
>
> I think the fread/fwrite and read/write behaviour is POSIX standard and
> possibly C standard stuff... so it _should_ be the same on other
> platforms.

Sorry if I've misunderstood your point, but fread()/fwrite() does not return EAGAIN.

/Peter Åstrand <[EMAIL PROTECTED]>
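The distinction can be shown with a pipe: select() reports the read end ready as soon as a single byte is available, and os.read() returns whatever is there, whereas a file object's .read(n) would keep blocking until it had n bytes or saw EOF. A minimal Unix-only sketch:

```python
import os
import select

r, w = os.pipe()
os.write(w, b"hi")                 # only 2 bytes in the pipe

# select() says "readable": at least one byte can be read without blocking.
ready, _, _ = select.select([r], [], [], 1.0)
assert ready == [r]

# os.read() returns the partial data immediately (up to 1024 bytes).
# A file object's .read(1024) would instead block here, waiting for
# 1022 more bytes or EOF -- which is the behaviour under discussion.
data = os.read(r, 1024)
print(data)                        # b'hi'
os.close(r)
os.close(w)
```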
Re: [Python-Dev] Draft PEP to make file objects support non-blocking mode.
On Mon, 21 Mar 2005, Donovan Baarda wrote:

> > > The only ways to ensure that a select process does not block like this,
> > > without using non-blocking mode, are:
> >
> > 3) Use os.read / os.write.
> [...]
> but os.read / os.write will block too.

No.

> Try it... replace the file
> read/writes in selector.py. They will only do partial reads if the file is
> put into non-blocking mode.

I've just tried it; I replaced:

    data = o.read(BUFF_SIZE)

with:

    data = os.read(o.fileno(), BUFF_SIZE)

Works for me without any hangs. Another example is the subprocess module, which does not use non-blocking mode in any way. (If you are using pipes, however, you shouldn't write more than PIPE_BUF bytes in each write.)

> > > I think the fread/fwrite and read/write behaviour is POSIX standard and
> > > possibly C standard stuff... so it _should_ be the same on other
> > > platforms.
> >
> > Sorry if I've misunderstood your point, but fread()/fwrite() does not
> > return EAGAIN.
>
> no, fread()/fwrite() will return 0 if nothing was read/written, and ferror()
> will return EAGAIN to indicate that it was a "would block" condition...
> at least I think it does... the man page simply says ferror() returns a
> non-zero value.

fread() should loop internally on EAGAIN, in blocking mode.

/Peter Åstrand <[EMAIL PROTECTED]>
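Putting the two points together, a blocking-mode selector loop that cannot hang looks like this sketch (Unix only; select() waits, os.read() does partial reads, an empty read means EOF -- the child command here is just an example):

```python
import os
import select
import subprocess
import sys

BUFF_SIZE = 1024

# Spawn a child that writes to both stdout and stderr.
p = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('out'); sys.stderr.write('err')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

chunks = {p.stdout: b"", p.stderr: b""}
open_pipes = [p.stdout, p.stderr]
while open_pipes:
    ready, _, _ = select.select(open_pipes, [], [])
    for f in ready:
        # os.read() returns at most BUFF_SIZE bytes without blocking,
        # because select() guaranteed at least one byte (or EOF).
        data = os.read(f.fileno(), BUFF_SIZE)
        if not data:               # empty read: EOF on this pipe
            open_pipes.remove(f)
        else:
            chunks[f] += data
p.wait()
print(chunks[p.stdout], chunks[p.stderr])   # b'out' b'err'
```

Note that the loop never calls the file objects' .read() methods, so no data can get stuck in their buffers, and neither pipe can stall the other.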
Re: [Python-Dev] Fwd: subprocess.Popen(.... stdout=IGNORE, ...)
On Tue, 13 Jun 2006, Martin Blais wrote:

Hi all. Now let's see if I remember something about my module...

> In the subprocess module, by default the file handles in the child
> are inherited from the parent. To ignore a child's output, I can use
> the stdout or stderr options to send the output to a pipe::
>
>    p = Popen(command, stdout=PIPE, stderr=PIPE)
>
> However, this is sensitive to the buffer deadlock problem, where for
> example the buffer for stderr might become full and a deadlock occurs
> because the child is blocked on writing to stderr and the parent is
> blocked on reading from stdout or waiting for the child to finish.
>
> For example, using this command will cause deadlock::
>
>    call('cat /boot/vmlinuz'.split(), stdout=PIPE, stderr=PIPE)

Yes, the call() convenience function is basically for the case when you are not interested in redirection.

> Popen.communicate() implements a solution using either select() or
> multiple threads (under Windows) to read from the pipes, and returns
> the strings as a result. It works out like this::
>
>    p = Popen(command, stdout=PIPE, stderr=PIPE)
>    output, errors = p.communicate()
>    if p.returncode != 0:
>        ?
>
> Now, as a user of the subprocess module, sometimes I just want to
> call some child process and simply ignore its output, and to do so I
> am forced to use communicate() as above and wastefully capture and
> ignore the strings. This is actually quite a common use case: "Just
> run something, and check the return code".

Yes, this is a common case, and using communicate() is indeed overkill and wasteful.

> Right now, in order to do
> this without polluting the parent's output, you cannot use the call()
> convenience (or is there another way?).
>
> A workaround that works under UNIX is to do this::
>
>    FNULL = open('/dev/null', 'w')
>    returncode = call(command, stdout=FNULL, stderr=FNULL)

Yes, this works.
You can also do:

    returncode = subprocess.call(command, stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)

> Some feedback requested, I'd like to know what you think:
>
> 1. Would it not be nice to add an IGNORE constant to subprocess.py
>    that would do this automatically? I.e.::
>
>        returncode = call(command, stdout=IGNORE, stderr=IGNORE)
>
>    Rather than capture and accumulate the output, it would find an
>    appropriate OS-specific way to ignore the output (the /dev/null file
>    above works well under UNIX; how would you do this under Windows?
>    I'm sure we can find something.)

I have a vague feeling that this has been discussed before, but I cannot find a tracker item for it. I guess an IGNORE constant would be nice. Using open('/dev/null', 'w') is only a few more characters to type, but as you say, it's not platform independent. So, feel free to submit a patch or a Feature Request Tracker item.

> 2. call() should be modified to not be sensitive to the deadlock
>    problem, since its interface provides no way to return the
>    contents of the output. The IGNORE value provides a possible
>    solution for this.

How do you suggest call() should be modified? I'm not really sure it can do more without being more complicated. Being simple is the main purpose of call().

> 3. With the /dev/null file solution, the following code actually
>    works without deadlock, because stderr is never blocked on writing
>    to /dev/null::
>
>        p = Popen(command, stdout=PIPE, stderr=IGNORE)
>        text = p.stdout.read()
>        retcode = p.wait()
>
>    Any idea how this idiom could be supported using a more portable
>    solution (i.e. how would I make this idiom under Windows; is there
>    some equivalent to /dev/null)?

Yes, as Terry Reedy points out, NUL: can be used.

Regards,
/Peter Åstrand <[EMAIL PROTECTED]>
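On the portability question: os.devnull (added in Python 2.4) already names the platform's null device, '/dev/null' on Unix and 'nul' on Windows, so the workaround can be written portably without a new constant. A sketch (the child command is just a stand-in for any noisy program):

```python
import os
import subprocess
import sys

# os.devnull is '/dev/null' on Unix and 'nul' on Windows, so this
# works on both platforms without hard-coding a path.
devnull = open(os.devnull, "w")
returncode = subprocess.call(
    [sys.executable, "-c", "print('noisy output')"],
    stdout=devnull, stderr=subprocess.STDOUT)
devnull.close()
print(returncode)    # 0
```

(For what it's worth, Python 3.3 later added subprocess.DEVNULL, which is essentially the IGNORE constant proposed in this thread.)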