[ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class

2007-01-21 Thread SourceForge.net
Bugs item #1486663, was opened at 2006-05-11 16:17
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 7
Private: No
Submitted By: dib (dib_at_work)
Assigned to: Georg Brandl (gbrandl)
Summary: Over-zealous keyword-arguments check for built-in set class

Initial Comment:
The fix for bug #1119418 (xrange() builtin accepts
keyword arg silently) included in Python 2.4.2c1+
breaks code that passes keyword argument(s) into
classes derived from the built-in set class, even if
those derived classes explicitly accept those keyword
arguments and avoid passing them down to the built-in
base class.

Simplified version of code in attached
BuiltinSetKeywordArgumentsCheckBroken.py fails at (G)
due to bug #1119418 if version < 2.4.2c1; if version >=
2.4.2c1, (G) passes thanks to that bug fix, but then
(H) fails, incorrectly in my view.

[Presume similar cases would fail for xrange and the
other classes mentioned in #1119418.]

  -- David Bruce

(Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.)
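
A minimal sketch of the failing pattern (a hypothetical reconstruction;
the attached BuiltinSetKeywordArgumentsCheckBroken.py is not reproduced
here):

  class MySet(set):
      def __init__(self, iterable=(), extra=None):
          self.extra = extra            # keyword consumed by the subclass
          set.__init__(self, iterable)  # no keywords reach the base class

  s = MySet([1, 2, 3], extra='x')  # (H): TypeError on 2.4.2c1 and later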

--

>Comment By: Georg Brandl (gbrandl)
Date: 2007-01-21 10:29

Message:
Logged In: YES 
user_id=849994
Originator: NO

Committed as rev. 53509, 53510 (2.5).

--

Comment By: Georg Brandl (gbrandl)
Date: 2007-01-17 09:13

Message:
Logged In: YES 
user_id=849994
Originator: NO

I'll create the testcases and commit the patch (as well as NEWS entries :)
when I find the time.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2007-01-17 07:22

Message:
Logged In: YES 
user_id=33168
Originator: NO

Were these changes applied by Raymond?  I don't think there were NEWS
entries though.

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2007-01-11 20:43

Message:
Logged In: YES 
user_id=80475
Originator: NO

That looks about right.  Please add test cases that fail without the patch
and succeed with the patch.  Also, put a comment in Misc/NEWS.  If the
whole test suite passes, go ahead and check in to Py2.5.1 and the head.  

Thanks, 

Raymond

--

Comment By: Georg Brandl (gbrandl)
Date: 2007-01-11 19:56

Message:
Logged In: YES 
user_id=849994
Originator: NO

Attaching patch.
File Added: nokeywordchecks.diff

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2007-01-11 18:30

Message:
Logged In: YES 
user_id=80475
Originator: NO

I fixed setobject.c in revisions 53380 and 53381.

Please apply similar fixes to all the other places being bitten by the
pervasive NoKeywords tests.

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2007-01-11 00:49

Message:
Logged In: YES 
user_id=80475
Originator: NO

My proposed solution:

- if (!PyArg_NoKeywords("set()", kwds))
+ if (type == &PySet_Type && !PyArg_NoKeywords("set()", kwds))

--

Comment By: Georg Brandl (gbrandl)
Date: 2007-01-10 21:30

Message:
Logged In: YES 
user_id=849994
Originator: NO

I'll do that; the only thing is, in set_init you have 
if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable))

Changing this to use PyArg_ParseTupleAndKeywords would require a format
string of
"|O:" + self->ob_type->tp_name

Is it worth constructing that string each time set_init() is called or
should it just be "|O:set" for
sets and frozensets?

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2007-01-06 02:26

Message:
Logged In: YES 
user_id=80475
Originator: NO

I prefer the approach used by list().

--

Comment By: Žiga Seilnacht (zseil)
Date: 2006-05-20 01:19

Message:
Logged In: YES 
user_id=1326842

See patch #1491939

--

Comment By: Žiga Seilnacht (zseil)
Date: 2006-05-19 20:02

Message:
Logged In: YES 
user_id=1326842

This bug was introduced as part of the fix for bug #1119418.

It also affects collections.deque.

Can't the _PyArg_NoKeywords check simply be moved
to set_init and deque_init as it was done for
zipimport.zipimporter?

array.array doesn't need to be changed, since it
already does all of its initialization in its
__new__ method.

The rest of the types changed in t

[ python-Bugs-1601399 ] urllib2 does not close sockets properly

2007-01-21 Thread SourceForge.net
Bugs item #1601399, was opened at 2006-11-22 21:04
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1601399&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Brendan Jurd (direvus)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib2 does not close sockets properly

Initial Comment:
Python 2.5 (release25-maint, Oct 29 2006, 12:44:11)
[GCC 4.1.2 20061026 (prerelease) (Debian 4.1.1-18)] on linux2

I first noticed this when a program of mine (which makes a brief HTTPS 
connection every 20 seconds) started having some weird crashes.  It turned out 
that the process had a massive number of file descriptors open.  I did some 
debugging, and it became clear that the program was opening two file 
descriptors for every HTTPS connection it made with urllib2, and it wasn't 
closing them, even though I was reading all data from the response objects and 
then explicitly calling close() on them.

I found I could easily reproduce the behaviour using the interactive console.  
Try this while keeping an eye on the file descriptors held open by the python 
process:

To begin with, the process will have the usual FDs 0, 1 and 2 open for 
std(in|out|err), plus one other.

>>> import urllib2
>>> f = urllib2.urlopen("http://www.google.com")

Now at this point the process has opened two more sockets.

>>> f.read()
[... HTML ensues ...]
>>> f.close()

The two extra sockets are still open.

>>> del f

The two extra sockets are STILL open.

>>> f = urllib2.urlopen("http://www.python.org")
>>> f.read()
[...]
>>> f.close()

And now we have a total of four abandoned sockets open.

It's not until you terminate the process entirely, or the OS (eventually) 
closes the socket on idle timeout, that they are closed.

Note that if you do the same thing with httplib, the sockets are properly 
closed:

>>> import httplib
>>> c = httplib.HTTPConnection("www.google.com", 80)
>>> c.connect()

A socket has been opened.

>>> c.putrequest("GET", "/")
>>> c.endheaders()
>>> r = c.getresponse()
>>> r.read()
[...]
>>> r.close()

And the socket has been closed.
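
A quick way to watch the leak on Linux (a sketch; it assumes a /proc
filesystem and simply counts this process's descriptors):

  import os
  import urllib2

  def open_fds():
      # Number of file descriptors currently held by this process.
      return len(os.listdir('/proc/self/fd'))

  before = open_fds()
  f = urllib2.urlopen("http://www.google.com")
  f.read()
  f.close()
  print "descriptors leaked:", open_fds() - before  # 2 on affected builds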

--

>Comment By: Georg Brandl (gbrandl)
Date: 2007-01-21 10:36

Message:
Logged In: YES 
user_id=849994
Originator: NO

Committed patch in rev. 53511, 53512 (2.5).

--

Comment By: John J Lee (jjlee)
Date: 2007-01-03 23:54

Message:
Logged In: YES 
user_id=261020
Originator: NO

Confirmed.  The cause is the (ab)use of socket._fileobject by
urllib2.AbstractHTTPHandler to provide .readline() and .readlines()
methods.  _fileobject simply does not close the socket on
_fileobject.close() (since in the original intended use of _fileobject,
_socketobject "owns" the socket, and _fileobject only has a reference to
it).  The bug was introduced with the upgrade to HTTP/1.1 in revision
36871.

The patch here fixes it:

http://python.org/sf/1627441
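
To illustrate the ownership point (a sketch of the intended use of
_fileobject, not of urllib2's internals): a file created via makefile()
closes only its own buffer, while the socket object performs the real
close.

  import socket

  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  s.connect(("www.google.com", 80))
  f = s.makefile('rb')  # wraps the socket in a socket._fileobject
  f.close()             # closes the file wrapper, NOT the socket
  s.close()             # the owning socket object releases the descriptor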


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1601399&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1603907 ] subprocess: error redirecting i/o from non-console process

2007-01-21 Thread SourceForge.net
Bugs item #1603907, was opened at 2006-11-27 18:20
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603907&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
>Category: None
>Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Oren Tirosh (orenti)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess: error redirecting i/o from non-console process 

Initial Comment:
In IDLE, PythonWin or other non-console interactive Python under Windows:

>>> from subprocess import *
>>> Popen('cmd', stdout=PIPE)

Traceback (most recent call last):
  File "", line 1, in -toplevel-
Popen('', stdout=PIPE)
  File "C:\python24\lib\subprocess.py", line 533, in __init__
(p2cread, p2cwrite,
  File "C:\python24\lib\subprocess.py", line 593, in _get_handles
p2cread = self._make_inheritable(p2cread)
  File "C:\python24\lib\subprocess.py", line 634, in _make_inheritable
DUPLICATE_SAME_ACCESS)
TypeError: an integer is required

The same command in a console windows is successful.

Why it happens: 
subprocess assumes that GetStdHandle always succeeds but when there is no 
console it returns None. DuplicateHandle then complains about getting a 
non-integer. This problem does not happen when redirecting all three standard 
handles.

Solution:
Replace None with -1 (INVALID_HANDLE_VALUE) in _make_inheritable.

Patch attached.
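
A sketch of the suggested guard (illustrative, not the committed patch;
the method and handle functions mirror the names subprocess.py imports
from win32api/_subprocess on Windows):

  def _make_inheritable(self, handle):
      """Return a duplicate of handle, which is inheritable."""
      if handle is None:
          handle = -1  # INVALID_HANDLE_VALUE: GetStdHandle found no console
      return DuplicateHandle(GetCurrentProcess(), handle,
                             GetCurrentProcess(), 0, 1,
                             DUPLICATE_SAME_ACCESS)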

--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-21 16:31

Message:
Logged In: YES 
user_id=344921
Originator: NO

Since the suggested patches are not ready for commit, I'm moving this issue
to "bugs" instead. 

--

Comment By: Oren Tirosh (orenti)
Date: 2007-01-07 19:13

Message:
Logged In: YES 
user_id=562624
Originator: YES

Oops. The new patch does not solve it in all cases in the win32api
version, either...

--

Comment By: Oren Tirosh (orenti)
Date: 2007-01-07 19:09

Message:
Logged In: YES 
user_id=562624
Originator: YES

If you duplicate INVALID_HANDLE_VALUE you get a new valid handle to
nothing :-) I guess the code really should not rely on this undocumented
behavior. The reason I didn't return INVALID_HANDLE_VALUE directly is
because DuplicateHandle returns a _subprocess_handle object, not an int.
It's expected to have a .Close() method elsewhere in the code.

Because of subtle differences in the behavior of the _subprocess
and win32api implementations of GetStdHandle in this case, solving this
issue gets quite messy!
File Added: subprocess-noconsole2.patch

--

Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 11:58

Message:
Logged In: YES 
user_id=344921
Originator: NO

This patch looks very interesting. However, it feels a little bit strange
to call DuplicateHandle with a handle of -1. Is this really allowed? What
will DuplicateHandle return in this case? INVALID_HANDLE_VALUE? In that
case, isn't it better to return INVALID_HANDLE_VALUE directly? 


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603907&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1634739 ] Problem running a subprocess

2007-01-21 Thread SourceForge.net
Bugs item #1634739, was opened at 2007-01-13 16:46
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Invalid
Priority: 5
Private: No
Submitted By: Florent Rougon (frougon)
>Assigned to: Peter Åstrand (astrand)
Summary: Problem running a subprocess

Initial Comment:
Hello,

I have a problem running a subprocess from Python (see below). I first ran into 
it with the subprocess module, but it's also triggered by a simple os.fork() 
followed by os.execvp().

So, what is the problem, exactly? I have written the exact same minimal program 
in C and in Python, which uses fork() and execvp() in the most straightforward 
way to run the following command:

  transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png

(whose effect is to extract the 100th frame of /tmp/file.mpg and store it into 
snapshot.png)

The C program runs fast with no error, while the one in Python takes from 60 to 
145 times longer (!), and triggers error messages from transcode. This 
shouldn't happen, since both programs are merely calling transcode in the same 
way to perform the exact same thing.
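
A hypothetical reconstruction of extract_frame.py (the attachment is not
reproduced here); it uses only fork() and execvp(), no subprocess module:

  import os

  args = ["transcode", "-i", "/tmp/file.mpg", "-c", "100-101",
          "-o", "snapshot", "-y", "im,null", "-F", "png"]
  pid = os.fork()
  if pid == 0:
      os.execvp(args[0], args)  # child: replace this process with transcode
  os.waitpid(pid, 0)            # parent: wait for transcode to finish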

Experiments


1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 
2 PS) [the first time fills the block IO cache], and store the output in 
extract_frame.output:

  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.82s user 0.33s system 53% cpu 
2.175 total
  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.79s user 0.29s system 96% cpu 
1.118 total

Basically, this takes 1 or 2 seconds. extract_frame.output is attached.

Second, I run the Python program (extract_frame.py) on the same .mpg file, and 
store the output in extract_frame.py.output:

  % time ./extract_frame.py >extract_frame.py.output 2>&1 
  ./extract_frame.py > extract_frame.py.output 2>& 1  81.59s user 25.98s system 
66% cpu 2:42.51 total

This takes more than 2 *minutes*, not seconds!
(of course, the system is idle for all tests)

In extract_frame.py.output, the following error message appears quickly after 
the process is started:

  failed to write Y plane of frame(demuxer.c) write program stream packet: 
Broken pipe

which is in fact composed of two error messages, the second one starting at 
"(demuxer.c)".

Once these messages are printed, the transcode subprocesses[1] seem to hang 
(with relatively high CPU usage), but eventually complete, after 2 minutes or 
so.

There are no such error messages in extract_frame.output.

2. Same test with another .mpg file. As far as time is concerned, we have the 
same problem:

  [C program]
  % time ./extract_frame >extract_frame.output2 2>&1 
  ./extract_frame > extract_frame.output2 2>& 1  0.73s user 0.28s system 43% 
cpu 2.311 total

  [Python program]
  % time ./extract_frame.py >extract_frame.py.output2 2>&1
  ./extract_frame.py > extract_frame.py.output2 2>& 1  92.84s user 12.20s 
system 76% cpu 2:18.14 total

We also get the first error message in extract_frame.py.output2:

  failed to write Y plane of frame

when running extract_frame.py, but this time, we do *not* have the second error 
message:

  (demuxer.c) write program stream packet: Broken pipe

All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge 
for 2.3 and 2.4, vanilla Python 2.5).

  % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion'
  2.5 (r25:51908, Jan  5 2007, 17:35:09) 
  [GCC 3.3.5 (Debian 1:3.3.5-13)]
  20500f0

  % transcode --version
  transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg

I'd hazard that Python is tweaking some process or threading parameter that is 
inherited by subprocesses and disturbs transcode, which doesn't happen when 
calling fork() and execvp() from a C program, but am unfortunately unable to 
precisely diagnose the problem.

Many thanks for considering.

Regards,

Florent

  [1] Plural because the transcode process spawns several children: tcextract, 
tcdemux, etc.

--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-21 16:37

Message:
Logged In: YES 
user_id=344921
Originator: NO

>That's the only thing I managed to get with the C version. But with the
>Python version, if I don't list the contents of /proc//fd
immediately
>after the transcode process started,

I find it very hard to believe that just listing the contents of a
kernel-virtual directory can change the behaviour of an application. I
think it's much more likely that you have a timing issue.  

Since nothing indicates that th

[ python-Bugs-1598181 ] subprocess.py: O(N**2) bottleneck

2007-01-21 Thread SourceForge.net
Bugs item #1598181, was opened at 2006-11-17 07:40
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
>Status: Closed
Resolution: Fixed
Priority: 5
Private: No
Submitted By: Ralf W. Grosse-Kunstleve (rwgk)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.py: O(N**2) bottleneck

Initial Comment:
subprocess.py (Python 2.5, current SVN, probably all versions) contains this 
O(N**2) code:

  bytes_written = os.write(self.stdin.fileno(), input[:512])
  input = input[bytes_written:]

For large but reasonable "input" the second line is rate limiting. Luckily, it 
is very easy to remove this bottleneck. I'll upload a simple patch. Below is a 
small script that demonstrates the huge speed difference. The output on my 
machine is:

creating input
0.888417959213
slow slicing input
61.1553330421
creating input
0.863168954849
fast slicing input
0.0163860321045
done

The numbers are times in seconds.

This is the source:

import time
import sys
size = 100
t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0
t0 = time.time()
print "slow slicing input"
n_out_slow = 0
while True:
  out = input[:512]
  n_out_slow += 1
  input = input[512:]
  if not input:
    break
print time.time()-t0
t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0
t0 = time.time()
print "fast slicing input"
n_out_fast = 0
input_done = 0
while True:
  out = input[input_done:input_done+512]
  n_out_fast += 1
  input_done += 512
  if input_done >= len(input):
    break
print time.time()-t0
assert n_out_fast == n_out_slow
print "done"


--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-21 16:45

Message:
Logged In: YES 
user_id=344921
Originator: NO

Backported to 2.5, in rev. 53513.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2007-01-17 08:00

Message:
Logged In: YES 
user_id=33168
Originator: NO

Peter, this is fine for 2.5.1.  Please apply and update Misc/NEWS. Thanks!

--

Comment By: Ralf W. Grosse-Kunstleve (rwgk)
Date: 2007-01-07 16:15

Message:
Logged In: YES 
user_id=71407
Originator: YES

Thanks for the fixes!


--

Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 15:36

Message:
Logged In: YES 
user_id=344921
Originator: NO

Fixed in trunk revision 53295. Is this a good candidate for backporting to
25-maint?

--

Comment By: Mike Klaas (mklaas)
Date: 2007-01-04 19:20

Message:
Logged In: YES 
user_id=1611720
Originator: NO

I reviewed the patch--the proposed fix looks good.  Minor comments:
  - "input_done" sounds like a flag, not a count of written bytes
  - buffer() could be used to avoid the 512-byte copy created by the slice

--

Comment By: Ralf W. Grosse-Kunstleve (rwgk)
Date: 2006-11-17 07:43

Message:
Logged In: YES 
user_id=71407
Originator: YES

Sorry, I didn't know the tracker would destroy the indentation.
I'm uploading the demo source as a separate file.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1634739 ] Problem running a subprocess

2007-01-21 Thread SourceForge.net
Bugs item #1634739, was opened at 2007-01-13 15:46
Message generated for change (Comment added) made by frougon
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Closed
>Resolution: Works For Me
Priority: 5
Private: No
Submitted By: Florent Rougon (frougon)
Assigned to: Peter Åstrand (astrand)
Summary: Problem running a subprocess

Initial Comment:
Hello,

I have a problem running a subprocess from Python (see below). I first ran into 
it with the subprocess module, but it's also triggered by a simple os.fork() 
followed by os.execvp().

So, what is the problem, exactly? I have written the exact same minimal program 
in C and in Python, which uses fork() and execvp() in the most straightforward 
way to run the following command:

  transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png

(whose effect is to extract the 100th frame of /tmp/file.mpg and store it into 
snapshot.png)

The C program runs fast with no error, while the one in Python takes from 60 to 
145 times longer (!), and triggers error messages from transcode. This 
shouldn't happen, since both programs are merely calling transcode in the same 
way to perform the exact same thing.

Experiments


1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 
2 PS) [the first time fills the block IO cache], and store the output in 
extract_frame.output:

  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.82s user 0.33s system 53% cpu 
2.175 total
  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.79s user 0.29s system 96% cpu 
1.118 total

Basically, this takes 1 or 2 seconds. extract_frame.output is attached.

Second, I run the Python program (extract_frame.py) on the same .mpg file, and 
store the output in extract_frame.py.output:

  % time ./extract_frame.py >extract_frame.py.output 2>&1 
  ./extract_frame.py > extract_frame.py.output 2>& 1  81.59s user 25.98s system 
66% cpu 2:42.51 total

This takes more than 2 *minutes*, not seconds!
(of course, the system is idle for all tests)

In extract_frame.py.output, the following error message appears quickly after 
the process is started:

  failed to write Y plane of frame(demuxer.c) write program stream packet: 
Broken pipe

which is in fact composed of two error messages, the second one starting at 
"(demuxer.c)".

Once these messages are printed, the transcode subprocesses[1] seem to hang 
(with relatively high CPU usage), but eventually complete, after 2 minutes or 
so.

There are no such error messages in extract_frame.output.

2. Same test with another .mpg file. As far as time is concerned, we have the 
same problem:

  [C program]
  % time ./extract_frame >extract_frame.output2 2>&1 
  ./extract_frame > extract_frame.output2 2>& 1  0.73s user 0.28s system 43% 
cpu 2.311 total

  [Python program]
  % time ./extract_frame.py >extract_frame.py.output2 2>&1
  ./extract_frame.py > extract_frame.py.output2 2>& 1  92.84s user 12.20s 
system 76% cpu 2:18.14 total

We also get the first error message in extract_frame.py.output2:

  failed to write Y plane of frame

when running extract_frame.py, but this time, we do *not* have the second error 
message:

  (demuxer.c) write program stream packet: Broken pipe

All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge 
for 2.3 and 2.4, vanilla Python 2.5).

  % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion'
  2.5 (r25:51908, Jan  5 2007, 17:35:09) 
  [GCC 3.3.5 (Debian 1:3.3.5-13)]
  20500f0

  % transcode --version
  transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg

I'd hazard that Python is tweaking some process or threading parameter that is 
inherited by subprocesses and disturbs transcode, which doesn't happen when 
calling fork() and execvp() from a C program, but am unfortunately unable to 
precisely diagnose the problem.

Many thanks for considering.

Regards,

Florent

  [1] Plural because the transcode process spawns several children: tcextract, 
tcdemux, etc.

--

>Comment By: Florent Rougon (frougon)
Date: 2007-01-21 16:24

Message:
Logged In: YES 
user_id=310088
Originator: YES

I never wrote that it was the listing of /proc//fd that was changing
the behavior of transcode. Please don't put words in my mouth. I wrote
that some fds are open soon after the transcode process is started, and
quickly closed afterwards, when run from the Python test script.

The rest of your answer again shows that you didn't read the bug report.
I'll repeat one last time: the title of this bug report is "Problem
running a subprocess".

[ python-Bugs-1634739 ] Problem running a subprocess

2007-01-21 Thread SourceForge.net
Bugs item #1634739, was opened at 2007-01-13 15:46
Message generated for change (Settings changed) made by frougon
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Closed
>Resolution: Invalid
Priority: 5
Private: No
Submitted By: Florent Rougon (frougon)
Assigned to: Peter Åstrand (astrand)
Summary: Problem running a subprocess

Initial Comment:
Hello,

I have a problem running a subprocess from Python (see below). I first ran into 
it with the subprocess module, but it's also triggered by a simple os.fork() 
followed by os.execvp().

So, what is the problem, exactly? I have written the exact same minimal program 
in C and in Python, which uses fork() and execvp() in the most straightforward 
way to run the following command:

  transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png

(whose effect is to extract the 100th frame of /tmp/file.mpg and store it into 
snapshot.png)

The C program runs fast with no error, while the one in Python takes from 60 to 
145 times longer (!), and triggers error messages from transcode. This 
shouldn't happen, since both programs are merely calling transcode in the same 
way to perform the exact same thing.

Experiments


1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 
2 PS) [the first time fills the block IO cache], and store the output in 
extract_frame.output:

  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.82s user 0.33s system 53% cpu 
2.175 total
  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.79s user 0.29s system 96% cpu 
1.118 total

Basically, this takes 1 or 2 seconds. extract_frame.output is attached.

Second, I run the Python program (extract_frame.py) on the same .mpg file, and 
store the output in extract_frame.py.output:

  % time ./extract_frame.py >extract_frame.py.output 2>&1 
  ./extract_frame.py > extract_frame.py.output 2>& 1  81.59s user 25.98s system 
66% cpu 2:42.51 total

This takes more than 2 *minutes*, not seconds!
(of course, the system is idle for all tests)

In extract_frame.py.output, the following error message appears quickly after 
the process is started:

  failed to write Y plane of frame(demuxer.c) write program stream packet: 
Broken pipe

which is in fact composed of two error messages, the second one starting at 
"(demuxer.c)".

Once these messages are printed, the transcode subprocesses[1] seem to hang 
(with relatively high CPU usage), but eventually complete, after 2 minutes or 
so.

There are no such error messages in extract_frame.output.

2. Same test with another .mpg file. As far as time is concerned, we have the 
same problem:

  [C program]
  % time ./extract_frame >extract_frame.output2 2>&1 
  ./extract_frame > extract_frame.output2 2>& 1  0.73s user 0.28s system 43% 
cpu 2.311 total

  [Python program]
  % time ./extract_frame.py >extract_frame.py.output2 2>&1
  ./extract_frame.py > extract_frame.py.output2 2>& 1  92.84s user 12.20s 
system 76% cpu 2:18.14 total

We also get the first error message in extract_frame.py.output2:

  failed to write Y plane of frame

when running extract_frame.py, but this time, we do *not* have the second error 
message:

  (demuxer.c) write program stream packet: Broken pipe

All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge 
for 2.3 and 2.4, vanilla Python 2.5).

  % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion'
  2.5 (r25:51908, Jan  5 2007, 17:35:09) 
  [GCC 3.3.5 (Debian 1:3.3.5-13)]
  20500f0

  % transcode --version
  transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg

I'd hazard that Python is tweaking some process or threading parameter that is 
inherited by subprocesses and disturbs transcode, which doesn't happen when 
calling fork() and execvp() from a C program, but am unfortunately unable to 
precisely diagnose the problem.

Many thanks for considering.

Regards,

Florent

  [1] Plural because the transcode process spawns several children: tcextract, 
tcdemux, etc.

--

Comment By: Florent Rougon (frougon)
Date: 2007-01-21 16:24

Message:
Logged In: YES 
user_id=310088
Originator: YES

I never wrote that it was the listing of /proc//fd that was changing
the behavior of transcode. Please don't put words in my mouth. I wrote that
some fds are open soon after the transcode process is started, and quickly
closed afterwards, when run from the Python test script.

The rest of your answer again shows that you didn't read the bug report.
I'll repeat one last time: the title of this bug report is "Problem
running a subprocess".

[ python-Bugs-1546442 ] subprocess.Popen can't read file object as stdin after seek

2007-01-21 Thread SourceForge.net
Bugs item #1546442, was opened at 2006-08-25 07:52
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1546442&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: GaryD (gazzadee)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.Popen can't read file object as stdin after seek

Initial Comment:
When I use an existing file object as stdin for a call
to subprocess.Popen, then Popen cannot read the file if
I have called seek on it more than once.

eg. in the following python code:

>>> import subprocess
>>> rawfile = file('hello.txt', 'rb')
>>> rawfile.readline()
'line 1\n'
>>> rawfile.seek(0)
>>> rawfile.readline()
'line 1\n'
>>> rawfile.seek(0)
>>> process_object = subprocess.Popen(["cat"],
stdin=rawfile, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)

process_object.stdout now contains nothing, implying
that nothing was on process_object.stdin.

Note that this only applies for a non-trivial seek (ie.
where the file-pointer actually changes). Calling
seek(0) multiple times in a row does not change
anything (obviously).

I have not investigated whether this reveals a problem
with seek not changing the underlying file descriptor,
or a problem with Popen not handling the file
descriptor properly.

I have attached some complete python scripts that
demonstrate this problem. One shows cat working after
calling seek once, the other shows cat failing after
calling seek twice.

Python version being used:
Python 2.4.2 (#1, Nov  3 2005, 12:41:57)
[GCC 3.4.3-20050110 (Gentoo Linux 3.4.3.20050110,
ssp-3.4.3.20050110-0, pie-8.7)] on linux2


--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-21 20:43

Message:
Logged In: YES 
user_id=344921
Originator: NO

It's not obvious that the subprocess module is doing anything wrong here.
Mixing streams and file descriptors is always problematic and should best
be avoided
(http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_232.html).
However, the subprocess module *does* accept a file object (based on a
libc stream), for convenience. For things to work correctly, the
application and the subprocess module need to cooperate. I admit that the
documentation needs improvement on this topic, though. 

It's quite easy to demonstrate the problem, you don't need to use seek at
all. Here's a simple test case:

import subprocess
rawfile = file('hello.txt', 'rb')
rawfile.readline()
p = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
print "File contents from Popen() call to cat:"
print p.stdout.read()
p.wait()

The descriptor offset is at the end, since the stream buffers.
http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_233.html
describes the need for "cleaning up" a stream, when you switch from stream
functions to descriptor functions. This is described at
http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_235.html#SEC244.
The documentation recommends the fclean() function, but it's only available
on GNU systems and not in Python. As I understand it, fflush() works well
for cleaning an output stream. 

For input streams, however, things are difficult. fflush() might work
sometimes, but to be sure, you must set the file pointer as well. And,
this does not work for files that are not random access, since there's no
way to move the buffered data back to the operating system. 

So, since subprocess cannot reliably deal with this situation, I believe
it shouldn't try. I think it makes more sense that the application
prepares the file object for low-level operations. There are many other
Python modules that use the .fileno() method, for example the select
module, and as far as I understand, this module doesn't try to clean
streams or anything like that. 

To summarize: I'm leaning towards a documentation solution. 
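
A sketch of what such application-side preparation could look like (an
assumption, not a documented recipe): realign the descriptor with the
stream position before handing the file object to the child.

  import os
  import subprocess

  f = open('hello.txt', 'rb')
  f.readline()                       # the stream reads ahead into its buffer
  os.lseek(f.fileno(), f.tell(), 0)  # 0 == SEEK_SET; undo the read-ahead
  p = subprocess.Popen(["cat"], stdin=f, stdout=subprocess.PIPE)
  print p.stdout.read()              # output starts where the stream left off
  p.wait()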

--

Comment By: lplatypus (ldeller)
Date: 2006-08-25 09:13

Message:
Logged In: YES 
user_id=1534394

I found the cause of this bug:

A libc FILE* (used by python file objects) may hold a
different file offset than the underlying OS file
descriptor.  The posix version of Popen._get_handles does
not take this into account, resulting in this bug.

The following patch against svn trunk fixes the problem.  I
don't have permission to attach files to this item, so I'll
have to paste the patch here:

Index: subprocess.py
===================================================================
--- subprocess.py   (revision 51581)
+++ subprocess.py   (working copy)
@@ -907,6 +907,12 @@
 else:
 # Assuming file-like object

[ python-Bugs-1634739 ] Problem running a subprocess

2007-01-21 Thread SourceForge.net
Bugs item #1634739, was opened at 2007-01-13 16:46
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Closed
Resolution: Invalid
Priority: 5
Private: No
Submitted By: Florent Rougon (frougon)
Assigned to: Peter Åstrand (astrand)
Summary: Problem running a subprocess

Initial Comment:
Hello,

I have a problem running a subprocess from Python (see below). I first ran into 
it with the subprocess module, but it's also triggered by a simple os.fork() 
followed by os.execvp().

So, what is the problem, exactly? I have written the exact same minimal program 
in C and in Python, which uses fork() and execvp() in the most straightforward 
way to run the following command:

  transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png

(whose effect is to extract the 100th frame of /tmp/file.mpg and store it into 
snapshot.png)

The C program runs fast with no error, while the one in Python takes from 60 to 
145 times longer (!), and triggers error messages from transcode. This 
shouldn't happen, since both programs are merely calling transcode in the same 
way to perform the exact same thing.

Experiments


1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 
2 PS) [the first time fills the block IO cache], and store the output in 
extract_frame.output:

  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.82s user 0.33s system 53% cpu 
2.175 total
  % time ./extract_frame >extract_frame.output 2>&1
  ./extract_frame > extract_frame.output 2>& 1  0.79s user 0.29s system 96% cpu 
1.118 total

Basically, this takes 1 or 2 seconds. extract_frame.output is attached.

Second, I run the Python program (extract_frame.py) on the same .mpg file, and 
store the output in extract_frame.py.output:

  % time ./extract_frame.py >extract_frame.py.output 2>&1 
  ./extract_frame.py > extract_frame.py.output 2>& 1  81.59s user 25.98s system 
66% cpu 2:42.51 total

This takes more than 2 *minutes*, not seconds!
(of course, the system is idle for all tests)

In extract_frame.py.output, the following error message appears quickly after 
the process is started:

  failed to write Y plane of frame(demuxer.c) write program stream packet: 
Broken pipe

which is in fact composed of two error messages, the second one starting at 
"(demuxer.c)".

Once these messages are printed, the transcode subprocesses[1] seem to hang 
(with relatively high CPU usage), but eventually complete, after 2 minutes or 
so.

There are no such error messages in extract_frame.output.

2. Same test with another .mpg file. As far as time is concerned, we have the 
same problem:

  [C program]
  % time ./extract_frame >extract_frame.output2 2>&1 
  ./extract_frame > extract_frame.output2 2>& 1  0.73s user 0.28s system 43% 
cpu 2.311 total

  [Python program]
  % time ./extract_frame.py >extract_frame.py.output2 2>&1
  ./extract_frame.py > extract_frame.py.output2 2>& 1  92.84s user 12.20s 
system 76% cpu 2:18.14 total

We also get the first error message in extract_frame.py.output2:

  failed to write Y plane of frame

when running extract_frame.py, but this time, we do *not* have the second error 
message:

  (demuxer.c) write program stream packet: Broken pipe

All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge 
for 2.3 and 2.4, vanilla Python 2.5).

  % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion'
  2.5 (r25:51908, Jan  5 2007, 17:35:09) 
  [GCC 3.3.5 (Debian 1:3.3.5-13)]
  20500f0

  % transcode --version
  transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg

I'd hazard that Python is tweaking some process or threading parameter that is 
inherited by subprocesses and disturbs transcode, which doesn't happen when 
calling fork() and execvp() from a C program, but am unfortunately unable to 
precisely diagnose the problem.

Many thanks for considering.

Regards,

Florent

  [1] Plural because the transcode process spawns several children: tcextract, 
tcdemux, etc.

--

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-21 21:22

Message:
Logged In: YES 
user_id=344921
Originator: NO

>I never wrote that it was the listing of /proc//fd that was
changing
>the behavior of transcode. Please don't put words in my mouth. I wrote
that
>some fds are open soon after the transcode process is started, and
quickly
>closed afterwards, when run from the Python test script.

Sorry about that, I did misunderstand you. 

>I wrote and attached minimal example programs that reproduce the bug.

The problem is th

[ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace

2007-01-21 Thread SourceForge.net
Bugs item #1599254, was opened at 2006-11-19 16:03
Message generated for change (Comment added) made by baikie
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 9
Private: No
Submitted By: David Watson (baikie)
Assigned to: A.M. Kuchling (akuchling)
Summary: mailbox: other programs' messages can vanish without trace

Initial Comment:
The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement 
the flush() method by writing the new mailbox contents into a temporary file 
which is then renamed over the original. Unfortunately, if another program 
tries to deliver messages while mailbox.py is working, and uses only fcntl() 
locking, it will have the old file open and be blocked waiting for the lock to 
become available. Once mailbox.py has replaced the old file and closed it, 
making the lock available, the other program will write its messages into the 
now-deleted "old" file, consigning them to oblivion.

I've caused Postfix on Linux to lose mail this way (although I did have to turn 
off its use of dot-locking to do so).

A possible fix is attached.  Instead of new_file being renamed, its contents 
are copied back to the original file.  If file.truncate() is available, the 
mailbox is then truncated to size.  Otherwise, if truncation is required, it's 
truncated to zero length beforehand by reopening self._path with mode wb+.  In 
the latter case, there's a check to see if the mailbox was replaced while we 
weren't looking, but there's still a race condition.  Any alternative ideas?

Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the 
replacement file as it had the execute bit set.
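
A sketch of the copy-back idea (illustrative only; the attached patch is
authoritative). The point is that the original file, which other
processes may already have open and be waiting to lock, keeps its inode:

  import os
  import shutil

  def copy_back(new_path, mbox_path):
      src = open(new_path, 'rb')
      dst = open(mbox_path, 'rb+')  # same inode the other programs see
      shutil.copyfileobj(src, dst)
      dst.truncate()                # drop leftover bytes of the old contents
      src.close()
      dst.close()
      os.remove(new_path)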


--

>Comment By: David Watson (baikie)
Date: 2007-01-21 22:10

Message:
Logged In: YES 
user_id=1504904
Originator: YES

Hold on, I have a plan.  If _toc is only regenerated on locking, or at
the end of a flush(), then the only way self._pending can be set at
that time is if the application has made modifications before calling
lock().  If we make that an exception-raising offence, then we can
assume that self._toc is a faithful representation of the last known
contents of the file.  That means we can preserve the existing message
keys on a reread without any of that _user_toc nonsense.

Diff attached, to apply on top of mailbox-unified2.  It's probably had
even less review and testing than the previous version, but it appears
to pass all the regression tests and doesn't change any existing
semantics.
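
A sketch of the exception-raising rule proposed above (hypothetical code
against mailbox._singlefileMailbox, not the attached diff):

  def lock(self):
      if self._pending:
          raise mailbox.Error('mailbox modified before lock(); '
                              'flush() or discard the changes first')
      _lock_file(self._file)
      self._locked = True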

File Added: mailbox-update-toc-new.diff

--

Comment By: A.M. Kuchling (akuchling)
Date: 2007-01-21 03:16

Message:
Logged In: YES 
user_id=11375
Originator: NO

  I'm starting to lose track of all the variations on the bug. 
Maybe we should just add more warnings to the documentation about locking
the mailbox when modifying it and not try to fix this at all.


--

Comment By: David Watson (baikie)
Date: 2007-01-20 18:20

Message:
Logged In: YES 
user_id=1504904
Originator: YES

Hang on.  If a message's key changes after recreating _toc, that does
not mean that another process has modified the mailbox.  If the
application removes a message and then (inadvertently) causes _toc to
be regenerated, the keys of all subsequent messages will be
decremented by one, due only to the application's own actions.

That's what happens in the "broken locking" test case: the program
intends to remove message 0, flush, and then remove message 1, but
because _toc is regenerated in between, message 1 is renumbered as 0,
message 2 is renumbered as 1, and so the program deletes message 2
instead.  To clear _toc in such code without attempting to preserve
the message keys turns possible data loss (in the case that another
process modified the mailbox) into certain data loss.  That's what I'm
concerned about.


--

Comment By: A.M. Kuchling (akuchling)
Date: 2007-01-19 15:24

Message:
Logged In: YES 
user_id=11375
Originator: NO

After reflection, I don't think the potential changing actually makes
things any worse.  _generate() always starts numbering keys with 1, so if
a message's key changes because of lock()'s re-reading, that means
someone else has already modified the mailbox.  Without the ToC clearing,
you're already fated to have a corrupted mailbox because the new mailbox
will be written using outdated file offsets.  With the ToC clearing, you
delete the wrong message.  Neither outcome is good, but data is lost
either way.  

The

[ python-Bugs-1641109 ] 2.3.6.4 Error in append and extend descriptions

2007-01-21 Thread SourceForge.net
Bugs item #1641109, was opened at 2007-01-21 23:34
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1641109&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: ilalopoulos (arafin)
Assigned to: Nobody/Anonymous (nobody)
Summary: 2.3.6.4 Error in append and extend descriptions

Initial Comment:
2.3.6.4 Mutable Sequence Types (2.4.4 Python Doc) 

Error in the table describing append and extend operations for the list type.

specifically:

s.append(x) same as s[len(s):len(s)] = [x] (2) 
s.extend(x) same as s[len(s):len(s)] = x (3) 

should be:

s.append(x) same as s[len(s):len(s)] = x (2) 
s.extend(x) same as s[len(s):len(s)] = [x] (3)

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1641109&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1546442 ] subprocess.Popen can't read file object as stdin after seek

2007-01-21 Thread SourceForge.net
Bugs item #1546442, was opened at 2006-08-25 15:52
Message generated for change (Comment added) made by ldeller
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1546442&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: GaryD (gazzadee)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.Popen can't read file object as stdin after seek

Initial Comment:
When I use an existing file object as stdin for a call
to subprocess.Popen, then Popen cannot read the file if
I have called seek on it more than once.

eg. in the following python code:

>>> import subprocess
>>> rawfile = file('hello.txt', 'rb')
>>> rawfile.readline()
'line 1\n'
>>> rawfile.seek(0)
>>> rawfile.readline()
'line 1\n'
>>> rawfile.seek(0)
>>> process_object = subprocess.Popen(["cat"],
stdin=rawfile, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)

process_object.stdout now contains nothing, implying
that nothing was on process_object.stdin.

Note that this only applies for a non-trivial seek (ie.
where the file-pointer actually changes). Calling
seek(0) multiple times in a row does not change
anything (obviously).

I have not investigated whether this reveals a problem
with seek not changing the underlying file descriptor,
or a problem with Popen not handling the file
descriptor properly.

I have attached some complete python scripts that
demonstrate this problem. One shows cat working after
calling seek once, the other shows cat failing after
calling seek twice.

Python version being used:
Python 2.4.2 (#1, Nov  3 2005, 12:41:57)
[GCC 3.4.3-20050110 (Gentoo Linux 3.4.3.20050110,
ssp-3.4.3.20050110-0, pie-8.7)] on linux2


--

Comment By: lplatypus (ldeller)
Date: 2007-01-22 12:23

Message:
Logged In: YES 
user_id=1534394
Originator: NO

Fair enough, that's probably cleaner and more efficient than playing games
with fflush and lseek anyway.  If file objects are not supported properly
then maybe they shouldn't be accepted at all, forcing the application to
call fileno() if that's what is wanted.  That might break a lot of
existing code though.  Then again it may be beneficial to get everyone to
review code which passes file objects to Popen in light of this behaviour.

--

Comment By: Peter Åstrand (astrand)
Date: 2007-01-22 06:43

Message:
Logged In: YES 
user_id=344921
Originator: NO

It's not obvious that the subprocess module is doing anything wrong here.
Mixing streams and file descriptors is always problematic and should best
be avoided
(http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_232.html).
However, the subprocess module *does* accept a file object (based on a
libc stream), for convenience. For things to work correctly, the
application and the subprocess module need to cooperate. I admit that the
documentation needs improvement on this topic, though. 

It's quite easy to demonstrate the problem, you don't need to use seek at
all. Here's a simple test case:

import subprocess
rawfile = file('hello.txt', 'rb')
rawfile.readline()
p = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
print "File contents from Popen() call to cat:"
print p.stdout.read()
p.wait()

The descriptor offset is at the end, since the stream buffers.
http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_233.html
describes the need for "cleaning up" a stream, when you switch from stream
functions to descriptor functions. This is described at
http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_235.html#SEC244.
The documentation recommends the fclean() function, but it's only available
on GNU systems and not in Python. As I understand it, fflush() works well
for cleaning an output stream. 

For input streams, however, things are difficult. fflush() might work
sometimes, but to be sure, you must set the file pointer as well. And,
this does not work for files that are not random access, since there's no
way to move the buffered data back to the operating system. 

So, since subprocess cannot reliably deal with this situation, I believe
it shouldn't try. I think it makes more sense that the application
prepares the file object for low-level operations. There are many other
Python modules that use the .fileno() method, for example the select
module, and as far as I understand, this module doesn't try to clean
streams or anything like that. 

To summarize: I'm leaning towards a documentation solution. 

--

Comment By: lplatypus (ldeller)
Date: 2006-08-25 17:13

Message:
Logged In: YES 
user_id=1534394

I found the cause of this b

[ python-Bugs-654766 ] asyncore.py and "handle_expt"

2007-01-21 Thread SourceForge.net
Bugs item #654766, was opened at 2002-12-16 10:42
Message generated for change (Comment added) made by sf-robot
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.2
>Status: Closed
Resolution: Out of Date
Priority: 5
Private: No
Submitted By: Jesús Cea Avión (jcea)
Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore.py and "handle_expt"

Initial Comment:
Python 2.2.2 here.

asyncore.py never invokes "handle_expt"
("handle_expt" is documented in the docs). Managing OOB
data is indispensable for handling "connection refused"
errors on Windows, for example.
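
For context, a minimal dispatcher overriding the hook (a sketch; asyncore
calls handle_expt() for exceptional conditions reported by select()):

  import asyncore
  import socket

  class Client(asyncore.dispatcher):
      def __init__(self, host, port):
          asyncore.dispatcher.__init__(self)
          self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
          self.connect((host, port))

      def handle_expt(self):
          print "exceptional condition on fd", self.socket.fileno()
          self.close()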

--

>Comment By: SourceForge Robot (sf-robot)
Date: 2007-01-21 19:20

Message:
Logged In: YES 
user_id=1312539
Originator: NO

This Tracker item was closed automatically by the system. It was
previously set to a Pending status, and the original submitter
did not respond within 14 days (the time period specified by
the administrator of this Tracker).

--

Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-01-06 22:18

Message:
Logged In: YES 
user_id=341410
Originator: NO

According to the most recent Python trunk, handle_expt() is called when an
error is found within a .select() or .poll() call.

Is this still an issue for you in Python 2.4 or Python 2.5?

Setting status as Pending, Out of Date as I believe this bug is fixed.

--

Comment By: Alexey Klimkin (klimkin)
Date: 2004-03-04 00:24

Message:
Logged In: YES 
user_id=410460

Patch #909005 fixes the problem.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1579370 ] Segfault provoked by generators and exceptions

2007-01-21 Thread SourceForge.net
Bugs item #1579370, was opened at 2006-10-18 04:23
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
Status: Open
Resolution: None
Priority: 9
Private: No
Submitted By: Mike Klaas (mklaas)
Assigned to: Nobody/Anonymous (nobody)
Summary: Segfault provoked by generators and exceptions

Initial Comment:
A reproducible segfault when using heavily-nested
generators and exceptions.

Unfortunately, I haven't yet been able to provoke this
behaviour with a standalone python2.5 script.  There
are, however, no third-party c extensions running in
the process so I'm fairly confident that it is a
problem in the core.

The gist of the code is a series of nested generators
which leave scope when an exception is raised.  This
exception is caught and re-raised in an outer loop. 
The old exception was holding on to the frame which was
keeping the generators alive, and the sequence of
generator destruction and new finalization caused the
segfault.   
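
The shape of the code, as described (illustrative only; the reporter
could not reduce the crash to a standalone script):

  import sys

  def inner():
      yield 1
      raise ValueError('boom')

  def outer():
      for item in inner():
          yield item

  saved = None
  try:
      for item in outer():
          raise ValueError('escape')  # leaves both generators unfinished
  except ValueError:
      saved = sys.exc_info()  # the old exception keeps the frames alive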

--

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-22 08:51

Message:
Logged In: YES 
user_id=21627
Originator: NO

I don't like mklaas' patch, since I think it is conceptually wrong to have
PyTraceBack_Here() use the frame's thread state (mklaas describes it as
dirty, and I agree). I'm proposing an alternative patch (tr.diff); please
test this as well.
File Added: tr.diff

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2007-01-17 08:01

Message:
Logged In: YES 
user_id=33168
Originator: NO

Bumping priority to see if this should go into 2.5.1.

--

Comment By: Martin v. Löwis (loewis)
Date: 2007-01-04 11:42

Message:
Logged In: YES 
user_id=21627
Originator: NO

Why do frame objects have a thread state in the first place? In
particular, why does PyTraceBack_Here get the thread state from the frame,
instead of using the current thread?

Introduction of f_tstate goes back to r7882, but it is not clear why it
was done that way.

--

Comment By: Andrew Waters (awaters)
Date: 2007-01-04 10:35

Message:
Logged In: YES 
user_id=1418249
Originator: NO

This fixes the segfault problem that I was able to reliably reproduce on
Linux.

We need to get this applied (assuming it is the correct fix) to the source
to make Python 2.5 usable for me in production code.

--

Comment By: Mike Klaas (mklaas)
Date: 2006-11-27 19:41

Message:
Logged In: YES 
user_id=1611720
Originator: YES

The following patch resets the thread state of the generator when it is
resumed, which prevents the segfault for me:

Index: Objects/genobject.c
===================================================================
--- Objects/genobject.c (revision 52849)
+++ Objects/genobject.c (working copy)
@@ -77,6 +77,7 @@
 	Py_XINCREF(tstate->frame);
 	assert(f->f_back == NULL);
 	f->f_back = tstate->frame;
+	f->f_tstate = tstate;
 
 	gen->gi_running = 1;
 	result = PyEval_EvalFrameEx(f, exc);

--

Comment By: Eric Noyau (eric_noyau)
Date: 2006-11-27 19:07

Message:
Logged In: YES 
user_id=1388768
Originator: NO

We are experiencing the same segfault in our application, reliably.
Running our unit test suite just segfault everytime on both Linux and Mac
OS X. Applying Martin's patch fixes the segfault, and makes everything fine
and dandy, at the cost of some memory leaks if I understand properly.

This particular bug prevents us to upgrade to python 2.5 in production.

--

Comment By: Tim Peters (tim_one)
Date: 2006-10-28 07:18

Message:
Logged In: YES 
user_id=31435

> I tried Tim's hope.py on Linux x86_64 and
> Mac OS X 10.4 with debug builds and neither
> one crashed.  Tim's guess looks pretty damn
> good too.

Neal, note that it's the /Windows/ malloc that fills freed
memory with "dangerous bytes" in a debug build -- this
really has nothing to do with that it's a debug build of
/Python/ apart from that on Windows a debug build of Python
also links in the debug version of Microsoft's malloc.

The valgrind report is pointing at the same thing.  Whether
this leads to a crash is purely an accident of when and how
the system malloc happens to reuse the freed memory.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-10-28