[ python-Bugs-1373161 ] r41552 broke test_file on OS X

2006-02-06 Thread SourceForge.net
Bugs item #1373161, was opened at 2005-12-04 16:25
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1373161&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
>Status: Closed
Resolution: Fixed
Priority: 5
Submitted By: Michael Hudson (mwh)
Assigned to: Michael Hudson (mwh)
Summary: r41552 broke test_file on OS X

Initial Comment:
Apparently you *can* seek on sys.stdin here.  If you just want seek()
to fail, sys.stdin.seek(-1) seems pretty likely to work...
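
As a rough illustration (not the actual test_file code), a check along
the lines suggested above might look like the following sketch; it
assumes an unseekable stream raises IOError:

import sys

def seek_should_fail(stream):
    # Try the seek that is "pretty likely" to fail even on platforms
    # where stream.seek(0) happens to succeed (e.g. a tty on OS X).
    try:
        stream.seek(-1)
    except IOError:
        return True
    return False

print seek_should_fail(sys.stdin)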

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-02-06 00:24

Message:
Logged In: YES 
user_id=33168

Closing since this doesn't seem to be a problem any more on
10.3 and 10.4

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2005-12-14 22:07

Message:
Logged In: YES 
user_id=33168

Michael, I reverted the tell() portion.  Do all the tests
work for you now?  Can this be closed?

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2005-12-05 21:59

Message:
Logged In: YES 
user_id=33168

Sorry, I think I closed the report before I saw that there
was another problem.  From a man page, it looked like tell()
may fail if it is done on a pipe.  So maybe the problem
can't happen on OS X?  We could check if the system is
osx/darwin and skip the test.  Do you want to skip the test?
 Since it was for coverage and to ensure nothing bad goes
wrong with error handling, it's not awful that it can't be
provoked on osx.

--

Comment By: Michael Hudson (mwh)
Date: 2005-12-05 01:31

Message:
Logged In: YES 
user_id=6656

I suspect you know this from what I said on IRC, but test_file still
fails, because you can tell() on sys.stdin too (I don't really see
what you can do about this).

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2005-12-04 17:17

Message:
Logged In: YES 
user_id=33168

revision 41602

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1373161&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1424148 ] urllib.FancyURLopener.redirect_internal loses data on POST!

2006-02-06 Thread SourceForge.net
Bugs item #1424148, was opened at 2006-02-04 18:35
Message generated for change (Comment added) made by kxroberto
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424148&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 6
Submitted By: Robert Kiendl (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib.FancyURLopener.redirect_internal loses data on POST!

Initial Comment:
def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl)


... has to become ...


def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl, data)



... i guess?   (  ",data"  added )

Robert
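
For context, a minimal sketch of the call path being discussed; the
URL is a placeholder, not a real endpoint.  A POST made through
FancyURLopener that is answered with a 302 is re-opened by
redirect_internal(), and with the unpatched code the urlencoded data
is dropped from the second request:

import urllib

url = "http://example.invalid/form"          # hypothetical endpoint
data = urllib.urlencode({"q": "some value"})

opener = urllib.FancyURLopener()
# http_error_302() hands the response to redirect_internal(); with
# the unpatched code the follow-up self.open(newurl) call omits
# `data`, so the redirected request goes out as a bare GET.
response = opener.open(url, data)
print response.read()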

--

>Comment By: Robert Kiendl (kxroberto)
Date: 2006-02-06 11:29

Message:
Logged In: YES 
user_id=972995

> http://python.org/sf/549151

The analysis of the browsers there is right, and lynx is the best
one to consult.  But urllibX is not a browser (an application), it
is a library: as it stands, with standard urllibX error handling
you cannot code a lynx.

gvr's initial suggestion, to raise a clear error (with the
redirection link as an attribute of the exception value), is the
best option.  Another option would be simply to yield the
unredirected stub HTML and leave the 30X code (and the redirection
LOCATION in the header).

To redirect POST as GET _while_ simply losing (!) the data
(and not appending it to the GET-URL) is most bad for a lib.
Smartly transcribing a short form-like POST into a GET with a
query string would be only so-so.
Don't know if the MS & netscape's also transpose to GET with
long data? ...

The current behaviour is the worst of all four.  Any of the other
methods would at least have raised an early hint/error in
my case.

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 01:54

Message:
Logged In: YES 
user_id=261020

This is not a bug.
See the long discussion here:
http://python.org/sf/549151

--

Comment By: Robert Kiendl (kxroberto)
Date: 2006-02-04 21:10

Message:
Logged In: YES 
user_id=972995

Found http://www.faqs.org/rfcs/rfc2616.html (relevant section below).
But the behaviour is still strange, and the bug even more serious:
silently redirecting a POST as a GET without the data is obscure
behaviour for a language library, and leads to unpredictable
results.  The half-completed request cannot be stopped, and
everything is left to the server reacting well and to complex
reinterpretation by the client.  Python urllibX should by default
yield the 30X code for a POST redirection and provide the first
HTML: usually a redirection HTML stub with < a href=...
That would be consistent with the RFC: the user (= the
application, not Python!) can redirect under full control without
generating a wrong call!  In my application, a bug went unnoticed
for a long time because of this wrong behaviour; with the 30X stub
it would have been easy to discover and understand ...

urllib2 has the same bug with POST redirection.

===
10.3.2 301 Moved Permanently

   The requested resource has been assigned a new permanent URI and any
   future references to this resource SHOULD use one of the returned
   URIs.  Clients with link editing capabilities ought to automatically
   re-link references to the Request-URI to one or more of the new
   references returned by the server, where possible.  This response is
   cacheable unless indicated otherwise.

   The new permanent URI SHOULD be given by the Location field in the
   response.  Unless the request method was HEAD, the entity of the
   response SHOULD contain a short hypertext note with a hyperlink to
   the new URI(s).

   If the 301 status code is received in response to a request other
   than GET or HEAD, the user agent MUST NOT automatically redirect the
   request unless it can be confirmed by the user, since this might
   change the conditions under which the request was issued.

      Note: When automatically redirecting a POST request after
      receiving a 301 status code, some existing HTTP/1.

[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 10:44
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the loop of the
class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should have a
touch executable somewhere in your path for this to work).  Can
somebody shed any light on this please?

Thanks in advance

Joram Agten
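
The program itself is not included in this copy of the report;
judging from the tracebacks quoted later in the thread, it is
roughly along the lines of the following sketch (file counts,
thread counts and the exact touch invocation are guesses):

import os
import subprocess
import tempfile
import threading

def touch(filename):
    # Run an external `touch` on the file, as the report describes.
    p = subprocess.Popen(["touch", filename],
                         stdin=None, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    p.communicate()

class Worker(threading.Thread):
    def run(self):
        for _ in range(100):
            fd, filename = tempfile.mkstemp()
            os.close(fd)
            touch(filename)       # leaving this out avoids the errors
            os.remove(filename)   # intermittently fails with errno 13

threads = [Worker() for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()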



--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Feature Requests-1425256 ] Support for MSVC 7 and MSVC8 in msvccompiler

2006-02-06 Thread SourceForge.net
Feature Requests item #1425256, was opened at 2006-02-06 15:07
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1425256&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Distutils
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: dlm (davidlukas)
Assigned to: Nobody/Anonymous (nobody)
Summary: Support for MSVC 7 and MSVC8 in msvccompiler

Initial Comment:
Hi,

I tried to build the "ctypes-0.9.6" package from
source with Microsoft Visual Studio 8 (2005).  I
realized that in the "distutils" module of Python 2.4.1
neither the VC7 nor the VC8 compiler is supported at
all (only VC6).
I took a glance at distutils in Python 2.4.2, but VC8
is not supported there either.

I tried to figure out where I should extend the
compiler detection, but the whole file
"msvccompiler.py" seems to me like a big hack.  I've
written some code to get VC8 working on my machine
(setting the right paths to the include and lib
directories and to the binaries), but I don't think
it's redistributable.

What do you think of detecting the right MS compiler
like this:

def detectCompiler():
    detectVC6()
    detectVC7()
    detectVC8()

and hiding the code for each particular version of VC
in a separate function?  I don't think MS is following a
straight upwards-compatibility strategy.

Also, there should be a way to select one compiler when
multiple compilers are detected.  I saw the

   --compiler=whatever

switch, but did not find any documentation on it.

I've got both versions (VC7 and VC8) installed on my
machine. So I can try out different detection routines
if you want.

Another problem with VC8 is cross-compiling, since there
are different library directories for different
platforms (AMD64, x86, Itanium, Win32, ...).  Here too
I see big deficits in the distutils module at the moment.

Best regards
David
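
To make the proposed structure concrete, a hedged sketch of the
detect-then-select idea; the detectVC* helpers are hypothetical
placeholders, not existing distutils functions:

def detectVC6():
    return None   # would return a compiler description if VC6 is found

def detectVC7():
    return None

def detectVC8():
    return None

def detectCompiler(preferred=None):
    found = {}
    for name, probe in (("msvc6", detectVC6),
                        ("msvc7", detectVC7),
                        ("msvc8", detectVC8)):
        result = probe()
        if result is not None:
            found[name] = result
    if preferred is not None:
        # honour an explicit --compiler=... style selection
        return found.get(preferred)
    # otherwise prefer the newest compiler that was detected
    for name in ("msvc8", "msvc7", "msvc6"):
        if name in found:
            return found[name]
    return None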

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1425256&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1424148 ] urllib.FancyURLopener.redirect_internal loses data on POST!

2006-02-06 Thread SourceForge.net
Bugs item #1424148, was opened at 2006-02-04 12:35
Message generated for change (Comment added) made by jimjjewett
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424148&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 6
Submitted By: Robert Kiendl (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib.FancyURLopener.redirect_internal loses data on POST!

Initial Comment:
def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl)


... has to become ...


def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl, data)



... i guess?   (  ",data"  added )

Robert

--

Comment By: Jim Jewett (jimjjewett)
Date: 2006-02-06 12:57

Message:
Logged In: YES 
user_id=764593

In theory, a GET may be automatic, but a POST requires user 
interaction, so the user can be held accountable for the 
results of a POST, but not of a GET.

Often, the page will respond to either; not sending the 
queries protects privacy in case of problems, and works more 
often than not.  (That said, I too would prefer a raised 
error or a transparent repost, at least as options.)

--

Comment By: Robert Kiendl (kxroberto)
Date: 2006-02-06 05:29

Message:
Logged In: YES 
user_id=972995

> http://python.org/sf/549151

The analysis of the browsers there is right, and lynx is the best
one to consult.  But urllibX is not a browser (an application), it
is a library: as it stands, with standard urllibX error handling
you cannot code a lynx.

gvr's initial suggestion, to raise a clear error (with the
redirection link as an attribute of the exception value), is the
best option.  Another option would be simply to yield the
unredirected stub HTML and leave the 30X code (and the redirection
LOCATION in the header).

To redirect POST as GET _while_ simply losing (!) the data
(and not appending it to the GET-URL) is most bad for a lib.
Smartly transcribing a short form-like POST into a GET with a
query string would be only so-so.
Don't know if the MS & netscape's also transpose to GET with
long data? ...

The current behaviour is the worst of all four.  Any of the other
methods would at least have raised an early hint/error in
my case.

--

Comment By: John J Lee (jjlee)
Date: 2006-02-05 19:54

Message:
Logged In: YES 
user_id=261020

This is not a bug.
See the long discussion here:
http://python.org/sf/549151

--

Comment By: Robert Kiendl (kxroberto)
Date: 2006-02-04 15:10

Message:
Logged In: YES 
user_id=972995

Found http://www.faqs.org/rfcs/rfc2616.html (relevant section below).
But the behaviour is still strange, and the bug even more serious:
silently redirecting a POST as a GET without the data is obscure
behaviour for a language library, and leads to unpredictable
results.  The half-completed request cannot be stopped, and
everything is left to the server reacting well and to complex
reinterpretation by the client.  Python urllibX should by default
yield the 30X code for a POST redirection and provide the first
HTML: usually a redirection HTML stub with < a href=...
That would be consistent with the RFC: the user (= the
application, not Python!) can redirect under full control without
generating a wrong call!  In my application, a bug went unnoticed
for a long time because of this wrong behaviour; with the 30X stub
it would have been easy to discover and understand ...

urllib2 has the same bug with POST redirection.

===
10.3.2 301 Moved Permanently

   The requested resource has been assigned a new permanent URI and any
   future references to this resource SHOULD use one of the returned
   URIs.  Clients with link editing capabilities ought to automatically
   re-link references to the Request-URI to one or more of the new
   references returned by the server, where possible.  This response is
   cacheable unless indicated otherwise.

   The new permanent URI SHOULD be given by the Location

[ python-Bugs-1425482 ] msvccompiler.py modified to work with .NET 2005 on win64

2006-02-06 Thread SourceForge.net
Bugs item #1425482, was opened at 2006-02-06 13:28
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425482&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: beaudrym (beaudrym)
Assigned to: Nobody/Anonymous (nobody)
Summary: msvccompiler.py modified to work with .NET 2005 on win64

Initial Comment:
Hi,

I tried to compile and install pywin32 (python 
extension) using Microsoft Visual Studio .NET 2005.  
This was done on a AMD64 platform which had Python 
2.4.2.10 installed (from www.activestate.com).

When I try to compile pywin32, it uses the file 
msvccompiler.py that comes with python.  For the 
compilation to work, I had to modify 
msvccompiler.py.  I attached a patch file of my 
modifications.  Basically, I had to modify two things:

1 - use .NET framework 2.0 when 1.1 is not found.
2 - use environment variables "path", "lib" 
and "included" already defined in console when 
compiling with Visual Studio 8.0.  See comments in 
patch file for more details.

Let me know if these patches look reasonable to you.

Regards,
Maxime

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425482&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1424148 ] urllib.FancyURLopener.redirect_internal loses data on POST!

2006-02-06 Thread SourceForge.net
Bugs item #1424148, was opened at 2006-02-04 17:35
Message generated for change (Comment added) made by jjlee
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424148&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 6
Submitted By: Robert Kiendl (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib.FancyURLopener.redirect_internal loses data on POST!

Initial Comment:
def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl)


... has to become ...


def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl, data)



... i guess?   (  ",data"  added )

Robert

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 20:24

Message:
Logged In: YES 
user_id=261020

First, anyone replying to this, *please* read this page (and
the whole of this tracker note!) first:

http://ppewww.ph.gla.ac.uk/~flavell/www/post-redirect.html


kxroberto: you say that with standard urllibX error handling
you cannot get an exception on redirected 301/302/307 POST.
 That's not true of urllib2, since you may override
HTTPRedirectHandler.redirect_request(), which method was
designed and documented for precisely that purpose.  It
seems sensible to have a default that does what virtually
all browsers do (speaking as a long-time lynx user!).  I
don't know about the urllib case.
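
For illustration, a hedged sketch of the kind of override described
above (the class name and the raise-on-POST policy are assumptions,
not part of urllib2):

import urllib2

class RaiseOnPostRedirect(urllib2.HTTPRedirectHandler):
    # Refuse to follow a redirect of a request that carries data,
    # surfacing it as an HTTPError instead of retrying as a GET.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        if req.has_data():
            raise urllib2.HTTPError(req.get_full_url(), code, msg,
                                    headers, fp)
        return urllib2.HTTPRedirectHandler.redirect_request(
            self, req, fp, code, msg, headers, newurl)

opener = urllib2.build_opener(RaiseOnPostRedirect)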

It's perfectly reasonable to extend urllib (if necessary) to
allow the option of raising an exception.  Note that (IIRC!)
 urllib's exceptions do not contain the response body data,
however (urllib2's HTTPErrors do contain the response body
data).

It would of course break backwards compatibility to start
raising exceptions by default here.  I don't think it's
reasonable to break old code on the basis of a notional
security issue when the de-facto standard web client
behaviour is to do the redirect.  In reality, the only
"security" value of the original prescriptive rule was as a
convention to be followed by white-hat web programmers and
web client implementors to help users avoid unintentionally
re-submitting non-idempotent requests.  Since that
convention is NOT followed in the real world (lynx doesn't
count as the real world ;-), I see no value in sticking
rigidly to the original RFC spec -- especially when 2616
even provides 307 precisely in response to this problem. 
Other web client libraries, for example libwww-perl and Java
HTTPClient, do the same as Python here IIRC.  RFC 2616
section 10.3.4 even suggests web programmers use 302 to get
the behaviour you complain about!

The only doubtful case here is 301.  A decision was made on
the default behaviour in that case back when the tracker
item I pointed you to was resolved.  I think it's a mistake
to change our minds again on that default behaviour.


kxroberto.seek(nrBytes)
assert kxroberto.readline() == """\
To redirect POST as GET _while_ simply losing (!) the data
(and not appending it to the GET-URL) is most bad for a lib."""

No.  There is no value in supporting behaviour which is
simply contrary to both de-facto and prescriptive standards
(see final paragraph of RFC 2616 section 10.3.3: if we
accept the "GET on POST redirect" rule, we must accept that
the Location header is exactly the URL that should be
followed).  FYI, many servers return a redirect URL
containing the urlencoded POST data from the original request.


kxroberto: """Don't know if the MS & netscape's also
transpose to GET with long data? ..."""

urllib2's behaviour (and urllib's, I believe) on these
issues is identical to that of IE and Firefox.


jimjewett: """In theory, a GET may be automatic, but a POST
requires user interaction, so the user can be held
accountable for the results of a POST, but not of a GET."""

That theory has been experimentally falsified ;-)


--

Comment By: Jim Jewett (jimjjewett)
Date: 2006-02-06 17:57

Message:
Logged In: YES 
user_id=

[ python-Bugs-1411097 ] httplib patch to make _read_chunked() more robust

2006-02-06 Thread SourceForge.net
Bugs item #1411097, was opened at 2006-01-20 20:26
Message generated for change (Comment added) made by jjlee
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1411097&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: John J Lee (jjlee)
Assigned to: Nobody/Anonymous (nobody)
Summary: httplib patch to make _read_chunked() more robust

Initial Comment:
To reproduce:

import urllib2
print urllib2.urlopen("http://66.117.37.13/").read()


The attached patch "fixes" the hang, but that patch is
not acceptable because it also removes the .readline()
and .readlines() methods on the response object
returned by urllib2.urlopen().

The patch seems to demonstrate that the problem is
caused by the (ab)use of socket._fileobject in
urllib2.AbstractHTTPHandler (I believe this hack was
introduced when urllib2 switched to using
httplib.HTTPConnection).

Not sure yet what the actual problem is...


--

>Comment By: John J Lee (jjlee)
Date: 2006-02-06 20:36

Message:
Logged In: YES 
user_id=261020

I missed the fact that, if the connection will not close at
the end of the transaction, the behaviour should not change
from what's currently in SVN (we should not assume that the
chunked response has ended unless we see the proper
terminating CRLF).  I intend to upload a slightly modified
patch that tests for self._will_close, and behaves accordingly.


--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 01:24

Message:
Logged In: YES 
user_id=261020

Oops, fixed chunk.patch to .strip() before comparing to "".

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 00:38

Message:
Logged In: YES 
user_id=261020

First, expanding a bit on what I wrote on 2006-01-21: The
problem does relate to chunked encoding, and is unrelated to
urllib2's use of _fileobject.  My hack to remove use of
socket._fileobject from urllib2 merely breaks handling of
chunked encoding by cutting httplib.HTTPResponse out of the
picture.  The problem is seen in urllib2 in recent Pythons
thanks to urllib2 switching to use of httplib.HTTPConnection
and HTTP/1.1, hence chunked encoding is allowed.  urllib
still uses httplib.HTTP, hence HTTP/1.0, so is unaffected.
To reproduce with httplib:

import httplib
conn = httplib.HTTPConnection("66.117.37.13")
conn.request("GET", "/", headers={"Connection": "close"})
r1 = conn.getresponse()
print r1.read()

The Connection: close is required -- if it's not there the
server doesn't use chunked transfer-encoding.
I verified with a packet sniffer that the problem is that
this server does not send the final trailing CRLF required
by section 3.6.1 of RFC 2616.  However, that section also
says that trailers (trailing HTTP headers) MUST NOT be sent
by the server unless either a TE header was present and
indicated that trailers are acceptable (httplib does not
send the TE header), or the trailers are optional metadata
and may be discarded by the client.  So, I propose the
attached patch to httplib (chunk.patch) as a work-around.
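
Not the actual chunk.patch (which is attached to the tracker item,
not reproduced here), but a hedged sketch of the trailer handling
described: read optional trailer lines after the last chunk, treat
a stripped-empty line as the terminator, and tolerate a server that
closes the connection without the final CRLF:

def read_chunk_trailers(fp):
    # Consume optional trailing headers after the 0-sized chunk of a
    # chunked response.  Since no TE header was sent, any trailers are
    # optional metadata and may be discarded (RFC 2616, section 3.6.1).
    while True:
        line = fp.readline()
        if not line:
            # Server closed the connection without the final CRLF;
            # treat the response as complete instead of looping.
            break
        if line.strip() == "":
            # Proper terminating blank line.
            break
        # Otherwise it is a trailer header line -- discard it.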


--

Comment By: John J Lee (jjlee)
Date: 2006-01-21 22:10

Message:
Logged In: YES 
user_id=261020

In fact the commit message for rev 36871 states the real
reason _fileobject is used (handling chunked encoding),
showing my workaround is even more harmful than I thought. 
Moreover, doing a urlopen on 66.117.37.13 shows the response
*is* chunked.

The problem seems to be caused by httplib failing to find a
CRLF at the end of the chunked response, so the loop at the
end of _read_chunked() never terminates.  Haven't looked in
detail yet, but I'm guessing a) it's the server's fault and
b) httplib should work around it.


Here's the commit message from 36871:


Fix urllib2.urlopen() handling of chunked content encoding.

The change to use the newer httplib interface admitted the
possibility that we'd get an HTTP/1.1 chunked response, but the
code didn't handle it correctly.  The raw socket object can't be
passed to addinfourl(), because it would read the undecoded
response.  Instead, addinfourl() must call HTTPResponse.read(),
which will handle the decoding.

One extra wrinkle is that the HTTPResponse object can't be passed
to addinfourl() either, because it doesn't implement readline() or
readlines().  As a quick hack, use socket._fileobject(), which
implements those methods on top of a read buffer.
(suggested by mwh)

Finally, add some tests based on test_urllibnet.

Thanks to Andrew Sawyers for originally reporting the
chunked problem.



[ python-Bugs-1424148 ] urllib.FancyURLopener.redirect_internal loses data on POST!

2006-02-06 Thread SourceForge.net
Bugs item #1424148, was opened at 2006-02-04 12:35
Message generated for change (Comment added) made by jimjjewett
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424148&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 6
Submitted By: Robert Kiendl (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib.FancyURLopener.redirect_internal loses data on POST!

Initial Comment:
def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl)


... has to become ...


def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl, data)



... i guess?   (  ",data"  added )

Robert

--

Comment By: Jim Jewett (jimjjewett)
Date: 2006-02-06 15:52

Message:
Logged In: YES 
user_id=764593

Sorry, I was trying to provide a quick explanation of why we 
couldn't just "do the obvious thing" and repost with data.

Yes, I realize that in practice, GET is used for non-
idempotent actions, and POST is (though less often) done 
automatically.

But since that is the official policy, I wouldn't want to 
bet too heavily against it in a courtroom -- so python 
defaults should be at least as conservative as both the spec 
and the common practice.  

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 15:24

Message:
Logged In: YES 
user_id=261020

First, anyone replying to this, *please* read this page (and
the whole of this tracker note!) first:

http://ppewww.ph.gla.ac.uk/~flavell/www/post-redirect.html


kxroberto: you say that with standard urllibX error handling
you cannot get an exception on redirected 301/302/307 POST.
 That's not true of urllib2, since you may override
HTTPRedirectHandler.redirect_request(), which method was
designed and documented for precisely that purpose.  It
seems sensible to have a default that does what virtually
all browsers do (speaking as a long-time lynx user!).  I
don't know about the urllib case.

It's perfectly reasonable to extend urllib (if necessary) to
allow the option of raising an exception.  Note that (IIRC!)
 urllib's exceptions do not contain the response body data,
however (urllib2's HTTPErrors do contain the response body
data).

It would of course break backwards compatibility to start
raising exceptions by default here.  I don't think it's
reasonable to break old code on the basis of a notional
security issue when the de-facto standard web client
behaviour is to do the redirect.  In reality, the only
"security" value of the original prescriptive rule was as a
convention to be followed by white-hat web programmers and
web client implementors to help users avoid unintentionally
re-submitting non-idempotent requests.  Since that
convention is NOT followed in the real world (lynx doesn't
count as the real world ;-), I see no value in sticking
rigidly to the original RFC spec -- especially when 2616
even provides 307 precisely in response to this problem. 
Other web client libraries, for example libwww-perl and Java
HTTPClient, do the same as Python here IIRC.  RFC 2616
section 10.3.4 even suggests web programmers use 302 to get
the behaviour you complain about!

The only doubtful case here is 301.  A decision was made on
the default behaviour in that case back when the tracker
item I pointed you to was resolved.  I think it's a mistake
to change our minds again on that default behaviour.


kxroberto.seek(nrBytes)
assert kxroberto.readline() == """\
To redirect POST as GET _while_ simply losing (!) the data
(and not appending it to the GET-URL) is most bad for a lib."""

No.  There is no value in supporting behaviour which is
simply contrary to both de-facto and prescriptive standards
(see final paragraph of RFC 2616 section 10.3.3: if we
accept the "GET on POST redirect" rule, we must accept that
the Location header is exactly the URL that should be
followed).  FYI, many servers return a redirect URL
containing the urle

[ python-Bugs-1411097 ] httplib patch to make _read_chunked() more robust

2006-02-06 Thread SourceForge.net
Bugs item #1411097, was opened at 2006-01-20 20:26
Message generated for change (Comment added) made by jjlee
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1411097&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: John J Lee (jjlee)
Assigned to: Nobody/Anonymous (nobody)
Summary: httplib patch to make _read_chunked() more robust

Initial Comment:
To reproduce:

import urllib2
print urllib2.urlopen("http://66.117.37.13/").read()


The attached patch "fixes" the hang, but that patch is
not acceptable because it also removes the .readline()
and .readlines() methods on the response object
returned by urllib2.urlopen().

The patch seems to demonstrate that the problem is
caused by the (ab)use of socket._fileobject in
urllib2.AbstractHTTPHandler (I believe this hack was
introduced when urllib2 switched to using
httplib.HTTPConnection).

Not sure yet what the actual problem is...


--

>Comment By: John J Lee (jjlee)
Date: 2006-02-06 21:18

Message:
Logged In: YES 
user_id=261020

Conservative or not, I see no utility in changing the
default, and several major harmful effects: old code breaks,
and people have to pore over the specs to figure out why
"urlopen() doesn't work".


--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 20:36

Message:
Logged In: YES 
user_id=261020

I missed the fact that, if the connection will not close at
the end of the transaction, the behaviour should not change
from what's currently in SVN (we should not assume that the
chunked response has ended unless we see the proper
terminating CRLF).  I intend to upload a slightly modified
patch that tests for self._will_close, and behaves accordingly.


--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 01:24

Message:
Logged In: YES 
user_id=261020

Oops, fixed chunk.patch to .strip() before comparing to "".

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 00:38

Message:
Logged In: YES 
user_id=261020

First, expanding a bit on what I wrote on 2006-01-21: The
problem does relate to chunked encoding, and is unrelated to
urllib2's use of _fileobject.  My hack to remove use of
socket._fileobject from urllib2 merely breaks handling of
chunked encoding by cutting httplib.HTTPResponse out of the
picture.  The problem is seen in urllib2 in recent Pythons
thanks to urllib2 switching to use of httplib.HTTPConnection
and HTTP/1.1, hence chunked encoding is allowed.  urllib
still uses httplib.HTTP, hence HTTP/1.0, so is unaffected.
To reproduce with httplib:

import httplib
conn = httplib.HTTPConnection("66.117.37.13")
conn.request("GET", "/", headers={"Connection": "close"})
r1 = conn.getresponse()
print r1.read()

The Connection: close is required -- if it's not there the
server doesn't use chunked transfer-encoding.
I verified with a packet sniffer that the problem is that
this server does not send the final trailing CRLF required
by section 3.6.1 of RFC 2616.  However, that section also
says that trailers (trailing HTTP headers) MUST NOT be sent
by the server unless either a TE header was present and
indicated that trailers are acceptable (httplib does not
send the TE header), or the trailers are optional metadata
and may be discarded by the client.  So, I propose the
attached patch to httplib (chunk.patch) as a work-around.


--

Comment By: John J Lee (jjlee)
Date: 2006-01-21 22:10

Message:
Logged In: YES 
user_id=261020

In fact the commit message for rev 36871 states the real
reason _fileobject is used (handling chunked encoding),
showing my workaround is even more harmful than I thought. 
Moreover, doing a urlopen on 66.117.37.13 shows the response
*is* chunked.

The problem seems to be caused by httplib failing to find a
CRLF at the end of the chunked response, so the loop at the
end of _read_chunked() never terminates.  Haven't looked in
detail yet, but I'm guessing a) it's the server's fault and
b) httplib should work around it.


Here's the commit message from 36871:


Fix urllib2.urlopen() handling of chunked content encoding.

The change to use the newer httplib interface admitted the
possibility that we'd get an HTTP/1.1 chunked response, but the
code didn't handle it correctly.  The raw socket object can't be
passed to addinfourl(), because it would read the undecoded
response.  Instead, addinfourl() must call HTTPResponse.read(),
which will handle the decoding.

One extra wrinkle is that the HTTPResponse object can't be passed
to addin

[ python-Bugs-1424148 ] urllib.FancyURLopener.redirect_internal loses data on POST!

2006-02-06 Thread SourceForge.net
Bugs item #1424148, was opened at 2006-02-04 17:35
Message generated for change (Comment added) made by jjlee
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424148&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 6
Submitted By: Robert Kiendl (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib.FancyURLopener.redirect_internal loses data on POST!

Initial Comment:
def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl)


... has to become ...


def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
    if 'location' in headers:
        newurl = headers['location']
    elif 'uri' in headers:
        newurl = headers['uri']
    else:
        return
    void = fp.read()
    fp.close()
    # In case the server sent a relative URL, join with original:
    newurl = basejoin(self.type + ":" + url, newurl)
    return self.open(newurl, data)



... i guess?   (  ",data"  added )

Robert

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 21:19

Message:
Logged In: YES 
user_id=261020

Conservative or not, I see no utility in changing the
default, and several major harmful effects: old code breaks,
and people have to pore over the specs to figure out why
"urlopen() doesn't work".


--

Comment By: Jim Jewett (jimjjewett)
Date: 2006-02-06 20:52

Message:
Logged In: YES 
user_id=764593

Sorry, I was trying to provide a quick explanation of why we 
couldn't just "do the obvious thing" and repost with data.

Yes, I realize that in practice, GET is used for non-
idempotent actions, and POST is (though less often) done 
automatically.

But since that is the official policy, I wouldn't want to 
bet too heavily against it in a courtroom -- so python 
defaults should be at least as conservative as both the spec 
and the common practice.  

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 20:24

Message:
Logged In: YES 
user_id=261020

First, anyone replying to this, *please* read this page (and
the whole of this tracker note!) first:

http://ppewww.ph.gla.ac.uk/~flavell/www/post-redirect.html


kxroberto: you say that with standard urllibX error handling
you cannot get an exception on redirected 301/302/307 POST.
 That's not true of urllib2, since you may override
HTTPRedirectHandler.redirect_request(), which method was
designed and documented for precisely that purpose.  It
seems sensible to have a default that does what virtually
all browsers do (speaking as a long-time lynx user!).  I
don't know about the urllib case.

It's perfectly reasonable to extend urllib (if necessary) to
allow the option of raising an exception.  Note that (IIRC!)
 urllib's exceptions do not contain the response body data,
however (urllib2's HTTPErrors do contain the response body
data).

It would of course break backwards compatibility to start
raising exceptions by default here.  I don't think it's
reasonable to break old code on the basis of a notional
security issue when the de-facto standard web client
behaviour is to do the redirect.  In reality, the only
"security" value of the original prescriptive rule was as a
convention to be followed by white-hat web programmers and
web client implementors to help users avoid unintentionally
re-submitting non-idempotent requests.  Since that
convention is NOT followed in the real world (lynx doesn't
count as the real world ;-), I see no value in sticking
rigidly to the original RFC spec -- especially when 2616
even provides 307 precisely in response to this problem. 
Other web client libraries, for example libwww-perl and Java
HTTPClient, do the same as Python here IIRC.  RFC 2616
section 10.3.4 even suggests web programmers use 302 to get
the behaviour you complain about!

The only doubtful case here is 301.  A decision was made on
the default behaviour in that case back when the tracker
item I pointed you to was resolved.  I think it's a mistake
to change our minds again on that default behaviour.


kxroberto.seek(nrBytes)
assert kxroberto.readline() == """\
To redirect POST as GET _while_ simply losing (!) the data
(and not appending it to the GET-URL) is most bad for a li

[ python-Bugs-1411097 ] httplib patch to make _read_chunked() more robust

2006-02-06 Thread SourceForge.net
Bugs item #1411097, was opened at 2006-01-20 20:26
Message generated for change (Comment added) made by jjlee
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1411097&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: John J Lee (jjlee)
Assigned to: Nobody/Anonymous (nobody)
Summary: httplib patch to make _read_chunked() more robust

Initial Comment:
To reproduce:

import urllib2
print urllib2.urlopen("http://66.117.37.13/").read()


The attached patch "fixes" the hang, but that patch is
not acceptable because it also removes the .readline()
and .readlines() methods on the response object
returned by urllib2.urlopen().

The patch seems to demonstrate that the problem is
caused by the (ab)use of socket._fileobject in
urllib2.AbstractHTTPHandler (I believe this hack was
introduced when urllib2 switched to using
httplib.HTTPConnection).

Not sure yet what the actual problem is...


--

>Comment By: John J Lee (jjlee)
Date: 2006-02-06 21:20

Message:
Logged In: YES 
user_id=261020

Please ignore last comment (posted to wrong tracker item).

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 21:18

Message:
Logged In: YES 
user_id=261020

Conservative or not, I see no utility in changing the
default, and several major harmful effects: old code breaks,
and people have to pore over the specs to figure out why
"urlopen() doesn't work".


--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 20:36

Message:
Logged In: YES 
user_id=261020

I missed the fact that, if the connection will not close at
the end of the transaction, the behaviour should not change
from what's currently in SVN (we should not assume that the
chunked response has ended unless we see the proper
terminating CRLF).  I intend to upload a slightly modified
patch that tests for self._will_close, and behaves accordingly.


--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 01:24

Message:
Logged In: YES 
user_id=261020

Oops, fixed chunk.patch to .strip() before comparing to "".

--

Comment By: John J Lee (jjlee)
Date: 2006-02-06 00:38

Message:
Logged In: YES 
user_id=261020

First, expanding a bit on what I wrote on 2006-01-21: The
problem does relate to chunked encoding, and is unrelated to
urllib2's use of _fileobject.  My hack to remove use of
socket._fileobject from urllib2 merely breaks handling of
chunked encoding by cutting httplib.HTTPResponse out of the
picture.  The problem is seen in urllib2 in recent Pythons
thanks to urllib2 switching to use of httplib.HTTPConnection
and HTTP/1.1, hence chunked encoding is allowed.  urllib
still uses httplib.HTTP, hence HTTP/1.0, so is unaffected.
To reproduce with httplib:

import httplib
conn = httplib.HTTPConnection("66.117.37.13")
conn.request("GET", "/", headers={"Connection": "close"})
r1 = conn.getresponse()
print r1.read()

The Connection: close is required -- if it's not there the
server doesn't use chunked transfer-encoding.
I verified with a packet sniffer that the problem is that
this server does not send the final trailing CRLF required
by section 3.6.1 of RFC 2616.  However, that section also
says that trailers (trailing HTTP headers) MUST NOT be sent
by the server unless either a TE header was present and
indicated that trailers are acceptable (httplib does not
send the TE header), or the trailers are optional metadata
and may be discarded by the client.  So, I propose the
attached patch to httplib (chunk.patch) as a work-around.


--

Comment By: John J Lee (jjlee)
Date: 2006-01-21 22:10

Message:
Logged In: YES 
user_id=261020

In fact the commit message for rev 36871 states the real
reason _fileobject is used (handling chunked encoding),
showing my workaround is even more harmful than I thought. 
Moreover, doing a urlopen on 66.117.37.13 shows the response
*is* chunked.

The problem seems to be caused by httplib failing to find a
CRLF at the end of the chunked response, so the loop at the
end of _read_chunked() never terminates.  Haven't looked in
detail yet, but I'm guessing a) it's the server's fault and
b) httplib should work around it.


Here's the commit message from 36871:


Fix urllib2.urlopen() handling of chunked content encoding.

The change to use the newer httplib interface admitted the
possibility that we'd get an HTTP/1.1 chunked response, but the
code didn't handle it correctly.  The raw socket object can't be 

[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 11:44
Message generated for change (Comment added) made by effbot
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the loop of the
class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should have a
touch executable somewhere in your path for this to work).  Can
somebody shed any light on this please?

Thanks in advance

Joram Agten



--

>Comment By: Fredrik Lundh (effbot)
Date: 2006-02-06 22:50

Message:
Logged In: YES 
user_id=38376

If Python gives you a permission error, that's because
Windows cannot remove the file.  Windows does, in general,
not allow you to remove files that are held open by some
process.

I suggest taking this issue to comp.lang.python.  The bug
tracker is not the right place for code review and other
support issues.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1424171 ] patch for etree cdata and attr quoting

2006-02-06 Thread SourceForge.net
Bugs item #1424171, was opened at 2006-02-04 19:23
Message generated for change (Comment added) made by effbot
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424171&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: XML
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Chris McDonough (chrism)
Assigned to: Fredrik Lundh (effbot)
Summary: patch for etree cdata and attr quoting

Initial Comment:
Attached is a patch for ElementTree (based on a checkout from the SVN 
trunk's xmlcore.etree) that seems to perform better escaping of cdata and 
attribute values.  Instead of replacing, for example, "&quot;" with
"&amp;quot;" or "&amp;" with "&amp;amp;", it tries to avoid requoting
ampersands in things that look like entities.

Sorry, I haven't tested this with anything except Python 2.4, I'm not quite 
sure what to do about _encode_entity, and I haven't patched any tests or 
written a new one for this change.  Consider this more of a RFC than a 
patch ready-for-submission as a result.
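
The patch itself is only linked, not included here.  As a rough
illustration of the approach described (leave ampersands alone when
they already start something that looks like an entity), an
entity-aware escape might look like this sketch; the regular
expression and function name are assumptions, not the patch's code:

import re

# Matches a bare "&" that does not already begin an entity reference
# such as "&quot;" or "&#38;".
_bare_amp = re.compile(r"&(?!#?\w+;)")

def escape_cdata_keep_entities(text):
    text = _bare_amp.sub("&amp;", text)
    text = text.replace("<", "&lt;")
    text = text.replace(">", "&gt;")
    return text

print escape_cdata_keep_entities("a & b &amp; c")
# a &amp; b &amp; c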

--

>Comment By: Fredrik Lundh (effbot)
Date: 2006-02-06 22:55

Message:
Logged In: YES 
user_id=38376

I'm not sure I follow.  ET works on the infoset side of
things, where everything is decoded into Unicode strings (or
compatible ASCII strings).  If you set an attribute to
"&" in the infoset, it *must* be encoded on the way out.  
If you want an ampersand, use "&".
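
In other words, a small usage sketch (using the module's current
xml.etree location; the output shown in comments is indicative):
attribute values are plain strings on the infoset side, and the
serializer escapes them on the way out.

from xml.etree import ElementTree as ET

elem = ET.Element("tag", attr="&")   # a literal ampersand in the infoset
print ET.tostring(elem)
# <tag attr="&amp;" />

elem.set("attr", "&amp;")            # the five characters &, a, m, p, ;
print ET.tostring(elem)
# <tag attr="&amp;amp;" />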

--

Comment By: Chris McDonough (chrism)
Date: 2006-02-04 21:23

Message:
Logged In: YES 
user_id=32974

Egads, I did this time.

--

Comment By: Georg Brandl (birkenfeld)
Date: 2006-02-04 19:29

Message:
Logged In: YES 
user_id=1188172

OP: You did check the box?

--

Comment By: Chris McDonough (chrism)
Date: 2006-02-04 19:26

Message:
Logged In: YES 
user_id=32974

Sorry, the tracker doesn't seem to want to allow me to upload the file.  See 
http://www.plope.com/static/misc/betterescape.patch for the patch.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1424171&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 05:44
Message generated for change (Comment added) made by tim_one
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the loop of the
class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should have a
touch executable somewhere in your path for this to work).  Can
somebody shed any light on this please?

Thanks in advance

Joram Agten



--

>Comment By: Tim Peters (tim_one)
Date: 2006-02-06 17:19

Message:
Logged In: YES 
user_id=31435

The problem is that there's no reason to believe anything he
did here _does_ leave files open.  I can confirm the
"permission denied" symptom, and even if I arrange for the
call to "touch" to run a touch.bat that doesn't even look at
the filename passed to it (let alone open or modify the file).

I also see a large number of errors of this sort:

Exception in thread Thread-8:
Traceback (most recent call last):
  File "C:\python24\lib\threading.py", line 442, in __bootstrap
self.run()
  File "osremove.py", line 21, in run
touch(filename)
  File "osremove.py", line 8, in touch
stdin=None, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
  File "C:\python24\lib\subprocess.py", line 490, in __init__
_cleanup()
  File "C:\python24\lib\subprocess.py", line 398, in _cleanup
inst.poll()
  File "C:\python24\lib\subprocess.py", line 739, in poll
_active.remove(self)
ValueError: list.remove(x): x not in list

Those are clearly due to subprocess.py internals on Windows,
where the poll() and wait() methods and the module internal
_cleanup() function aren't called in mutually threadsafe
ways.  _Those_ errors can be stopped by commenting out the
_cleanup() call at the start of Popen.__init__() (think
about what happens when multiple threads call _cleanup() at
overlapping times on Windows:  all those threads can end up
trying to remove the same items from _active, but only one
thread per item can succeed).

The "permission denied" errors persist, though.

So there's at least one class of subprocess.py Windows bugs
here, and another class of Windows mysteries.  I believe
subprocess.py is a red herring wrt the latter, though.  For
example, I see much the same if I use os.system() to run
`touch` instead.
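
Not part of the report, but a hedged sketch of a user-side workaround
suggested by that analysis: serialise Popen creation (and hence the
internal _cleanup() call) across threads with a single lock.

import subprocess
import threading

_popen_lock = threading.Lock()

def locked_popen(*args, **kwargs):
    # Popen.__init__ calls the module-internal _cleanup(), which is not
    # thread-safe on Windows; holding a lock around construction avoids
    # the concurrent _active.remove() calls described above.
    _popen_lock.acquire()
    try:
        return subprocess.Popen(*args, **kwargs)
    finally:
        _popen_lock.release()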

--

Comment By: Fredrik Lundh (effbot)
Date: 2006-02-06 16:50

Message:
Logged In: YES 
user_id=38376

If Python gives you a permission error, that's because
Windows cannot remove the file.  Windows does, in general,
not allow you to remove files that are held open by some
process.

I suggest taking this issue to comp.lang.python.  The bug
tracker is not the right place for code review and other
support issues.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1421696 ] http response dictionary incomplete

2006-02-06 Thread SourceForge.net
Bugs item #1421696, was opened at 2006-02-01 12:56
Message generated for change (Comment added) made by jimjjewett
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1421696&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Jim Jewett (jimjjewett)
Assigned to: Nobody/Anonymous (nobody)
Summary: http response dictionary incomplete

Initial Comment:
httplib and BaseHTTPServer each maintain their own copy 
of possible response codes.

They don't agree.

It looks like the one in httplib is a superset of the 
one in BaseHTTPServer.BaseHTTPRequestHandler.responses, 
and httplib is the logical place for it, but

(1)  They map in opposite directions.

(2)  The httplib version is just a bunch of names at 
the module toplevel, with no particular grouping that 
separates them from random classes, or makes them easy 
to import as a group.

(3)  The httplib names are explicitly not exported.
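
For concreteness, a small sketch of the two mappings as they stand
(the output shown in comments is indicative): httplib exposes the
codes as module-level integer names, while BaseHTTPServer maps the
integer to a (short message, long message) pair.

import httplib
import BaseHTTPServer

# httplib: symbolic name -> integer, as module-level constants
print httplib.NOT_FOUND
# 404

# BaseHTTPServer: integer -> (short message, long message)
print BaseHTTPServer.BaseHTTPRequestHandler.responses[404]
# e.g. ('Not Found', 'Nothing matches the given URI')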



--

>Comment By: Jim Jewett (jimjjewett)
Date: 2006-02-06 17:39

Message:
Logged In: YES 
user_id=764593

That may make the cleanup more urgent.  The mapping in 
urllib2 is new with 2.5, so it should still be fine to 
remove it, or forward to httplib.

The mapping in httplib is explicitly not exported, as there 
is an __all__ which excludes them, so it *should* be 
legitimate to remove them in a new release.

BaseHTTPServer places the mapping as a class attribute on a 
public class.  Therefore, either the final location has to 
include both the message and the long message (so that 
BaseHTTPServer can import it and delegate), or this has to 
be the final location, or we can't at best get down to two.

--

Comment By: John J Lee (jjlee)
Date: 2006-02-05 19:56

Message:
Logged In: YES 
user_id=261020

There's also one in urllib2 :-(

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1421696&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 10:44
Message generated for change (Comment added) made by atila-cheops
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the
loop of the class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should
have an exe touch somewhere in your path for this to
work).  Can somebody shed any light on this, please?

Thanks in advance

Joram Agten
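
The program referred to above is not included in this archive.  The sketch
below is a hypothetical reconstruction, pieced together from the tracebacks
quoted in the comments below (tempfile plus a touch() helper built on
subprocess, with os.remove() running in several threads); it is not the
submitter's actual code, and the thread and iteration counts are arbitrary:

    import os
    import subprocess
    import tempfile
    import threading

    def touch(filename):
        # Run an external `touch` (or touch.bat) on the file and wait for it,
        # mirroring the keyword arguments visible in the traceback below.
        p = subprocess.Popen(["touch", filename],
                             stdin=None, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        p.communicate()

    class Worker(threading.Thread):
        def run(self):
            for _ in range(100):
                fd, filename = tempfile.mkstemp()
                os.write(fd, "test")
                os.close(fd)
                touch(filename)
                os.remove(filename)   # the call that fails intermittently

    if __name__ == "__main__":
        workers = [Worker() for _ in range(10)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()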



--

>Comment By: cheops (atila-cheops)
Date: 2006-02-06 22:53

Message:
Logged In: YES 
user_id=1276121

I did post on the python mailing list first, but got no 
response there.  After looking into it further, I seriously 
think there is at least one bug here.

here is the link to the post: 
http://mail.python.org/pipermail/python-list/2006-
February/323650.html

--

Comment By: Tim Peters (tim_one)
Date: 2006-02-06 22:19

Message:
Logged In: YES 
user_id=31435

The problem is that there's no reason to believe anything he
did here _does_ leave files open.  I can confirm the
"permission denied" symptom even if I arrange for the
call to "touch" to run a touch.bat that doesn't even look at
the filename passed to it (let alone open or modify the file).

I also see a large number of errors of this sort:

Exception in thread Thread-8:
Traceback (most recent call last):
  File "C:\python24\lib\threading.py", line 442, in __bootstrap
self.run()
  File "osremove.py", line 21, in run
touch(filename)
  File "osremove.py", line 8, in touch
stdin=None, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
  File "C:\python24\lib\subprocess.py", line 490, in __init__
_cleanup()
  File "C:\python24\lib\subprocess.py", line 398, in _cleanup
inst.poll()
  File "C:\python24\lib\subprocess.py", line 739, in poll
_active.remove(self)
ValueError: list.remove(x): x not in list

Those are clearly due to subprocess.py internals on Windows,
where the poll() and wait() methods and the module internal
_cleanup() function aren't called in mutually threadsafe
ways.  _Those_ errors can be stopped by commenting out the
_cleanup() call at the start of Popen.__init__() (think
about what happens when multiple threads call _cleanup() at
overlapping times on Windows:  all those threads can end up
trying to remove the same items from _active, but only one
thread per item can succeed).
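
A minimal, self-contained sketch of the unsynchronized pattern described
above (simplified stand-ins, not the real subprocess.py internals), together
with the sort of lock that would close the race:

    import threading

    _active = []                      # stand-in for subprocess._active
    _active_lock = threading.Lock()   # a guard of the sort that would help

    class _FakeChild(object):
        # Stand-in for a finished Popen object.
        def poll(self):
            # Mirrors the traceback above: an unconditional remove raises
            # ValueError if another thread's cleanup removed this item first.
            _active.remove(self)
            return 0

    def cleanup_unsafe():
        # Two threads can both copy _active while it still holds the same
        # child, both call poll(), and the loser gets
        # "list.remove(x): x not in list".
        for inst in _active[:]:
            inst.poll()

    def cleanup_guarded():
        # Serializing the copy-and-remove prevents the duplicate remove.
        _active_lock.acquire()
        try:
            for inst in _active[:]:
                inst.poll()
        finally:
            _active_lock.release()

With the lock held, the second thread takes its copy of _active only after
the first thread has finished, so the duplicate remove never happens.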

The "permission denied" errors persist, though.

So there's at least one class of subprocess.py Windows bugs
here, and another class of Windows mysteries.  I believe
subprocess.py is a red herring wrt the latter, though.  For
example, I see much the same if I use os.system() to run
`touch` instead.

--

Comment By: Fredrik Lundh (effbot)
Date: 2006-02-06 21:50

Message:
Logged In: YES 
user_id=38376

If Python gives you a permission error, that's because
Windows cannot remove the file.  In general, Windows does not
allow you to remove files that are held open by some
process.

I suggest taking this issue to comp.lang.python.  The bug
tracker is not the right place for code review and other
support issues.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 11:44
Message generated for change (Comment added) made by effbot
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the
loop of the class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should
have an exe touch somewhere in your path for this to
work).  Can somebody shed any light on this, please?

Thanks in advance

Joram Agten



--

>Comment By: Fredrik Lundh (effbot)
Date: 2006-02-07 00:05

Message:
Logged In: YES 
user_id=38376

> The problem is that there's no reason to believe anything
> he did here _does_ leave files open.

Except that he's hitting the file system quite heavily, and
asynchronously.  My guess is that Windows simply gets behind
(a quick filemon test indicates that this is indeed the
case; just before a crash, I see the events CREATE/SUCCESS,
QUERY/SUCCESS, QUERY/SUCCESS, WRITE/SUCCESS, and
OPEN/SHARING VIOLATION for the failing file, with lots of
requests for other files interleaved).

Unless someone wants to fix Windows, a simple workaround
would be to retry the os.remove a few times before giving up
(with a time.sleep(0.1) in between).
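
For reference, a small sketch of that workaround (the retry count and delay
are arbitrary choices, not values taken from this report):

    import os
    import time

    def remove_with_retries(filename, attempts=5, delay=0.1):
        # Retry os.remove() a few times, sleeping briefly between attempts,
        # to give Windows time to let go of the file.
        for i in range(attempts):
            try:
                os.remove(filename)
                return
            except OSError:
                if i == attempts - 1:
                    raise          # still failing after the final attempt
                time.sleep(delay)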

--

Comment By: cheops (atila-cheops)
Date: 2006-02-06 23:53

Message:
Logged In: YES 
user_id=1276121

I did post on the python mailing list first, but got no 
response there.  After looking into it further, I seriously 
think there is at least one bug here.

here is the link to the post: 
http://mail.python.org/pipermail/python-list/2006-
February/323650.html

--

Comment By: Tim Peters (tim_one)
Date: 2006-02-06 23:19

Message:
Logged In: YES 
user_id=31435

The problem is that there's no reason to believe anything he
did here _does_ leave files open.  I can confirm the
"permission denied" symptom even if I arrange for the
call to "touch" to run a touch.bat that doesn't even look at
the filename passed to it (let alone open or modify the file).

I also see a large number of errors of this sort:

Exception in thread Thread-8:
Traceback (most recent call last):
  File "C:\python24\lib\threading.py", line 442, in __bootstrap
self.run()
  File "osremove.py", line 21, in run
touch(filename)
  File "osremove.py", line 8, in touch
stdin=None, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
  File "C:\python24\lib\subprocess.py", line 490, in __init__
_cleanup()
  File "C:\python24\lib\subprocess.py", line 398, in _cleanup
inst.poll()
  File "C:\python24\lib\subprocess.py", line 739, in poll
_active.remove(self)
ValueError: list.remove(x): x not in list

Those are clearly due to subprocess.py internals on Windows,
where the poll() and wait() methods and the module internal
_cleanup() function aren't called in mutually threadsafe
ways.  _Those_ errors can be stopped by commenting out the
_cleanup() call at the start of Popen.__init__() (think
about what happens when multiple threads call _cleanup() at
overlapping times on Windows:  all those threads can end up
trying to remove the same items from _active, but only one
thread per item can succeed).

The "permission denied" errors persist, though.

So there's at least one class of subprocess.py Windows bugs
here, and another class of Windows mysteries.  I believe
subprocess.py is a red herring wrt the latter, though.  For
example, I see much the same if I use os.system() to run
`touch` instead.

--

Comment By: Fredrik Lundh (effbot)
Date: 2006-02-06 22:50

Message:
Logged In: YES 
user_id=38376

If Python gives you a permission error, that's because
Windows cannot remove the file.  In general, Windows does not
allow you to remove files that are held open by some
process.

I suggest taking this issue to comp.lang.python.  The bug
tracker is not the right place for code review and other
support issues.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com

[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 05:44
Message generated for change (Comment added) made by tim_one
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the
loop of the class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should
have an exe touch somewhere in your path for this to
work).  Can somebody shed any light on this, please?

Thanks in advance

Joram Agten



--

>Comment By: Tim Peters (tim_one)
Date: 2006-02-06 23:07

Message:
Logged In: YES 
user_id=31435

[/F]
> Except that he's hitting the file system quite heavily,

Except that _without_ the call to touch(), he's hitting it
even more heavily, creating and destroying little files just
as fast as the OS can do it in each of 10 threads -- but
there aren't any errors then.

> and asynchronously.

What's asynch here?  The OP's touch() function waits for the
spawned process to terminate, and the test driver doesn't
try to delete the file until after that.

> My guess is that Windows simply gets behind
> (a quick filemon test indicates that this is indeed the
> case; just before a crash, I see the events
> CREATE/SUCCESS, QUERY/SUCCESS, QUERY/SUCCESS,
> WRITE/SUCCESS, and OPEN/SHARING VIOLATION for the
> failing file, with lots of requests for other files
> interleaved).

That's consistent with the symptom reported:  an exception
raised upon trying to remove the file, but not during any
other file operation.  Does it tell you more than _just_
that?  It doesn't for me.

> Unless someone wants to fix Windows,

As above, because removing the call to the internal `touch`
function makes all problems go away, it's not obvious that
this is a Windows problem.

> a simple workaround would be to retry the os.remove a
> few times before giving up (with a time.sleep(0.1) in
> between).

Because of the internal threading errors in subprocess.py
(see my first comment), the threads in the test program
still usually die, but with instances of list.remove(x)
ValueErrors internal to subprocess.py.

If I hack around that, then this change to the test
program's file-removal code appears adequate to eliminate
all errors on my box (which is a zippy 3.4 GHz):

try:
os.remove(filename)
except OSError:
time.sleep(0.1)
os.remove(filename)

It's possible that some virus-scanning or file-indexing
gimmick on my box is opening these little files for its own
purposes -- although, if so, I'm at a loss to account for
why a single "os.remove(filename)" never raises an exception
when the `touch()` call is commented out.

OTOH, with the `touch()` call intact, the time.sleep(0.1)
above is not adequate to prevent os.remove() errors if I
change the file-writing code to:

f.write("test" * 25)

Even boosting the sleep() to 0.4 isn't enough then.

That does (mildly) suggest there's another process opening
the temp files, and doing something with them that takes
time proportional to the file size.  However, the
os.remove() errors persist when I disable all such gimmicks
(that I know about ;-)) on my box.

It seems likely I'll never determine a cause for that.  The
bad thread behavior in subprocess.py is independent, and
should be repaired regardless.

--

Comment By: Fredrik Lundh (effbot)
Date: 2006-02-06 18:05

Message:
Logged In: YES 
user_id=38376

> The problem is that there's no reason to believe anything
> he did here _does_ leave files open.

Except that he's hitting the file system quite heavily, and
asynchronously.  My guess is that Windows simply gets behind
(a quick filemon test indicates that this is indeed the
case; just before a crash, I see the events CREATE/SUCCESS,
QUERY/SUCCESS, QUERY/SUCCESS, WRITE/SUCCESS, and
OPEN/SHARING VIOLATION for the failing file, with lots of
requests for other files interleaved).

Unless someone wants to fix Windows, a simple workaround
would be to retry the os.remove a few times before giving up
(with a time.sleep(0.1) in between).


[ python-Bugs-876637 ] Random stack corruption from socketmodule.c

2006-02-06 Thread SourceForge.net
Bugs item #876637, was opened at 2004-01-13 22:41
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=876637&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
>Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Mike Pall (mikesfpy)
>Assigned to: Neal Norwitz (nnorwitz)
Summary: Random stack corruption from socketmodule.c 

Initial Comment:
THE PROBLEM:

The implementation of the socket_object.settimeout() method
(socketmodule.c, function internal_select()) uses the select() system
call with an unbounded file descriptor number. This will cause random
stack corruption if fd>=FD_SETSIZE.

This took me ages to track down!  It happened with a massively
multithreaded and massively connection-swamped network server.
Basically most of the descriptors did not use that routine (because
they were either pure blocking or pure non-blocking).  But one module
used settimeout() and with a little bit of luck got an fd>=FD_SETSIZE
and with even more luck corrupted the stack and took down the whole
server process.

Demonstration script appended.

THE SOLUTION:

The solution is to use poll() and to favour poll() even if select()
is available on a platform. The current trend in modern OS+libc
combinations is to emulate select() in libc and call kernel-level poll()
anyway. And this emulation is costly (both for the caller and for libc).

Not so the other way round (only some systems of historical interest
do that BTW), so we definitely want to use poll() if it's available
(even if it's an emulation).

And if select() is your only choice, then check for FD_SETSIZE before
using the FD_SET macro (and raise some strange exception if that fails).

[
I should note that using SO_RCVTIMEO and SO_SNDTIMEO would be a lot
more efficient (kernel-wise at least).  Unfortunately they are not
universally available (though defined by most system header files).
But a simple runtime test with a fallback to poll()/select() would do.
]

A PATCH, A PATCH?

Well, the check for FD_SETSIZE is left as an exercise for the reader. :-)
Don't forget to merge this with the stray select() way down by adding 
a return value to internal_select().

But yes, I can do a 'real' patch with poll() [and even one with the
SO_RCVTIMEO trick if you are adventurous]. But, I can't test it with
dozens of platforms, various include files, compilers and so on.

So, dear Python core developers: Please discuss this and tell me,
if you want a patch, then you'll get one ASAP.

Thank you for your time!
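
A minimal Python-level sketch (not the demonstration script mentioned above,
and not the proposed C patch) of why poll() sidesteps the FD_SETSIZE problem:
select.select() can only safely handle descriptors below FD_SETSIZE, while
select.poll(), where available, accepts any descriptor number:

    import select
    import socket

    def wait_readable(sock, timeout):
        # Wait until `sock` is readable, preferring poll() where available.
        if hasattr(select, "poll"):
            p = select.poll()
            p.register(sock.fileno(), select.POLLIN)
            return bool(p.poll(int(timeout * 1000)))  # poll() wants milliseconds
        # select() fallback: only safe while sock.fileno() < FD_SETSIZE.
        readable, _, _ = select.select([sock], [], [], timeout)
        return bool(readable)

    if __name__ == "__main__":
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))
        s.listen(1)
        # Nothing is connecting, so this times out and prints False.
        print wait_readable(s, 0.5)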


--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-02-06 23:18

Message:
Logged In: YES 
user_id=33168

Thanks!

Committed revision 42253.
Committed revision 42254. (2.4)


--

Comment By: Troels Walsted Hansen (troels)
Date: 2004-06-10 04:15

Message:
Logged In: YES 
user_id=32863

I have created a patch to make socketmodule use poll() when
available. See http://python.org/sf/970288

(I'm not allowed to attach patches to this bug item.)


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=876637&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1425127 ] os.remove OSError: [Errno 13] Permission denied

2006-02-06 Thread SourceForge.net
Bugs item #1425127, was opened at 2006-02-06 11:44
Message generated for change (Comment added) made by effbot
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1425127&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove OSError: [Errno 13] Permission denied

Initial Comment:
When running the following program I get frequent
errors like this one

Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in
__bootstrap
self.run()
  File "os.remove.py", line 25, in run
os.remove(filename)
OSError: [Errno 13] Permission denied:
'c:\\docume~1\\joag\\locals~1\\temp\\tmpx91tkx'

When leaving out the touch statement (line 24) in the
loop of the class, I do not get any errors.
This is on Windows XP SP2 with python-2.4.2 (you should
have an exe touch somewhere in your path for this to
work).  Can somebody shed any light on this, please?

Thanks in advance

Joram Agten



--

>Comment By: Fredrik Lundh (effbot)
Date: 2006-02-07 08:59

Message:
Logged In: YES 
user_id=38376

"Does it tell you more than _just_ that?  It doesn't for me."

All requests against the file in question were issued by the
python process; there's no sign of virus checkers or other
external applications.

Also, whenever things failed, there were always multiple
requests for cmd.exe (caused by os.system) between the WRITE
request and the failing OPEN request.
 
My feeling, after staring at filemon output, is that this is a
problem in the Windows file I/O layer.  NTFS queues the
various operations, and calling an external process with
stuff still in the queue messes up the request scheduling.

--

Comment By: Tim Peters (tim_one)
Date: 2006-02-07 05:07

Message:
Logged In: YES 
user_id=31435

[/F]
> Except that he's hitting the file system quite heavily,

Except that _without_ the call to touch(), he's hitting it
even more heavily, creating and destroying little files just
as fast as the OS can do it in each of 10 threads -- but
there aren't any errors then.

> and asynchronously.

What's asynch here?  The OP's touch() function waits for the
spawned process to terminate, and the test driver doesn't
try to delete the file until after that.

> My guess is that Windows simply gets behind
> (a quick filemon test indicates that this is indeed the
> case; just before a crash, I see the events
> CREATE/SUCCESS, QUERY/SUCCESS, QUERY/SUCCESS,
> WRITE/SUCCESS, and OPEN/SHARING VIOLATION for the
> failing file, with lots of requests for other files
> interleaved).

That's consistent with the symptom reported:  an exception
raised upon trying to remove the file, but not during any
other file operation.  Does it tell you more than _just_
that?  It doesn't for me.

> Unless someone wants to fix Windows,

As above, because removing the call to the internal `touch`
function makes all problems go away, it's not obvious that
this is a Windows problem.

> a simple workaround would be to retry the os.remove a
> few times before giving up (with a time.sleep(0.1) in
> between).

Because of the internal threading errors in subprocess.py
(see my first comment), the threads in the test program
still usually die, but with instances of list.remove(x)
ValueErrors internal to subprocess.py.

If I hack around that, then this change to the test
program's file-removal code appears adequate to eliminate
all errors on my box (which is a zippy 3.4 GHz):

try:
os.remove(filename)
except OSError:
time.sleep(0.1)
os.remove(filename)

It's possible that some virus-scanning or file-indexing
gimmick on my box is opening these little files for its own
purposes -- although, if so, I'm at a loss to account for
why a single "os.remove(filename)" never raises an exception
when the `touch()` call is commented out.

OTOH, with the `touch()` call intact, the time.sleep(0.1)
above is not adequate to prevent os.remove() errors if I
change the file-writing code to:

f.write("test" * 25)

Even boosting the sleep() to 0.4 isn't enough then.

That does (mildly) suggest there's another process opening
the temp files, and doing something with them that takes
time proportional to the file size.  However, the
os.remove() errors persist when I disable all such gimmicks
(that I know about ;-)) on my box.

It seems likely I'll never determine a cause for that.  The
bad thread behavior in subprocess.py is independent, and
should be repaired regardless.

--