[ python-Bugs-1585690 ] csv.reader.line_num missing 'new in 2.5'
Bugs item #1585690, was opened at 2006-10-27 10:14
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1585690&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Kent Johnson (kjohnson)
Assigned to: Nobody/Anonymous (nobody)
Summary: csv.reader.line_num missing 'new in 2.5'
Initial Comment:
In this page: http://docs.python.org/lib/node265.html
in the docs for csv.reader.line_num
It should be noted that this attribute is new in Python 2.5. Thanks!
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1585690&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
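[Editorial note] For context on the attribute this report is about, here is a small sketch using the csv module as it exists today; the sample data is invented for illustration:

```python
import csv
import io

# line_num (the attribute added in Python 2.5) tracks the number of
# physical lines the reader has consumed; it can exceed the record
# count when a quoted field spans multiple lines.
data = io.StringIO('a,b\n1,"multi\nline"\n')
reader = csv.reader(data)
rows = list(reader)          # two records ...
print(reader.line_num)       # ... but 3 physical lines were read
```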
[ python-Bugs-1570255 ] redirected cookies
Bugs item #1570255, was opened at 2006-10-03 16:37
Message generated for change (Comment added) made by akuchling
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1570255&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: hans_moleman (hans_moleman)
Assigned to: Nobody/Anonymous (nobody)
Summary: redirected cookies
Initial Comment:
Cookies are not resent when a redirect is requested.
Blurb: I've been trying to get a response off a server using Python. The
response so far differs from the response using Firefox. In Python, I have
set headers and cookies the way Firefox does it. I noticed that the server
accepts the POST request, and redirects the client to another address with
the result on it. This happens correctly both with Python and Firefox.
Cookie handling differs though: the Python client, when redirected using
the standard redirect handler, does not resend its cookies to the
redirected address. Firefox does resend the cookies from the original
request. When I redefine the redirect handler and code it so that it adds
the cookies from the original request, the response is the same as
Firefox's response. This confirms that resending cookies is required to
get the server to respond correctly.
Is the default Python redirection cookie policy different from Firefox's
policy? Could we improve the default redirection handler to work like
Firefox? Is it a bug? I noticed an old open bug report 511786, that looks
very much like this problem. It suggests it is fixed.
Cheers, Hans Moleman.
--
>Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 08:16
Message:
Logged In: YES
user_id=11375
Given the sensitive data in your script, it's certainly best not to post it.
You'll have to dig into urllib2 yourself, I think. Start by looking at the
code in redirect_request(), around line 520 of urllib2.py, and adding some
debug prints. Print out the contents of req.headers; is the cookie line in
there? Change the __init__ of AbstractHTTPHandler to default debuglevel to
1, not 0; this will print out all the HTTP lines being sent and received.
--
Comment By: hans_moleman (hans_moleman)
Date: 2006-10-27 00:20
Message:
Logged In: YES
user_id=1610873
I am using this script to obtain monthly internet usage statistics from my
ISP. My ISP provides a screen via HTTPS, to enter a usercode and password,
after which the usage statistics are displayed on a different address. I
cannot send this script with my usercode and password. My ISP might not
like me doing this either. Therefore I'll try to find another server that
behaves similarly, and send you that.
--
Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-26 16:16
Message:
Logged In: YES
user_id=11375
More detail is needed to figure out if there's a problem; can you give a
sample URL to exhibit the problem? Can you provide your code? From the
description, it's unclear whether this might be a bug in the handling of
redirects or in the CookieProcessor class.
The bug in 511786 is still fixed; that bug includes sample code, so I
could check it.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1570255&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
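[Editorial note] The behaviour under discussion can be checked directly with the cookie machinery that became urllib.request/http.cookiejar: an HTTPCookieProcessor does resend cookies across a redirect. A self-contained sketch against a throwaway local server (the server, cookie name, and paths are invented for illustration, not the submitter's ISP):

```python
import http.cookiejar
import http.server
import threading
import urllib.request

# Local test fixture: "/" sets a cookie and redirects to "/next";
# "/next" echoes back whatever Cookie header the client sent.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.send_response(302)
            self.send_header("Set-Cookie", "sid=abc123")
            self.send_header("Location", "/next")
            self.end_headers()
        else:
            body = self.headers.get("Cookie", "").encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The cookie processor extracts cookies from every response (including
# the 302) and attaches them to every request (including the redirect).
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
resent = opener.open("http://127.0.0.1:%d/" % server.server_port).read()
server.shutdown()
print(resent)  # b'sid=abc123': the cookie was resent across the redirect
```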
[ python-Bugs-1585690 ] csv.reader.line_num missing 'new in 2.5'
Bugs item #1585690, was opened at 2006-10-27 06:14
Message generated for change (Comment added) made by akuchling
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1585690&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Kent Johnson (kjohnson)
>Assigned to: A.M. Kuchling (akuchling)
Summary: csv.reader.line_num missing 'new in 2.5'
Initial Comment:
In this page: http://docs.python.org/lib/node265.html
in the docs for csv.reader.line_num
It should be noted that this attribute is new in Python 2.5. Thanks!
--
>Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 08:19
Message:
Logged In: YES
user_id=11375
Fixed in trunk and 25-maint. Thanks!
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1585690&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1580472 ] glob.glob("c:\\[ ]\*) doesn't work
Bugs item #1580472, was opened at 2006-10-19 11:44
Message generated for change (Comment added) made by potten
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1580472&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Koblaid (koblaid)
Assigned to: Nobody/Anonymous (nobody)
Summary: glob.glob("c:\\[ ]\*) doesn't work
Initial Comment:
OS: Windows 2000 Service Pack 4
Python 2.5
glob.glob() doesn't work in directories named
"[ ]" (with a blank in it). Another example is a
directory named "A - [Aa-Am]"
Example:
#
C:\>md []
C:\>md "[ ]"
C:\>copy anyfile.txt []
1 file(s) copied.
C:\>copy anyfile.txt "[ ]"
1 file(s) copied.
C:\>python
Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC
v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for
more information.
>>> import glob
>>> glob.glob ("c:\\[]\*")
['c:\\[]\\anyfile.txt']
>>> glob.glob ("c:\\[ ]\*")
[]
#
The second glob should have returned the same result as the
first glob, since I copied the same file to both
directories.
I may be wrong because I'm new to Python. But I've
tested it a couple of times, and I think it has to be
a bug in Python or a bug in Windows.
Greets, Koblaid
--
Comment By: Peter Otten (potten)
Date: 2006-10-27 12:32
Message:
Logged In: YES
user_id=703365
Not a bug. "[abc]" matches exactly one character which may be "a", "b" or "c".
Therefore "[ ]" matches one space character. If you want a literal "[", put it
in brackets, e.g. glob.glob("C:\\[[] ]\\*").
---
By the way, do you think this problem is common enough to warrant the
addition of an fnmatch.escape() function? I have something like this in mind:
>>> import re
>>> r = re.compile("(%s)" % "|".join(re.escape(c) for c in "*?["))
>>> def escape(s):
... return r.sub(r"[\1]", s)
...
>>> escape("c:\\[a-z]\\*")
'c:\\[[]a-z]\\[*]'
--
Comment By: Josiah Carlson (josiahcarlson)
Date: 2006-10-27 06:14
Message:
Logged In: YES
user_id=341410
This is a known issue with the fnmatch module (what glob
uses under the covers). According to the documentation of
the translate method that converts patterns into regular
expressions... "There is no way to quote meta-characters."
The fact that "[]" works but "[ ]" doesn't work is a
convenient bug, for those who want to use "[]".
If you can come up with some similar but non-ambiguous
syntax to update the fnmatch module, I'm sure it would be
considered, but as-is, I can't see this as a "bug" per se.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1580472&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
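[Editorial note] Peter Otten's escaping rule is easy to verify with the fnmatch module that glob uses under the covers; a small sketch:

```python
import fnmatch

# "[ ]" is a character class matching a single space, not the literal
# three-character name "[ ]"; escaping "[" as "[[]" restores a literal match.
assert fnmatch.fnmatch(" ", "[ ]")        # the class matches one space
assert not fnmatch.fnmatch("[ ]", "[ ]")  # so the literal name fails
assert fnmatch.fnmatch("[ ]", "[[] ]")    # "[[]" matches a literal "["
print("escaping rule confirmed")
```

Later CPython releases added glob.escape() (Python 3.4) for exactly this purpose.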
[ python-Bugs-1583946 ] SSL "issuer" and "server" names cannot be parsed
Bugs item #1583946, was opened at 2006-10-24 14:32
Message generated for change (Comment added) made by akuchling
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1583946&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: John Nagle (nagle)
Assigned to: Nobody/Anonymous (nobody)
Summary: SSL "issuer" and "server" names cannot be parsed
Initial Comment:
(Python 2.5 library)
The Python SSL object offers two methods for
obtaining the info from an SSL certificate, "server()"
and "issuer()". These return strings.
The actual values in the certificate are a series
of key /value pairs in ASN.1 binary format. But what
"server()" and "issuer()" return are single strings,
with the key/value pairs separated by "/".
However, "/" is a valid character in certificate
data. So parsing such strings is ambiguous, and
potentially exploitable.
This is more than a theoretical problem. The
issuer field of Verisign certificates has a "/" in the
middle of a text field:
"/O=VeriSign Trust Network/OU=VeriSign,
Inc./OU=VeriSign International Server CA - Class
3/OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY
LTD.(c)97 VeriSign".
Note the
"OU=Terms of use at www.verisign.com/rpa (c)00"
with a "/" in the middle of the value field. Oops.
Worse, this is potentially exploitable. By
ordering a low-level certificate with a "/" in the
right place, you can create the illusion (at least for
flawed implementations like this one) that the
certificate belongs to someone else. Just order a
certificate from GoDaddy, enter something like this in
the "Name" field
"Myphonyname/C=US/ST=California/L=San Jose/O=eBay
Inc./OU=Site Operations/CN=signin.ebay.com"
and Python code will be spoofed into thinking you're eBay.
Fortunately, browsers don't use Python code.
The actual bug is in
python/trunk/Modules/_ssl.c
at
if ((self->server_cert = SSL_get_peer_certificate(self->ssl))) {
    X509_NAME_oneline(X509_get_subject_name(self->server_cert),
                      self->server, X509_NAME_MAXLEN);
    X509_NAME_oneline(X509_get_issuer_name(self->server_cert),
                      self->issuer, X509_NAME_MAXLEN);
The X509_NAME_oneline() function takes an X509_NAME
structure, which is the certificate system's
representation of a list, and flattens it into a
printable string. This is a debug function, not one
for use in production code. The OpenSSL documentation for
X509_NAME_oneline() says:
"The functions X509_NAME_oneline() and
X509_NAME_print() are legacy functions which produce a
non standard output form, they don't handle multi
character fields and have various quirks and
inconsistencies. Their use is strongly discouraged in
new applications."
What OpenSSL callers are supposed to do is call
X509_NAME_entry_count() to get the number of entries in
an X509_NAME structure, then get each entry with
X509_NAME_get_entry(). A few more calls will obtain
the name/value pair from the entry, as UTF8 strings,
which should be converted to Python UNICODE strings.
OpenSSL has all the proper support, but Python's shim
doesn't interface to it correctly.
X509_NAME_oneline() doesn't handle Unicode; it converts
non-ASCII values to "\xnn" format. Again, it's for
debug output only.
So what's needed are two new functions for Python's SSL
sockets to replace "issuer" and "server". The new
functions should return lists of Unicode strings
representing the key/value pairs. (A list is needed,
not a dictionary; two strings with the same key
are both possible and common.)
The reason this now matters is that new "high
assurance" certs, the ones that tell you how much a
site can be trusted, are now being deployed, and to use
them effectively, you need that info. Support for them
is in Internet Explorer 7, so they're going to be
widespread soon. Python needs to catch up.
And, of course, this needs to be fixed as part of
Unicode support.
John Nagle
Animats
--
>Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 08:54
Message:
Logged In: YES
user_id=11375
I've reworded the description in the documentation to say something like
this: "Returns a string describing the issuer of the server's certificate.
Useful for debugging purposes; do not parse the content of this string
because its format can't be parsed unambiguously."
For adding new features: please submit a patch. Python's maintainers
probably don't use SSL in any sophisticated way and therefore have no
idea what shape better SSL/X.509 support would take.
--
Comment By: Martin v. Löwis (loewis)
Date: 20
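[Editorial note] What Nagle asks for (lists of key/value pairs rather than a "/"-joined string) resembles the structured form that later Python versions expose via SSLSocket.getpeercert(): a sequence of RDNs, each a tuple of (key, value) pairs. A sketch over made-up certificate data, showing why the structured form is unambiguous:

```python
# Structured issuer data, shaped like the tuples the later
# ssl.SSLSocket.getpeercert() API returns. All values below are
# invented samples, not a real certificate.
issuer = (
    (("organizationName", "VeriSign Trust Network"),),
    (("organizationalUnitName",
      "Terms of use at www.verisign.com/rpa (c)00"),),
    (("commonName", "Example CA"),),
)

# Flattening to a list of pairs is unambiguous even though one value
# embeds a "/", the very character that breaks the one-line string form.
pairs = [kv for rdn in issuer for kv in rdn]
slashed = [v for k, v in pairs if "/" in v]
print(slashed)  # the value survives intact, slash and all
```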
[ python-Bugs-1562583 ] asyncore.dispatcher.set_reuse_addr not documented.
Bugs item #1562583, was opened at 2006-09-20 21:21
Message generated for change (Comment added) made by akuchling
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1562583&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Noah Spurrier (noah)
>Assigned to: A.M. Kuchling (akuchling)
Summary: asyncore.dispatcher.set_reuse_addr not documented.
Initial Comment:
I could not find this in
http://docs.python.org/lib/module-asyncore.html
nor in http://docs.python.org/lib/genindex.html
--
>Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 09:07
Message:
Logged In: YES
user_id=11375
Added to the docs on the trunk, 25-maint, and 24-maint branches. Thanks
for your report!
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1562583&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
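[Editorial note] For readers wondering what the undocumented method does: set_reuse_addr() is a thin wrapper that sets SO_REUSEADDR on the dispatcher's underlying socket, so a listener can rebind an address still in TIME_WAIT. The plain-socket equivalent, as a sketch:

```python
import socket

# What asyncore.dispatcher.set_reuse_addr() boils down to: OR the
# SO_REUSEADDR flag into the socket's current option value.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR,
             s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) | 1)
reuse_enabled = bool(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))
s.close()
print(reuse_enabled)  # True
```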
[ python-Bugs-1542016 ] inconsistency in PCALL conditional code in ceval.c
Bugs item #1542016, was opened at 2006-08-17 10:21
Message generated for change (Comment added) made by akuchling
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1542016&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Santiago Gala (sgala)
>Assigned to: A.M. Kuchling (akuchling)
Summary: inconsistency in PCALL conditional code in ceval.c
Initial Comment:
While there are macros to profile PCALL_POP, the
reporting of it via sys.callstats() is broken.
This patch solves it.
Index: Python/ceval.c
===================================================================
--- Python/ceval.c (revision 51339)
+++ Python/ceval.c (working copy)
@@ -186,10 +186,11 @@
 PyObject *
 PyEval_GetCallStats(PyObject *self)
 {
-	return Py_BuildValue("iiiiiiiiii",
+	return Py_BuildValue("iiiiiiiiiii",
 			     pcall[0], pcall[1],
 			     pcall[2], pcall[3],
 			     pcall[4], pcall[5],
 			     pcall[6], pcall[7],
-			     pcall[8], pcall[9]);
+			     pcall[8], pcall[9],
+			     pcall[10]);
 }
#else
#define PCALL(O)
--
>Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 09:36
Message:
Logged In: YES
user_id=11375
Committed to the trunk in rev. 52469, to 25-maint in rev.
52470, and to 24-maint in rev. 52472. Thanks for your bug
report!
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1542016&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1580472 ] glob.glob("c:\\[ ]\*) doesn't work
Bugs item #1580472, was opened at 2006-10-19 11:44
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1580472&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Invalid
Priority: 5
Private: No
Submitted By: Koblaid (koblaid)
Assigned to: Nobody/Anonymous (nobody)
Summary: glob.glob("c:\\[ ]\*) doesn't work
Initial Comment:
OS: Windows 2000 Service Pack 4
Python 2.5
glob.glob() doesn't work in directories named
"[ ]" (with a blank in it). Another example is a
directory named "A - [Aa-Am]"
Example:
#
C:\>md []
C:\>md "[ ]"
C:\>copy anyfile.txt []
1 file(s) copied.
C:\>copy anyfile.txt "[ ]"
1 file(s) copied.
C:\>python
Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC
v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for
more information.
>>> import glob
>>> glob.glob ("c:\\[]\*")
['c:\\[]\\anyfile.txt']
>>> glob.glob ("c:\\[ ]\*")
[]
#
The second glob should have returned the same result as the
first glob, since I copied the same file to both
directories.
I may be wrong because I'm new to Python. But I've
tested it a couple of times, and I think it has to be
a bug in Python or a bug in Windows.
Greets, Koblaid
--
>Comment By: Georg Brandl (gbrandl)
Date: 2006-10-27 14:01
Message:
Logged In: YES
user_id=849994
Not a bug, as Peter said.
--
Comment By: Peter Otten (potten)
Date: 2006-10-27 12:32
Message:
Logged In: YES
user_id=703365
Not a bug. "[abc]" matches exactly one character which may be "a", "b" or "c".
Therefore "[ ]" matches one space character. If you want a literal "[", put it
in brackets, e.g. glob.glob("C:\\[[] ]\\*").
---
By the way, do you think this problem is common enough to warrant the
addition of an fnmatch.escape() function? I have something like this in mind:
>>> import re
>>> r = re.compile("(%s)" % "|".join(re.escape(c) for c in "*?["))
>>> def escape(s):
... return r.sub(r"[\1]", s)
...
>>> escape("c:\\[a-z]\\*")
'c:\\[[]a-z]\\[*]'
--
Comment By: Josiah Carlson (josiahcarlson)
Date: 2006-10-27 06:14
Message:
Logged In: YES
user_id=341410
This is a known issue with the fnmatch module (what glob
uses under the covers). According to the documentation of
the translate method that converts patterns into regular
expressions... "There is no way to quote meta-characters."
The fact that "[]" works but "[ ]" doesn't work is a
convenient bug, for those who want to use "[]".
If you can come up with some similar but non-ambiguous
syntax to update the fnmatch module, I'm sure it would be
considered, but as-is, I can't see this as a "bug" per se.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1580472&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1576241 ] functools.wraps fails on builtins
Bugs item #1576241, was opened at 2006-10-13 08:24
Message generated for change (Comment added) made by ncoghlan
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1576241&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: kajiuma (kajiuma)
Assigned to: Nick Coghlan (ncoghlan)
Summary: functools.wraps fails on builtins
Initial Comment:
functools.wraps assumes that the wrapped function
has a __dict__ attribute, which is not true for
builtins.
The attached patch provides an empty dictionary
for functions that do not have the required
attributes. This will cause programs expecting an
AttributeError (if there are any) to fail.
--
>Comment By: Nick Coghlan (ncoghlan)
Date: 2006-10-28 02:07
Message:
Logged In: YES
user_id=1038590
I was mainly considering the decorator use case when I wrote
the function, so the idea of a wrapped function without a
dict attribute didn't occur to me (obviously!).
So definitely fix it on the trunk, and I'd say backport it
to 2.5 as well. My reasoning regarding the latter is that
the example code in the documentation for functools.wraps is
actually buggy with the current behaviour. With this bug
fixed, the documentation example will work as intended.
--
Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 05:18
Message:
Logged In: YES
user_id=11375
The change seems reasonable, but arguably this is an API
change because of the AttributeError no longer being raised.
Nick, do you want to decide whether to make this change or
not? (I can make the edit and add a test if you agree to
apply this change.)
--
Comment By: kajiuma (kajiuma)
Date: 2006-10-13 08:33
Message:
Logged In: YES
user_id=1619773
Looks like lynx cannot send files.
The patch changed: getattr(wrapped, attr)
to: getattr(wrapped, attr, {})
at the end of line 35 of Lib/functools.py.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1576241&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
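[Editorial note] The fix can be sanity-checked on a current Python, where wrapping a builtin works because the missing __dict__ defaults to an empty mapping; a minimal sketch:

```python
import functools

# Before the fix, functools.wraps raised AttributeError here because
# builtins such as len have no __dict__ for update_wrapper to copy.
@functools.wraps(len)
def counted_len(obj):
    return len(obj)

print(counted_len.__name__)  # 'len': metadata copied from the builtin
```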
[ python-Bugs-1576241 ] functools.wraps fails on builtins
Bugs item #1576241, was opened at 2006-10-12 18:24
Message generated for change (Settings changed) made by akuchling
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1576241&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: kajiuma (kajiuma)
>Assigned to: A.M. Kuchling (akuchling)
Summary: functools.wraps fails on builtins
Initial Comment:
functools.wraps assumes that the wrapped function
has a __dict__ attribute, which is not true for
builtins.
The attached patch provides an empty dictionary
for functions that do not have the required
attributes. This will cause programs expecting an
AttributeError (if there are any) to fail.
--
>Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-27 12:42
Message:
Logged In: YES
user_id=11375
Committed to trunk (rev.52476) and 25-maint (rev. 52477).
thanks for your patch!
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-10-27 12:07
Message:
Logged In: YES
user_id=1038590
I was mainly considering the decorator use case when I wrote
the function, so the idea of a wrapped function without a
dict attribute didn't occur to me (obviously!).
So definitely fix it on the trunk, and I'd say backport it
to 2.5 as well. My reasoning regarding the latter is that
the example code in the documentation for functools.wraps is
actually buggy with the current behaviour. With this bug
fixed, the documentation example will work as intended.
--
Comment By: A.M. Kuchling (akuchling)
Date: 2006-10-26 15:18
Message:
Logged In: YES
user_id=11375
The change seems reasonable, but arguably this is an API
change because of the AttributeError no longer being raised.
Nick, do you want to decide whether to make this change or
not? (I can make the edit and add a test if you agree to
apply this change.)
--
Comment By: kajiuma (kajiuma)
Date: 2006-10-12 18:33
Message:
Logged In: YES
user_id=1619773
Looks like lynx cannot send files.
The patch changed: getattr(wrapped, attr)
to: getattr(wrapped, attr, {})
at the end of line 35 of Lib/functools.py.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1576241&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1576174 ] str(WindowsError) wrong
Bugs item #1576174, was opened at 2006-10-12 22:12
Message generated for change (Comment added) made by theller
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1576174&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: Python 2.5
>Status: Closed
Resolution: Accepted
Priority: 5
Private: No
Submitted By: Thomas Heller (theller)
Assigned to: Thomas Heller (theller)
Summary: str(WindowsError) wrong
Initial Comment:
str(WindowsError(1001, 'a message')) in Python 2.5 gives
'[Error 22] a message'. The attached patch with test fixes this.
--
>Comment By: Thomas Heller (theller)
Date: 2006-10-27 21:17
Message:
Logged In: YES
user_id=11105
Committed as rev 52485 (trunk) and 52486 (release25-maint).
--
Comment By: Martin v. Löwis (loewis)
Date: 2006-10-21 11:53
Message:
Logged In: YES
user_id=21627
The patch is fine, please apply (along with a NEWS entry, for both 2.5
and the trunk).
--
Comment By: Thomas Heller (theller)
Date: 2006-10-13 20:17
Message:
Logged In: YES
user_id=11105
Uploaded a new patch which I actually tested under Linux also.
--
Comment By: Thomas Heller (theller)
Date: 2006-10-13 20:17
Message:
Logged In: YES
user_id=11105
My bad. I didn't test on Linux.
--
Comment By: Žiga Seilnacht (zseil)
Date: 2006-10-12 23:53
Message:
Logged In: YES
user_id=1326842
The part of the patch that changes EnvironmentError_str should be removed
(EnvironmentError doesn't have a winerror member, the change causes
compilation errors). Otherwise the patch looks fine.
--
Comment By: Thomas Heller (theller)
Date: 2006-10-12 22:13
Message:
Logged In: YES
user_id=11105
See also:
http://mail.python.org/pipermail/python-dev/2006-September/068869.html
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1576174&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1580738 ] httplib hangs reading too much data
Bugs item #1580738, was opened at 2006-10-19 14:06
Message generated for change (Comment added) made by djmitche
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1580738&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Dustin J. Mitchell (djmitche)
Assigned to: Nobody/Anonymous (nobody)
Summary: httplib hangs reading too much data
Initial Comment:
I'm building an interface to Amazon's S3, using httplib. It uses a single
object for multiple transactions. What's happening is this:
HTTP > PUT /unitest-temp-1161039691 HTTP/1.1
HTTP > Date: Mon, 16 Oct 2006 23:01:32 GMT
HTTP > Authorization: AWS <>:KiTWRuq/6aay0bI2J5DkE2TAWD0=
HTTP > (end headers)
HTTP < HTTP/1.1 200 OK
HTTP < content-length: 0
HTTP < x-amz-id-2: 40uQn0OCpTiFcX+LqjMuzG6NnufdUk/..
HTTP < server: AmazonS3
HTTP < x-amz-request-id: FF504E8FD1B86F8C
HTTP < location: /unitest-temp-1161039691
HTTP < date: Mon, 16 Oct 2006 23:01:33 GMT
HTTPConnection.__state before response.read: Idle
HTTPConnection.__response: closed? False length: 0
reading response
HTTPConnection.__state after response.read: Idle
HTTPConnection.__response: closed? False length: 0
..later in the same connection..
HTTPConnection.__state before putrequest: Idle
HTTPConnection.__response: closed? False length: 0
HTTP > DELETE /unitest-temp-1161039691 HTTP/1.1
HTTP > Date: Mon, 16 Oct 2006 23:01:33 GMT
HTTP > Authorization: AWS <>:a5OizuLNwwV7eBUhha0B6rEJ+CQ=
HTTP > (end headers)
HTTPConnection.__state before getresponse: Request-sent
HTTPConnection.__response: closed? False length: 0
File "/usr/lib64/python2.4/httplib.py", line 856, in getresponse
raise ResponseNotReady()
If the first request does not precede it, the second request is fine.
To avoid excessive memory use, I'm calling request.read(16384) repeatedly,
instead of just calling request.read(). This seems to be key to the
problem: if I omit the 'amt' argument to read(), then the last line of
the first request reads
HTTPConnection.__response: closed? True length: 0
and the later call to getresponse() doesn't raise ResponseNotReady.
Looking at the source for httplib.HTTPResponse.read, self.close() gets
called in the latter (working) case, but not in the former (non-working).
It would seem sensible to add 'if self.length == 0: self.close()' to the
end of that function (and, in fact, this change makes the whole thing
work), but this comment makes me hesitant:
# we do not use _safe_read() here because this may be a .will_close
# connection, and the user is reading more bytes than will be provided
# (for example, reading in 1k chunks)
I suspect that either (a) this is a bug or (b) the client is supposed to
either call read() with no arguments or calculate the proper inputs to
read(amt) based on the Content-Length header. If (b), I would think the
docs should be updated to reflect that? Thanks for any assistance.
--
>Comment By: Dustin J. Mitchell (djmitche)
Date: 2006-10-27 17:53
Message:
Logged In: YES
user_id=7446
Excellent -- the first paragraph, where you talk about the .length
attribute, makes things quite clear, so I agree that (b) is the correct
solution: include the content of that paragraph in the documentation.
Thanks!
--
Comment By: Mark Hammond (mhammond)
Date: 2006-10-26 21:21
Message:
Logged In: YES
user_id=14198
The correct answer is indeed (b), but note that httplib will itself do
the content-length magic for you, including the correct handling of
'chunked' encoding. If the .length attribute is not None, then that is
exactly how many bytes you should read.
If .length is None, then either chunked encoding is used (in which case
you can call read() with a fixed size until it returns an empty string),
or no content-length was supplied (which can be treated the same as
chunked, but the connection will close at the end). Checking
ob.will_close can give you some insight into that.
It's not reasonable to add 'if self.length==0: self.close()': it is
perfectly valid to have a zero byte response within a keep-alive
connection, and we don't want to force a new (expensive) connection to
the server just because a zero byte response was requested.
The HTTP semantics are hard to get your head around, but I believe
httplib gets it right, and a ResponseNotReady exception in particular
points at an error in the code attempting to use the library. Working
with connections that keep alive is tricky: you must jump through hoops
to ensure you maintain the state of the httplib object correctly. In
general, that means you must *always* consume the entire response (even if it i
[ python-Bugs-1579370 ] Segfault provoked by generators and exceptions
Bugs item #1579370, was opened at 2006-10-17 19:23 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault provoked by generators and exceptions Initial Comment: A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party c extensions running in the process so I'm fairly confident that it is a problem in the core. The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop. The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault. -- >Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-27 21:56 Message: Logged In: YES user_id=33168 Mike, what platform are you having the problem on? I tried Tim's hope.py on Linux x86_64 and Mac OS X 10.4 with debug builds and neither one crashed. Tim's guess looks pretty damn good too. 
Here's the result of valgrind:

Invalid read of size 8
   at 0x4CEBFE: PyTraceBack_Here (traceback.c:117)
   by 0x49C1F1: PyEval_EvalFrameEx (ceval.c:2515)
   by 0x4F615D: gen_send_ex (genobject.c:82)
   by 0x4F6326: gen_close (genobject.c:128)
   by 0x4F645E: gen_del (genobject.c:163)
   by 0x4F5F00: gen_dealloc (genobject.c:31)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x44534E: dict_dealloc (dictobject.c:801)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x4664FF: subtype_dealloc (typeobject.c:686)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x42325D: instancemethod_dealloc (classobject.c:2287)
 Address 0x56550C0 is 88 bytes inside a block of size 152 free'd
   at 0x4A1A828: free (vg_replace_malloc.c:233)
   by 0x4C3899: tstate_delete_common (pystate.c:256)
   by 0x4C3926: PyThreadState_DeleteCurrent (pystate.c:282)
   by 0x4D4043: t_bootstrap (threadmodule.c:448)
   by 0x4B24C48: pthread_start_thread (in /lib/libpthread-0.10.so)

The only way I can think to fix this is to keep a set of active generators in the PyThreadState and call gen_send_ex(exc=1) for all the active generators before killing the tstate in t_bootstrap.

-- Comment By: Michael Hudson (mwh) Date: 2006-10-19 00:58 Message: Logged In: YES user_id=6656

> and for some reason Python uses the system malloc directly
> to obtain memory for thread states.

This bit is fairly easy: they are allocated without the GIL being held, which breaks an assumption of PyMalloc. No idea about the real problem, sadly.

-- Comment By: Tim Peters (tim_one) Date: 2006-10-18 17:38 Message: Logged In: YES user_id=31435

I've attached a much simplified pure-Python script (hope.py) that reproduces a problem very quickly, on Windows, in a /debug/ build of current trunk. It typically prints:

exiting generator
joined thread

at most twice before crapping out. At the time, the `next` argument to newtracebackobject() is 0x, and tracing back a level shows that, in PyTraceBack_Here(), frame->tstate is entirely filled with 0xdd bytes.
Note that this is not a debug-build obmalloc gimmick! This is Microsoft's similar debug-build gimmick for their malloc, and for some reason Python uses the system malloc directly to obtain memory for thread states. The Microsoft debug free() fills newly-freed memory with 0xdd, which has the same meaning as the debug-build obmalloc's DEADBYTE (0xdb). So somebody is accessing a thread state here after it's been freed. Best guess is that the generator is getting "cleaned up" after the thread that created it has gone away, so the generator's frame's f
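Tim's best guess above, a generator finalized after the thread that created it has gone away, can be sketched in pure Python like this. This is an invented reconstruction under that assumption, not the attached hope.py, and on a fixed interpreter it completes normally:

```python
import threading

holder = {}

def gen():
    try:
        yield 1
        yield 2
    finally:
        # runs when the generator is finalized, which may happen long
        # after the creating thread (and its thread state) is gone
        print("exiting generator")

def worker():
    g = gen()
    next(g)                  # advance: the generator is now suspended
    holder["g"] = g          # keep it alive past the thread's lifetime

t = threading.Thread(target=worker)
t.start()
t.join()
print("joined thread")

# Drop the last reference; the suspended generator is finalized here,
# in the main thread, after its creating thread has exited.
del holder["g"]
```

In the buggy interpreter, that late finalization sends an exception into the suspended frame, which still points at the freed thread state, hence the 0xdd read that valgrind and the Microsoft debug heap both flag.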
[ python-Bugs-1579370 ] Segfault provoked by generators and exceptions
Bugs item #1579370, was opened at 2006-10-17 22:23 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault provoked by generators and exceptions Initial Comment: A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party c extensions running in the process so I'm fairly confident that it is a problem in the core. The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop. The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault. -- >Comment By: Tim Peters (tim_one) Date: 2006-10-28 01:18 Message: Logged In: YES user_id=31435 > I tried Tim's hope.py on Linux x86_64 and > Mac OS X 10.4 with debug builds and neither > one crashed. Tim's guess looks pretty damn > good too. Neal, note that it's the /Windows/ malloc that fills freed memory with "dangerous bytes" in a debug build -- this really has nothing to do with that it's a debug build of /Python/ apart from that on Windows a debug build of Python also links in the debug version of Microsoft's malloc. The valgrind report is pointing at the same thing. Whether this leads to a crash is purely an accident of when and how the system malloc happens to reuse the freed memory. 
