[ python-Bugs-1463043 ] test_minidom.py fails for Python-2.4.3 on SUSE 9.3
Bugs item #1463043, was opened at 2006-04-02 16:03
Message generated for change (Comment added) made by tobixx
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1463043&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Richard Townsend (rptownsend)
Assigned to: Martin v. Löwis (loewis)
Summary: test_minidom.py fails for Python-2.4.3 on SUSE 9.3
Initial Comment:
I built Python-2.4.3 from source on SUSE 9.3 and get
the following error for test_minidom.py
/usr/local/src/Python-2.4.3: ./python
Lib/test/test_minidom.py
Failed Test
Test Failed: testAltNewline
Traceback (most recent call last):
File "Lib/test/test_minidom.py", line 1384, in ?
func()
File "Lib/test/test_minidom.py", line 427, in
testAltNewline
confirm(domstr == str.replace("\n", "\r\n"))
File "Lib/test/test_minidom.py", line 28, in confirm
raise Exception
Exception
Failed testEncodings - encoding EURO SIGN
Test Failed: testEncodings
Traceback (most recent call last):
File "Lib/test/test_minidom.py", line 1384, in ?
func()
File "Lib/test/test_minidom.py", line 891, in
testEncodings
"testEncodings - encoding EURO SIGN")
File "Lib/test/test_minidom.py", line 28, in confirm
raise Exception
Exception
Failed After replaceChild()
Test Failed: testPatch1094164
Traceback (most recent call last):
File "Lib/test/test_minidom.py", line 1384, in ?
func()
File "Lib/test/test_minidom.py", line 1137, in
testPatch1094164
confirm(e.parentNode is elem, "After replaceChild()")
File "Lib/test/test_minidom.py", line 28, in confirm
raise Exception
Exception
Failed Test
Test Failed: testWriteXML
Traceback (most recent call last):
File "Lib/test/test_minidom.py", line 1384, in ?
func()
File "Lib/test/test_minidom.py", line 420, in
testWriteXML
confirm(str == domstr)
File "Lib/test/test_minidom.py", line 28, in confirm
raise Exception
Exception
Check for failures in these tests:
testAltNewline
testEncodings
testPatch1094164
testWriteXML
--
Comment By: Steffen Tobias Oschatz (tobixx)
Date: 2006-06-23 11:44
Message:
Logged In: YES
user_id=694396
I can confirm this behavior for Red Hat Enterprise 3.
I installed Python 2.4.3. The tests failed for pyexpat
(complaining that there is no expat.so) and minidom.
I installed PyXML-0.8.4 - that solved the expat error, but the minidom
error was still there. The tests for PyXML all run fine.
I have looked into the tests and found the following reasons:
testAltNewline:
---
str= '\n\n'
dom.toprettyxml(newl="\r\n")
>>> u'\n\r\n'
str.replace("\n", "\r\n")
>>> '\r\n\r\n'
domstr == str.replace("\n", "\r\n")
>>> False
I assume the test should be: domstr == str.replace("\r\n", "\n")
to pass it. But by the way: why is there an '\n' in
the pretty string? And I would suggest: unicode(str).
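Spelled out as a self-contained snippet (the document literal below is a
stand-in, since the markup in the quoted snippets was stripped by the mail
archive):
import xml.dom.minidom

src = '<?xml version="1.0" ?>\n<doc/>\n'
dom = xml.dom.minidom.parseString(src)
domstr = dom.toprettyxml(newl="\r\n")      # what testAltNewline produces
print repr(domstr)
print repr(src.replace("\n", "\r\n"))      # what the test compares against
print domstr == src.replace("\n", "\r\n")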
testWriteXML:
str= ''
domstr=dom.toxml()
>>> u'\n'
str == domstr
>>> False
Whoops, where is the '\n' coming from?:
toxml
toprettyxml
writexml:
if encoding is None:
writer.write('\n')
I'm not an XML guy, but I ask myself: should such formatting
really be in this place?
testEncodings:
--
same as before
testPatch1094164:
after elem.replaceChild(e, e) the dom is gone:
> type(elem.firstChild)
Out[129]:
> type(e.parentNode)
Out[130]:
And why ?
replaceChild:
    if newChild.parentNode is not None:
        newChild.parentNode.removeChild(newChild)
    if newChild is oldChild:
        return
I assume the order of this if statement should be reversed.
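As a sketch (not the actual minidom source), the reordering suggested here
would look roughly like:
def replaceChild(self, newChild, oldChild):
    # Handle the no-op case first, so the node is not detached from the
    # tree when it is "replaced" by itself.
    if newChild is oldChild:
        return
    if newChild.parentNode is not None:
        newChild.parentNode.removeChild(newChild)
    # ... rest of the replacement unchanged ...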
--
Co
[ python-Bugs-1510172 ] Absolute/relative import not working?
Bugs item #1510172, was opened at 2006-06-21 21:35
Message generated for change (Comment added) made by twouters
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1510172&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Mitch Chapman (mitchchapman)
Assigned to: Nobody/Anonymous (nobody)
Summary: Absolute/relative import not working?
Initial Comment:
Trying to import from a module using dotted import syntax produces this
exception:
ValueError: Relative importpath too deep
This behavior has been confirmed on Mac OS X 10.4 using the Python 2.5b1
disk image, and on CentOS 4 using the Python 2.5b1 source tarball.
The exception is raised regardless of whether the PYTHONPATH environment
variable can see the toplevel directory of the package being tested;
regardless of whether the import is performed from an interactive Python
session or from a script invoked from the command line; and regardless of
whether the main script starts with
from __future__ import absolute_import
To test, I tried to re-create the package structure used as an example in
PEP 328. (See attachments.) Most of the files were empty, except
moduleX.py and moduleY.py:
#moduleX.py:
from __future__ import absolute_import
from .moduleY import spam
#moduleY.py:
spam = "spam"
According to the PEP, it should be possible to import moduleX without
error. But I get the above exception whenever I try to import moduleX or
to run it from the command line.
$ python2.5 moduleX.py
Traceback (most recent call last):
File "moduleX.py", line 3, in
from .moduleY import spam
ValueError: Relative importpath too deep
Is this a usage/documentation error?
--
>Comment By: Thomas Wouters (twouters)
Date: 2006-06-23 15:52
Message:
Logged In: YES
user_id=34209
See the discussion started at:
http://mail.python.org/pipermail/python-dev/2006-June/066161.html
It's not a bug in 328 or 338 (the PEP that adds the -m switch for
packages), but in the interaction between the two. I don't think this will
be fixed for 2.5, since there is no obvious fix. If it hurts when you
press there, don't press there. Either don't use -m for packaged modules,
or have the packaged module only use absolute imports. (But don't be
surprised if the script-module is imported twice, once as __main__ and
once as the module itself. That's a whole other bug/feature.)
--
Comment By: Mitch Chapman (mitchchapman)
Date: 2006-06-22 01:57
Message:
Logged In: YES
user_id=348188
Hm... but the same error occurs if one tries to import moduleX from an
interactive Python session, or from a sibling module.
In other words, in 2.5b1 any module which uses relative imports can be
used only as a fully-qualified member of a package. It cannot be imported
directly by a sibling module, and it cannot be used as a main module at
all:
$ python2.5 -m package.subpackage1.moduleX
...
from .moduleY import spam
ValueError: Relative importpath too deep
Given other efforts (PEP 299; PEP 338) to make it easier to use modules
both as mainlines and as imports, I still think this is a bug.
--
Comment By: Žiga Seilnacht (zseil)
Date: 2006-06-22 00:59
Message:
Logged In: YES
user_id=1326842
I think this is a usage error. The problem is that you run moduleX as a
script. This puts the module's directory as the first entry in sys.path
(see http://docs.python.org/dev/lib/module-sys.html#l2h-5058 for details).
As a consequence, moduleX is recognised as a top level module, not as part
of a package.
If you want to test relative import, try opening an interactive shell in
the directory where `package` resides, and type:
>>> from package.subpackage1 import moduleX
>>> moduleX.spam
'spam'
--
Comment By: Mark Nottingham (mnot)
Date: 2006-06-21 23:16
Message:
Logged In: YES
user_id=21868
Seeing the same behaviour; OSX with the installer.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1510172&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
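For completeness, a throwaway script that recreates the layout described
above and shows both outcomes; the file-writing helper and names below are
purely illustrative:
import os, subprocess, sys

def write(path, text=""):
    # Create parent directories as needed, then write the file.
    d = os.path.dirname(path)
    if d and not os.path.isdir(d):
        os.makedirs(d)
    f = open(path, "w")
    f.write(text)
    f.close()

write("package/__init__.py")
write("package/subpackage1/__init__.py")
write("package/subpackage1/moduleY.py", 'spam = "spam"\n')
write("package/subpackage1/moduleX.py",
      "from __future__ import absolute_import\n"
      "from .moduleY import spam\n")

# Package-relative import from the directory containing `package` works:
from package.subpackage1 import moduleX
print moduleX.spam

# Running the module directly makes it a top-level module, so the relative
# import fails with "ValueError: Relative importpath too deep":
subprocess.call([sys.executable, "package/subpackage1/moduleX.py"])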
[ python-Bugs-1501699 ] method format of logging.Formatter caches incorrectly
Bugs item #1501699, was opened at 2006-06-06 17:14
Message generated for change (Comment added) made by blorbeer
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1501699&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
>Status: Open
Resolution: Invalid
Priority: 5
Submitted By: Boris Lorbeer (blorbeer)
Assigned to: Vinay Sajip (vsajip)
Summary: method format of logging.Formatter caches incorrectly
Initial Comment:
The format method of logging.Formatter is buggy in that it doesn't call
the method formatException if the cache record.exc_text is set. If you
have two Formatters that should format the same log record differently
(i.e. each has its own overriding formatException method), the
formatException method of the second formatter will never be called
because the cache has been set by the first formatter. The proper way of
using the cache is IMHO to check the cache only in the method
formatException of logging.Formatter.
--
>Comment By: Boris Lorbeer (blorbeer)
Date: 2006-06-23 16:01
Message:
Logged In: YES
user_id=1535177
Hi vsajip, yes, it is by design, but I don't know whether the design is
ideal. But if this behaviour is really intended, it should be documented
clearly, such as:
formatException(exc_info): If you override this method, an exception in
the log record will be formatted by using this method, but only if this
log record wasn't given by the framework to another formatter (that uses
the default format function) before your formatter got its turn (something
you cannot ensure)...
Now to the question of how to fix the design (provided one wants to):
clearly one cannot change the signature of formatException without
breaking existing code. But one could change the formatter to have an
additional field 'labeledCache': a pair of an exc_info tuple and a string
(the cache). The formatException method would then use this cache only if
id() of its argument is the id() of the first element in the pair,
otherwise it would exchange 'labeledCache' for a new pair belonging to the
current exc_info tuple. That's only one possibility to fix this problem.
--
Comment By: Vinay Sajip (vsajip)
Date: 2006-06-22 18:46
Message:
Logged In: YES
user_id=308438
It's not a bug, it's by design. The formatException method only takes the
exception info as a parameter, and to change the method signature now
could break some people's code, right? A solution would be for you to also
override the format method in your custom formatter classes and set
record.exc_text to None if you want to invalidate the cache before calling
the base class implementation.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1501699&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
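The workaround described in the last comment (invalidating the cache
before delegating to the base class), as a minimal sketch with a
hypothetical class name:
import logging

class CacheClearingFormatter(logging.Formatter):
    # Drop any exception text cached by a previously-run formatter so that
    # this formatter's own formatException is consulted again.
    def format(self, record):
        record.exc_text = None
        return logging.Formatter.format(self, record)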
[ python-Bugs-1501699 ] method format of logging.Formatter caches incorrectly
Bugs item #1501699, was opened at 2006-06-06 15:14
Message generated for change (Comment added) made by vsajip
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1501699&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
>Status: Pending
Resolution: Invalid
Priority: 5
Submitted By: Boris Lorbeer (blorbeer)
Assigned to: Vinay Sajip (vsajip)
Summary: method format of logging.Formatter caches incorrectly
Initial Comment:
The format method of logging.Formatter is buggy in that it doesn't call
the method formatException if the cache record.exc_text is set. If you
have two Formatters that should format the same log record differently
(i.e. each has its own overriding formatException method), the
formatException method of the second formatter will never be called
because the cache has been set by the first formatter. The proper way of
using the cache is IMHO to check the cache only in the method
formatException of logging.Formatter.
--
>Comment By: Vinay Sajip (vsajip)
Date: 2006-06-23 14:56
Message:
Logged In: YES
user_id=308438
Hi Boris,
You didn't say in your comment what was wrong with my suggestion (setting
record.exc_text to None in your formatter subclass). I take your point,
and understand your labeledCache suggestion, and will look at implementing
something equivalent when time permits. However, other scenarios need to
be considered, such as sending LogRecords over the wire. In this scenario
(not uncommon in multi-process environments), the present implementation
could send the formatted stack trace, as it is pickled in the LogRecord;
implementing a cache in the Formatter will not allow the stack trace to be
sent to be logged elsewhere.
--
Comment By: Boris Lorbeer (blorbeer)
Date: 2006-06-23 14:01
Message:
Logged In: YES
user_id=1535177
Hi vsajip, yes, it is by design, but I don't know whether the design is
ideal. But if this behaviour is really intended, it should be documented
clearly, such as:
formatException(exc_info): If you override this method, an exception in
the log record will be formatted by using this method, but only if this
log record wasn't given by the framework to another formatter (that uses
the default format function) before your formatter got its turn (something
you cannot ensure)...
Now to the question of how to fix the design (provided one wants to):
clearly one cannot change the signature of formatException without
breaking existing code. But one could change the formatter to have an
additional field 'labeledCache': a pair of an exc_info tuple and a string
(the cache). The formatException method would then use this cache only if
id() of its argument is the id() of the first element in the pair,
otherwise it would exchange 'labeledCache' for a new pair belonging to the
current exc_info tuple. That's only one possibility to fix this problem.
--
Comment By: Vinay Sajip (vsajip)
Date: 2006-06-22 16:46
Message:
Logged In: YES
user_id=308438
It's not a bug, it's by design. The formatException method only takes the
exception info as a parameter, and to change the method signature now
could break some people's code, right? A solution would be for you to also
override the format method in your custom formatter classes and set
record.exc_text to None if you want to invalidate the cache before calling
the base class implementation.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1501699&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1511381 ] codec_getstreamcodec passes extra None
Bugs item #1511381, was opened at 2006-06-24 00:00
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Unicode
Group: Python 2.5
Status: Open
Resolution: None
Priority: 6
Submitted By: Hye-Shik Chang (perky)
Assigned to: Walter Dörwald (doerwalter)
Summary: codec_getstreamcodec passes extra None
Initial Comment:
codec_getstreamcodec passes a None object (null pointer, originally) as an
"errors" argument when errors is given as a null pointer. Due to this
behavior, parsers can't utilize cjkcodecs, which doesn't allow None as a
default argument:
SyntaxError: encoding problem: with BOM
The attached patch fixes it to omit the "errors" argument, and changes it
to use PyObject_CallFunction instead of PyEval_CallFunction, because
PyEval_CallFunction doesn't work for a simple "O" argument. (I don't know
if that was intended, but we can still use PyEval_CallFunction if we write
it as "(O)".) I wonder if there's a reason that you chose
PyEval_CallFunction, for the initialization order or something?
How to reproduce the error:
echo "# coding: cp949" > test.py
./python test.py
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1511381 ] codec_getstreamcodec passes extra None
Bugs item #1511381, was opened at 2006-06-23 17:00
Message generated for change (Comment added) made by doerwalter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Unicode
Group: Python 2.5
Status: Open
Resolution: None
Priority: 6
Submitted By: Hye-Shik Chang (perky)
Assigned to: Walter Dörwald (doerwalter)
Summary: codec_getstreamcodec passes extra None
Initial Comment:
codec_getstreamcodec passes a None object (null pointer, originally) as an
"errors" argument when errors is given as a null pointer. Due to this
behavior, parsers can't utilize cjkcodecs, which doesn't allow None as a
default argument:
SyntaxError: encoding problem: with BOM
The attached patch fixes it to omit the "errors" argument, and changes it
to use PyObject_CallFunction instead of PyEval_CallFunction, because
PyEval_CallFunction doesn't work for a simple "O" argument. (I don't know
if that was intended, but we can still use PyEval_CallFunction if we write
it as "(O)".) I wonder if there's a reason that you chose
PyEval_CallFunction, for the initialization order or something?
How to reproduce the error:
echo "# coding: cp949" > test.py
./python test.py
--
>Comment By: Walter Dörwald (doerwalter)
Date: 2006-06-23 17:47
Message:
Logged In: YES
user_id=89016
The patch looks good to me. Switching from PyEval_CallFunction() to
PyObject_CallFunction() should be OK. (There seem to be subtle differences
between the two, but finding out what they are looks like a scavenger hunt
to me :-/.) So go ahead and check it in.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1511381 ] codec_getstreamcodec passes extra None
Bugs item #1511381, was opened at 2006-06-23 17:00
Message generated for change (Settings changed) made by doerwalter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Unicode
Group: Python 2.5
Status: Open
Resolution: None
Priority: 6
Submitted By: Hye-Shik Chang (perky)
>Assigned to: Hye-Shik Chang (perky)
Summary: codec_getstreamcodec passes extra None
Initial Comment:
codec_getstreamcodec passes a None object (null pointer, originally) as an
"errors" argument when errors is given as a null pointer. Due to this
behavior, parsers can't utilize cjkcodecs, which doesn't allow None as a
default argument:
SyntaxError: encoding problem: with BOM
The attached patch fixes it to omit the "errors" argument, and changes it
to use PyObject_CallFunction instead of PyEval_CallFunction, because
PyEval_CallFunction doesn't work for a simple "O" argument. (I don't know
if that was intended, but we can still use PyEval_CallFunction if we write
it as "(O)".) I wonder if there's a reason that you chose
PyEval_CallFunction, for the initialization order or something?
How to reproduce the error:
echo "# coding: cp949" > test.py
./python test.py
--
Comment By: Walter Dörwald (doerwalter)
Date: 2006-06-23 17:47
Message:
Logged In: YES
user_id=89016
The patch looks good to me. Switching from PyEval_CallFunction() to
PyObject_CallFunction() should be OK. (There seem to be subtle differences
between the two, but finding out what they are looks like a scavenger hunt
to me :-/.) So go ahead and check it in.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1511497 ] xml.sax.expatreader is missing
Bugs item #1511497, was opened at 2006-06-23 20:14
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511497&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: XML
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Wummel (calvin)
Assigned to: Nobody/Anonymous (nobody)
Summary: xml.sax.expatreader is missing
Initial Comment:
Hi, when testing the new Python 2.5 subversion tree I encountered this
behaviour:
$ python2.5
Python 2.5b1 (trunk:47065, Jun 22 2006, 20:56:23)
[GCC 4.1.2 20060613 (prerelease) (Debian 4.1.1-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import xml.sax.expatreader
>>> print xml.sax.expatreader
Traceback (most recent call last):
File "", line 1, in
AttributeError: 'module' object has no attribute 'expatreader'
>>>
So the import went ok, but using the attribute gave an error. This is very
strange. Python 2.4 did not have this behaviour.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511497&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Feature Requests-1509060 ] Interrupt/kill threads w/exception
Feature Requests item #1509060, was opened at 2006-06-19 21:30
Message generated for change (Comment added) made by josiahcarlson
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1509060&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Threads
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Oliver Bock (oliverbock)
Assigned to: Nobody/Anonymous (nobody)
Summary: Interrupt/kill threads w/exception
Initial Comment:
When unsophisticated (but not evil) users write Python macros, they
occasionally write infinite loops. It would be nice if it was possible to
interrupt threads to break these loops.
The safety of raising an exception in another thread was noted in
http://sourceforge.net/tracker/?func=detail&atid=305470&aid=452266&group_id=5470
Anton Wilson wrote a patch for this some time ago:
http://mail.python.org/pipermail/python-list/2003-February/148999.html
Note that this won't help if the thread is blocked on I/O.
--
Comment By: Josiah Carlson (josiahcarlson)
Date: 2006-06-23 11:24
Message:
Logged In: YES
user_id=341410
It would be nice to be able to kill runaway threads, though I have no
comment on the patch that Anton Wilson offers (which will no doubt need to
be updated for more recent Pythons).
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1509060&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
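For reference, the commonly circulated ctypes recipe that reaches a similar
goal through the existing C API (CPython-specific, best-effort, and, as
noted above, no help for threads blocked on I/O); the helper name is
illustrative:
import ctypes

def async_raise(thread_id, exc_type):
    # Ask the interpreter to raise exc_type in the thread with the given
    # ident the next time that thread runs Python bytecode.
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread_id), ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res > 1:
        # More than one thread state was affected: undo and complain.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(thread_id), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")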
[ python-Bugs-1194222 ] parsedate and Y2K
Bugs item #1194222, was opened at 2005-05-02 21:37
Message generated for change (Comment added) made by mnot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1194222&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Mark Nottingham (mnot)
Assigned to: Nobody/Anonymous (nobody)
Summary: parsedate and Y2K
Initial Comment:
rfc822.parsedate and email.Utils.parsedate don't take Y2K into
account when parsing two-digit years, even though they're allowed by
RFC822. Even though that spec has since been superseded, there
are still systems generating dates in the old format, and RFC2616,
which bases its dates on RFC822, still allows two-digit years.
For example,
>>> email.Utils.parsedate("Sun, 6 Nov 94 08:49:37 GMT")
(94, 11, 6, 8, 49, 37, 0, 0, 0)
Here's a trivial patch to behave as outlined in the time module (I don't
test for time.accept2dyear because the input is outside the system's
control, and RFC-specified); it's against 2.3, but should be easy to
integrate into later versions.
125a126,130
>     if yy < 100:
>         if yy > 68:
>             yy = yy + 1900
>         else:
>             yy = yy + 2000
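The same pivot as a standalone helper, for readability (hypothetical
function name; the cutover matches the one described for the time module):
def _fix_two_digit_year(yy):
    # 69-99 -> 1969-1999, 0-68 -> 2000-2068, as in the patch above.
    if yy < 100:
        if yy > 68:
            yy = yy + 1900
        else:
            yy = yy + 2000
    return yy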
--
>Comment By: Mark Nottingham (mnot)
Date: 2006-06-23 12:17
Message:
Logged In: YES
user_id=21868
This bug is still present in the 2.5 library. Will it be fixed?
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1194222&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1194222 ] parsedate and Y2K
Bugs item #1194222, was opened at 2005-05-02 21:37
Message generated for change (Settings changed) made by mnot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1194222&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
>Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Mark Nottingham (mnot)
Assigned to: Nobody/Anonymous (nobody)
Summary: parsedate and Y2K
Initial Comment:
rfc822.parsedate and email.Utils.parsedate don't take Y2K into
account when parsing two-digit years, even though they're allowed by
RFC822. Even though that spec has since been superseded, there
are still systems generating dates in the old format, and RFC2616,
which bases its dates on RFC822, still allows two-digit years.
For example,
>>> email.Utils.parsedate("Sun, 6 Nov 94 08:49:37 GMT")
(94, 11, 6, 8, 49, 37, 0, 0, 0)
Here's a trivial patch to behave as outlined in the time module (I don't
test for time.accept2dyear because the input is outside the system's
control, and RFC-specified); it's against 2.3, but should be easy to
integrate into later versions.
125a126,130
>     if yy < 100:
>         if yy > 68:
>             yy = yy + 1900
>         else:
>             yy = yy + 2000
--
Comment By: Mark Nottingham (mnot)
Date: 2006-06-23 12:17
Message:
Logged In: YES
user_id=21868
This bug is still present in the 2.5 library. Will it be fixed?
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1194222&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1202533 ] a bunch of infinite C recursions
Bugs item #1202533, was opened at 2005-05-15 16:43
Message generated for change (Comment added) made by bcannon
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Armin Rigo (arigo)
Assigned to: Nobody/Anonymous (nobody)
Summary: a bunch of infinite C recursions
Initial Comment:
There is a general way to cause unchecked infinite recursion at the C level,
and I have no clue at the moment how it could be reasonably fixed. The idea is
to define special __xxx__ methods in such a way that no Python code is actually
called before they invoke more special methods (e.g. themselves).
>>> class A: pass
>>> A.__mul__=new.instancemethod(operator.mul,None,A)
>>> A()*2
Segmentation fault
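Spelling the crasher out with its imports (illustrative only): A.__mul__
is operator.mul itself, so A() * 2 calls operator.mul(A_instance, 2),
which evaluates the multiplication again; the recursion happens entirely
in C, with no Python frame in between to trip the normal recursion limit.
import new, operator

class A:
    pass

A.__mul__ = new.instancemethod(operator.mul, None, A)
# A() * 2   # unchecked C-level recursion; segfaults on an unpatched build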
--
>Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 12:44
Message:
Logged In: YES
user_id=357491
Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?
--
Comment By: Terry J. Reedy (tjreedy)
Date: 2005-09-01 13:39
Message:
Logged In: YES
user_id=593130
Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as
a duplicate. If that turns out to be wrong, it should be reopened.
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-29 05:23
Message:
Logged In: YES
user_id=4771
Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the
examples so far, with the exception of the "__get__=getattr" one, where we get
a strange result instead of a RuntimeError (I suspect careless exception eating
is taking place).
The main loop in ceval.c doesn't call PyObject_Call() very often: it usually
dispatches directly itself for performance, which is exactly what we want here,
as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().
So this change has a minor impact on performance. Pystone for example issues
only three PyObject_Call() per loop, to call classes. This has an
almost-unmeasurable impact ( < 0.4%).
Of course I'll think a bit more and search for examples that don't go through
PyObject_Call() :-)
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-23 06:52
Message:
Logged In: YES
user_id=4771
When I thought about the same problem for PyPy, I imagined
that it would be easy to use the call graph computed by
the type inferencer ("annotator"). We would find an
algorithm that figures out the minimal number of places
that need a Py_EnterRecursiveCall so that every cycle goes
through at least one of them. For CPython it might be
possible to go down the same path if someone can find a C
code analyzer smart enough to provide the required
information -- a call graph including indirect calls through
function pointers. Not sure it's sane, though.
--
Comment By: Michael Hudson (mwh)
Date: 2005-05-23 06:16
Message:
Logged In: YES
user_id=6656
I agree with Armin that this could easily be a never ending story. Perhaps
it would suffice to sprinkle Py_EnterRecursiveCall around as we find holes.
It might have to, because I can't really think of a better way of doing this.
The only other approach I know is that of SBCL (a Common Lisp
implementation): it mprotects a page at the end of the stack and installs a
SIGSEGV handler (and uses sigaltstack) that knows how to abort the
current lisp operation. Somehow, I don't think we want to go down this
line.
Anybody have any other ideas?
--
Comment By: Martin v. Löwis (loewis)
Date: 2005-05-23 06:06
Message:
Logged In: YES
user_id=21627
It has been a long-time policy that you should not be able
to crash the Python interpreter even with malicious code. I
think this is a good policy, because it provides people
always with a back-trace, which is much easier to analyse
than a core dump.
--
Comment By: Josiah Carlson (josiahcarlson)
Date: 2005-05-23 00:41
Message:
Logged In: YES
user_id=341410
I personally think that the CPython runtime should make a
best-effort to not crash when running code that makes sense.
But when CPython is running on input that is nonsensical
(in each of the examples that Armin provides, no return
value could make sense), I think that as long as the
behavior is stated clearl
[ python-Bugs-1202533 ] a bunch of infinite C recursions
Bugs item #1202533, was opened at 2005-05-15 23:43
Message generated for change (Comment added) made by arigo
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Armin Rigo (arigo)
Assigned to: Nobody/Anonymous (nobody)
Summary: a bunch of infinite C recursions
Initial Comment:
There is a general way to cause unchecked infinite recursion at the C level,
and I have no clue at the moment how it could be reasonably fixed. The idea is
to define special __xxx__ methods in such a way that no Python code is actually
called before they invoke more special methods (e.g. themselves).
>>> class A: pass
>>> A.__mul__=new.instancemethod(operator.mul,None,A)
>>> A()*2
Segmentation fault
--
>Comment By: Armin Rigo (arigo)
Date: 2006-06-23 20:05
Message:
Logged In: YES
user_id=4771
I'd have answered "good idea, go ahead", if it were not for
what I found out a few days ago, which is that:
* you already checked yourself a Py_EnterRecursiveCall() into
PyObject_Call() -- that was in r46806 (I guess you forgot)
* I got a case of Python hanging on me in an infinite busy
loop, which turned out to be caused by this (!)
So I reverted r46806 in r47601, added a test (see log for an
explanation), and moved the Py_EnterRecursiveCall()
elsewhere, where it still catches the originally intended
case, but where it will probably not catch the cases of the
present tracker any more. Not sure what to do about it. I'd
suggest to be extra careful here; better some extremely
obscure and ad-hoc ways to provoke a segfault, rather than
busy-loop hangs in previously working programs...
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 19:44
Message:
Logged In: YES
user_id=357491
Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?
--
Comment By: Terry J. Reedy (tjreedy)
Date: 2005-09-01 20:39
Message:
Logged In: YES
user_id=593130
Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as
a duplicate. If that turns out to be wrong, it should be reopened.
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-29 12:23
Message:
Logged In: YES
user_id=4771
Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the
examples so far, with the exception of the "__get__=getattr" one, where we get
a strange result instead of a RuntimeError (I suspect careless exception eating
is taking place).
The main loop in ceval.c doesn't call PyObject_Call() very often: it usually
dispatches directly itself for performance, which is exactly what we want here,
as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().
So this change has a minor impact on performance. Pystone for example issues
only three PyObject_Call() per loop, to call classes. This has an
almost-unmeasurable impact ( < 0.4%).
Of course I'll think a bit more and search for examples that don't go through
PyObject_Call() :-)
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-23 13:52
Message:
Logged In: YES
user_id=4771
When I thought about the same problem for PyPy, I imagined
that it would be easy to use the call graph computed by
the type inferencer ("annotator"). We would find an
algorithm that figures out the minimal number of places
that need a Py_EnterRecursiveCall so that every cycle goes
through at least one of them. For CPython it might be
possible to go down the same path if someone can find a C
code analyzer smart enough to provide the required
information -- a call graph including indirect calls through
function pointers. Not sure it's sane, though.
--
Comment By: Michael Hudson (mwh)
Date: 2005-05-23 13:16
Message:
Logged In: YES
user_id=6656
I agree with Armin that this could easily be a never ending story. Perhaps
it would suffice to sprinkle Py_EnterRecursiveCall around as we find holes.
It might have to, because I can't really think of a better way of doing this.
The only other approach I know is that of SBCL (a Common Lisp
implementation): it mprotects a page at the end of the stack and installs a
SIGSEGV handler (and uses sigaltstack) that knows how to abort the
current lisp operation. Somehow, I don't think we want to go down t
[ python-Bugs-1202533 ] a bunch of infinite C recursions
Bugs item #1202533, was opened at 2005-05-15 16:43
Message generated for change (Comment added) made by bcannon
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Armin Rigo (arigo)
Assigned to: Nobody/Anonymous (nobody)
Summary: a bunch of infinite C recursions
Initial Comment:
There is a general way to cause unchecked infinite recursion at the C level,
and I have no clue at the moment how it could be reasonably fixed. The idea is
to define special __xxx__ methods in such a way that no Python code is actually
called before they invoke more special methods (e.g. themselves).
>>> class A: pass
>>> A.__mul__=new.instancemethod(operator.mul,None,A)
>>> A()*2
Segmentation fault
--
>Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:53
Message:
Logged In: YES
user_id=357491
I thought the check was in slot_tp_call and not
PyObject_Call. So yeah, I basically forgot. =)
The problem with allowing the segfault to stay is that it
destroys security in terms of protecting the interpreter,
which I am trying to deal with. So leaving random ways to
crash the interpreter is currently a no-no for me. I will
see if I can come up with another way to fix this issue.
--
Comment By: Armin Rigo (arigo)
Date: 2006-06-23 13:05
Message:
Logged In: YES
user_id=4771
I'd have answered "good idea, go ahead", if it were not for
what I found out a few days ago, which is that:
* you already checked yourself a Py_EnterRecursiveCall() into
PyObject_Call() -- that was in r46806 (I guess you forgot)
* I got a case of Python hanging on me in an infinite busy
loop, which turned out to be caused by this (!)
So I reverted r46806 in r47601, added a test (see log for an
explanation), and moved the Py_EnterRecursiveCall()
elsewhere, where it still catches the originally intended
case, but where it will probably not catch the cases of the
present tracker any more. Not sure what to do about it. I'd
suggest to be extra careful here; better some extremely
obscure and ad-hoc ways to provoke a segfault, rather than
busy-loop hangs in previously working programs...
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 12:44
Message:
Logged In: YES
user_id=357491
Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?
--
Comment By: Terry J. Reedy (tjreedy)
Date: 2005-09-01 13:39
Message:
Logged In: YES
user_id=593130
Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as
a duplicate. If that turns out to be wrong, it should be reopened.
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-29 05:23
Message:
Logged In: YES
user_id=4771
Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the
examples so far, with the exception of the "__get__=getattr" one, where we get
a strange result instead of a RuntimeError (I suspect careless exception eating
is taking place).
The main loop in ceval.c doesn't call PyObject_Call() very often: it usually
dispatches directly itself for performance, which is exactly what we want here,
as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().
So this change has a minor impact on performance. Pystone for example issues
only three PyObject_Call() per loop, to call classes. This has an
almost-unmeasurable impact ( < 0.4%).
Of course I'll think a bit more and search for examples that don't go through
PyObject_Call() :-)
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-23 06:52
Message:
Logged In: YES
user_id=4771
When I thought about the same problem for PyPy, I imagined
that it would be easy to use the call graph computed by
the type inferencer ("annotator"). We would find an
algorithm that figures out the minimal number of places
that need a Py_EnterRecursiveCall so that every cycle goes
through at least one of them. For CPython it might be
possible to go down the same path if someone can find a C
code analyzer smart enough to provide the required
information -- a call graph including indirect calls through
function pointers. Not sure it's sane, though.
--
Comment By: Michael Hudson (mwh)
Date: 200
[ python-Bugs-1202533 ] a bunch of infinite C recursions
Bugs item #1202533, was opened at 2005-05-15 16:43
Message generated for change (Comment added) made by bcannon
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Armin Rigo (arigo)
Assigned to: Nobody/Anonymous (nobody)
Summary: a bunch of infinite C recursions
Initial Comment:
There is a general way to cause unchecked infinite recursion at the C level,
and I have no clue at the moment how it could be reasonably fixed. The idea is
to define special __xxx__ methods in such a way that no Python code is actually
called before they invoke more special methods (e.g. themselves).
>>> class A: pass
>>> A.__mul__=new.instancemethod(operator.mul,None,A)
>>> A()*2
Segmentation fault
--
>Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:57
Message:
Logged In: YES
user_id=357491
The rev. that Armin checked in was actually r47061.
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:53
Message:
Logged In: YES
user_id=357491
I thought the check was in slot_tp_call and not
PyObject_Call. So yeah, I basically forgot. =)
The problem with allowing the segfault to stay is that it
destroys security in terms of protecting the interpreter,
which I am trying to deal with. So leaving random ways to
crash the interpreter is currently a no-no for me. I will
see if I can come up with another way to fix this issue.
--
Comment By: Armin Rigo (arigo)
Date: 2006-06-23 13:05
Message:
Logged In: YES
user_id=4771
I'd have answered "good idea, go ahead", if it were not for
what I found out a few days ago, which is that:
* you already checked yourself a Py_EnterRecursiveCall() into
PyObject_Call() -- that was in r46806 (I guess you forgot)
* I got a case of Python hanging on me in an infinite busy
loop, which turned out to be caused by this (!)
So I reverted r46806 in r47601, added a test (see log for an
explanation), and moved the Py_EnterRecursiveCall()
elsewhere, where it still catches the originally intended
case, but where it will probably not catch the cases of the
present tracker any more. Not sure what to do about it. I'd
suggest to be extra careful here; better some extremely
obscure and ad-hoc ways to provoke a segfault, rather than
busy-loop hangs in previously working programs...
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 12:44
Message:
Logged In: YES
user_id=357491
Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?
--
Comment By: Terry J. Reedy (tjreedy)
Date: 2005-09-01 13:39
Message:
Logged In: YES
user_id=593130
Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as
a duplicate. If that turns out to be wrong, it should be reopened.
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-29 05:23
Message:
Logged In: YES
user_id=4771
Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the
examples so far, with the exception of the "__get__=getattr" one, where we get
a strange result instead of a RuntimeError (I suspect careless exception eating
is taking place).
The main loop in ceval.c doesn't call PyObject_Call() very often: it usually
dispatches directly itself for performance, which is exactly what we want here,
as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().
So this change has a minor impact on performance. Pystone for example issues
only three PyObject_Call() per loop, to call classes. This has an
almost-unmeasurable impact ( < 0.4%).
Of course I'll think a bit more and search for examples that don't go through
PyObject_Call() :-)
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-23 06:52
Message:
Logged In: YES
user_id=4771
When I thought about the same problem for PyPy, I imagined
that it would be easy to use the call graph computed by
the type inferencer ("annotator"). We would find an
algorithm that figures out the minimal number of places
that need a Py_EnterRecursiveCall so that every cycle goes
through at least one of them. For CPython it might be
possible to go down the same path if someone can find a C
code analyzer smart enough to provide the required
[ python-Bugs-1511381 ] codec_getstreamcodec passes extra None
Bugs item #1511381, was opened at 2006-06-24 00:00
Message generated for change (Comment added) made by perky
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Unicode
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 6
Submitted By: Hye-Shik Chang (perky)
Assigned to: Hye-Shik Chang (perky)
Summary: codec_getstreamcodec passes extra None
Initial Comment:
codec_getstreamcodec passes a None object (null pointer, originally) as an
"errors" argument when errors is given as a null pointer. Due to this
behavior, parsers can't utilize cjkcodecs, which doesn't allow None as a
default argument:
SyntaxError: encoding problem: with BOM
The attached patch fixes it to omit the "errors" argument, and changes it
to use PyObject_CallFunction instead of PyEval_CallFunction, because
PyEval_CallFunction doesn't work for a simple "O" argument. (I don't know
if that was intended, but we can still use PyEval_CallFunction if we write
it as "(O)".) I wonder if there's a reason that you chose
PyEval_CallFunction, for the initialization order or something?
How to reproduce the error:
echo "# coding: cp949" > test.py
./python test.py
--
>Comment By: Hye-Shik Chang (perky)
Date: 2006-06-24 06:18
Message:
Logged In: YES
user_id=55188
Committed as r47086. Thanks for the review! :)
--
Comment By: Walter Dörwald (doerwalter)
Date: 2006-06-24 00:47
Message:
Logged In: YES
user_id=89016
The patch looks good to me. Switching from PyEval_CallFunction() to
PyObject_CallFunction() should be OK. (There seem to be subtle differences
between the two, but finding out what they are looks like a scavenger hunt
to me :-/.) So go ahead and check it in.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1511381&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1202533 ] a bunch of infinite C recursions
Bugs item #1202533, was opened at 2005-05-15 16:43
Message generated for change (Comment added) made by bcannon
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Armin Rigo (arigo)
Assigned to: Nobody/Anonymous (nobody)
Summary: a bunch of infinite C recursions
Initial Comment:
There is a general way to cause unchecked infinite recursion at the C level,
and I have no clue at the moment how it could be reasonably fixed. The idea is
to define special __xxx__ methods in such a way that no Python code is actually
called before they invoke more special methods (e.g. themselves).
>>> class A: pass
>>> A.__mul__=new.instancemethod(operator.mul,None,A)
>>> A()*2
Segmentation fault
--
>Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 14:54
Message:
Logged In: YES
user_id=357491
I just had an idea, Armin. What if, at the recursive call
site in PyErr_NormalizeException(), we called
Py_LeaveRecursiveCall() before and Py_EnterRecursiveCall()
after? That would keep the recursion limit the same when
the normalization was done, but still allow the check in
PyObject_Call()::
Py_LeaveRecursiveCall();
PyErr_NormalizeException(exc, val, tb);
Py_EnterRecursiveCall("");
Since it is an internal call I think it would be safe to
"play" with the recursion depth value like this. What do
you think?
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:57
Message:
Logged In: YES
user_id=357491
The rev. that Armin checked in was actually r47061.
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:53
Message:
Logged In: YES
user_id=357491
I thought the check was in slot_tp_call and not
PyObject_Call. So yeah, I basically forgot. =)
The problem with allowing the segfault to stay is that it
destroys security in terms of protecting the interpreter,
which I am trying to deal with. So leaving random ways to
crash the interpreter is currently a no-no for me. I will
see if I can come up with another way to fix this issue.
--
Comment By: Armin Rigo (arigo)
Date: 2006-06-23 13:05
Message:
Logged In: YES
user_id=4771
I'd have answered "good idea, go ahead", if it were not for
what I found out a few days ago, which is that:
* you already checked yourself a Py_EnterRecursiveCall() into
PyObject_Call() -- that was in r46806 (I guess you forgot)
* I got a case of Python hanging on me in an infinite busy
loop, which turned out to be caused by this (!)
So I reverted r46806 in r47601, added a test (see log for an
explanation), and moved the Py_EnterRecursiveCall()
elsewhere, where it still catches the originally intended
case, but where it will probably not catch the cases of the
present tracker any more. Not sure what to do about it. I'd
suggest to be extra careful here; better some extremely
obscure and ad-hoc ways to provoke a segfault, rather than
busy-loop hangs in previously working programs...
--
Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 12:44
Message:
Logged In: YES
user_id=357491
Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?
--
Comment By: Terry J. Reedy (tjreedy)
Date: 2005-09-01 13:39
Message:
Logged In: YES
user_id=593130
Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as
a duplicate. If that turns out to be wrong, it should be reopened.
--
Comment By: Armin Rigo (arigo)
Date: 2005-05-29 05:23
Message:
Logged In: YES
user_id=4771
Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the
examples so far, with the exception of the "__get__=getattr" one, where we get
a strange result instead of a RuntimeError (I suspect careless exception eating
is taking place).
The main loop in ceval.c doesn't call PyObject_Call() very often: it usually
dispatches directly itself for performance, which is exactly what we want here,
as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().
So this change has a minor impact on performance. Pystone for example issues
only three PyObject_Call() per loop, to call classes. This has an
almost-unmeasurable impact ( < 0.4%).
Of course I'll think a
