[issue4024] float(0.0) singleton

2008-10-02 Thread lplatypus

New submission from lplatypus:

Here is a patch to make PyFloat_FromDouble(0.0) always return the same 
float instance.  This is similar to the existing optimization in 
PyInt_FromLong(x) for small x.

My own motivation is that the patch reduces memory usage by several megabytes 
for a particular in-house data processing script, but I think it should be 
generally useful, assuming that zero is a very common float value, and at 
worst close to neutral when that assumption is wrong.  The minimal 
performance cost of the test for zero should be more than recovered by the 
reduced number of memory allocation calls.  I am happy to look into 
benchmarking if you require empirical performance data.
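For a rough illustration of the analogy (not part of the patch, and assuming 
stock CPython with no float caching), compare how small ints and zero floats 
behave today:

    # Small ints are shared singletons; equal floats are allocated afresh.
    small_ints_shared = int("0") is int("0")      # True: 0 comes from the small-int cache
    zeros_shared = float("0.0") is float("0.0")   # False today; would be True with the patch
    print(small_ints_shared, zeros_shared)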

--
components: Interpreter Core
files: python_zero_float.patch
keywords: patch
messages: 74224
nosy: ldeller
severity: normal
status: open
title: float(0.0) singleton
type: resource usage
versions: Python 2.6
Added file: http://bugs.python.org/file11686/python_zero_float.patch


[issue4024] float(0.0) singleton

2008-10-03 Thread lplatypus

lplatypus added the comment:

No, it won't distinguish between +0.0 and -0.0 in its present form,
because those two values compare equal under the C equality operator.
This should be easy to adjust; e.g. we could exclude -0.0 by changing
the comparison

    if (fval == 0.0)

into

    static double positive_zero = 0.0;
    ...
    if (!memcmp(&fval, &positive_zero, sizeof(double)))
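For what it's worth, a quick Python-level check (just an illustration, not
part of the patch) shows that == cannot tell the two zeros apart while the
underlying bit pattern can:

    import math, struct

    print(0.0 == -0.0)                          # True: equal, like the C comparison above
    print(math.copysign(1.0, -0.0))             # -1.0: the sign of the zero is preserved
    print(struct.pack('<d', 0.0) ==
          struct.pack('<d', -0.0))              # False: the bit patterns differ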


[issue7890] equal unicode/str objects can have unequal hash

2010-02-08 Thread lplatypus

New submission from lplatypus:

The documentation for the hash() function says:
"Numeric values that compare equal have the same hash value (even if they are 
of different types, as is the case for 1 and 1.0)"

This can be violated when comparing a unicode object with its str equivalent.  
Here is an example:

C:\>c:\Python27\python -S
Python 2.7a3 (r27a3:78021, Feb  7 2010, 00:00:09) [MSC v.1500 32 bit (Intel)] 
on win32
>>> import sys; sys.setdefaultencoding('utf-8')
>>> unicodeobj = u'No\xebl'
>>> strobj = str(unicodeobj)
>>> unicodeobj == strobj
True
>>> hash(unicodeobj) == hash(strobj)
False

The last result should be True, not False.

I tested this on Python 2.7a3/Windows, 2.6.4/Linux and 2.5.2/Linux.  The problem 
is not relevant to Python 3.0+.

Looking at unicodeobject.c:unicode_hash() and stringobject.c:string_hash(), I 
think this problem arises for "equal" objects strobj and unicodeobj whenever 
the unicode code points are not aligned with the encoded bytes, i.e. when:
map(ord, unicodeobj) != map(ord, strobj)
This means that the problem never arises when sys.getdefaultencoding() is 
'ascii' or 'iso8859-1'/'latin1'.
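As a small illustration of that alignment condition (Python 2 only, since the 
problem does not apply to Python 3):

    u = u'No\xebl'
    print(map(ord, u))                     # [78, 111, 235, 108]      (the code points)
    print(map(ord, u.encode('latin-1')))   # [78, 111, 235, 108]      -> aligned, hashes agree
    print(map(ord, u.encode('utf-8')))     # [78, 111, 195, 171, 108] -> not aligned, hashes can differ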

--
components: Interpreter Core
messages: 99084
nosy: ldeller
severity: normal
status: open
title: equal unicode/str objects can have unequal hash
type: behavior
versions: Python 2.5, Python 2.6, Python 2.7


[issue7890] equal unicode/str objects can have unequal hash

2010-02-09 Thread lplatypus

lplatypus added the comment:

Okay, thanks, but in that case might I suggest that this limitation be mentioned 
in the documentation for sys.setdefaultencoding?  It currently reads as if any 
available encoding is acceptable.  Perhaps a warning or an exception should even 
be raised when it is called with an unsuitable encoding?

Other places that may need review include:
- the programming FAQ on python.org, which presents the option of calling 
setdefaultencoding('mbcs') on Windows ( 
http://www.python.org/doc/faq/programming/#what-does-unicodeerror-ascii-decoding-encoding-error-ordinal-not-in-range-128-mean
 )
- the comments in site.py, which discuss changing the default encoding
- PEP 100, which suggests enabling this code in site.py

By the way, would patches be considered to fix issues such as this one with 
other default encodings, or is there some objection to the concept?

--


[issue5223] infinite recursion in PyErr_WriteUnraisable

2009-02-11 Thread lplatypus

New submission from lplatypus:

Here is an example of pure Python code which can cause the interpreter
to crash.

I can reproduce the problem in 2.6.0 and 2.6.1 but not in 2.5.2.

The __getattr__ function in this example is interesting in that it
involves infinite recursion, but the resulting RuntimeError("maximum
recursion depth exceeded") actually causes it to behave correctly.  This
is due to the behaviour of hasattr, which suppresses any exception raised
while checking for the attribute.

Added to the mix, sys.stderr is replaced by an instance with a write
method.  The key ingredient here is that getattr(sys.stderr, "write")
invokes Python code.  Near the interpreter's recursion limit this Python
code can fail, which causes infinite recursion in C.
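As a rough sketch of those ingredients (this is only an approximation of the 
attached recursebug.py, which is not reproduced here, and it is not guaranteed 
to trigger the crash; the hasattr behaviour relied upon is Python 2 specific):

    import sys

    class FakeStderr(object):
        def __getattr__(self, name):
            # Infinitely recursive lookup; the resulting RuntimeError
            # ("maximum recursion depth exceeded") is swallowed by hasattr().
            return getattr(self, name + "_missing")

        def write(self, text):
            # Writing to sys.stderr now runs Python-level code.
            pass

    sys.stderr = FakeStderr()

    def deep(n):
        if n == 0:
            # Near the recursion limit the attribute machinery itself can fail.
            return hasattr(FakeStderr(), "no_such_attribute")
        return deep(n - 1)

    deep(sys.getrecursionlimit() - 60)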

Here is a snippet of the call stack from gdb showing the recursion cycle
(using 2.6.0 source code):

#9  0x004a442c in PyErr_WriteUnraisable (obj=0x64ae40) at Python/errors.c:606
#10 0x004a48f5 in PyErr_GivenExceptionMatches (err=0x64ae40, exc=0x64ae40) at Python/errors.c:115
#11 0x00466056 in slot_tp_getattr_hook (self=0x70a910, name=0x2b4a94d47e70) at Objects/typeobject.c:5426
#12 0x00449f4d in PyObject_GetAttrString (v=0x70a910, name=0x7fff155e2fe0 ) at Objects/object.c:1176
#13 0x0042e316 in PyFile_WriteObject (v=0xd02d88, f=0x70a910, flags=1) at Objects/fileobject.c:2362
#14 0x0042e5c5 in PyFile_WriteString (s=0x51704a "Exception ", f=0x70a910) at Objects/fileobject.c:2422
#15 0x004a442c in PyErr_WriteUnraisable (obj=0x64ae40) at Python/errors.c:606

--
components: Interpreter Core
files: recursebug.py
messages: 81721
nosy: ldeller
severity: normal
status: open
title: infinite recursion in PyErr_WriteUnraisable
type: crash
versions: Python 2.6
Added file: http://bugs.python.org/file13044/recursebug.py


[issue5223] infinite recursion in PyErr_WriteUnraisable

2009-02-11 Thread lplatypus

lplatypus added the comment:

I believe that this problem was introduced in subversion revision 65319.


[issue5223] infinite recursion in PyErr_WriteUnraisable

2009-02-11 Thread lplatypus

lplatypus added the comment:

Sorry, I meant revision 65320.


[issue6853] system proxy not used for https (on windows)

2009-09-06 Thread lplatypus

New submission from lplatypus:

On Windows, the urllib2 module (renamed to urllib.request in Python 3)
does not use the system web proxy for https URLs when "Use the same proxy
for all protocols" is selected in the Internet Explorer proxy settings.

Attached is a patch against urllib/request.py in Python 3.1.1.
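As a hedged illustration of a user-level workaround (this is not the attached 
patch, which fixes urllib/request.py itself; reusing the http entry for https 
is an assumption that matches the "same proxy for all protocols" setting):

    import urllib.request

    proxies = urllib.request.getproxies()   # on Windows this reads the IE proxy settings
    if 'http' in proxies and 'https' not in proxies:
        # A single configured proxy yields only an http entry, so reuse it for https.
        proxies['https'] = proxies['http']

    opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
    # opener.open("https://example.com/") would now go through the proxy.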

--
components: Library (Lib)
files: urllib-httpsproxy.patch
keywords: patch
messages: 92342
nosy: ldeller
severity: normal
status: open
title: system proxy not used for https (on windows)
type: behavior
versions: Python 2.6, Python 3.1
Added file: http://bugs.python.org/file14850/urllib-httpsproxy.patch


[issue5223] infinite recursion in PyErr_WriteUnraisable

2009-03-08 Thread lplatypus

lplatypus added the comment:

Hi amaury, I am copying you into this issue because I think it was
introduced in your revision 65320, when you added a call to
PyErr_WriteUnraisable from PyErr_GivenExceptionMatches in Python/errors.c.
Any thoughts on this issue or how to fix it would be very welcome.

--
message_count: 3.0 -> 4.0
nosy: +amaury.forgeotdarc
nosy_count: 2.0 -> 3.0


[issue25769] Crash due to using weakref referent without acquiring a strong reference

2015-11-29 Thread lplatypus

New submission from lplatypus:

I have encountered some crashes in a multithreaded application which appear to 
be due to a bug in weakref_richcompare in Objects/weakrefobject.c.

(I am using Python 2.7.9, but the same weakref code exists in 3.5 and the hg 
default branch too.)

weakref_richcompare ends with the statement:

    return PyObject_RichCompare(PyWeakref_GET_OBJECT(self),
                                PyWeakref_GET_OBJECT(other), op);

At this point the code has established that the referents of "self" and "other" 
are still alive, and it is trying to compare them.  However, it has not acquired 
a strong reference to either referent, so I think it is possible for one of them 
to be deleted halfway through this comparison.  This can lead to a crash, 
because PyObject_RichCompare assumes that the PyObject pointers it was passed 
will remain usable for the duration of the call.

The crash dumps I have seen involve data corruption consistent with one of 
these PyObject's being deleted and its memory reused for something else, e.g.:

00 python27!try_3way_compare+0x15 [objects\object.c @ 712]
01 python27!try_3way_to_rich_compare+0xb [objects\object.c @ 901]
02 python27!do_richcmp+0x2c [objects\object.c @ 935]
03 python27!PyObject_RichCompare+0x99 [objects\object.c @ 982]
04 python27!weakref_richcompare+0x6a [objects\weakrefobject.c @ 212]

In this example, in try_3way_compare the value of v->ob_type was 0x5f637865, 
which is ASCII "exc_" and not a valid pointer at all.

Other places in weakrefobject.c seem to have a similar weakness, e.g. in 
weakref_hash and weakref_repr.

I have not been successful in producing a small test case to demonstrate this 
crash.

--
components: Interpreter Core
messages: 255608
nosy: ldeller
priority: normal
severity: normal
status: open
title: Crash due to using weakref referent without acquiring a strong reference
type: crash
versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6


[issue25769] Crash due to using weakref referent without acquiring a strong reference

2015-11-29 Thread lplatypus

lplatypus added the comment:

I think the fix for this is simply a matter of using Py_INCREF/Py_DECREF around 
each use of the referent.  This should only be necessary for non-trivial uses 
where the GIL might be released.  Here is a patch.
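As an aside, the same discipline can be illustrated at the Python level (this 
is only an analogy for the C-level fix, not the patch itself): promote the weak 
reference to a strong reference before touching the referent.

    import weakref

    class Thing(object):
        def __init__(self, value):
            self.value = value
        def __eq__(self, other):
            return isinstance(other, Thing) and self.value == other.value

    t = Thing(1)
    r = weakref.ref(t)

    obj = r()                    # take a strong reference first
    if obj is not None:
        print(obj == Thing(1))   # obj cannot be collected during the comparison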

--
keywords: +patch
Added file: http://bugs.python.org/file41195/issue25769-weakref.patch
