Changes by Stefan Krah :
--
assignee: eric.araujo ->
resolution: duplicate ->
stage: committed/rejected -> needs patch
status: closed -> open
___
Python tracker
<http://bugs.python
Changes by Stefan Krah :
--
stage: needs patch -> patch review
___
Python tracker
<http://bugs.python.org/issue16779>
___
Stefan Behnel added the comment:
> Also, PEP 8 forbids using annotations in the CPython library, which
> includes all of CPython's builtins. So using annotations in any way
> for this was off the table.
Interesting, wasn't aware of that. Then let's wait what will
Stefan Behnel added the comment:
>> inspect.isbuiltin() returns False
> Are you absolutely sure about this?
Yes. The "inheritance" of Cython's function type from PyCFunction is a pure
implementation detail of the object struct layout that is not otherwise
visible in any
Stefan Behnel added the comment:
> What "existing function introspection API"? I wasn't aware there was an
> existing mechanism to provide signature metadata for builtin functions.
Not for builtin functions, but it's unclear to me why the API of builtin
functions sho
Stefan Behnel added the comment:
Python 3.4.0b3+ (default:19d81cc213d7, Feb 1 2014, 10:38:23)
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def test(a,b,c=None): pass
>>> set(dir(t
Stefan Behnel added the comment:
"""
>>> test.__code__.co_varnames
()
>>> test.__code__.co_varnames
()
>>> test.__code__.co_varnames
('a', 'b', 'c')
"""
copy&pasto, please ignore the first two... :o)
Stefan Behnel added the comment:
> [...] a "builtin code object" (PyCFunctionObject) [...] doesn't have any of
> the metadata you cited.
That exactly is the bug.
I think we should take the further discussion offline, or mov
Stefan Behnel added the comment:
> I understand Stefan to (reasonably) want 1 api instead of 2.
Absolutely. However, I guess the underlying reasoning here is that there
are other callables, too, so it's not worth making just two kinds of them
look the same, even if both are functions
Stefan Krah added the comment:
Cross compiling for arm works here on Ubuntu:
$ cat config.site
ac_cv_file__dev_ptmx=no
ac_cv_file__dev_ptc=no
$ export CONFIG_SITE=$PWD/config.site
$ ./configure --host=arm-linux-gnueabi --build=x86_64 --disable-ipv6
$ make
I cannot test though, since I don
Stefan Krah added the comment:
Mauricio de Alencar wrote:
> String formatting is completely unaware of the concept of *significant
> digits*.
>>> format(Decimal(1), ".2f")
'1.00'
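As an aside, the 'g' presentation type does work in significant digits,
while 'f' works in digits after the decimal point; a minimal illustration:
>>> from decimal import Decimal
>>> format(Decimal("256.2875"), ".4g")   # four significant digits
'256.3'
>>> format(Decimal("256.2875"), ".4f")   # four digits after the point
'256.2875'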
--
___
Python tracker
Stefan Krah added the comment:
Mauricio de Alencar wrote:
>
> Mauricio de Alencar added the comment:
>
> "Digits after the decimal mark" is not the same as "significant digits".
> See https://en.wikipedia.org/wiki/Significant_figures
>
> If I have
Stefan Krah added the comment:
Mauricio de Alencar wrote:
> The floats I posted are examples of computation results. The meaningful
> figures are related to the precision of the measurements fed to the
> computation.
Thank you, that makes it clear. Constructing Decimal('256.2
Changes by Stefan Behnel :
--
nosy: +scoder
___
Python tracker
<http://bugs.python.org/issue20485>
___
New submission from Stefan Krah:
As I understand it, _decimal_to_ratio() should always produce an
integer ratio. But it does not for positive exponents:
>>> import statistics
>>> statistics.mean([Decimal("100"), Decimal("200")])
Decimal('150')
&
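For reference, a minimal sketch (not the statistics module's actual helper)
of an exact integer-ratio conversion that also handles positive exponents:

from decimal import Decimal

def decimal_to_integer_ratio(d):
    """Exact (numerator, denominator) for a finite Decimal; not reduced."""
    sign, digits, exp = d.as_tuple()
    num = int("".join(map(str, digits)))
    if sign:
        num = -num
    if exp >= 0:
        return num * 10**exp, 1   # positive exponent: scale the numerator
    return num, 10**-exp          # negative exponent: power of ten below

print(decimal_to_integer_ratio(Decimal("1E2")))   # (100, 1)
print(decimal_to_integer_ratio(Decimal("1.25")))  # (125, 100)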
Stefan Krah added the comment:
OverflowError seems like a good choice if only values in the range
of a C long are accepted. ValueError would perhaps be more intuitive,
since the user normally only cares about the accepted range of
input values rather than the internal details.
--
nosy
Stefan Krah added the comment:
We can add a fast Decimal.as_integer_ratio() in C.
That said, why is the sum of Decimals not done in decimal arithmetic
with a very high context precision? It would be exact, and with the usual
exponents in the range [-384, 383] it should be very fast.
>>&g
Stefan Krah added the comment:
Oscar Benjamin wrote:
> If you're going to use decimals though then you can trap inexact and
> keep increasing the precision until it becomes exact.
For sums that is not necessary. Create a context with MAX_EMAX, MIN_EMIN and
MAX_PREC and mpd_a
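A rough sketch of that idea in Python, assuming the MAX_PREC/MAX_EMAX/MIN_EMIN
constants exported by the decimal module: with a maximal context, the
additions below cannot round, so the running sum stays exact.

from decimal import Decimal, Context, MAX_PREC, MAX_EMAX, MIN_EMIN

exact = Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)

total = Decimal(0)
for value in [Decimal("1E+5"), Decimal("1E-5"), Decimal("0.125")]:
    total = exact.add(total, value)   # no rounding at this precision
print(total)                          # 100000.12501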
Stefan Krah added the comment:
I must say that I'm moderately against these kinds of changes since
the benefit is small. The original reason for keeping the older
forms of assert* was to keep the diffs between 2.5-3.x manageable.
Perhaps that reason is gone now, but still: If anything ch
Stefan Krah added the comment:
I slightly favor the ValueError patch because there is only a single exception
to catch. PyLong_AsUnsignedLong() also raises OverflowError for both positive
values that are too large and for negative values.
Perhaps the error message could contain the actual range
Stefan Krah added the comment:
This looks like a duplicate of #20536. Steven, do you think you
have a chance to fix this before rc1?
--
keywords: +3.4regression
___
Python tracker
<http://bugs.python.org/issue20
Changes by Stefan Krah :
--
keywords: -3.4regression
___
Python tracker
<http://bugs.python.org/issue20561>
___
Stefan Krah added the comment:
Ian, could you please provide an example where multi-dimensional
indexing and slicing works in 2.x but not in 3.3?
--
___
Python tracker
<http://bugs.python.org/issue14
Stefan Krah added the comment:
Thanks, Ian. It seems to me that these issues should be sorted out
on the NumPy lists:
memoryview is not a drop-in replacement for buffer, so it has
different semantics.
What might help you is that you can cast any memoryview to
simple bytes without making a copy
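For instance, a rough sketch with the stdlib array module (sizes assume
4-byte C ints): casting through the 'B' format reinterprets a C-contiguous
buffer without copying, and a second cast reshapes it.

import array

buf = array.array('i', range(12))           # 12 C ints
flat = memoryview(buf).cast('B')            # flat unsigned-byte view, no copy
grid = flat.cast('i', shape=[3, 4])         # reshape to 3x4 ints, still no copy
print(flat.nbytes, grid.shape, grid[1, 2])  # 48 (3, 4) 6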
Stefan Behnel added the comment:
>>> inspect.isbuiltin() returns False
>> Are you absolutely sure about this?
> Yes.
Oh, well...
isbuiltin(cyfunction) *does* return False. However,
ismethoddescriptor(cyfunction) returns True, because Cython's functions bind as
Stefan Behnel added the comment:
BTW, ismethoddescriptor() is an exceedingly broad test, IMHO. I wonder if it
should even be relied upon for anything in inspect.py.
--
___
Python tracker
<http://bugs.python.org/issue17
Stefan Behnel added the comment:
> Since this is a problem in Cython, not in CPython, maybe you can fix it in
> Cython?
I'm actually considering that. Now that Signature.from_function() allows
function-like types, it appears like it's the easiest solution to add a
"__sig
Stefan Behnel added the comment:
> Oh, sound like a big hack.
Well, it's certainly a bunch of overhead (even assuming that "inspect" will
most likely be imported already - just looked it up in import.c, there's
lots of useless generic code there), with a lot of potent
Changes by Stefan Behnel :
--
nosy: +scoder
___
Python tracker
<http://bugs.python.org/issue20632>
___
Stefan Behnel added the comment:
I tested it and it works, so I could take the simple route now and say "yes, it
fixes the problem", but it's actually no longer required because I already
added a "__signature__" property to Cython's functions. However, as Yury
Stefan Krah added the comment:
> Barring c++, are we using any C compilers that don't support inlines?
Not that I know of. libmpdec is C99, which seems to be supported by all
obscure commercial compilers on snakebite.
Also there have been no 3.x bug reports due to compilers choking o
Stefan Krah added the comment:
aubmoon: Would it be a possibility just to use 'f' instead?
>>> "{:+10.7f}".format(1.12345678)
'+1.1234568'
>>> "{:+10.7f}".format(0.12345678)
'+0.1234568'
--
nosy: +skrah
___
Stefan Krah added the comment:
No error here.
--
nosy: +skrah
resolution: -> invalid
status: open -> closed
___
Python tracker
<http://bugs.python.org/i
Stefan Krah added the comment:
I like the current behavior. We have modulo arithmetic here and
bool(96%24) is false, too.
--
nosy: +skrah
___
Python tracker
<http://bugs.python.org/issue13
Stefan Krah added the comment:
Unix time modulo 86400 gives the number of elapsed seconds in a day
and is zero at midnight. Also, modular arithmetic is colloquially
called "clock arithmetic" for a reason.
--
___
Python trac
Changes by Stefan Krah :
--
nosy: -skrah
___
Python tracker
<http://bugs.python.org/issue13936>
___
Stefan Behnel added the comment:
My latest status is that a decision on the future of the "parser" argument is
still pending. See #20219.
It's correctly deprecated in the sense that passing any previously existing
parser isn't going to be supported anymore, but passing an
New submission from Stefan Behnel :
Here's a simple coroutine that works perfectly in Python 2.6 but seems
to let Py3.1 enter an infinite loop that ends up eating all memory.
-
def printing_sink():
    "A simple sink that prints the received values."
Stefan Behnel added the comment:
Hmm, ok, so this is actually an anticipated bug? And I assume this has
been discussed before and it was decided to solve it by doing... what?
Is it documented somewhere why this happens and what one must avoid in
order not to run into this kind of pitfall
Changes by Stefan Behnel :
--
status: closed -> open
___
Python tracker
<http://bugs.python.org/issue6673>
___
Stefan Behnel added the comment:
Very good argumentation, thanks Nick!
I think this is worth being fixed in the 3.1 series.
--
___
Python tracker
<http://bugs.python.org/issue6
New submission from Stefan Krah :
--- a-decimal.py    2009-08-28 11:48:45.0 +0200
+++ b-decimal.py    2009-08-28 11:49:47.0 +0200
@@ -4845,7 +4845,7 @@
log_tenpower = f*M # exact
else:
log_d = 0 # error < 2.31
-log_tenpower = div_neares
New submission from Stefan Krah :
Hi,
I believe the following comparisons aren't correct:
1:
Decimal("-sNaN63450748854172416").compare_total(Decimal("-sNaN911993"))
==> Decimal('1')
Should be: Decimal('-1') (checked against decNumber)
New submission from Stefan Krah :
Hi,
a couple of minor issues:
1:
>>> c = getcontext()
>>> c.traps[InvalidOperation] = False
>>> Decimal("NaN").__int__()
Decimal('NaN')
I think the return value should be None.
2:
>>> c = getcontext(
Stefan Krah added the comment:
Yes, it is also fixed in 2.6 maintenance. I was hoping it could go into
2.5 maintenance.
--
___
Python tracker
<http://bugs.python.org/issue6
New submission from Stefan Krah :
Hi,
it looks like format_dict['type'] is not always initialized:
>>> from decimal import *
>>> format(Decimal("0.12345"), "a=-7.0")
Traceback (most recent call last):
File "", line 1, in
File
Stefan Krah added the comment:
Eric Smith wrote:
> The test as written will always give an error for None. I think the
> better fix is to change it to be:
>
> if format_dict['type'] is None or format_dict['type'] in 'gG':
>
> That "f
Stefan Krah added the comment:
[...]
> But in Python this error condition *can* 'otherwise be indicated', by
> raising a suitable Python exception. So I propose changing the decimal
> module in 2.7 and 3.2 so that int(Decimal('nan')) and
> long(Decimal(
New submission from Stefan Krah :
format(float("0.12345"), "7.0") -> '0.1'
The default alignment should be 'left-aligned'.
--
messages: 92370
nosy: skrah
severity: normal
status: open
title: float().__format__() default alignment
v
Stefan Krah added the comment:
Yes, I'll do that. - The tracker has eaten my examples, so hopefully
this goes through:
1. format(Decimal("0.12345"), "7.1") -> '0.1'
2. format(Decimal("0.12345"), "7.0g") -> '0.1&
Changes by Stefan Krah :
--
nosy: +eric.smith
___
Python tracker
<http://bugs.python.org/issue6857>
___
New submission from Stefan Krah :
Hi,
I've two more issues where format behavior should probably be identical:
1: (version 2.6 vs. 3.1):
Version 2.6:
>>> format(Decimal("NaN"), "+08.4")
'+NaN'
>>> format(float("NaN"), &quo
Stefan Krah added the comment:
Issue 1:
I would definitely keep the spelling in decimal, my concern was only the
padding.
The C standard agrees with Mark's view:
"Leading zeros (following any indication of sign or base) are used to
pad to the field width rather than performing spa
Stefan Krah added the comment:
Issue 3 is nonsense, '-' means left-justified in C. Sorry.
--
___
Python tracker
<http://bugs.python.org/issue6871>
___
New submission from Stefan Schwarzburg :
In the last example in the readline documentation
(http://docs.python.org/library/readline.html), the line
code.InteractiveConsole.__init__(self)
should be changed to
code.InteractiveConsole.__init__(self, locals, filename)
to work properly
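A minimal sketch of the corrected constructor (history-file handling from
the docs' example omitted):

import code

class HistoryConsole(code.InteractiveConsole):
    def __init__(self, locals=None, filename="<console>"):
        # forward both arguments instead of dropping them
        code.InteractiveConsole.__init__(self, locals, filename)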
New submission from Stefan Krah :
In many cases, decimal.py sets InvalidOperation instead of
DivisionImpossible or DivisionUndefined.
Mark, could I persuade you to isolate these cases by running a modified
deccheck2.py from mpdecimal (See attachment), which does not suppress
differences in the
New submission from Stefan Krah :
decimal.py sets InvalidOperation if the payload of a NaN is too large:
>>> c = getcontext()
>>> c.prec = 4
>>> c.create_decimal("NaN12345")
Traceback (most recent call last):
File "", line 1, in
File
Changes by Stefan Krah :
--
nosy: +mark.dickinson
___
Python tracker
<http://bugs.python.org/issue7047>
___
New submission from Stefan Krah :
>>> from decimal import *
>>> c = getcontext()
>>> c.prec = 2
>>> c.logb(Decimal("1E123456"))
Decimal('123456')
>>>
This result agrees with the result of decNumber, but the spec says:
&q
New submission from Stefan Krah :
If precision 1 is supported, the following results should not be NaN:
Python 2.7a0 (trunk:74738, Sep 10 2009, 11:50:08)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Changes by Stefan Krah :
--
nosy: +mark.dickinson
___
Python tracker
<http://bugs.python.org/issue7049>
___
Stefan Krah added the comment:
Thanks for the explanation, and I agree that decimal.py is perfectly
correct. I based the report on the fact that decNumber updates the
context status with e.g. Division_impossible. But Division_impossible is
one of the flags that form IEEE_754_Invalid_operation
Stefan Krah added the comment:
I don't think aborting early based on m and the current precision is a
good idea. Then we have the situation that this works (prec=4):
(Decimal(7) ** 2) % 10
But this does not:
pow(Decimal(7), 2, 1
Stefan Krah added the comment:
This whole thing is indeed a matter of taste, so I'd close the bug if no
one else is interested.
--
___
Python tracker
<http://bugs.python.org/i
Stefan Krah added the comment:
Deprecate on the grounds that it is slow in decimal.py or the
InvalidOperation issue?
I think pure integer arithmetic with the decimal type always requires
attention from the user, since in many functions one has to check for
Rounded/Inexact in order to get
Stefan Krah added the comment:
precision: 34
maxExponent: 9
minExponent: -9
-- integer overflow in 3.61 or earlier
scbx164 scaleb 1E-9 -12 -> NaN Invalid_operation
-- out of range
scbx165 scaleb -1E-9 +12 -> NaN Invalid_operation
I
Stefan Krah added the comment:
(1) is clearly true. I wonder about (2) and (3):
The decimal data type is specified to be usable for integer arithmetic.
With a high precision (and traps for Rounded/Inexact) I think it's
reasonably convenient t
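A hedged sketch of that working style: raise the precision and trap Rounded
and Inexact so any silent loss of exactness raises instead of going unnoticed.

from decimal import Decimal, Context, Inexact, Rounded

ctx = Context(prec=50, traps=[Inexact, Rounded])
print(ctx.power(Decimal(7), Decimal(40)))   # exact: 7**40 has 34 digits
# ctx.divide(Decimal(1), Decimal(3)) would raise Inexact here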
New submission from Stefan Behnel :
Running the Cython compiler under Python 3.1.1 and 3.2 (SVN) corrupts
PyThreadState->exc_value by leaving a dead reference. Printing the value
then leads to a crash.
This bug is about plain Python code, no Cython built extension modules
involved.
Steps
Stefan Behnel added the comment:
I should add that the crash doesn't necessarily happen during the first
test run, which also converts the Cython source to Py3 using 2to3.
However, once that's done, running the test a second time crashe
New submission from Stefan Krah :
Hi,
I got two issues with the all-important function rotate():
1. It should probably convert an integer argument:
>>> from decimal import *
>>> c = getcontext()
>>> c.prec = 4
>>> Decimal("10").rotat
New submission from Stefan Krah :
In the following case, Decimal() and int() behave differently. I wonder
if this is intentional:
>>> from decimal import *
>>> x = Decimal(2)
>>> y = Decimal(x)
>>> id(x) == id(y)
False
>>>
>>
New submission from Stefan Krah :
I'm not sure this is a bug, but I am trying to understand the rationale
for mimicking IEEE 754 for == and != comparisons involving NaNs. The
comment in decimal.py says:
"Note: The Decimal standard doesn't cover rich comparisons for Decimals.
In
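For reference, the behavior in question:
>>> from decimal import Decimal
>>> Decimal('NaN') == Decimal('NaN')
False
>>> Decimal('NaN') != Decimal('NaN')
True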
New submission from Stefan Krah :
Sorry to report so many obscure corner cases. With the combination
OpenSolaris/suncc/Python3.x, copysign() gives reversed results when the
second argument is a NaN. The bug is in the C99 copysign() function
(OpenSolaris/suncc), but Python2.6 seems to have a
Changes by Stefan Krah :
--
type: -> behavior
___
Python tracker
<http://bugs.python.org/issue7279>
___
Changes by Stefan Krah :
--
type: -> behavior
___
Python tracker
<http://bugs.python.org/issue7278>
___
Stefan Krah added the comment:
I hope this won't be getting too complex. :)
Firstly, I agree that this is perhaps not a bug at all. I reported it
because it seemed possible that Python2.x had a deliberate workaround for
this issue which somehow got lost in 3.x.
Secondly, I didn't me
Stefan Krah added the comment:
I can confirm that short float repr() is active and all float tests
pass on this combination:
Ubuntu64bit -> KVM -> OpenSolaris32bit/Python3.2/gcc
--
nosy: +skrah
___
Python tracker
<http://bugs.p
Stefan Krah added the comment:
The inline asm compiles, but I don't know how good the GNU inline asm
support is with suncc in general. I'm not a heavy user of suncc, I just
use it for testing.
That said, perhaps fesetprec works, too:
http://docs.sun.com/app/docs/doc/816-5172/fe
Stefan Krah added the comment:
If gcc and suncc are present, ./configure chooses gcc and everything is
fine.
If only suncc is present, it's detected as cc. These tests should be
possible:
ste...@opensolaris:~/svn/py3k$ cc -V
cc: Sun C 5.9 SunOS_i386 Patch 124868-07 2008/10/07
usag
Stefan Krah added the comment:
My copy is 32-bit. I never installed a 64-bit version, but I strongly
assume that uname -p gives x86_64. BTW, uname -p works on Solaris, but
returns 'unknown' on my 64 bit Linux.
--
___
Python trac
Stefan Krah added the comment:
Tested the patch against an updated 3.2. The repr style is 'short', and I
did not see obvious float errors. In particular, test_float.py passes.
I also did not see new compile warnings.
As for the other tests, the errors I get seem to be the same with
Stefan Krah added the comment:
The tests that you mention run o.k., except capi, but that looks harmless:
ste...@opensolaris:~/svn/py3k/Lib/test# ../../python test_capi.py
test_instancemethod (__main__.CAPITest) ... ok
--
Ran
Stefan Krah added the comment:
Yes, test_ascii_formatd fails with 'ImportError: No module named _ctypes'.
--
___
Python tracker
<http://bugs.python.
New submission from Stefan Krah :
This issue affects the format functions of float and decimal.
When calculating the padding necessary to reach the minimum width,
UTF-8 separators and decimal points are measured by their byte
lengths. This can lead to printed representations that are too
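A rough illustration of the mismatch, assuming a locale whose thousands
separator is U+00A0 NO-BREAK SPACE (as in cs_CZ or fi_FI): the separator is
one character wide but two bytes in UTF-8, so byte-based width math pads
too little.

sep = '\u00a0'                                # NO-BREAK SPACE
text = '1' + sep + '234.5'
print(len(text), len(text.encode('utf-8')))   # 7 characters, 8 bytes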
Stefan Behnel added the comment:
I hadn't, but it looks like the 2to3-ed Cython also runs on 3.0 now, so
I tested that, but I failed to get the procedure below to crash for me.
And that's both in 3.0 *and* 3.1.1! :-/
But I can still provoke the crash in 3.0, 3.0.1, 3.1.1 and the
Stefan Behnel added the comment:
The patch is supposed to apply near the end of the class
TreeAssertVisitor at the end of the file Cython/TestUtils.py, not in the
class NodeTypeWriter.
And the test doesn't run (or even import) the extension, it just buil
Stefan Krah added the comment:
I agree that it might add confusion. In the C module, I currently
do this:
>>> Decimal(0) / 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
cdecimal.InvalidOperation: []
But since you already have a detailed error message, this doe
Stefan Krah added the comment:
What do you mean by "working with bytestrings"? The UTF-8 separators or
decimal points come directly from struct lconv (man localeconv). The
logical way to reach a minimum width of 19 is to have 19 UTF-8
characters, which can subsequently be converte
New submission from Stefan Behnel :
PyUnicode_FromEncodedObject() currently calls PyObject_AsCharBuffer() to
get the buffer pointer and length of a buffer-supporting object. It
should be changed to support the buffer protocol correctly instead.
I filed this as a crash bug as the buffer protocol
Stefan Krah added the comment:
In python3.2, the output of decimal looks good. With float, the
separator is printed as two spaces on my Unicode terminal (export
LC_ALL=cs_CZ.UTF-8).
So decimal (3.2) interprets the separator string as a single UTF-8 char
and the final output is a UTF-8 string
Stefan Krah added the comment:
Googling "multi-byte thousands separator" gives better results. From
those results, it is clear to me that decimal_point and thousands_sep
are strings that may be interpreted as multi-byte characters. The Czech
separator appears to be a no-break space
New submission from Stefan Krah :
Hi, the following works in 2.7 but not in 3.x:
>>> import locale
>>> from decimal import *
>>> locale.setlocale(locale.LC_NUMERIC, 'fi_FI')
'fi_FI'
>>> format(Decimal('1000'), 'n'
Stefan Krah added the comment:
This fails in _localemodule.c: str2uni(). mbstowcs(NULL, s, 0) is
LC_CTYPE sensitive, but LC_CTYPE is UTF-8 in my terminal.
If I set LC_CTYPE and LC_NUMERIC together, things work.
This raises the question: If LC_CTYPE and LC_NUMERIC differ (and
since they are
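A hedged sketch of the workaround (assumes the fi_FI.UTF-8 locale is
installed): setting the categories together, e.g. via LC_ALL, keeps
mbstowcs() and the numeric locale data consistent.

import locale
from decimal import Decimal

locale.setlocale(locale.LC_ALL, 'fi_FI.UTF-8')   # numeric and ctype agree
print(format(Decimal('1000'), 'n'))              # grouped with the locale's separator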
Stefan Krah added the comment:
Segfault confirmed on 64 bit Ubuntu, Python 3.2a0:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f5074dea6e0 (LWP 11665)]
0x0042111b in _PyTuple_Resize (pv=0x7fff7ce03b10, newsize=25) at
Objects/tupleobject.c:853
853
Stefan Krah added the comment:
I have the same issue with the Express edition. You can work around it
by finding and executing vcvars32.bat or vcvars64.bat before running
setup.py. It would be nice if distutils took care of it though.
--
nosy: +skrah
Stefan Krah added the comment:
I think we have two issues here:
First, the default install of VS Express does not support 64-bit tools,
so distutils cannot work with a 64-bit Python install. I verified that
it _does_ work with a 32-bit Python install.
Second, it is possible to install 64-bit
New submission from Stefan Schwarzburg :
The documentation of multiprocessing.managers.BaseManager
(http://docs.python.org/library/multiprocessing.html#module-multiprocessing.managers)
refers to a method "serve_forever". This method is only available on the
server object inside BaseMa
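In other words (address and authkey below are made up), the documented call
has to go through get_server():

from multiprocessing.managers import BaseManager

manager = BaseManager(address=('127.0.0.1', 50000), authkey=b'secret')
server = manager.get_server()
server.serve_forever()   # blocks, serving requests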
Stefan Krah added the comment:
I needed a new Windows VM image anyway, so I can now confirm that the
paths of a fresh VS Express + SDK 64-bit tools install are broken as
described above.
--
___
Python tracker
<http://bugs.python.org/issue7
Stefan Krah added the comment:
Yes, it's a problem in _localemodule.c. This situation always
occurs when LC_NUMERIC is something like ISO8859-15, LC_CTYPE
is UTF-8 AND the decimal point or separator are in the range
128-255. Then mbstowcs tries to decode the character according
to LC_CTYP
Stefan Krah added the comment:
Changed title (was: decimal.py: format failure with locale specifier)
--
title: decimal.py: format failure with locale specifier -> _localemodule.c:
str2uni() with different LC_NUMERIC and LC_CTYPE
___
Python trac