On Oct 4, 2006, at 8:14 PM, Martin v. Löwis wrote:
> If it breaks a few systems, that already is some systems too many.
> Python should never crash; and we have no control over the floating
> point exception handling in any portable manner.
You're quite right, though there is already plenty o
Kristján V. Jónsson schrieb:
> Hm, doesn't seem to be so for my regular python.
>
> maybe it is 2.3.3, or maybe it is stackless from back then.
It's because you are using Windows. The way -0.0 gets rendered
depends on the platform. As Tim points out, try
math.atan2(0.0, -0.0) vs math.atan2(0.0, 0.0).
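The point is easy to check interactively; assuming IEEE 754 doubles (as on the platforms under discussion), atan2 respects the sign of a zero argument even where repr() hides it:

>>> import math
>>> math.atan2(0.0, -0.0)   # pi, because the second argument is negative zero
3.1415926535897931
>>> math.atan2(0.0, 0.0)
0.0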
Alastair Houghton schrieb:
> AFAIK few systems have floating point traps enabled by default (in fact,
> isn't that what IEEE 754 specifies?), because they often aren't very
> useful. And in the specific case of the Python interpreter, why would
> you ever want them turned on?
That reasoning is ir
On Wed, Oct 04, 2006 at 12:42:04AM -0400, Tim Peters wrote:
>
> > If C90 doesn't distinguish -0.0 and +0.0, how can Python?
>
> > Can you give a simple example where the difference between the two
> > is apparent to the Python programmer?
>
> Perhaps surprisingly, many (well, comparatively many, c
>>> y = 0.0
>>> x,y
(0.0, 0.0)
>>>
maybe it is 2.3.3, or maybe it is stackless from back then.
K
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]
> On Behalf Of "Martin v. Löwis"
> Sent: 3. október 2006 17:56
> To: [E
On Wed, Oct 04, 2006 at 12:42:04AM -0400, Tim Peters wrote:
> [EMAIL PROTECTED]
> > If C90 doesn't distinguish -0.0 and +0.0, how can Python?
>
> With liberal applications of piss & vinegar ;-)
>
> > Can you give a simple example where the difference between the two is
> > apparent
> > to the Py
James Y Knight <[EMAIL PROTECTED]> wrote:
>
> This is a really poor argument. Python should be moving *towards*
> proper '754 fp support, not away from it. On the platforms that are
> most important, the C implementations distinguish positive and
> negative 0. That the current python impleme
Alastair Houghton <[EMAIL PROTECTED]> wrote:
>
> AFAIK few systems have floating point traps enabled by default (in
> fact, isn't that what IEEE 754 specifies?), because they often aren't
> very useful.
The first two statements are true; the last isn't. They are extremely
useful, not least b
On 4 Oct 2006, at 02:38, Josiah Carlson wrote:
> Alastair Houghton <[EMAIL PROTECTED]> wrote:
>
> There is, of course, the option of examining their representations in
> memory (I described the general technique in another posting on this
> thread). From what I understand of IEEE 754 FP doubles,
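The memory-inspection technique referred to here is cut off; a minimal pure-Python sketch of the idea (an illustrative reconstruction, assuming the usual 64-bit IEEE 754 double format) would be:

import struct

def is_negative_zero(x):
    # Look at the raw bits: a zero whose sign bit is set is -0.0, even
    # though it compares equal to +0.0 and may print as plain '0.0'.
    bits, = struct.unpack('>Q', struct.pack('>d', x))
    return bits == 0x8000000000000000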
On 4 Oct 2006, at 06:34, Martin v. Löwis wrote:
> Alastair Houghton schrieb:
>> On 3 Oct 2006, at 17:47, James Y Knight wrote:
>>
>>> On Oct 3, 2006, at 8:30 AM, Martin v. Löwis wrote:
As Michael Hudson observed, this is difficult to implement, though:
You can't distinguish between -0.0
Alastair Houghton schrieb:
> On 3 Oct 2006, at 17:47, James Y Knight wrote:
>
>> On Oct 3, 2006, at 8:30 AM, Martin v. Löwis wrote:
>>> As Michael Hudson observed, this is difficult to implement, though:
>>> You can't distinguish between -0.0 and +0.0 easily, yet you should.
>>
>> Of course you ca
Steve Holden <[EMAIL PROTECTED]> wrote:
> Josiah Carlson wrote:
> [yet more on this topic]
>
> If the brainpower already expended on this issue were proportional to
> its significance then we'd be reading about it on CNN news.
Goodness, I wasn't aware that pointer manipulation took that much
br
[Tim]
>> Someone (Fred, I think) introduced a front-end optimization to
>> collapse that to plain LOAD_CONST, doing the negation at compile time.
> I did the original change to make negative integers use just LOAD_CONST, but I
> don't think I changed what was generated for float literals. That co
On Wednesday 04 October 2006 00:53, Tim Peters wrote:
> Someone (Fred, I think) introduced a front-end optimization to
> collapse that to plain LOAD_CONST, doing the negation at compile time.
I did the original change to make negative integers use just LOAD_CONST, but I
don't think I changed wh
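What a particular interpreter actually generates is easy to inspect with dis; the folding behaviour has changed between versions, so this only shows how to look, not what every version does:

import dis

# With the front-end optimisation in effect, a negative literal compiles
# to a single LOAD_CONST with no UNARY_NEGATIVE.
dis.dis(compile("x = -5", "<example>", "exec"))
dis.dis(compile("y = -0.0", "<example>", "exec"))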
[EMAIL PROTECTED]
> Can you give a simple example where the difference between the two is apparent
> to the Python programmer?
BTW, I don't recall the details and don't care enough to reconstruct
them, but when Python's front end was first changed to recognize
"negative literals", it treated +0.0
[EMAIL PROTECTED]
> If C90 doesn't distinguish -0.0 and +0.0, how can Python?
With liberal applications of piss & vinegar ;-)
> Can you give a simple example where the difference between the two is apparent
> to the Python programmer?
Perhaps surprisingly, many (well, comparatively many, compared
On 10/3/06, Steve Holden <[EMAIL PROTECTED]> wrote:
> If the brainpower already expended on this issue were proportional to
> its significance then we'd be reading about it on CNN news.
>
> This thread has disappeared down a rat-hole, never to re-emerge with
> anything of significant benefit to use
Josiah Carlson wrote:
[yet more on this topic]
If the brainpower already expended on this issue were proportional to
its significance then we'd be reading about it on CNN news.
This thread has disappeared down a rat-hole, never to re-emerge with
anything of significant benefit to users. C'mon,
Alastair Houghton <[EMAIL PROTECTED]> wrote:
> On 3 Oct 2006, at 17:47, James Y Knight wrote:
>
> > On Oct 3, 2006, at 8:30 AM, Martin v. Löwis wrote:
> >> As Michael Hudson observed, this is difficult to implement, though:
> >> You can't distinguish between -0.0 and +0.0 easily, yet you should.
On 3 Oct 2006, at 17:47, James Y Knight wrote:
> On Oct 3, 2006, at 8:30 AM, Martin v. Löwis wrote:
>> As Michael Hudson observed, this is difficult to implement, though:
>> You can't distinguish between -0.0 and +0.0 easily, yet you should.
>
> Of course you can. It's absolutely trivial. The only
> > It would be instructive to understand how much, if any, python code
> > would break if we lost -0.0. I do not believe that there is any
> > reliable way for python code to tell the difference between all of
> > the different types of IEEE 754 zeros and in the special case of -0.0
> >
Nick Maclaren schrieb:
>> py> x=-0.0
>> py> y=0.0
>> py> x,y
>
> Nobody is denying that SOME C90 implementations distinguish them,
> but it is no part of the standard - indeed, a C90 implementation is
> permitted to use ANY criterion for deciding when to display -0.0 and
> 0.0. C99 is ambiguous t
On Oct 3, 2006, at 2:26 PM, Nick Maclaren wrote:
> "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>>
>> py> x=-0.0
>> py> y=0.0
>> py> x,y
>
> Nobody is denying that SOME C90 implementations distinguish them,
> but it is no part of the standard - indeed, a C90 implementatio
"Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>
> py> x=-0.0
> py> y=0.0
> py> x,y
Nobody is denying that SOME C90 implementations distinguish them,
but it is no part of the standard - indeed, a C90 implementation is
permitted to use ANY criterion for deciding when to displ
[EMAIL PROTECTED] schrieb:
> If C90 doesn't distinguish -0.0 and +0.0, how can Python? Can you give a
> simple example where the difference between the two is apparent to the
> Python programmer?
Sure:
py> x=-0.0
py> y=0.0
py> x,y
(-0.0, 0.0)
py> hash(x),hash(y)
(0, 0)
py> x==y
True
py> str(x)==
James Y Knight wrote:
> On Oct 3, 2006, at 8:30 AM, Martin v. Löwis wrote:
>> As Michael Hudson observed, this is difficult to implement, though:
>> You can't distinguish between -0.0 and +0.0 easily, yet you should.
>
> Of course you can. It's absolutely trivial. The only part that's even
> *th
Martin> b) it is likely that this change won't affect a significant
Martin>    number of applications (I'm pretty sure someone will notice,
Martin>    though; someone always notices).
+1 QOTF.
Skip
Martin> However, it is certainly a change to the observable behavior of
Martin> the Python implementation, and no amount of arguing can change
Martin> that.
If C90 doesn't distinguish -0.0 and +0.0, how can Python? Can you give a
simple example where the difference between the two is
Nicko van Someren schrieb:
> It's only a semantic change on platforms that "happen to" use IEEE
> 754 float representations, or some other representation that exposes
> the sign of zero.
Right. Later, you admit that this is the vast majority of modern machines.
> It would be instructive to unders
On Oct 3, 2006, at 8:30 AM, Martin v. Löwis wrote:
> As Michael Hudson observed, this is difficult to implement, though:
> You can't distinguish between -0.0 and +0.0 easily, yet you should.
Of course you can. It's absolutely trivial. The only part that's even
*the least bit* sketchy in this is
On 3 Oct 2006, at 15:10, Martin v. Löwis wrote:
> Nick Maclaren schrieb:
>> That was the point of a previous posting of mine in this thread :-(
>>
>> You shouldn't, despite what IEEE 754 says, at least if you are
>> allowing for either portability or numeric validation.
>>
>> There are a huge numb
Nick Maclaren schrieb:
> So distinguishing -0.0 from 0.0 isn't really in Python's current
> semantics at all. And, for reasons that we could go into, I assert
> that it should not be - which is NOT the same as not supporting
> branch cuts in cmath.
Are you talking about "Python the language speci
"Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>
> Ah, you are proposing a semantic change, then: -0.0 will become
> unrepresentable, right?
Well, it is and it isn't.
Python currently supports only some of IEEE 754, and that is more by
accident than design - because that is
Nick Maclaren schrieb:
> That was the point of a previous posting of mine in this thread :-(
>
> You shouldn't, despite what IEEE 754 says, at least if you are
> allowing for either portability or numeric validation.
>
> There are a huge number of good reasons why IEEE 754 signed zeroes
> fit ext
"Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>
> >> The total count of floating point numbers allocated at this point is
> >> 985794.
> >> Without the reuse, they would be 1317145, so this is a saving of 25%, and
> >> of 5Mb.
> >
> > And, if you optimised just 0.0, you w
Nick Maclaren schrieb:
>> The total count of floating point numbers allocated at this point is 985794.
>> Without the reuse, they would be 1317145, so this is a saving of 25%, and
>> of 5Mb.
>
> And, if you optimised just 0.0, you would get 60% of that saving at
> a small fraction of the cost and
Nick Craig-Wood schrieb:
> Even if 0.0 is allocated and de-allocated 10,000 times in a row, there
> would be no memory savings by caching its value.
>
> However there would be
> a) less allocator overhead - allocating objects is relatively expensive
> b) better caching of the value
> c) less cache
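The allocator-overhead point can be made concrete with a rough timing (illustrative only; absolute numbers depend on the build and machine):

import timeit

# Both statements leave x equal to 1.0, but the first creates a fresh
# float object on every iteration while the second merely rebinds a name.
fresh = timeit.timeit('x = y + 0.0', setup='y = 1.0', number=1000000)
reuse = timeit.timeit('x = y', setup='y = 1.0', number=1000000)
print('fresh: %.3fs  reuse: %.3fs' % (fresh, reuse))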
[EMAIL PROTECTED] wrote:
>
> Doesn't that presume that optimizing just 0.0 could be done easily? Suppose
> 0.0 is generated all over the place in EVE?
Yes, and it isn't, respectively! The changes in floatobject.c would
be trivial (if tedious), and my recollection of my scan is that
floating valu
>> The total count of floating point numbers allocated at this point is
>> 985794. Without the reuse, they would be 1317145, so this is a
>> saving of 25%, and of 5Mb.
Nick> And, if you optimised just 0.0, you would get 60% of that saving
Nick> at a small fraction of the cost
Kristján V. Jónsson <[EMAIL PROTECTED]> wrote:
>
> The total count of floating point numbers allocated at this point is 985794.
> Without the reuse, they would be 1317145, so this is a saving of 25%, and
> of 5Mb.
And, if you optimised just 0.0, you would get 60% of that sav
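As a quick sanity check of the quoted figures (assuming a 16-byte PyFloatObject, typical for a 32-bit build of that era):

saved = 1317145 - 985794        # 331351 float objects not allocated
print(float(saved) / 1317145)   # about 0.25, i.e. the quoted 25%
print(saved * 16)               # about 5.3 million bytes, matching the ~5Mb figure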
[EMAIL PROTECTED]
> On Behalf Of [EMAIL PROTECTED]
> Sent: 3. október 2006 00:54
> To: Terry Reedy
> Cc: python-dev@python.org
> Subject: Re: [Python-Dev] Caching float(0.0)
>
>
> Terry> "Kristján V. Jónsson" <[EMAIL PROTECTED]> wrote:
> >> Anyw
"Terry Reedy" <[EMAIL PROTECTED]> wrote:
>
> For true floating point measurements (of temperature, for instance),
> 'integral' measurements (which are an artifact of the scale used (degrees F
> versus C versus K)) should generally be no more common than other realized
> measurements.
Not quite,
Terry Reedy wrote:
> For true floating point measurements (of temperature, for instance),
> 'integral' measurements (which are an artifact of the scale used (degrees F
> versus C versus K)) should generally be no more common than other realized
> measurements.
a real-life sensor is of course w
On Mon, Oct 02, 2006 at 07:53:34PM -0500, [EMAIL PROTECTED] wrote:
> Terry> "Kristján V. Jónsson" <[EMAIL PROTECTED]> wrote:
> >> Anyway, Skip noted that 50% of all floats are whole numbers between
> >> -10 and 10 inclusive,
>
> Terry> Please, no. He said something like this about
On Tue, Oct 03, 2006 at 09:47:03AM +1000, Delaney, Timothy (Tim) wrote:
> This doesn't actually give us a very useful indication of potential
> memory savings. What I think would be more useful is tracking the
> maximum simultaneous count of each value i.e. what the maximum refcount
> would have be
skip> Most definitely. I just posted what I came up with in about two
skip> minutes. I'll add some code to track the high water mark as well
skip> and report back.
Using the smallest change I could get away with, I came up with these
allocation figures (same as before):
-1.0: 2
Terry> "Kristján V. Jónsson" <[EMAIL PROTECTED]> wrote:
>> Anyway, Skip noted that 50% of all floats are whole numbers between
>> -10 and 10 inclusive,
Terry> Please, no. He said something like this about
Terry> *non-floating-point applications* (evidence unspecified, that I
Tim> This doesn't actually give us a very useful indication of potential
Tim> memory savings. What I think would be more useful is tracking the
Tim> maximum simultaneous count of each value i.e. what the maximum
Tim> refcount would have been if they were shared.
Most definitely.
"Kristján V. Jónsson" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>Anyway, Skip noted that 50% of all floats are whole numbers between -10
>and 10 inclusive,
Please, no. He said something like this about *non-floating-point
applications* (evidence unspecified, that I remember)
[EMAIL PROTECTED] wrote:
> Steve> By these statistics I think the answer to the original
> Steve> question is clearly "no" in the general case.
>
> As someone else (Guido?) pointed out, the literal case isn't all that
> interesting. I modified floatobject.c to track a few interesting
> f
[EMAIL PROTECTED] wrote:
> Steve> By these statistics I think the answer to the original question
> Steve> is clearly "no" in the general case.
>
> As someone else (Guido?) pointed out, the literal case isn't all that
> interesting. I modified floatobject.c to track a few interesting
>
Michael Hudson <[EMAIL PROTECTED]> wrote:
> "Martin v. Löwis" <[EMAIL PROTECTED]> writes:
> > Kristján V. Jónsson schrieb:
> >> I can't see how this situation is any different from the re-use of
> >> low ints. There is no fundamental law that says that ints below 100
> >> are more common than oth
On Mon, Oct 02, 2006, "Martin v. Löwis" wrote:
> Michael Hudson schrieb:
>>
>> I think most of
>> the code posted so far has been constant time, at least in terms of
>> instruction count, though some might indeed be fairly slow on some
>> processors -- conversion from double to integer on the Power
Michael Hudson schrieb:
>> 1. it is possible to determine whether the value is "special" in
>>    constant time, and also fetch the singleton value in constant
>>    time for ints; the same isn't possible for floats.
>
> I don't think you mean "constant time" here do you?
Right; I really wonder
"Martin v. Löwis" <[EMAIL PROTECTED]> writes:
> Kristján V. Jónsson schrieb:
>> I can't see how this situation is any different from the re-use of
>> low ints. There is no fundamental law that says that ints below 100
>> are more common than others, yet experience shows that this is so,
>> and so
v. Löwis" [mailto:[EMAIL PROTECTED]
> Sent: 2. október 2006 14:37
> To: Kristján V. Jónsson
> Cc: Bob Ippolito; python-dev@python.org
> Subject: Re: [Python-Dev] Caching float(0.0)
>
> Kristján V. Jónsson schrieb:
> > I can't see how this situation is any different
Kristján V. Jónsson schrieb:
> I can't see how this situation is any different from the re-use of
> low ints. There is no fundamental law that says that ints below 100
> are more common than others, yet experience shows that this is so,
> and so they are reused.
There are two important difference
f karma lies that way?
Cheers,
Kristján
> -----Original Message-----
> From: "Martin v. Löwis" [mailto:[EMAIL PROTECTED]
> Sent: 2. október 2006 13:50
> To: Kristján V. Jónsson
> Cc: Bob Ippolito; python-dev@python.org
> Subject: Re: [Python-Dev] Caching float(0.0)
>
Kristján V. Jónsson schrieb:
> Well, a lot of extension code, like ours use PyFloat_FromDouble(foo);
> This can be from vectors and stuff.
Hmm. If you get a lot of 0.0 values from vectors and stuff, I would
expect that memory usage is already high.
In any case, a module that creates a lot of copi
Nick Coghlan schrieb:
>> Right. Although I do wonder what kind of software people write to run
>> into this problem. As Guido points out, the numbers must be the result
>> from some computation, or created by an extension module by different
>> means. If people have many *simultaneous* copies of 0.
On Sun, Oct 01, 2006 at 02:01:51PM -0400, Jean-Paul Calderone wrote:
> Each line in an interactive session is compiled separately, like modules
> are compiled separately. With the current implementation, literals in a
> single compilation unit have a chance to be "cached" like this. Literals
> in
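The per-compilation-unit sharing is easy to observe (it is a CPython implementation detail, not a language guarantee):

ns = {}
exec(compile("a = 1.5; b = 1.5", "<unit one>", "exec"), ns)
exec(compile("c = 1.5", "<unit two>", "exec"), ns)
print(ns['a'] is ns['b'])   # typically True: one shared constant in the code object
print(ns['a'] is ns['c'])   # typically False: a separate compilation, a separate object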
On Sun, 1 Oct 2006 13:54:31 -0400, Terry Reedy <[EMAIL PROTECTED]> wrote:
>
>"Nick Craig-Wood" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]
>> On Fri, Sep 29, 2006 at 12:03:03PM -0700, Guido van Rossum wrote:
>>> I see some confusion in this thread.
>>>
>>> If a *LITERAL* 0.0 (or an
"Nick Craig-Wood" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Fri, Sep 29, 2006 at 12:03:03PM -0700, Guido van Rossum wrote:
>> I see some confusion in this thread.
>>
>> If a *LITERAL* 0.0 (or any other float literal) is used, you only get
>> one object, no matter how many t
On Sat, Sep 30, 2006 at 03:21:50PM -0700, Bob Ippolito wrote:
> On 9/30/06, Terry Reedy <[EMAIL PROTECTED]> wrote:
> > "Nick Coghlan" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> > > I suspect the problem would typically stem from floating point
> > > values that are read in from a
On Fri, Sep 29, 2006 at 12:03:03PM -0700, Guido van Rossum wrote:
> I see some confusion in this thread.
>
> If a *LITERAL* 0.0 (or any other float literal) is used, you only get
> one object, no matter how many times it is used.
For some reason that doesn't happen in the interpreter which has be
Steve> By these statistics I think the answer to the original question
Steve> is clearly "no" in the general case.
As someone else (Guido?) pointed out, the literal case isn't all that
interesting. I modified floatobject.c to track a few interesting
floating point values:
static uns
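The instrumentation itself lived in floatobject.c and the C snippet is cut off above; a pure-Python stand-in for the same idea, usable at the application level, might look like this:

from collections import Counter

float_counts = Counter()

def tracked_float(value):
    # Route float creation through this wrapper to tally how often each
    # value is produced -- a crude approximation of the C-level counters.
    f = float(value)
    float_counts[f] += 1
    return f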
On 9/30/06, Terry Reedy <[EMAIL PROTECTED]> wrote:
>
> "Nick Coghlan" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> >I suspect the problem would typically stem from floating point values that
> >are
> >read in from a human-readable file rather than being the result of a
> >'calcul
"Nick Coghlan" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>I suspect the problem would typically stem from floating point values that
>are
>read in from a human-readable file rather than being the result of a
>'calculation' as such:
For such situations, one could create a trans
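The suggestion is cut off above; one plausible shape for it is a small interning table applied while parsing. Note that because -0.0 == 0.0, this naive sketch would silently replace a parsed -0.0 with a previously seen +0.0, which is exactly the subtlety the rest of the thread worries about:

_interned = {}

def intern_float(token):
    # Reuse one float object per distinct value read from the file.
    value = float(token)
    return _interned.setdefault(value, value)

values = [intern_float(tok) for tok in "0.0 1.5 0.0 0.0 1.5".split()]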
he benefit great.
Cheers,
Kristján
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of "Martin v. Löwis"
Sent: 30. september 2006 08:48
To: Bob Ippolito
Cc: python-dev@python.org
Subject: Re: [Python-Dev] Caching float(0.0)
Bob Ippolito schrieb:
>
Martin v. Löwis wrote:
> Bob Ippolito schrieb:
>> My guess is that people do have this problem, they just don't know
>> where that memory has gone. I know I don't count objects unless I have
>> a process that's leaking memory or it grows so big that I notice (by
>> swapping or chance).
>
> Right.
Bob Ippolito schrieb:
> My guess is that people do have this problem, they just don't know
> where that memory has gone. I know I don't count objects unless I have
> a process that's leaking memory or it grows so big that I notice (by
> swapping or chance).
Right. Although I do wonder what kind of
Jason Orendorff wrote:
> On 9/29/06, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
>
>>(I just checked the program I'm working on, and my analysis tells me
>>that the most common floating point value in that program is 121.216,
>>which occurs 32 times. from what I can tell, 0.0 isn't used at all.)
>
On 9/29/06, Greg Ewing <[EMAIL PROTECTED]> wrote:
> Nick Craig-Wood wrote:
>
> > Is there any reason why float() shouldn't cache the value of 0.0 since
> > it is by far and away the most common value?
>
> 1.0 might be another candidate for cacheing.
>
> Although the fact that nobody has complained
Nick Craig-Wood wrote:
> Is there any reason why float() shouldn't cache the value of 0.0 since
> it is by far and away the most common value?
1.0 might be another candidate for cacheing.
Although the fact that nobody has complained about this
before suggests that it might not be a frequent enou
I see some confusion in this thread.
If a *LITERAL* 0.0 (or any other float literal) is used, you only get
one object, no matter how many times it is used.
But if the result of a *COMPUTATION* returns 0.0, you get a new object
for each such result. If you have 70 MB worth of zeros, that's clearly
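The distinction is easy to demonstrate (object identity here is a CPython implementation detail, shown only to illustrate the point):

def computed_zero(x):
    return x - x                  # a brand new float object on every call

zeros = [computed_zero(1.0) for _ in range(3)]
print(len(set(map(id, zeros))))   # 3: distinct objects, all equal to 0.0

a = 0.0
b = 0.0
print(a is b)                     # True: the literal is one shared constant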
"Jason Orendorff" <[EMAIL PROTECTED]> wrote:
>
> Anyway, this kind of static analysis is probably more entertaining
> than relevant. ...
Well, yes. One can tell that by the piffling little counts being
bandied about! More seriously, yes, it is Well Known that 0.0 is
the Most Common Floating-Poi
On 9/29/06, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> (I just checked the program I'm working on, and my analysis tells me
> that the most common floating point value in that program is 121.216,
> which occurs 32 times. from what I can tell, 0.0 isn't used at all.)
*bemused look* Fredrik, can y
september 2006 15:18
> To: Fredrik Lundh; python-dev@python.org
> Subject: Re: [Python-Dev] Caching float(0.0)
>
> Acting on this excellent advice, I have patched in a reuse
> for -1.0, 0.0 and 1.0 for EVE Online. We use vectors and
> stuff a lot, and 0.0 is very, very common. I
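A Python-level sketch of the kind of reuse described here (the actual change was in the C code that creates floats; the -0.0 check is the detail the rest of the thread argues about, and math.copysign is just one portable way to spell it):

import math

_ZERO, _ONE, _MINUS_ONE = 0.0, 1.0, -1.0   # shared singletons

def make_float(value):
    if value == 0.0:
        # -0.0 compares equal to 0.0, so inspect the sign before handing
        # back the shared +0.0 object.
        return _ZERO if math.copysign(1.0, value) > 0.0 else value
    if value == 1.0:
        return _ONE
    if value == -1.0:
        return _MINUS_ONE
    return value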
[mailto:[EMAIL PROTECTED]
> On Behalf Of Fredrik Lundh
> Sent: 29. september 2006 15:11
> To: python-dev@python.org
> Subject: Re: [Python-Dev] Caching float(0.0)
>
> Nick Craig-Wood wrote:
>
> > Is there any reason why float() shouldn't cache the value
> of 0.0
Nick Craig-Wood wrote:
> Is there any reason why float() shouldn't cache the value of 0.0 since
> it is by far and away the most common value?
says who ?
(I just checked the program I'm working on, and my analysis tells me
that the most common floating point value in that program is 121.216,
w
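How that figure was obtained isn't stated; one way to run the same kind of survey on a recent Python is to walk the AST and tally the float literals:

import ast
from collections import Counter

def float_literal_counts(source):
    counts = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, float):
            counts[node.value] += 1
    return counts

# e.g. float_literal_counts(open("program.py").read()).most_common(5)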
I just discovered that in a program of mine it was wasting 7MB out of
200MB by storing multiple copies of 0.0. I found this a bit surprising
since I'm used to small ints and strings being cached.
I added the apparently nonsensical lines
+if age == 0.0:
+    age = 0.0
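Those two apparently redundant lines work because the 0.0 on the right-hand side is the single float object stored in the function's code object, so rebinding the name to it lets the freshly computed zero be freed. A self-contained version of the same trick (note that it would also quietly turn a computed -0.0 into +0.0):

def canonicalise(age):
    if age == 0.0:
        age = 0.0       # rebind to the shared literal from the code object's constants
    return age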