[ python-Bugs-1765375 ] setup.py trashes LDFLAGS

2007-08-01 Thread SourceForge.net
Bugs item #1765375, was opened at 2007-08-01 15:56
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1765375&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Harald Koenig (h_koenig)
Assigned to: Nobody/Anonymous (nobody)
Summary: setup.py trashes LDFLAGS

Initial Comment:
the regexp patched below will trash the library paths in this line of the Makefile

LDFLAGS = -L/foo/lib -Wl,-rpath,/foo/lib -L/bar/lib -Wl,-rpath,/bar/lib

to 
-L/foo/libWl,-rpath,/foo/lib -L/bar/libWl,-rpath,/bar/lib

which renders these library paths broken and useless for building Python 
modules.


the following patch seems to work fine for my setup on various platforms:

--- 8< -- 8< -- 8< -- 8< -- 8< -- 8< -- 8< ---
--- Python-2.5.1/setup.py~  2007-08-01 15:19:27.0 +0200
+++ Python-2.5.1/setup.py   2007-08-01 15:19:48.0 +0200
@@ -267,7 +267,7 @@
 # strip out double-dashes first so that we don't end up with
 # substituting "--Long" to "-Long" and thus lead to "ong" being
 # used for a library directory.
-env_val = re.sub(r'(^|\s+)-(-|(?!%s))' % arg_name[1], '', env_val)
+env_val = re.sub(r'(^|\s+)-(-|(?!%s))' % arg_name[1], ' ', env_val)
 parser = optparse.OptionParser()
 # Make sure that allowing args interspersed with options is
 # allowed
--- 8< -- 8< -- 8< -- 8< -- 8< -- 8< -- 8< ---
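The effect is easy to reproduce in isolation. Below is a minimal sketch mirroring 
the setup.py substitution (the ldflags string is just the example from this 
report): replacing the match with an empty string also swallows the whitespace 
that separated the options, while replacing with a single space keeps the tokens 
apart.

```python
import re

ldflags = "-L/foo/lib -Wl,-rpath,/foo/lib -L/bar/lib -Wl,-rpath,/bar/lib"
arg_name = "-L"  # setup.py strips leading dashes from every option except this one

pattern = r'(^|\s+)-(-|(?!%s))' % arg_name[1]

# The stripped dash is intentional (the result is fed to optparse);
# the bug is that the separating space is consumed along with it.
broken = re.sub(pattern, '', ldflags)
fixed = re.sub(pattern, ' ', ldflags)

print(broken)  # -L/foo/libWl,-rpath,/foo/lib -L/bar/libWl,-rpath,/bar/lib
print(fixed)   # -L/foo/lib Wl,-rpath,/foo/lib -L/bar/lib Wl,-rpath,/bar/lib
```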

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1765375&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1731717 ] race condition in subprocess module

2007-08-01 Thread SourceForge.net
Bugs item #1731717, was opened at 2007-06-06 08:19
Message generated for change (Comment added) made by abo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1731717&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: dsagal (dsagal)
Assigned to: Peter Åstrand (astrand)
Summary: race condition in subprocess module

Initial Comment:
Python's subprocess module has a race condition: the Popen() constructor calls the 
global "_cleanup()" function whenever a Popen object gets created, and that call 
checks every pending Popen object to see whether its subprocess has exited - i.e. 
the poll() method is called for every active Popen object.

Now, if I create Popen object "foo" in one thread and then call foo.wait(), and 
meanwhile I create another Popen object "bar" in another thread, then foo.poll() 
might get called by _cleanup() right at the time when my first thread is running 
foo.wait(). But those are not synchronized - each calls os.waitpid() if 
returncode is None, but the section which checks returncode and calls 
os.waitpid() is not protected as a critical section should be.

The following code illustrates the problem, and is known to break reasonably 
consistently with Python 2.4. Changes to subprocess in Python 2.5 seem to address 
a somewhat related problem, but not this one.

import sys, os, threading, subprocess, time

class X(threading.Thread):
    def __init__(self, *args, **kwargs):
        super(X, self).__init__(*args, **kwargs)
        self.start()

def tt():
    s = subprocess.Popen(("/bin/ls", "-a", "/tmp"), stdout=subprocess.PIPE,
                         universal_newlines=True)
    # time.sleep(1)
    s.communicate()[0]

for i in xrange(1000):
    X(target=tt)

This typically gives a few dozen errors like these:
Exception in thread Thread-795:
Traceback (most recent call last):
  File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
    self.run()
  File "/usr/lib/python2.4/threading.py", line 422, in run
    self.__target(*self.__args, **self.__kwargs)
  File "z.py", line 21, in tt
    s.communicate()[0]
  File "/usr/lib/python2.4/subprocess.py", line 1083, in communicate
    self.wait()
  File "/usr/lib/python2.4/subprocess.py", line 1007, in wait
    pid, sts = os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes

Note that uncommenting time.sleep(1) fixes the problem. So does wrapping 
subprocess.poll() and wait() with a lock. So does removing the call to global 
_cleanup() in Popen constructor.
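The lock-based workaround mentioned above can be sketched as follows. This is an 
illustration of the idea rather than a proposed patch, and ThreadSafePopen is a 
made-up name, not part of the subprocess API: one recursive lock serializes the 
returncode check and the os.waitpid() call across all instances.

```python
import subprocess
import sys
import threading

# One recursive lock shared by all instances, so the returncode check
# and the os.waitpid() call can no longer interleave between threads.
_reap_lock = threading.RLock()

class ThreadSafePopen(subprocess.Popen):
    """Illustrative wrapper: serialize the non-atomic reaping section."""

    def poll(self):
        with _reap_lock:
            return super(ThreadSafePopen, self).poll()

    def wait(self, *args, **kwargs):
        with _reap_lock:
            return super(ThreadSafePopen, self).wait(*args, **kwargs)

p = ThreadSafePopen([sys.executable, "-c", "print('ok')"],
                    stdout=subprocess.PIPE)
out, _ = p.communicate()
```

The cost is that all reaping is serialized, but the check-then-waitpid window 
described above is closed for wrapped instances.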

--

Comment By: Donovan Baarda (abo)
Date: 2007-08-02 03:05

Message:
Logged In: YES 
user_id=10273
Originator: NO

It appears that subprocess calls a module-global "_cleanup()" whenever
opening a new subprocess. This method is meant to reap any child processes
that have terminated and have not been explicitly cleaned up. These are
processes you would expect to be cleaned up by GC; however, subprocess
keeps a list of all spawned subprocesses in _active until they are
reaped explicitly, so that it can clean up any that are no longer
referenced anywhere else.

The problem is that lots of methods, including poll() and wait(), check
self.returncode and then modify it. Any non-atomic read/modify action is
inherently non-threadsafe. And _cleanup() calls poll() on all un-reaped
child processes. If two threads happen to try to spawn subprocesses at
once, these _cleanup() calls collide.

The way to fix this depends on how thread-safe you want to make it. If you
want to share popen objects between threads to wait()/poll() with impunity
from any thread, you should add a recursive lock attribute to the Popen
instance and have it lock/release it at the start/end of every method
call.

If you only care about using popen objects in one thread at a time, then
all you need to fix is the nasty "every popen created calls poll() on every
other living popen object regardless of what thread started them,
and poll() is not threadsafe" behaviour.

Removing _cleanup() is one way, but it will then not reap child processes
that you del'ed all references to (except the one in subprocess._active)
before you checked they were done.

Probably another good idea is to not append and remove popen objects to
_active directly, and instead add a popen.__del__() method that defers
GC'ing of non-finished popen objects by adding them to _active. This
way, _active only contains un-reaped child processes that were due to be
GC'ed. _cleanup() will then be responsible for polling and removing these
popen objects from _active when they are done.

However, this alone will not fix things, because you are still calling
_cleanup() from different threads, it is still calling poll() on all these
un-reaped processes, and poll() is not threadsafe.

[ python-Bugs-1731717 ] race condition in subprocess module

2007-08-01 Thread SourceForge.net
Bugs item #1731717, was opened at 2007-06-06 08:19
Message generated for change (Comment added) made by abo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1731717&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: dsagal (dsagal)
Assigned to: Peter Åstrand (astrand)
Summary: race condition in subprocess module


Comment By: Donovan Baarda (abo)
Date: 2007-08-02 03:37

Message:
Logged In: YES 
user_id=10273
Originator: NO

Having just gone through that waffly description of the problems and
various levels of fix, I think there are really only two fixes worth
considering:

1) Make Popen instances fully threadsafe. Give them a recursive lock
attribute and have every method acquire the lock at the start, and release
it at the end.

2) Decide the "try to reap abandoned children at each Popen" idea was an
ugly hack and abandon it. Remove _active and _cleanup(), and document that
any child process not explicitly handled to completion will result in
zombie child processes.

--


[ python-Feature Requests-1764638 ] add new bytecodes: JUMP_IF_{FALSE|TRUE}_AND_POP

2007-08-01 Thread SourceForge.net
Feature Requests item #1764638, was opened at 2007-07-31 17:12
Message generated for change (Comment added) made by doublep
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1764638&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Parser/Compiler
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Paul Pogonyshev (doublep)
Assigned to: Nobody/Anonymous (nobody)
Summary: add new bytecodes: JUMP_IF_{FALSE|TRUE}_AND_POP

Initial Comment:
In disassembled code of Python functions I often see stuff like this:

421 JUMP_IF_FALSE   14 (to 438)
424 POP_TOP

1178...
435 JUMP_FORWARD 1 (to 439)
>>  438 POP_TOP
>>  439 END_FINALLY

Note how both branches of execution after JUMP_IF_FALSE do POP_TOP.  This 
forces the true branch to add a JUMP_FORWARD whose only purpose is to bypass 
the POP_TOP command.

I propose adding two new bytecodes, JUMP_IF_FALSE_AND_POP and 
JUMP_IF_TRUE_AND_POP.  Their semantics would be the same as that of existing 
JUMP_IF_FALSE/JUMP_IF_TRUE except the commands would also pop the stack once, 
after checking whether to jump.  This would simplify the above code to just

421 JUMP_IF_FALSE_AND_POP   14 (to 438)

1178...
>>  438 END_FINALLY

This shortens the bytecode by 5 bytes and shortens the two execution branches 
by 1 and 2 instructions respectively.

I'm willing to create a patch, if this sounds like a worthwhile improvement.  
Maybe it is better to skip 2.6 and target it for Python 3000 instead.
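For context, the jump-plus-pop pattern can be inspected with the dis module. 
(Later CPython releases did eventually adopt fused opcodes such as 
POP_JUMP_IF_FALSE and JUMP_IF_FALSE_OR_POP, so the exact opcode names below 
vary by interpreter version.)

```python
import dis

def f(a, b):
    # 'and' compiles to a conditional jump that must also manage the
    # value left on top of the stack - the situation described above.
    return a and b

dis.dis(f)

# Whatever the version, some conditional-jump opcode is emitted:
jumps = [i.opname for i in dis.get_instructions(f) if "JUMP" in i.opname]
print(jumps)
```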


--

>Comment By: Paul Pogonyshev (doublep)
Date: 2007-08-02 00:52

Message:
Logged In: YES 
user_id=1203127
Originator: YES

I have made a first stab at this.  It shows about a 2% speedup on pystone,
even though the peephole optimizer does no related optimizations for the new
bytecodes yet.

Note that having separate entries in ceval.c's switch() for the new
bytecodes is essential.  Perhaps because they are also used for prediction;
I'm not sure.

File Added: new-bytecodes.diff

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1764638&group_id=5470



[ python-Bugs-1759997 ] poll() on cygwin sometimes fails [PATCH]

2007-08-01 Thread SourceForge.net
Bugs item #1759997, was opened at 2007-07-25 00:58
Message generated for change (Comment added) made by zooko
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1759997&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Brian Warner (warner)
Assigned to: Nobody/Anonymous (nobody)
Summary: poll() on cygwin sometimes fails [PATCH]

Initial Comment:

While trying to track down a problem with our application 
(http://allmydata.org) running under cygwin, I discovered that the 
select.poll() object sometimes returns completely bogus data. poll() returns a 
list of tuples of (fd, revents), but fds are supposed to be small integers, and 
the revents values are bitmasks of POLLIN/POLLOUT flags. In my tests, I saw 
poll() return a list that started out looking normal, but the last half of the 
list contained fd and revents values like fd=0x7672a646, revents=0xd819. 

It turns out that under cygwin-1.5.24 (which I believe is a pretty recent 
version), the poll() call sometimes violates the POSIX specification, and 
provides a return value which is different than the number of pollfd structures 
that have non-zero .revents fields (generally larger). This causes the 
implementation of poll_poll() (in Modules/selectmodule.c) to read beyond the 
end of the pollfd array, copying random memory into the python list it is 
building, causing the bogus values I observed during my tests.  

These bogus values were mostly ignored, because the Twisted pollreactor that I 
was using noticed that the fd didn't correspond to any previously-registered 
file descriptor. It was only when the bogus fd happened to coincide with a real 
one (and when that indicated that a TCP listening socket became writable, which 
should never happen) that an exception was raised.

The attached patch (against 2.5.1) works around the problem by manually 
counting the number of non-zero .revents fields, rather than relying upon the 
return value from poll(). This version passes test_poll on both linux and cygwin.
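The idea of the workaround, reduced to a pure-Python sketch (the real fix is in 
C in Modules/selectmodule.c; count_ready is a hypothetical helper, not the 
patch itself): trust the revents fields rather than poll()'s return value when 
deciding how many entries to report.

```python
def count_ready(pollfds):
    """Count entries whose revents field is non-zero.

    pollfds: list of (fd, revents) pairs, mimicking the C pollfd array.
    """
    return sum(1 for _fd, revents in pollfds if revents)

# Simulated buggy cygwin poll(): the kernel claims three descriptors
# are ready, but only two entries actually have revents bits set.
claimed_ready = 3
entries = [(4, 0x1), (5, 0x0), (7, 0x4)]
print(count_ready(entries))
```

Walking the array and counting avoids ever reading past the entries that were 
actually filled in.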

cheers,
 -Brian Warner




--

Comment By: Zooko O'Whielacronx (zooko)
Date: 2007-08-01 23:08

Message:
Logged In: YES 
user_id=52562
Originator: NO

FYI, this is the issue ticket on the allmydata.org Tahoe project:

http://allmydata.org/trac/tahoe/ticket/31

I've written a patch for cygwin poll and am now testing it before
submitting it to the cygwin developers.

--

Comment By: Brian Warner (warner)
Date: 2007-07-27 23:25

Message:
Logged In: YES 
user_id=24186
Originator: YES

We've begun the process: zooko is working on a patch for cygwin and is
working with them to figure out how to compile the thing. We've not yet
explained the poll() bug to them in detail (wanting to have a patch in hand
first).

I'll report back once we get some word from them about how likely it is
this problem will be fixed on the cygwin side.


--

Comment By: Neal Norwitz (nnorwitz)
Date: 2007-07-25 05:53

Message:
Logged In: YES 
user_id=33168
Originator: NO

Has this problem been reported to cygwin?  Have they fixed the problem?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1759997&group_id=5470



[ python-Bugs-1731717 ] race condition in subprocess module

2007-08-01 Thread SourceForge.net
Bugs item #1731717, was opened at 2007-06-05 18:19
Message generated for change (Comment added) made by gvanrossum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1731717&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: dsagal (dsagal)
Assigned to: Peter Åstrand (astrand)
Summary: race condition in subprocess module


>Comment By: Guido van Rossum (gvanrossum)
Date: 2007-08-01 20:45

Message:
Logged In: YES 
user_id=6380
Originator: NO

I like #2.  I don't see any use for threadsafe Popen instances, and I
think that any self-respecting long-running server using Popen should
properly manage its subprocesses anyway.  (And for short-running processes
it doesn't really matter if you have a few zombies.)

One could add a __del__ method that registers zombies to be reaped later,
but I don't think it's worth it, and __del__ has some serious issues of its
own.  (If you really want to do this, use a weak reference callback instead
of __del__ to do the zombie registration.)

--


[ python-Bugs-1725899 ] decimal sqrt method doesn't use round-half-even

2007-08-01 Thread SourceForge.net
Bugs item #1725899, was opened at 2007-05-25 19:52
Message generated for change (Comment added) made by facundobatista
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1725899&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Mark Dickinson (marketdickinson)
Assigned to: Facundo Batista (facundobatista)
Summary: decimal sqrt method doesn't use round-half-even

Initial Comment:
According to version 1.66 of Cowlishaw's `General Decimal Arithmetic
Specification' the square-root operation in the decimal module should
round using the round-half-even algorithm (regardless of the rounding
setting in the current context).  It doesn't appear to do so:

>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(9123455**2).sqrt()
Decimal("9.12345E+6")

The exact value of this square root is exactly halfway between two
representable Decimals, so using round-half-even with 6 digits I'd
expect the answer to be rounded to the neighboring representable
Decimal with *even* last digit---that is,

Decimal("9.12346E+6")

This bug only seems to occur when the number of significant digits in
the argument exceeds the current precision (indeed, if the number of
sig. digits in the argument is less than or equal to the current
precision then it's impossible for the square root to be halfway
between two representable floats).

It seems to me that this is a minor bug that will occur rarely and is
unlikely to have any serious effect even when it does occur; however,
it does seem to be a deviation from the specification.
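For reference, the report is easy to replay; on an interpreter whose decimal 
module includes the eventual fix, the half-way case rounds to the even 
neighbour as the spec requires:

```python
from decimal import Decimal, getcontext

getcontext().prec = 6
# 9123455 ** 2 has an exact square root (9123455) lying exactly half-way
# between two 6-digit Decimals; round-half-even must pick the even last digit.
root = Decimal(9123455 ** 2).sqrt()
print(root)   # a fixed decimal module gives 9.12346E+6
```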


--

>Comment By: Facundo Batista (facundobatista)
Date: 2007-08-02 00:15

Message:
Logged In: YES 
user_id=752496
Originator: NO

Fixed in the revisions 56654 to 56656, in the decimal-branch, using
patches generated by Mark, thanks!

--

Comment By: Mark Dickinson (marketdickinson)
Date: 2007-06-22 15:57

Message:
Logged In: YES 
user_id=703403
Originator: YES

See patch 1741308.  I'll contact Mike Cowlishaw to verify that the
reference implementation really is buggy.


--

Comment By: Tim Peters (tim_one)
Date: 2007-06-22 13:06

Message:
Logged In: YES 
user_id=31435
Originator: NO

Of course you're right, the spec does say inputs shouldn't be rounded. 
And that last example is horrendous:  sqrt should definitely be monotonic
(a floating function f is "monotonic" if it guarantees f(x) >= f(y)
whenever x >= y; you found x and y such that x > y but sqrt(x) < sqrt(y) --
ouch!).

--

Comment By: Mark Dickinson (marketdickinson)
Date: 2007-06-22 03:35

Message:
Logged In: YES 
user_id=703403
Originator: YES

One more result. This is definitely getting suspicious now; I'll submit my
patch.

>>> from decimal import *
>>> getcontext().prec = 3
>>> Decimal(11772).sqrt()
Decimal("109")
>>> Decimal(11774).sqrt()
Decimal("108")



--

Comment By: Mark Dickinson (marketdickinson)
Date: 2007-06-22 03:16

Message:
Logged In: YES 
user_id=703403
Originator: YES

Some more strange results for sqrt():  with Emax = 9, Emin = -9 and prec =
3:

1. In the following, should the Subnormal and Underflow flags be set?  The
result before rounding is subnormal, even though the post-rounding result
is not, and the spec seems to say that those flags should be set in this
situation.  But Cowlishaw's reference implementation (version 3.50) doesn't
set these flags. (If 9.998 is replaced by 9.99 then the flags *are* set,
which seems inconsistent.)

>>> Decimal("9.998E-19").sqrt()
Decimal("1.00E-9")
>>> getcontext()
Context(prec=3, rounding=ROUND_HALF_EVEN, Emin=-9, Emax=9, capitals=1,
flags=[Rounded, Inexact], traps=[DivisionByZero, Overflow,
InvalidOperation])

2. As I understand the spec, the following result is incorrect:

>>> Decimal("1.12E-19").sqrt()
Decimal("3.4E-10")

(The true value of the square root is 3.34664...e-10, which should surely
be rounded to 3.3E-10, not 3.4E-10?).
But again, Cowlishaw's reference implementation also gives 3.4E-10.

3. Similarly for the following

>>> Decimal("4.21E-20").sqrt()
Decimal("2.0E-10")

The answer should, I think, be 2.1E-10;  here the reference implementation
also gives 2.1E-10.

4. And I can't justify this one at all...

>>> Decimal("1.01001").sqrt()
Decimal("1.01")

Shouldn't this be 1.00?  But again Python agrees with the reference
implementation.

Either all this is pretty mixed up, or I'm fundamentally misunderstanding
things.  I have a patch that I think fixes all these bugs.

[ python-Feature Requests-1726697 ] add operator.fst and snd functions

2007-08-01 Thread SourceForge.net
Feature Requests item #1726697, was opened at 2007-05-27 22:49
Message generated for change (Settings changed) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1726697&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
>Status: Closed
>Resolution: Rejected
Priority: 5
Private: No
Submitted By: paul rubin (phr)
Assigned to: Nobody/Anonymous (nobody)
Summary: add operator.fst and snd functions

Initial Comment:
operator.itemgetter is a general but clumsy abstraction.  Almost all the time 
when I use it, it's either to get the first or the second item of a tuple.  I 
think that use case is common enough that it's worth including them in the 
operator module:

   fst = itemgetter(0)
   snd = itemgetter(1)

I end up putting the above definitions in my programs very frequently, and it 
would be nice if I could stop ;)

fst and snd are mathematical names abbreviating "first" and "second" in some 
areas of math, like sin and cos abbreviate "sine" and "cosine".  The names fst 
and snd are also used in Haskell and *ML.  However, calling them "first" and 
"second" might be stylistically preferable to some Python users and would also 
be ok.
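For readers unfamiliar with operator.itemgetter, the two proposed helpers are 
exactly the submitter's one-liners, shown here with a typical use:

```python
from operator import itemgetter

fst = itemgetter(0)   # first element of a pair
snd = itemgetter(1)   # second element of a pair

pairs = [(2, "b"), (1, "a")]
print(fst(pairs[0]))            # first element of the first pair
print(sorted(pairs, key=snd))   # sort the pairs by their second element
```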


--

Comment By: Raymond Hettinger (rhettinger)
Date: 2007-05-27 23:49

Message:
Logged In: YES 
user_id=80475
Originator: NO

-1

That would provide too many ways to do it.  
It is more important to learn how to use one way correctly.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1726697&group_id=5470



[ python-Feature Requests-1728488 ] -q (quiet) option for python interpreter

2007-08-01 Thread SourceForge.net
Feature Requests item #1728488, was opened at 2007-05-30 13:44
Message generated for change (Comment added) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1728488&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Marcin Wojdyr (wojdyr)
Assigned to: Nobody/Anonymous (nobody)
Summary: -q (quiet) option for python interpreter

Initial Comment:

I'd like to suggest the new option for python:

 -q Do not print the version and copyright messages.  These messages are 
also suppressed in non-interactive mode.

Why:
I often use python as a calculator for quick, couple-line calculations, and 
would prefer to avoid having those three lines printed.
There is a similar option in e.g. gdb.


AFAICS the implementation would require small changes in Modules/main.c, 
Misc/python.man and probably in other docs. If it is accepted, I can do 
it.

Marcin

--

>Comment By: Raymond Hettinger (rhettinger)
Date: 2007-08-01 23:26

Message:
Logged In: YES 
user_id=80475
Originator: NO

+1 I think this would be nice.  

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1728488&group_id=5470



[ python-Feature Requests-1764638 ] add new bytecodes: JUMP_IF_{FALSE|TRUE}_AND_POP

2007-08-01 Thread SourceForge.net
Feature Requests item #1764638, was opened at 2007-07-31 09:12
Message generated for change (Comment added) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1764638&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Parser/Compiler
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Paul Pogonyshev (doublep)
>Assigned to: Raymond Hettinger (rhettinger)
Summary: add new bytecodes: JUMP_IF_{FALSE|TRUE}_AND_POP


>Comment By: Raymond Hettinger (rhettinger)
Date: 2007-08-01 23:24

Message:
Logged In: YES 
user_id=80475
Originator: NO

This was looked at and rejected long ago.
The main reasons were that the new code would interfere with and
complicate other byte code optimizations.  Also, the need was mitigated
almost entirely by the predict macros.


--


You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1764638&group_id=5470