Re: [Python-Dev] PEP 450 adding statistics module

2013-09-09 Thread Paul Colomiets
Hi Guido,

On Sun, Sep 8, 2013 at 8:32 PM, Guido van Rossum  wrote:
> Going over the open issues:
>
> - Parallel arrays or arrays of tuples? I think the API should require
> an array of tuples. It is trivial to zip up parallel arrays to the
> required format, while if you have an array of tuples, extracting the
> parallel arrays is slightly more cumbersome. Also for manipulating of
> the raw data, an array of tuples makes it easier to do insertions or
> removals without worrying about losing the correspondence between the
> arrays.

I think there is a big reason to use parallel arrays that might be
overlooked: you can feed an `array.array('f')` to the function, which
may save a lot of memory.
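A rough sketch of the memory argument (the sizes are CPython-specific, and the names `n`, `xs`, `ys`, `pairs` are mine, purely for illustration):

```python
import array
import sys

n = 100_000
xs = array.array('f', (float(i) for i in range(n)))  # 4 bytes per value
ys = array.array('f', (float(i) for i in range(n)))

# The array-of-tuples layout boxes every value as a Python float object
# and wraps every pair in a tuple object.
pairs = list(zip(xs, ys))

packed = sys.getsizeof(xs) + sys.getsizeof(ys)
boxed = (sys.getsizeof(pairs)
         + sum(sys.getsizeof(t) for t in pairs)
         + sum(sys.getsizeof(v) for t in pairs for v in t))
print(packed < boxed)  # True: the parallel arrays are several times smaller
```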

--
Paul
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dicts are broken Was: unicode hell/mixing str and unicode asdictionarykeys

2006-08-04 Thread Paul Colomiets
Hi!

Terry Reedy wrote:
> The fundamental axiom of sets and hence of dict keys is that any 
> object/value either is or is not a member (at any given time for 'mutable' 
> set collections).  This requires that testing an object for possible 
> membership by equality give a clean True or False answer.
>   
Yes, this makes sense.  But returning to dictionaries: for Python
newbies it will be strange that this
 >>> d = { u'abc': 1, u'ab\xe8': 2}
 >>> d['abc']
 1
works as expected, but this
 >>> d['ab\xe8']
raises an exception.
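For what it's worth, Python 3 later resolved this particular trap at the language level: `bytes` and `str` never compare equal, so mixed keys simply coexist instead of raising.  A quick sketch:

```python
# Python 3: bytes and str never compare equal, so there is no implicit
# decoding during dict lookup and no surprise exception.
d = {'ab\xe8': 1, b'ab\xe8': 2}
print(len(d))        # 2 -- the two keys are simply distinct
print(d['ab\xe8'])   # 1
print(d[b'ab\xe8'])  # 2
```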

Another good argument, made by M.-A. Lemburg:
> What's making this particular case interesting is that
> the comparison is hidden in the dictionary implementation
> and only triggers if you get a hash collision, which makes
> the whole issue appear to be happening randomly.
>
> This whole thread aside: it's never recommended to mix strings
> and Unicode, unless you really have to.
...
 >How about generating a warning instead and then go for the exception
 >in 2.6 ?

Well, it's not recommended to mix strings and unicode in dictionaries,
but if we mix, for example, integers and floats, we have the same
thing.  It doesn't raise an exception, but it is still unexpected
behavior for me:
 >>> d = { 1.0: 10, 2.0: 20 }
then if I somewhere later do:
 >>> d[1] = 100
 >>> d[2] = 200
all the keys in d.keys() are still floats.  Maybe this is not the best
example.  So if you generate a warning, it should be generated every
time keys of different types are inserted into a dict.  Maybe Python
should check the type of the key after a collision and before testing
for equality, so that 1 and 1.0 are as different as u'a' and 'a'.
That might even add some performance overhead, I think.
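The int/float case he describes is easy to reproduce (and modern Python still behaves this way): equal numbers hash equally, so the later assignment finds the existing slot and only replaces the value, keeping the original key object:

```python
# 1 == 1.0 and hash(1) == hash(1.0), so d[1] hits the slot that d[1.0]
# created: the float key object is kept, only the value is replaced.
d = {1.0: 10, 2.0: 20}
d[1] = 100
d[2] = 200
print(d)                                 # {1.0: 100, 2.0: 200}
print(all(type(k) is float for k in d))  # True
```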

--
Regards,
  Paul.


Re: [Python-Dev] Dicts are broken Was: unicode hell/mixing str and unicode asdictionarykeys

2006-08-04 Thread Paul Colomiets
Giovanni Bajo wrote:
> Paul Colomiets <[EMAIL PROTECTED]> wrote:
>
>   
>> Well it's not recomended to mix strings and unicode in the
>> dictionaries
>> but if we mix for example integer and float we have the same thing. It
>> doesn't raise exception but still it is not expected behavior for me:
>>  >>> d = { 1.0: 10, 2.0: 20 }
>> then if i somewhere later do:
>>  >>> d[1] = 100
>>  >>> d[2] = 200
>> to have here all floats in d.keys(). May be this is not a best
>> example.
>> 
>
> There is a strong difference. Python is moving towards unifying number types 
> in
> a way (see the true division issue): the idea is that, all in all, user
> shouldn't really care what type a number is, as long as he knows it's a 
> number.
> On the other hand, unicode and str are going to diverge more and more.
>
> Giovanni Bajo
>
>   
It makes sense, but consider this example:

 >>> from decimal import Decimal
 >>> d = {}
 >>> d[Decimal(0)] = 1
 >>> d[0] = 2
 >>> d[Decimal("0.5")] = 3
 >>> d[0.5]  = 4
 >>> d.keys()
[Decimal("0"), 0.5, Decimal("0.5")]

I expect d.keys() to have 2 or 4 keys, but not 3; it's confusing.
Don't you think that someday the line "d[0.5] = 4" will raise an
exception?  Or at least that it should raise if mixing str and unicode
raises?
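Reading this today: the hash invariant was later unified across numeric types (equal numbers hash equal for int, float, and Decimal), so the same snippet now leaves two keys rather than three:

```python
from decimal import Decimal

d = {}
d[Decimal(0)] = 1
d[0] = 2               # 0 == Decimal(0) and the hashes match: same slot
d[Decimal("0.5")] = 3
d[0.5] = 4             # likewise collapses with Decimal("0.5")
print(len(d))              # 2
print(sorted(d.values()))  # [2, 4]
```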

--
Regards,
  Paul.


[Python-Dev] Bus error in transformer.py

2007-07-28 Thread Paul Colomiets

Hi!

I'm still working on bug:
http://python.org/sf/1720241

The first thing I've found is that `compile` works OK, but
`compiler.parse` does not.  And I feel it's a bug in Python, or in the
Python port, because I'm getting a Bus error at some stage when I'm
tracing execution and trying to backtrace.  Also, `parser.expr` passes
OK, and the error is raised in the Transformer class.

I've attached part of the debugger session, and script I use.

Any hints how to debug it further?

--
Paul.
(Pdb) bt
  /usr/local/lib/python2.5/threading.py(460)__bootstrap()
-> self.run()
  /usr/local/lib/python2.5/threading.py(440)run()
-> self.__target(*self.__args, **self.__kwargs)
  /tmp/test1.py(9)test()
-> print Transformer().compile_node(b)
  /usr/local/lib/python2.5/compiler/transformer.py(160)compile_node()
-> return self.eval_input(node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(195)eval_input()
-> return Expression(self.com_node(nodelist[0]))
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(567)testlist()
-> return self.com_binary(Tuple, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(583)test()
-> then = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(597)or_test()
-> return self.com_binary(Or, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(602)and_test()
-> return self.com_binary(And, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(606)not_test()
-> result = self.com_node(nodelist[-1])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(613)comparison()
-> node = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(646)expr()
-> return self.com_binary(Bitor, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(650)xor_expr()
-> return self.com_binary(Bitxor, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(654)and_expr()
-> return self.com_binary(Bitand, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(658)shift_expr()
-> node = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(670)arith_expr()
-> node = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(682)term()
-> node = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(702)factor()
-> node = self.lookup_node(nodelist[-1])(nodelist[-1][1:])
  /usr/local/lib/python2.5/compiler/transformer.py(714)power()
-> node = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(726)atom()
-> return self._atom_dispatch[nodelist[0][0]](nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(731)atom_lpar()
-> return self.com_node(nodelist[1])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(577)testlist_gexp()
-> return self.testlist(nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(567)testlist()
-> return self.com_binary(Tuple, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node(n)(n[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(583)test()
-> then = self.com_node(nodelist[0])
  /usr/local/lib/python2.5/compiler/transformer.py(792)com_node()
-> return self._dispatch[node[0]](node[1:])
  /usr/local/lib/python2.5/compiler/transformer.py(597)or_test()
-> return self.com_binary(Or, nodelist)
  /usr/local/lib/python2.5/compiler/transformer.py(1065)com_binary()
-> return self.lookup_node

Re: [Python-Dev] Bus error in transformer.py

2007-07-28 Thread Paul Colomiets
Martin v. Löwis wrote:
> You should run it under gdb, or attach to the interpreter
> from gdb.
>   
I've run it under gdb before (when I posted the bug),
and sometimes I got a huge traceback with
1+ lines and sometimes fewer than 100,
full of question marks, so I decided it wasn't of
great interest.  Today I've got quite a good
backtrace :)
> Could it be that you get a stack overflow? To my knowledge,
> stack space is very scarce on FreeBSD if you use threads.
>   
Well, yes it is!

I've tested stack overflow before without using threads,
and it raises an exception as expected.

But this:

  def test():
      test()

  from threading import Thread
  t = Thread(target=test)
  t.start()
  t.join()

produces "Segmentation fault" on python2.4 and "Bus error" on
python2.5.

The following line:
  threading.stack_size(1 << 19)
fixes this problem for python2.5.

Thanks a lot.  I think I'll set it up in sitecustomize.py.
I don't know, but maybe you should consider changing the platform defaults.
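A sketch of the same fix in modern terms (the 200-frame bound and the names are mine, chosen so the snippet is safe to run; `threading.stack_size()` must be called before the thread is started):

```python
import threading

# Request a 512 KiB stack for threads created after this call.  On a
# platform whose default thread stack is too small, deep Python
# recursion in a thread can hit the OS stack limit (SIGSEGV/SIGBUS)
# before Python's own RecursionError fires.
threading.stack_size(1 << 19)

result = []

def recurse(n=0):
    if n < 200:                 # bounded here so the sketch stays safe
        return recurse(n + 1)
    result.append(n)

t = threading.Thread(target=recurse)
t.start()
t.join()
print(result)  # [200]
```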

--
Paul.


Re: [Python-Dev] Pythreads and BSD descendants

2007-08-03 Thread Paul Colomiets
Cameron Laird wrote:
> Folklore that I remember so unreliably I avoid trying to repeat it here
> held that Python threading had problems on BSD and allied Unixes.  What's
> the status of this?  I suspect the answer is, "Everything works, and the
> only real problem ever was that *signals* have different semantics under
> Linux and *BSD."  Anyone who can answer explicitly, though, would repre-
> sent a help to me.
>   
I have been using Python for multithreaded applications on FreeBSD
for several years, and really the only problem I've discovered
is that the default stack size for new threads is too small for the
default recursion limit.  It can easily be fixed in Python 2.5.

Apart from that, everything works OK for me.



[Python-Dev] PEP-419: Protecting cleanup statements from interruptions

2012-04-08 Thread Paul Colomiets
Hi,

I present my first PEP.

http://www.python.org/dev/peps/pep-0419/

I've added the text at the end of this email for easier reference.  Comments are welcome.

-- 
Paul



PEP: 419
Title: Protecting cleanup statements from interruptions
Version: $Revision$
Last-Modified: $Date$
Author: Paul Colomiets 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 06-Apr-2012
Python-Version: 3.3


Abstract


This PEP proposes a way to protect Python code from being interrupted
inside a finally clause or during context manager cleanup.


Rationale
=

Python has two nice ways to do cleanup.  One is a ``finally``
statement and the other is a context manager (usually called using a
``with`` statement).  However, neither is protected from interruption
by ``KeyboardInterrupt`` or ``GeneratorExit`` caused by
``generator.throw()``.  For example::

lock.acquire()
try:
print('starting')
do_something()
finally:
print('finished')
lock.release()

If ``KeyboardInterrupt`` occurs just after the second ``print()``
call, the lock will not be released.  Similarly, the following code
using the ``with`` statement is affected::

from threading import Lock

class MyLock:

def __init__(self):
self._lock_impl = Lock()

def __enter__(self):
self._lock_impl.acquire()
print("LOCKED")

def __exit__(self, exc_type, exc_value, exc_tb):
print("UNLOCKING")
self._lock_impl.release()

lock = MyLock()
with lock:
do_something()

If ``KeyboardInterrupt`` occurs near any of the ``print()`` calls, the
lock will never be released.


Coroutine Use Case
--

A similar case occurs with coroutines.  Usually coroutine libraries
want to interrupt the coroutine with a timeout.  The
``generator.throw()`` method works for this use case, but there is no
way of knowing if the coroutine is currently suspended from inside a
``finally`` clause.

An example that uses yield-based coroutines follows.  The code looks
similar using any of the popular coroutine libraries Monocle [1]_,
Bluelet [2]_, or Twisted [3]_. ::

def run_locked():
yield connection.sendall('LOCK')
try:
yield do_something()
yield do_something_else()
finally:
yield connection.sendall('UNLOCK')

with timeout(5):
yield run_locked()

In the example above, ``yield something`` means to pause executing the
current coroutine and to execute coroutine ``something`` until it
finishes execution.  Therefore the coroutine library itself needs to
maintain a stack of generators.  The ``connection.sendall()`` call waits
until the socket is writable and does a similar thing to what
``socket.sendall()`` does.

The ``with`` statement ensures that all code is executed within 5
seconds timeout.  It does so by registering a callback in the main
loop, which calls ``generator.throw()`` on the top-most frame in the
coroutine stack when a timeout happens.
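The hazard can be sketched with a plain generator standing in for the coroutine stack (this snippet is illustrative only, not part of the PEP; ``log`` and the string values are invented):

```python
# A generator suspended *inside* its finally clause: a second throw()
# lands right there and abandons the rest of the cleanup.
log = []

def run_locked():
    log.append('LOCK')
    try:
        yield 'working'
    finally:
        log.append('unlocking')
        yield 'sending UNLOCK'   # the cleanup itself must wait on I/O
        log.append('unlocked')

g = run_locked()
next(g)                      # coroutine is busy working
g.throw(TimeoutError)        # timeout: we land in finally, which yields
try:
    g.throw(TimeoutError)    # second timeout interrupts the cleanup itself
except TimeoutError:
    pass
print(log)  # ['LOCK', 'unlocking'] -- 'unlocked' is never reached
```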

The ``greenlets`` extension works in a similar way, except that it
doesn't need ``yield`` to enter a new stack frame.  Otherwise
considerations are similar.


Specification
=

Frame Flag 'f_in_cleanup'
-

A new flag on the frame object is proposed.  It is set to ``True`` if
this frame is currently executing a ``finally`` clause.  Internally,
the flag must be implemented as a counter of nested finally statements
currently being executed.

The internal counter also needs to be incremented during execution of
the ``SETUP_WITH`` and ``WITH_CLEANUP`` bytecodes, and decremented
when execution of these bytecodes is finished.  This makes it possible
to protect ``__enter__()`` and ``__exit__()`` as well.


Function 'sys.setcleanuphook'
-

A new function for the ``sys`` module is proposed.  This function sets
a callback which is executed every time ``f_in_cleanup`` becomes
false.  Callbacks get a frame object as their sole argument, so that
they can figure out where they are called from.

The setting is thread local and must be stored in the
``PyThreadState`` structure.


Inspect Module Enhancements
---

Two new functions are proposed for the ``inspect`` module:
``isframeincleanup()`` and ``getcleanupframe()``.

``isframeincleanup()``, given a frame or generator object as its sole
argument, returns the value of the ``f_in_cleanup`` attribute of a
frame itself or of the ``gi_frame`` attribute of a generator.

``getcleanupframe()``, given a frame object as its sole argument,
returns the innermost frame which has a true value of
``f_in_cleanup``, or ``None`` if no frames in the stack have a nonzero
value for that attribute.  It starts to inspect from the specified
frame and walks to outer frames using ``f_back`` pointers, just like
``getouterframes()`` does.
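A pure-Python sketch of the proposed lookup.  Note that ``f_in_cleanup`` is exactly what this PEP proposes and does not exist on real CPython frames, so the sketch reads it defensively with ``getattr``:

```python
def getcleanupframe(frame):
    """Walk f_back pointers outward and return the innermost frame
    whose (proposed) f_in_cleanup flag is true, or None."""
    while frame is not None:
        if getattr(frame, 'f_in_cleanup', False):
            return frame
        frame = frame.f_back
    return None
```

Since real frames carry no such flag today, the behavior can only be exercised with stand-in objects.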


Example
===

An example implementation of a SIGINT handler that interrupts safely
might look like::

    import inspect, sys, functools

    def sigint_handler(sig, frame):
        if inspect.getcleanupframe(frame) is None:
            raise KeyboardInterrupt()
        sys.setcleanuphook(functools.partial(sigint_handler, 0))

Re: [Python-Dev] PEP-419: Protecting cleanup statements from interruptions

2012-04-08 Thread Paul Colomiets
Hi Antoine,

On Mon, Apr 9, 2012 at 12:06 AM, Antoine Pitrou  wrote:
>
> Hello Paul,
>
> Thanks for the PEP and the description of the various issues.
>
>> An example implementation of a SIGINT handler that interrupts safely
>> might look like::
>>
>>     import inspect, sys, functools
>>
>>     def sigint_handler(sig, frame):
>>         if inspect.getcleanupframe(frame) is None:
>>             raise KeyboardInterrupt()
>>         sys.setcleanuphook(functools.partial(sigint_handler, 0))
>
> It is not clear whether you are proposing this for the default signal
> handler, or only as an example that third-party libraries or frameworks
> could implement.
>

Only as an example.  The reason is given in the "Modifying
KeyboardInterrupt" section under "Unresolved Issues".  So it might be
changed if there is demand.

-- 
Paul


Re: [Python-Dev] PEP-419: Protecting cleanup statements from interruptions

2012-04-09 Thread Paul Colomiets
Hi Benjamin,

On Mon, Apr 9, 2012 at 12:42 AM, Benjamin Peterson  wrote:
> 2012/4/8 Paul Colomiets :
>> Function 'sys.setcleanuphook'
>> -
>>
>> A new function for the ``sys`` module is proposed.  This function sets
>> a callback which is executed every time ``f_in_cleanup`` becomes
>> false.  Callbacks get a frame object as their sole argument, so that
>> they can figure out where they are called from.
>
> Calling a function every time you leave a finally block? Isn't that a
> bit expensive?
>

For a signal handler it isn't, because you set the hook only when a
signal happens, and remove it the first time it fires (in the common case).

For yield-based coroutines, there is a similar overhead from the
trampoline at each yield and each return, and exiting a finally clause
doesn't happen more often than returning.

For both greenlets and yield-based coroutines it is intended to be used
for exceptional situations (when a timeout happens *and* the coroutine
is currently in a finally block), so it can be turned off when unneeded
(and even turned on only for that specific coroutine).

When the hook is not set, the only cost is a check of a single pointer
for NULL at each exit from a finally block.  That overhead should be
negligible.

-- 
Paul