[Python-Dev] PEP 539 (second round): A new C API for Thread-Local Storage in CPython

2017-08-31 Thread Masayuki YAMAMOTO
Hi python-dev,

Since Erik started the PEP 539 thread on python-ideas, I've collected
feedback from the discussion and the pull request, and worked on improving
the API specification and reference implementation. As a result, I believe
the issues raised by that feedback have been resolved.

Well, it's probably not finished yet; one issue still bothers me.  I'm not
sure about the CPython startup sequence design (PEP 432, Restructuring the
CPython startup sequence, which might conflict with the draft specification
[1]).  Please let me know what you think about the new API specification.
In any case, I'm starting a new thread for the updated draft.


Summary of technical changes:

- Two functions corresponding to PyThread_delete_key_value and
PyThread_ReInitTLS are omitted, because they existed only for CPython's
own TLS implementation, which has been removed.

- Add an internal field "_is_initialized" and a constant default value
"Py_tss_NEEDS_INIT" to the Py_tss_t type, to indicate the thread key's
initialization state independently of the underlying implementation.

- Define the behavior of the functions that use the "_is_initialized"
field.

- Change the key argument to be passed as a pointer, allowing use in the
limited API, which does not know the size of the key type.

- Add three functions for dynamic (de-)allocation and for checking the
key's initialization state, since limited-API users must handle the struct
as opaque.

- Change platform support when thread support is enabled: every platform
is required to provide at least one native thread implementation.

The draft also adds explanations and rationales for the above changes, as
well as additional annotations for further information.


Regards,
Masayuki


[1]: The specifications of thread key creation and deletion reflect how they
are used by the API's clients (Modules/_tracemalloc.c and Python/pystate.c).
In particular, the Py_Initialize function, which is the original caller of
PyThread_tss_create, followed the flow "no-op when called for a second time"
up to CPython 3.6 [2].  However, the internal function _Py_InitializeCore,
newly added on the current master branch, follows the flow "fatal error when
called for a second time" [3].

[2]: https://docs.python.org/3.6/c-api/init.html#c.Py_Initialize

[3]: https://github.com/python/cpython/blob/master/Python/pylifecycle.c#L508

First round for PEP 539:
https://mail.python.org/pipermail/python-ideas/2016-December/043983.html

Discussion for the issue:
https://bugs.python.org/issue25658

HTML version for PEP 539 draft:
https://www.python.org/dev/peps/pep-0539/

Diff between first round and second round:
https://gist.github.com/ma8ma/624f9e4435ebdb26230130b11ce12d20/revisions

And the pull-request for reference implementation (work in progress):
https://github.com/python/cpython/pull/1362




PEP: 539
Title: A New C-API for Thread-Local Storage in CPython
Version: $Revision$
Last-Modified: $Date$
Author: Erik M. Bray, Masayuki Yamamoto
BDFL-Delegate: Nick Coghlan
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 20-Dec-2016
Post-History: 16-Dec-2016


Abstract
========

The proposal is to add a new Thread Local Storage (TLS) API to CPython which
would supersede use of the existing TLS API within the CPython interpreter,
while deprecating the existing API.  The new API is named "Thread Specific
Storage (TSS) API" (see `Rationale for Proposed Solution`_ for the origin of
the name).

Because the existing TLS API is only used internally (it is not mentioned in
the documentation, and the header that defines it, ``pythread.h``, is not
included in ``Python.h`` either directly or indirectly), this proposal
probably only affects CPython, but might also affect other interpreter
implementations (PyPy?) that implement parts of the CPython API.

This is motivated primarily by the fact that the old API uses ``int`` to
represent TLS keys across all platforms, which is neither POSIX-compliant,
nor portable in any practical sense [1]_.

.. note::

   Throughout this document the acronym "TLS" refers to Thread Local
   Storage and should not be confused with "Transport Layer Security"
   protocols.


Specification
=============

The current API for TLS used inside the CPython interpreter consists of 6
functions::

    PyAPI_FUNC(int) PyThread_create_key(void)
    PyAPI_FUNC(void) PyThread_delete_key(int key)
    PyAPI_FUNC(int) PyThread_set_key_value(int key, void *value)
    PyAPI_FUNC(void *) PyThread_get_key_value(int key)
    PyAPI_FUNC(void) PyThread_delete_key_value(int key)
    PyAPI_FUNC(void) PyThread_ReInitTLS(void)

These would be superseded by a new set of analogous functions::

    PyAPI_FUNC(int) PyThread_tss_create(Py_tss_t *key)
    PyAPI_FUNC(void) PyThread_tss_delete(Py_tss_t *key)
    PyAPI_FUNC(int) PyThread_tss_set(Py_tss_t *key, void *value)
    PyAPI_FUNC(void *) PyThread_tss_get(Py_tss_t *key)

The specification also adds a few new features:

* A new type ``Py_tss_t``--an opaque type, the definition of which may
  depend on the underlying TLS implementation.

Re: [Python-Dev] [python-committers] Python 3.3.7 release schedule and end-of-life

2017-08-31 Thread Victor Stinner
Hello,

2017-07-15 23:51 GMT+02:00 Ned Deily :
> To that end, I would like to schedule its next, and hopefully final, 
> security-fix release to coincide with the already announced 3.4.7 
> security-fix release. In particular, we'll plan to tag and release 3.3.7rc1 
> on Monday 2017-07-24 (UTC) and tag and release 3.3.7 final on Monday 
> 2017-08-07.  In the coming days, I'll be reviewing the outstanding 3.3 
> security issues and merging appropriate 3.3 PRs.  Some of them have been 
> sitting as patches for a long time so, if you have any such security issues 
> that you think belong in 3.3, it would be very helpful if you would review 
> such patches and turn them into 3.3 PRs.

Any update on the 3.3.7 release? 3.3.7rc1 hasn't been released yet, has it?

By the way, it seems like a recent update of libexpat caused a
regression :-( https://bugs.python.org/issue31170 I haven't had time
to look into that issue yet.

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 539 (second round): A new C API for Thread-Local Storage in CPython

2017-08-31 Thread Nick Coghlan
On 31 August 2017 at 18:16, Masayuki YAMAMOTO  wrote:
> Hi python-dev,
>
> Since Erik started the PEP 539 thread on python-ideas, I've collected
> feedback from the discussion and the pull request, and worked on improving
> the API specification and reference implementation. As a result, I believe
> the issues raised by that feedback have been resolved.
>
> Well, it's probably not finished yet; one issue still bothers me.  I'm not
> sure about the CPython startup sequence design (PEP 432, Restructuring the
> CPython startup sequence, which might conflict with the draft specification [1]),

I think that's just a bug in the startup refactoring - we don't
currently test the "Py_Initialize()/Py_Initialize()/Py_Finalize()"
sequence anywhere, and I'd missed that it's explicitly documented as
being permitted. I'd still want to keep the "multiple calls without an
intervening finalize are prohibited" behaviour for the new more
granular APIs (since it's simpler and easier to explain if it just
always fails rather than attempting to check that the previous
initialization supplied the same config settings), but the documented
Py_Initialize() behaviour can be reinstated by restoring the early
return in _Py_InitializeEx_Private.

It's also worth noting that we *do* test repeated
Py_Initialize()/Py_Finalize() cycles - Py_Finalize() explicitly clears
the internal flags that would otherwise lead to a fatal error in
_Py_InitializeCore.
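The two behaviours being contrasted can be sketched in a few lines of C
(the function names here are hypothetical stand-ins for Py_Initialize,
_Py_InitializeCore and Py_Finalize, not CPython's actual code):

```c
/* Sketch of the two initialization behaviours under discussion. */
static int initialized = 0;

/* Documented Py_Initialize() behaviour: no-op when called a second time. */
void init_idempotent(void) {
    if (initialized)
        return;       /* the early return reinstates the documented flow */
    initialized = 1;
    /* ... actual interpreter setup would happen here ... */
}

/* Behaviour proposed for the new granular APIs: a repeated call without
 * an intervening finalize is an error. */
int init_strict(void) {
    if (initialized)
        return -1;    /* CPython would raise a fatal error here */
    initialized = 1;
    return 0;
}

/* Py_Finalize() analogue: clears the flag so a full
 * initialize/finalize cycle can be repeated. */
void finalize(void) {
    initialized = 0;
}
```

The key point is that finalize() resets the flag, which is why repeated
Py_Initialize()/Py_Finalize() cycles work even when a bare repeated
initialize would fail.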

As far as the PEP itself goes, this version looks good to me, so if
there aren't any other significant comments between now and then, I'm
likely to accept it at the core development sprint next week.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] PEP 539 (second round): A new C API for Thread-Local Storage in CPython

2017-08-31 Thread Erik Bray
On Thu, Aug 31, 2017 at 10:16 AM, Masayuki YAMAMOTO
 wrote:
> Hi python-dev,
>
> Since Erik started the PEP 539 thread on python-ideas, I've collected
> feedback from the discussion and the pull request, and worked on improving
> the API specification and reference implementation. As a result, I believe
> the issues raised by that feedback have been resolved.

Thanks Masayuki for taking the lead on updating the PEP.  I've been
off the ball with it for a while.  In particular the table summarizing
the changes is nice.   I just have a few minor changes to suggest
(typos and such) that I'll make in a pull request.


Re: [Python-Dev] PEP 539 (second round): A new C API for Thread-Local Storage in CPython

2017-08-31 Thread Masayuki YAMAMOTO
2017-08-31 18:51 GMT+09:00 Nick Coghlan :

> [...]
> I think that's just a bug in the startup refactoring - we don't
> currently test the "Py_Initialize()/Py_Initialize()/Py_Finalize()"
> sequence anywhere, and I'd missed that it's explicitly documented as
> being permitted. I'd still want to keep the "multiple calls without an
> intervening finalize are prohibited" behaviour for the new more
> granular APIs (since it's simpler and easier to explain if it just
> always fails rather than attempting to check that the previous
> initialization supplied the same config settings), but the documented
> Py_Initialize() behaviour can be reinstated by restoring the early
> return in _Py_InitializeEx_Private.
>
I see the difference between the documentation and the current code; I
don't mind :)

> As far as the PEP itself goes, this version looks good to me, so if
> there aren't any other significant comments between now and then, I'm
> likely to accept it at the core development sprint next week.
>
It took time, but I'm happy to be nearing the finish. Thank you for helping!

Masayuki


Re: [Python-Dev] PEP 539 (second round): A new C API for Thread-Local Storage in CPython

2017-08-31 Thread Masayuki YAMAMOTO
Because I couldn't get there by myself, I'm really grateful to you and
your first draft.

Masayuki

2017-08-31 19:40 GMT+09:00 Erik Bray :

> On Thu, Aug 31, 2017 at 10:16 AM, Masayuki YAMAMOTO
>  wrote:
> > Hi python-dev,
> >
> > Since Erik started the PEP 539 thread on python-ideas, I've collected
> > feedback from the discussion and the pull request, and worked on
> > improving the API specification and reference implementation. As a
> > result, I believe the issues raised by that feedback have been resolved.
>
> Thanks Masayuki for taking the lead on updating the PEP.  I've been
> off the ball with it for a while.  In particular the table summarizing
> the changes is nice.   I just have a few minor changes to suggest
> (typos and such) that I'll make in a pull request.
>


Re: [Python-Dev] PEP 550 v4

2017-08-31 Thread Koos Zevenhoven
On Wed, Aug 30, 2017 at 5:36 PM, Yury Selivanov 
wrote:

> On Wed, Aug 30, 2017 at 9:44 AM, Yury Selivanov 
> wrote:
> [..]
> >> FYI, I've been sketching an alternative solution that addresses these
> kinds
> >> of things. I've been hesitant to post about it, partly because of the
> >> PEP550-based workarounds that Nick, Nathaniel, Yury etc. have been
> >> describing, and partly because that might be a major distraction from
> other
> >> useful discussions, especially because I wasn't completely sure yet
> about
> >> whether my approach has some fatal flaw compared to PEP 550 ;).
> >
> > We'll never know until you post it. Go ahead.
>
>
Anyway, thanks to these efforts, your proposal has become somewhat more
competitive compared to mine ;). I'll post mine as soon as I find the time
to write everything down. My intention is to do so before next week.


—Koos


-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


[Python-Dev] Inplace operations for PyLong objects

2017-08-31 Thread Manciu, Catalin Gabriel
Hi everyone,

While looking over the PyLong source code in Objects/longobject.c I came
across the fact that the PyLong object doesn't include implementations for
basic inplace operations such as addition or multiplication:

[...]
long_long,  /*nb_int*/
0,  /*nb_reserved*/
long_float, /*nb_float*/
0,  /* nb_inplace_add */
0,  /* nb_inplace_subtract */
0,  /* nb_inplace_multiply */
0,  /* nb_inplace_remainder */
[...]

While I understand that the immutable nature of this type of object justifies
this approach, I wanted to experiment and see how much performance an inplace
add would bring.
My inplace add reverts to calling the default long_add function when:
- the refcount of the first operand indicates that it's being shared, or
- that operand is one of the preallocated 'small ints',
which should mitigate the effects of not conforming to the PyLong immutability
specification.
It also allocates a new PyLong _only_ in case of a potential overflow.
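The guard logic can be sketched with a toy refcounted box type (Box,
box_inplace_add and the small-int bounds are invented for this example;
CPython's real long_add and small-int cache are far more involved, and
__builtin_add_overflow assumes GCC/Clang):

```c
/* Toy sketch of the in-place-add guards: mutate only when nobody else
 * can observe the object (refcount == 1, not a shared small int, and
 * no overflow); otherwise fall back to allocating a fresh result. */
#include <stdlib.h>

#define SMALL_INT_MIN (-5)
#define SMALL_INT_MAX 256

typedef struct {
    long refcnt;
    long value;
} Box;

static int is_small_int(const Box *b) {
    return SMALL_INT_MIN <= b->value && b->value <= SMALL_INT_MAX;
}

/* Fallback path: allocate a fresh result, like the default long_add. */
static Box *box_add(const Box *a, long n) {
    Box *r = malloc(sizeof *r);
    r->refcnt = 1;
    r->value = a->value + n;
    return r;
}

/* In-place fast path with the guards described in the experiment. */
static Box *box_inplace_add(Box *a, long n) {
    long tmp;
    if (a->refcnt != 1 || is_small_int(a)
            || __builtin_add_overflow(a->value, n, &tmp)) {
        return box_add(a, n);   /* shared, small, or would overflow */
    }
    a->value = tmp;             /* safe to mutate: sole owner, no overflow */
    return a;
}
```

When the fast path applies, no allocation or deallocation happens at all,
which is where the measured speedup comes from.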

The workload I used to evaluate this is a simple script that does a lot of
inplace adding:

import time
import sys

def write_progress(prev_percentage, value, limit):
    percentage = (100 * value) // limit
    if percentage != prev_percentage:
        sys.stdout.write("%d%%\r" % (percentage))
        sys.stdout.flush()
    return percentage

progress = -1
the_value = 0
the_increment = ((1 << 30) - 1)
crt_iter = 0
total_iters = 10 ** 9

start = time.time()

while crt_iter < total_iters:
    the_value += the_increment
    crt_iter += 1

    progress = write_progress(progress, crt_iter, total_iters)

end = time.time()

print ("\n%.3fs" % (end - start))
print ("the_value: %d" % (the_value))

Running the baseline version outputs:
./python inplace.py
100%
356.633s
the_value: 10737418230

Running the modified version outputs:
./python inplace.py
100%
308.606s
the_value: 10737418230

In summary, I got a +13.47% improvement for the modified version.
The CPython revision I'm using is 7f066844a79ea201a28b9555baf4bceded90484f
from the master branch, and I'm running on an i7-6700K CPU with Turbo Boost
disabled (frequency pinned at 4GHz).

Do you think that such an optimization would be a good approach?

Thank you,
Catalin


Re: [Python-Dev] bpo-5001: More-informative multiprocessing error messages (#3079)

2017-08-31 Thread Brett Cannon
On Wed, 30 Aug 2017 at 02:56 Paul Moore  wrote:

> On 30 August 2017 at 10:48, Nick Coghlan  wrote:
> > On 30 August 2017 at 19:39, Antoine Pitrou  wrote:
> >> On Wed, 30 Aug 2017 08:48:56 +0300
> >> Serhiy Storchaka  wrote:
> >>> Please, please don't forget to edit commit messages before merging. An
> >>> excessively verbose commit message will be kept in the repository
> >>> forever and will harm future developers that read a history.
> >>
> >> Sorry, I routinely forget about it.  Can we have an automated check for
> >> this?
> >
> > Not while we're pushing the "squash & merge" button directly, as
> > there's no opportunity for any automated hooks to run at that point :(
> >
> > More options open up if the actual commit is being handled by a bot,
> > but even that would still depend on us providing an updated commit
> > message via some mechanism.
>
> We could have a check for PRs that contain multiple commits. The check
> would flag that the commit message needs manual editing before
> merging, and would act as a prompt for submitters who are willing to
> do so to squash their PRs themselves. Personally, I'd prefer that as,
> apart from admin activities like making sure the PR and issue number
> references are present and correct, I think the original submitter has
> a better chance of properly summarising the PR in the merged commit
> message anyway.
>

So you would want a comment when the PR reaches "awaiting merge" with
instructions requesting the author do their own squash commit to simplify
the message for us?

There's also https://github.com/python/bedevere/issues/14 to help
post-commit remind core devs when they forgot to do something. While it
would be too late to fix a problem, hopefully regular reminders would still
be helpful.


Re: [Python-Dev] bpo-5001: More-informative multiprocessing error messages (#3079)

2017-08-31 Thread Paul Moore
On 31 August 2017 at 20:27, Brett Cannon  wrote:
> So you would want a comment when the PR reaches "awaiting merge" with
> instructions requesting the author do their own squash commit to simplify
> the message for us?

That would work. It could say that the PR consists of multiple commits
and the commit message needs to be summarised when merging, and
suggest that if the submitter is willing, they could squash and
summarise the message themselves. I don't want to give the impression
we're insisting the submitter squash, rather we're pointing out
there's an additional step needed and the submitter can help by doing
that step if they wish.

> There's also https://github.com/python/bedevere/issues/14 to help
> post-commit remind core devs when they forgot to do something. While it
> would be too late to fix a problem, hopefully regular reminders would still
> be helpful.

Yeah, that would probably be useful for regular committers.

Paul


Re: [Python-Dev] Inplace operations for PyLong objects

2017-08-31 Thread Terry Reedy

On 8/31/2017 2:40 PM, Manciu, Catalin Gabriel wrote:

Hi everyone,

While looking over the PyLong source code in Objects/longobject.c I came
across the fact that the PyLong object doesn't include implementations for
basic inplace operations such as addition or multiplication:

[...]
 long_long,  /*nb_int*/
 0,  /*nb_reserved*/
 long_float, /*nb_float*/
 0,  /* nb_inplace_add */
 0,  /* nb_inplace_subtract */
 0,  /* nb_inplace_multiply */
 0,  /* nb_inplace_remainder */
[...]

While I understand that the immutable nature of this type of object justifies
this approach, I wanted to experiment and see how much performance an inplace
add would bring.
My inplace add will revert to calling the default long_add function when:
- the refcount of the first operand indicates that it's being shared
or
- that operand is one of the preallocated 'small ints'
which should mitigate the effects of not conforming to the PyLong immutability
specification.
It also allocates a new PyLong _only_ in case of a potential overflow.

The workload I used to evaluate this is a simple script that does a lot of
inplace adding:

import time
import sys

def write_progress(prev_percentage, value, limit):
    percentage = (100 * value) // limit
    if percentage != prev_percentage:
        sys.stdout.write("%d%%\r" % (percentage))
        sys.stdout.flush()
    return percentage

progress = -1
the_value = 0
the_increment = ((1 << 30) - 1)
crt_iter = 0
total_iters = 10 ** 9

start = time.time()

while crt_iter < total_iters:
    the_value += the_increment
    crt_iter += 1

    progress = write_progress(progress, crt_iter, total_iters)
end = time.time()

print ("\n%.3fs" % (end - start))
print ("the_value: %d" % (the_value))

Running the baseline version outputs:
./python inplace.py
100%
356.633s
the_value: 10737418230

Running the modified version outputs:
./python inplace.py
100%
308.606s
the_value: 10737418230

In summary, I got a +13.47% improvement for the modified version.
The CPython revision I'm using is 7f066844a79ea201a28b9555baf4bceded90484f
from the master branch and I'm running on a I7 6700K CPU with Turbo-Boost
disabled (frequency is pinned at 4GHz).

Do you think that such an optimization would be a good approach ?


On my machine, the more realistic code, with an implicit C loop,
the_value = sum(the_increment for i in range(total_iters))
gives the same value twice as fast as your explicit Python loop.
(I cut total_iters down to 10**7).

You might check whether sum uses an in-place accumulator for ints.

--
Terry Jan Reedy



Re: [Python-Dev] bpo-5001: More-informative multiprocessing error messages (#3079)

2017-08-31 Thread Terry Reedy

On 8/31/2017 3:27 PM, Brett Cannon wrote:

> On Wed, 30 Aug 2017 at 02:56 Paul Moore wrote:
>
>> do so to squash their PRs themselves. Personally, I'd prefer that as,
>> apart from admin activities like making sure the PR and issue number
>> references are present and correct, I think the original submitter has
>> a better chance of properly summarising the PR in the merged commit
>> message anyway.

My experience is different.

> So you would want a comment when the PR reaches "awaiting merge" with
> instructions requesting the author do their own squash commit to

My impression is that this makes the individual commits disappear.

> simplify the message for us?

Not me.

--
Terry Jan Reedy



Re: [Python-Dev] Inplace operations for PyLong objects

2017-08-31 Thread Manciu, Catalin Gabriel
>On my machine, the more realistic code, with an implicit C loop,
>the_value = sum(the_increment for i in range(total_iters))
>gives the same value twice as fast as your explicit Python loop.
>(I cut total_iters down to 10**7).

Your code is faster for a number of reasons:
- range in Python 3 is implemented in C, so it's considerably faster;
  and, because your range only goes up to 10 ** 7, the fastest iterator
  is used: rangeiterobject, whose 'next' function is implemented
  using native longs instead of CPython PyLongs:
  rangeiter_next(rangeiterobject *r) from rangeobject.c
- my code also does some extra work to output a progress indicator

>You might check whether sum uses an in-place accumulator for ints.

You're right: sum actually works with native longs until it overflows or
you stop adding PyLongs, then it falls back to PyNumber_Add; see
    static PyObject *
    builtin_sum_impl(PyObject *module, PyObject *iterable, PyObject *start)
from bltinmodule.c
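That native-accumulator-with-overflow-fallback pattern can be sketched as
follows (add_overflows and sum_fast are invented names for the example,
not CPython's):

```c
/* Accumulate in a native C long and detect overflow, at which point a
 * real implementation would hand off to arbitrary-precision addition
 * (PyNumber_Add in CPython). */
#include <limits.h>

/* Returns 1 if a + b would overflow a C long (portable check). */
static int add_overflows(long a, long b) {
    if (b > 0)
        return a > LONG_MAX - b;
    return a < LONG_MIN - b;
}

/* Sum n values with a native accumulator; on overflow, stop and report
 * how many items were consumed so a slow path could resume from there. */
static long sum_fast(const long *vals, int n, int *consumed) {
    long acc = 0;
    int i;
    for (i = 0; i < n; i++) {
        if (add_overflows(acc, vals[i]))
            break;              /* caller switches to the bignum slow path */
        acc += vals[i];
    }
    *consumed = i;
    return acc;
}
```

As long as the running total fits in a machine word, no PyLong objects are
created at all, which is the same effect the inplace-add experiment is after.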

The focus of this experiment was inplace adds in general. While, as you've
shown, there are ways to write the loop optimally, the benchmark was written
as a huge loop just to showcase that there is an improvement using this
approach. The performance improvement is a result of not having to
allocate/deallocate a PyLong per iteration.

A huge Python program with lots of PyLong inplace operations (not just
adds, this can be applied to all PyLong inplace operations), regardless of them
being in a loop or not, might benefit from such an optimization.

Thank you,
Catalin



Re: [Python-Dev] Inplace operations for PyLong objects

2017-08-31 Thread Chris Angelico
On Fri, Sep 1, 2017 at 9:35 AM, Manciu, Catalin Gabriel
 wrote:
> A huge Python program with lots of PyLong inplace operations (not just
> adds, this can be applied to all PyLong inplace operations), regardless of 
> them
> being in a loop or not, might benefit from such an optimization.

If you're writing a lot of numerical work in Python, have you tried
running your code in PyPy? At very least, it's worth adding as another
data point in your performance comparisons.

ChrisA


Re: [Python-Dev] Inplace operations for PyLong objects

2017-08-31 Thread Antoine Pitrou
On Thu, 31 Aug 2017 23:35:34 +
"Manciu, Catalin Gabriel"  wrote:
> 
> The focus of this experiment was inplace adds in general. While, as you've
> shown, there are ways to write the loop optimally, the benchmark was written
> as a huge loop just to showcase that there is an improvement using this
> approach. The performance improvement is a result of not having to
> allocate/deallocate a PyLong per iteration.
> 
> A huge Python program with lots of PyLong inplace operations (not just
> adds, this can be applied to all PyLong inplace operations), regardless of 
> them
> being in a loop or not, might benefit from such an optimization.

I'm skeptical that there are programs out there limited by the
speed of PyLong inplace additions.  Even if you have a bigint-intensive
workload (say public-key cryptography, not that it's specifically a
good idea to do so in pure Python), chances are it's spending much of
its time in more sophisticated operations such as multiplication, power
or division.

In other words, while your experiment has intellectual and educational
interest, I don't think it shows the path to a useful optimization.

Regards

Antoine.

