2016-09-15 21:33 GMT+02:00 Victor Stinner :
> perf takes ~60 seconds by default. If you don't care of the accuracy,
> use --fast and it now only takes 20 seconds ;-)
Oops, I'm wrong. By default, a "single dot" (one worker process) takes
less than 1 second, so 20 dots (the default) take less than 20 seconds.
The discussion on benchmarking is no longer related to compact dict, so
I'm starting a new thread.
2016-09-15 13:27 GMT+02:00 Paul Moore :
> Just as a side point, perf provided essentially identical results but
> took 2 minutes as opposed to 8 seconds for timeit to do so. I
> understand why perf is better
On 15.09.16 19:13, Antoine Pitrou wrote:
Since this micro-benchmark creates the keys in order just before
filling the dict with them, randomizing the insertion order destroys
the temporal locality of object header accesses when iterating over the
dict keys. *This* looks like the right explanation.
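Antoine's explanation can be checked with a small stdlib-only experiment (my own illustrative sketch, not from the thread): build one dict from keys created in allocation order and one from the same key objects inserted after shuffling, then time iteration over each.

```python
import random
import timeit

N = 10**6

# Keys created in sequence: their object headers sit roughly
# consecutively on the heap, so iterating the dict walks memory
# mostly in order.
keys = [str(i) for i in range(N)]
d_ordered = dict.fromkeys(keys)

# The same key objects, inserted in random order: with an
# insertion-ordered dict, iteration now hops around the heap,
# destroying the temporal locality of the header accesses.
shuffled = keys[:]
random.shuffle(shuffled)
d_shuffled = dict.fromkeys(shuffled)

t_ordered = min(timeit.repeat(lambda: list(d_ordered), number=5, repeat=3))
t_shuffled = min(timeit.repeat(lambda: list(d_shuffled), number=5, repeat=3))
print("ordered:  %.1f ms per loop" % (t_ordered / 5 * 1e3))
print("shuffled: %.1f ms per loop" % (t_shuffled / 5 * 1e3))
```

The two dicts hold identical keys; only the heap layout relative to iteration order differs, which isolates the locality effect from the dict implementation itself.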
On Sep 13 2016, Tim Peters wrote:
> [Terry Reedy ]
>>> Tim Peters investigated and empirically determined that an
>>> O(n*n) binary insort, as he optimized it on real machines, is faster
>>> than O(n*logn) sorting for up to around 64 items.
>
> [Nikolaus Rath ]
>> Out of curiosity: is this test re
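For context, here is a rough sketch of the binary insertion sort in question (my own illustrative Python, not CPython's C implementation; Timsort sorts short runs of up to 64 elements this way):

```python
from bisect import bisect_right

def binary_insort(a):
    """Sort list `a` in place using binary insertion sort.

    O(n log n) comparisons but O(n^2) element moves; for small n the
    moves are cheap contiguous shifts, which is why this beats
    asymptotically faster algorithms on short inputs.
    """
    for i in range(1, len(a)):
        x = a[i]
        # Find the insertion point in the already-sorted prefix a[:i]
        # with O(log i) comparisons; bisect_right keeps the sort stable.
        pos = bisect_right(a, x, 0, i)
        # Shift the tail right by one slot and drop x into place.
        a[pos + 1:i + 1] = a[pos:i]
        a[pos] = x
    return a
```

Nikolaus's question still stands: the 64-element cutoff was tuned empirically on the hardware of the time, and the trade-off between comparison cost and move cost could look different on current machines.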
On Thu, 15 Sep 2016 18:13:54 +0200
Antoine Pitrou wrote:
>
> This also shows that a micro-benchmark that merely looks ok can actually
> be a terrible proxy for actual performance.
... unless all your dicts have their key objects nicely arranged
sequentially in heap memory, of course.
Regards
Antoine
On Thu, 15 Sep 2016 08:02:10 -0700
Raymond Hettinger wrote:
>
> Eric is correct on this one. The consecutive hashes make a huge difference
> for Python 3.5. While there is a full table scan, the check for NULL
> entries becomes a predictable branch when all the keys are in consecutive
> positions.
I wonder if this patch could just be rejected instead of lingering
forever? It clearly has no champion among the current core devs and
therefore it won't be included in Python 3.6 (we're all volunteers so
that's how it goes).
The use case for the patch is also debatable: Python's parser wasn't
des
On 09/15/2016 08:02 AM, Raymond Hettinger wrote:
Eric is correct on this one. The consecutive hashes make a huge difference for
Python 3.5. While there is a full table scan, the check for NULL
entries becomes a predictable branch when all the keys are in consecutive
positions. Ther
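Raymond's point rests on a CPython detail that is easy to verify: small ints hash to themselves, so consecutive int keys fill consecutive hash-table slots, while string keys scatter. A quick illustrative check (CPython-specific behavior):

```python
# In CPython, small ints hash to themselves (hash(-1) is the lone
# special case, returning -2).
assert all(hash(i) == i for i in range(10**4))

# So with int keys 0..n-1 in a table of 2**k slots, key i lands in
# slot i % 2**k: a dense, in-order fill whose NULL-entry check is a
# perfectly predictable branch during a full table scan.
slots = [hash(i) % 8 for i in range(8)]
print(slots)  # [0, 1, 2, 3, 4, 5, 6, 7]

# String keys hash (seeded) pseudo-randomly and scatter instead,
# so the NULL checks during a scan are unpredictable.
print([hash(str(i)) % 8 for i in range(8)])
```

This is why the all-int-keys microbenchmark flatters the 3.5 layout: it is close to the best case for the branch predictor, not a representative workload.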
Hello,
This is a monthly ping to get a review on http://bugs.python.org/issue26415 --
"Excessive peak memory consumption by the Python parser".
Following the comments from August, the patches now include a more detailed
comment for Init_ValidationGrammar().
The code change itself is still the
[Eric]
>> My understanding is that the all-int-keys case is an outlier. This is due
>> to how ints hash, resulting in fewer collisions and a mostly
>> insertion-ordered hash table. Consequently, I'd expect the above
>> microbenchmark to give roughly the same result between 3.5 and 3.6, which
>>
On Thu, 15 Sep 2016 07:08:50 -0600
Eric Snow wrote:
> On Sep 15, 2016 06:06, "Serhiy Storchaka" wrote:
> > Python 3.5: 10 loops, best of 3: 33.5 msec per loop
> > Python 3.6: 10 loops, best of 3: 37.5 msec per loop
> >
> > These results look surprising and inexplicable to me. I expected that
On Sep 15, 2016 06:06, "Serhiy Storchaka" wrote:
> Python 3.5: 10 loops, best of 3: 33.5 msec per loop
> Python 3.6: 10 loops, best of 3: 37.5 msec per loop
>
> These results look surprising and inexplicable to me. I expected that
even if there is some performance regression in the lookup or mod
On 15.09.16 12:43, Raymond Hettinger wrote:
On Sep 14, 2016, at 11:31 PM, Serhiy Storchaka wrote:
Note that this comes at the expense of a 20% slowdown in iteration.
$ ./python -m timeit -s "d = dict.fromkeys(range(10**6))" -- "list(d)"
Python 3.5: 66.1 msec per loop
Python 3.6: 82.5 msec per loop
On 15.09.16 11:57, Victor Stinner wrote:
Stop! Please stop using timeit, it's lying!
* You must not use the minimum but average or median
* You must run a microbenchmark in multiple processes to test
different randomized hash functions and different memory layouts
In short: you should use my perf module.
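Victor's first point can be approximated with nothing but the stdlib (a sketch of the idea, not a substitute for his perf module, which also automates the multiple-process part):

```python
import statistics
import timeit

def bench(stmt, setup="pass", number=10, repeat=20):
    """Time `stmt`, reporting the median per-loop time over many
    samples rather than timeit's default minimum."""
    samples = timeit.repeat(stmt, setup=setup, number=number, repeat=repeat)
    return statistics.median(s / number for s in samples)

t = bench("list(d)", setup="d = dict.fromkeys(range(10**5))")
print("median: %.2f msec per loop" % (t * 1e3))
```

The second point, multiple processes, matters because each interpreter start gets a different hash seed and memory layout; re-running a script like this under several `PYTHONHASHSEED` values is a crude manual stand-in for perf's worker processes.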
On 15.09.16 11:02, INADA Naoki wrote:
Are the two Pythons built with the same options?
Both are built from clean checkout with default options (hg update -C
3.x; ./configure; make -s). The only difference is -std=c99 and
additional warnings in 3.6:
Python 3.5:
gcc -pthread -c -Wno-unused-result -Ws
On 15 September 2016 at 10:43, Raymond Hettinger
wrote:
> Something like this will reveal the true and massive improvement in iteration
> speed:
>
> $ ./python.exe -m timeit -s "d=dict.fromkeys(map(str,range(10**6)))"
> "list(d)"
> py -3.5 -m timeit -s "d=dict.fromkeys(map(str,range(10**6)))" "list(d)"
2016-09-15 11:29 GMT+02:00 Antoine Pitrou :
> That sounds irrelevant. LTO+PGO improves performance, it does
> nothing for benchmarking per se.
In the past, I had bad surprises when running benchmarks without PGO:
https://haypo.github.io/journey-to-stable-benchmark-deadcode.html
I don't recall if
> On Sep 14, 2016, at 11:31 PM, Serhiy Storchaka wrote:
>
> Note that this comes at the expense of a 20% slowdown in iteration.
>
> $ ./python -m timeit -s "d = dict.fromkeys(range(10**6))" -- "list(d)"
> Python 3.5: 66.1 msec per loop
> Python 3.6: 82.5 msec per loop
A range of consec
On Thu, 15 Sep 2016 10:57:07 +0200
Victor Stinner wrote:
>
> > Both Pythons are built without `--with-optimizations` or `make
> > profile-opt`.
>
> That's bad :-) For most reliable benchmarks, it's better to use
> LTO+PGO compilation.
That sounds irrelevant. LTO+PGO improves performance, it does
nothing for benchmarking per se.
On 15 September 2016 at 09:57, Victor Stinner wrote:
> 2016-09-15 10:02 GMT+02:00 INADA Naoki :
>> In my environ:
>>
>> ~/local/python-master/bin/python3 -m timeit -s "d =
>> dict.fromkeys(range(10**6))" 'list(d)'
>
> Stop! Please stop using timeit, it's lying!
>
> * You must not use the minimum but average or median
On Thu, Sep 15, 2016 at 5:57 PM Victor Stinner
wrote:
> 2016-09-15 10:02 GMT+02:00 INADA Naoki :
> > In my environ:
> >
> > ~/local/python-master/bin/python3 -m timeit -s "d =
> > dict.fromkeys(range(10**6))" 'list(d)'
>
> Stop! Please stop using timeit, it's lying!
>
> * You must not use the minimum but average or median
2016-09-15 10:02 GMT+02:00 INADA Naoki :
> In my environ:
>
> ~/local/python-master/bin/python3 -m timeit -s "d =
> dict.fromkeys(range(10**6))" 'list(d)'
Stop! Please stop using timeit, it's lying!
* You must not use the minimum but average or median
* You must run a microbenchmark in multiple processes to test
different randomized hash functions and different memory layouts
On 15 September 2016 at 07:31, Serhiy Storchaka wrote:
> Note that this comes at the expense of a 20% slowdown in iteration.
>
> $ ./python -m timeit -s "d = dict.fromkeys(range(10**6))" -- "list(d)"
> Python 3.5: 66.1 msec per loop
> Python 3.6: 82.5 msec per loop
On my Windows 7 PC with
>
>
> Note that this comes at the expense of a 20% slowdown in iteration.
>
> $ ./python -m timeit -s "d = dict.fromkeys(range(10**6))" -- "list(d)"
> Python 3.5: 66.1 msec per loop
> Python 3.6: 82.5 msec per loop
>
>
Are the two Pythons built with the same options?
In my environ:
~/local/python
2016-09-15 8:31 GMT+02:00 Serhiy Storchaka :
> Note that this comes at the expense of a 20% slowdown in iteration.
>
> $ ./python -m timeit -s "d = dict.fromkeys(range(10**6))" -- "list(d)"
> Python 3.5: 66.1 msec per loop
> Python 3.6: 82.5 msec per loop
>
> Fortunately the cost of the loo