Re: [Python-Dev] Benchmark results across all major Python implementations

2015-11-17 Thread Popa, Stefan A
Hi Python community,

Thank you for your feedback! We will look into this and come up with an e-mail 
format proposal in the following days.

Best regards,

--
Stefan A. POPA
Software Engineering Manager
System Technologies and Optimization Division
Software Services Group, Intel Romania

> On 17 Nov 2015, at 21:22, Stewart, David C  wrote:
> 
> +Stefan (owner of the 0-day lab)
> 
> 
> 
> 
>> On 11/17/15, 10:40 AM, "Python-Dev on behalf of R. David Murray" 
>> <rdmur...@bitdance.com> wrote:
>> 
>>> On Mon, 16 Nov 2015 23:37:06 +, "Stewart, David C" 
>>>  wrote:
>>> Last June we started publishing a daily performance report of the latest 
>>> Python tip against the previous day's run and some established synch point. 
>>> We mail these to the community to act as a "canary in the coal mine." I 
>>> wrote about it at https://01.org/lp/blog/0-day-challenge-what-pulse-internet
>>> 
>>> You can see our manager-style dashboard of a couple of key workloads at 
>>> http://languagesperformance.intel.com/
>>> (I have this running constantly on a dedicated screen in my office).
>> 
>> Just took a look at this.  Pretty cool.  The web page is a bit confusing,
>> though.  It doesn't give any clue as to what is being measured by the
>> numbers presented...it isn't obvious whether those downward sloping
>> lines represent progress or regression.  Also, the intro talks about
>> historical data, but other than the older dates[*] in the graph there's
>> no access to it.  Do you have plans to provide access to the raw data?
>> It also doesn't show all of the tests shown in the example email in your
>> blog post or the emails to python-checkins...do you plan to make those
>> graphs available in the future as well?
> 
> The data on this website has been normalized so that "up" is "good" as far 
> as the slope of the line goes. The daily email has a lot more detail about 
> the hardware and software configuration and the versions being compared. We 
> run each workload multiple times and visually show the relative standard 
> deviation on the graph.
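> 
> (By relative standard deviation I mean the standard deviation expressed as 
> a percentage of the mean. A minimal sketch with made-up timings, not our 
> actual pipeline; the statistics module needs Python 3.4+:)
> 
>     from statistics import mean, stdev
> 
>     # hypothetical timings (seconds) for one workload run several times
>     runs = [1.02, 0.98, 1.05, 1.01, 0.99]
>     rsd = stdev(runs) / mean(runs) * 100
>     print("relative standard deviation: %.2f%%" % rsd)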
> 
> No plans to show the raw data.
> 
> I think showing multiple workloads graphically sounds useful; we should look 
> into that.
> 
>> 
>> Also, in the emails, what is the PGO column percentage relative to?
> 
> It's the performance boost on the current rev from just using PGO. Another 
> way to think about it is, this is the performance that you leave on the 
> table by *not* building CPython with PGO. For example, from last night's 
> run, we would see an 18.54% boost in django_v2 by building Python using PGO.
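> 
> Roughly speaking (illustrative, made-up timings, and assuming the column is 
> simply the relative speedup of the PGO build over the regular build):
> 
>     # Hypothetical django_v2 timings in seconds -- not real measurements.
>     time_default = 1.000   # regular build
>     time_pgo = 0.844       # PGO build
>     boost = (time_default / time_pgo - 1) * 100
>     print("PGO boost: %.2f%%" % boost)   # -> PGO boost: 18.48%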
> 
> Note: PGO is not the default way to build Python because it is relatively 
> slow to compile it that way. (I think it should be the default). 
> 
> Here are the instructions for using it (thanks to Peter Wang for the 
> instructions):
> 
> hg clone https://hg.python.org/cpython cpython
> cd cpython
> hg update 2.7
> ./configure
> make profile-opt
> 
> 
> 
>> 
>> I suppose that for this to have maximum effect someone would have to
>> specifically be paying attention to performance and figuring out why
>> every (real) regression happened.  I don't suppose we have anyone in the
>> community currently who is taking on that role, though we certainly do
>> have people who are *interested* in Python performance :)
> 
> We're trying to fill that role as much as we can. When we see a significant 
> (and unexplained) regression, I usually ask our engineers to bisect it to 
> identify the offending patch and root-cause it.
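> 
> That kind of bisection can be largely automated with Mercurial. A rough 
> sketch (check_perf.sh is a hypothetical script, not something we ship; it 
> would run the workload and exit non-zero when the timing regresses past a 
> threshold):
> 
>     hg bisect --reset
>     hg bisect --bad             # current tip shows the regression
>     hg bisect --good v3.5.0     # a revision known to be fast
>     hg bisect --command ./check_perf.sh
> 
> Mercurial then walks the revisions between the two marks and reports the 
> first changeset for which the script fails.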
> 
>> 
>> --David
>> 
>> [*] Personally I'd find it easier to read those dates in MM-DD form,
>> but I suppose that's a US quirk, since in the US when using slashes
>> the month comes first...
> 
> You and me both. As you surmised, the site was developed by our friends in 
> Europe. :-)
> 


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-02 Thread Popa, Stefan A
Hi Fabio,

Let me know if you have any questions related to the Python benchmarks run 
nightly in Intel’s 0-Day Lab.

Thanks,
Stefan


From: "Stewart, David C" 
mailto:david.c.stew...@intel.com>>
Date: Tuesday 1 December 2015 at 17:26
To: Fabio Zadrozny mailto:fabi...@gmail.com>>
Cc: "R. David Murray" mailto:rdmur...@bitdance.com>>, 
"python-dev@python.org" 
mailto:python-dev@python.org>>, Stefan A Popa 
mailto:stefan.a.p...@intel.com>>
Subject: Re: [Python-Dev] Avoiding CPython performance regressions



From: Fabio Zadrozny <fabi...@gmail.com>
Date: Tuesday, December 1, 2015 at 1:36 AM
To: David Stewart <david.c.stew...@intel.com>
Cc: "R. David Murray" <rdmur...@bitdance.com>, "python-dev@python.org" 
<python-dev@python.org>
Subject: Re: [Python-Dev] Avoiding CPython performance regressions


On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C 
<david.c.stew...@intel.com> wrote:

On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" 
<rdmur...@bitdance.com> wrote:

>
>There's also an Intel project posted about here recently that checks
>individual benchmarks for performance regressions and posts the results
>to python-checkins.

The description of the project is at https://01.org/lp - Python results are 
indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to 
Romania's National Day holiday!)

There is also a graphic dashboard at http://languagesperformance.intel.com/

Hi Dave,

Interesting, but I'm curious which benchmark set you are running. From the 
graphs it seems to have a really high standard deviation, so I'm curious 
whether that's really due to changes in the CPython codebase, issues in the 
benchmark set, or how the benchmarks are run... (it doesn't seem to be the 
benchmarks from https://hg.python.org/benchmarks/, right?).

Fabio – my advice to you is to check out the daily emails sent to 
python-checkins. An example is 
https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If 
you still have questions, Stefan can answer (he is copied).

The graphs are really just a manager-level indicator of trends, which I find 
very useful (I have it running continuously on one of the monitors in my 
office), but core developers might want to see the day-to-day effect of their 
changes. (Particularly if they thought a change was going to improve 
performance; it's nice to see whether you get community confirmation.)

We do run a subset of https://hg.python.org/benchmarks/ nightly, and run the 
full set when we are evaluating our performance patches.
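
For anyone who wants to try something similar locally, the suite's own driver 
can compare two interpreter builds directly. A rough sketch (the paths and the 
benchmark selection are just examples, not our exact configuration):

    hg clone https://hg.python.org/benchmarks benchmarks
    cd benchmarks
    # Compare a baseline interpreter against a freshly built one on a single
    # workload; -b selects benchmarks, -r runs the more rigorous mode.
    python perf.py -r -b django_v2 /usr/bin/python2.7 ../cpython/python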

Some of the "benchmarks" really do have a high standard deviation, which makes 
them hardly very useful for measuring incremental performance improvements, 
IMHO. I like to see it spelled out so I can tell whether I should be worried or 
not about a particular delta.

Dave