Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Maciej Fijalkowski
Hi

Thanks for doing the work! I'm one of the pypy devs and I'm very
interested in seeing this get somewhere. I must say I struggle to
read the graph - is red good or is red bad, for example?

I'm keen to help you with anything you need to run it repeatedly.

PS. The intel stuff runs one benchmark in a very questionable manner,
so let's maybe not rely on it too much.

On Mon, Nov 30, 2015 at 3:52 PM, R. David Murray  wrote:
> On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny  wrote:
>> Note that uploading the data to SpeedTin should be pretty straightforward
>> (by using https://github.com/fabioz/pyspeedtin), so the main issue would be
>> setting up a machine to run the benchmarks.
>
> Thanks, but Zach almost has this working using codespeed (he's still
> waiting on a review from infrastructure, I think).  The server was not in
> fact running; a large part of what Zach did was to get that server set up.
> I don't know what it would take to export the data to another consumer,
> but if you want to work on that I'm guessing there would be no objection.
> And I'm sure there would be no objection if you want to get involved
> in maintaining the benchmark server!
>
> There's also an Intel project posted about here recently that checks
> individual benchmarks for performance regressions and posts the results
> to python-checkins.
>
> --David
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C  wrote:

>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray"
>  rdmur...@bitdance.com> wrote:
>
> >
> >There's also an Intel project posted about here recently that checks
> >individual benchmarks for performance regressions and posts the results
> >to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results
> are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1
> due to Romania National Day holiday!)
>
> There is also a graphic dashboard at
> http://languagesperformance.intel.com/


Hi Dave,

Interesting, but I'm curious: which benchmark set are you running? From
the graphs it seems to have a really high standard deviation, so I'm
curious whether that's really due to changes in the CPython codebase,
issues in the benchmark set, or how the benchmarks are run... (it doesn't
seem to be the benchmarks from https://hg.python.org/benchmarks/, right?)

--
Fabio


>
>
> Dave


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski  wrote:

> Hi
>
> Thanks for doing the work! I'm one of the pypy devs and I'm very
> interested in seeing this get somewhere. I must say I struggle to
> read the graph - is red good or is red bad, for example?
>
> I'm keen to help you with anything you need to run it repeatedly.
>
> PS. The intel stuff runs one benchmark in a very questionable manner,
> so let's maybe not rely on it too much.
>

Hi Maciej,

Great, it'd be awesome to have data on multiple Python VMs (my latest target
is really having a way to compare across multiple VMs/versions easily and
to help each implementation keep a focus on performance). Ideally, a single,
dedicated machine could be used just to run the benchmarks from multiple
VMs (one less variable to take into account for comparisons later on, as
I'm not sure it'd be reliable to normalize benchmark data from different
machines -- it seems Zach was the one to contact for that, but if there's
such a machine already being used to run PyPy, maybe it could be extended
to run other VMs too?).

As for the graph, it should be easy to customize (and I'm open to
suggestions). As it stands, red is slower and blue is faster (so,
for instance in
https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time, the
fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
I've updated the comments to make it clearer (and changed the second graph
to compare the latest against the fastest version (2.7.11rc1 vs 2.7.3) for
the individual benchmarks).

Best Regards,

Fabio



>
> On Mon, Nov 30, 2015 at 3:52 PM, R. David Murray 
> wrote:
> > On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny 
> wrote:
> >> Note that uploading the data to SpeedTin should be pretty
> straightforward
> >> (by using https://github.com/fabioz/pyspeedtin), so the main issue
> would be
> >> setting up a machine to run the benchmarks.
> >
> > Thanks, but Zach almost has this working using codespeed (he's still
> > waiting on a review from infrastructure, I think).  The server was not in
> > fact running; a large part of what Zach did was to get that server set
> up.
> > I don't know what it would take to export the data to another consumer,
> > but if you want to work on that I'm guessing there would be no objection.
> > And I'm sure there would be no objection if you want to get involved
> > in maintaining the benchmark server!
> >
> > There's also an Intel project posted about here recently that checks
> > individual benchmarks for performance regressions and posts the results
> > to python-checkins.
> >
> > --David


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Maciej Fijalkowski
On Tue, Dec 1, 2015 at 11:49 AM, Fabio Zadrozny  wrote:
>
> On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski  wrote:
>>
>> Hi
>>
>> Thanks for doing the work! I'm one of the pypy devs and I'm very
>> interested in seeing this get somewhere. I must say I struggle to
>> read the graph - is red good or is red bad, for example?
>>
>> I'm keen to help you with anything you need to run it repeatedly.
>>
>> PS. The intel stuff runs one benchmark in a very questionable manner,
>> so let's maybe not rely on it too much.
>
>
> Hi Maciej,
>
> Great, it'd be awesome having data on multiple Python VMs (my latest target
> is really having a way to compare across multiple VMs/versions easily and
> help each implementation keep a focus on performance). Ideally, a single,
> dedicated machine could be used just to run the benchmarks from multiple VMs
> (one less variable to take into account for comparisons later on, as I'm not
> sure it'd be reliable to normalize benchmark data from different machines --
> it seems Zach was the one to contact for that, but if there's such a
> machine already being used to run PyPy, maybe it could be extended to run
> other VMs too?).
>
> As for the graph, it should be easy to customize (and I'm open to
> suggestions). As it stands, red is slower and blue is faster (so,
> for instance in
> https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time,  the
> fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
> I've updated the comments to make it clearer (and changed the second graph
> to compare the latest against the fastest version (2.7.11rc1 vs 2.7.3) for
> the individual benchmarks).
>
> Best Regards,
>
> Fabio

There is definitely a machine available. I suggest you ask the
python-infra list for access. It definitely can be used to run more
than just pypy stuff. As for normalizing across multiple machines -
don't even bother. Different architectures make A LOT of difference,
especially cache sizes and whatnot, which seem to have a different
impact on different loads.

As for the graph - I like the split across the benchmarks; a better
description ("higher is better") would be good.

I have a lot of ideas about visualizations, pop in on IRC, I'm happy
to discuss :-)

Cheers,
fijal


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Victor Stinner
2015-12-01 10:49 GMT+01:00 Fabio Zadrozny :
> As for the graph, it should be easy to customize (and I'm open to
> suggestions). As it stands, red is slower and blue is faster (so,
> for instance in
> https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time

For me, -10% means "faster" in the context of a benchmark. On this
graph, I see -21% but it's slower in fact. I'm confused.

Victor


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Tue, Dec 1, 2015 at 9:35 AM, Victor Stinner 
wrote:

> 2015-12-01 10:49 GMT+01:00 Fabio Zadrozny :
> > As for the graph, it should be easy to customize (and I'm open to
> > suggestions). As it stands, red is slower and blue is faster
> (so,
> > for instance in
> > https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time
>
> For me, -10% means "faster" in the context of a benchmark. On this
> graph, I see -21% but it's slower in fact. I'm confused.
>
> Victor
>

Hmm, I understand your point, although I think the main reason for the
confusion is the lack of a real legend there...

I.e., it's that way because the idea is that it's a comparison between
two versions, not absolute benchmark times, so negative (red) means one
version is 'slower/worse' than the other and blue means it's
'faster/better' (as a reference, Eclipse uses the same format for
reporting -- e.g.:
http://download.eclipse.org/eclipse/downloads/drops4/R-4.5-201506032000/performance/performance.php?fp_type=0
)
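The two possible sign conventions can be made concrete with a small sketch (hypothetical timings, not numbers from the actual report):

```python
def runtime_delta(base_s, new_s):
    # Percent change in run time: negative means the new version is faster.
    return (new_s - base_s) / base_s * 100.0

def speed_delta(base_s, new_s):
    # Percent change in speed (1/time): negative means the new version is slower.
    return (base_s / new_s - 1.0) * 100.0

# Hypothetical numbers: the new build takes 2.42s where the baseline took 2.00s.
print("%+.0f%% run time" % runtime_delta(2.0, 2.42))  # +21% run time (slower)
print("%+.0f%% speed" % speed_delta(2.0, 2.42))       # -17% speed (slower)
```

Whether "-21%" reads as faster or slower depends entirely on which of the two deltas a report shows, which is why a legend matters.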

I've added a legend now, so, hopefully it clears up the confusion ;)

--
Fabio


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Stewart, David C


From: Fabio Zadrozny <fabi...@gmail.com>
Date: Tuesday, December 1, 2015 at 1:36 AM
To: David Stewart <david.c.stew...@intel.com>
Cc: "R. David Murray" <rdmur...@bitdance.com>,
"python-dev@python.org" <python-dev@python.org>
Subject: Re: [Python-Dev] Avoiding CPython performance regressions


On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C
<david.c.stew...@intel.com> wrote:

On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray"
<rdmur...@bitdance.com> wrote:

>
>There's also an Intel project posted about here recently that checks
>individual benchmarks for performance regressions and posts the results
>to python-checkins.

The description of the project is at https://01.org/lp - Python results are 
indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to 
Romania National Day holiday!)

There is also a graphic dashboard at http://languagesperformance.intel.com/

Hi Dave,

Interesting, but I'm curious: which benchmark set are you running? From the 
graphs it seems to have a really high standard deviation, so I'm curious 
whether that's really due to changes in the CPython codebase, issues in the 
benchmark set, or how the benchmarks are run... (it doesn't seem to be the 
benchmarks from https://hg.python.org/benchmarks/, right?)

Fabio – my advice to you is to check out the daily emails sent to 
python-checkins. An example is 
https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If 
you still have questions, Stefan can answer (he is copied).

The graphs are really just a manager-level indicator of trends, which I find 
very useful (I have it running continuously on one of the monitors in my 
office), but core developers might want to see day-to-day the effect of their 
changes. (Particularly if they thought one was going to improve performance. 
It's nice to see if you get community confirmation.)

We run a subset of https://hg.python.org/benchmarks/ nightly and run the 
full set when we are evaluating our performance patches.

Some of the "benchmarks" really do have a high standard deviation, which makes 
them hardly useful for measuring incremental performance improvements, 
IMHO. I like to see it spelled out so I can tell whether I should be worried 
or not about a particular delta.
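The standard-deviation point can be illustrated with a toy harness (only a stdlib sketch, not the Intel setup): if the spread across runs is a few percent of the mean, a regression of similar size is invisible.

```python
import statistics
import timeit

# Time a toy workload several times and report the spread across runs.
runs = timeit.repeat("sum(range(10000))", repeat=5, number=1000)

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
print("mean %.4fs, stdev %.4fs (%.1f%% of mean)"
      % (mean, stdev, 100.0 * stdev / mean))
```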

Dave


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Tue, Dec 1, 2015 at 8:14 AM, Maciej Fijalkowski  wrote:

> On Tue, Dec 1, 2015 at 11:49 AM, Fabio Zadrozny  wrote:
> >
> > On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski 
> wrote:
> >>
> >> Hi
> >>
> >> Thanks for doing the work! I'm one of the pypy devs and I'm very
> >> interested in seeing this get somewhere. I must say I struggle to
> >> read the graph - is red good or is red bad, for example?
> >>
> >> I'm keen to help you with anything you need to run it repeatedly.
> >>
> >> PS. The intel stuff runs one benchmark in a very questionable manner,
> >> so let's maybe not rely on it too much.
> >
> >
> > Hi Maciej,
> >
> > Great, it'd be awesome having data on multiple Python VMs (my latest
> target
> > is really having a way to compare across multiple VMs/versions easily and
> > help each implementation keep a focus on performance). Ideally, a single,
> > dedicated machine could be used just to run the benchmarks from multiple
> VMs
> > (one less variable to take into account for comparisons later on, as I'm
> not
> > sure it'd be reliable to normalize benchmark data from different
> machines --
> > it seems Zach was the one to contact for that, but if there's such a
> > machine already being used to run PyPy, maybe it could be extended to run
> > other VMs too?).
> >
> > As for the graph, it should be easy to customize (and I'm open to
> > suggestions). As it stands, red is slower and blue is faster
> (so,
> > for instance in
> > https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time,
> the
> > fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
> > I've updated the comments to make it clearer (and changed the second
> graph
> > to compare the latest against the fastest version (2.7.11rc1 vs 2.7.3) for
> > the individual benchmarks).
> >
> > Best Regards,
> >
> > Fabio
>
> There is definitely a machine available. I suggest you ask
> python-infra list for access. It definitely can be used to run more
> than just pypy stuff. As for normalizing across multiple machines -
> don't even bother. Different architectures make A LOT of difference,
> especially cache sizes and whatnot, which seem to have a different
> impact on different loads.
>
> As for the graph - I like the split across the benchmarks; a better
> description ("higher is better") would be good.
>
> I have a lot of ideas about visualizations, pop in on IRC, I'm happy
> to discuss :-)
>
>

Ok, I mailed infrastructure(at)python.org to see how to make it work.

I did add a legend now, so it should be much easier to read already ;)

As for ideas on visualizations, I definitely want to hear suggestions on
how to improve things, although I'll focus first on getting the servers
that collect benchmark data running, and then move on to improving the
graphs right afterwards.

Cheers,

Fabio




> Cheers,
> fijal
>


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Stewart, David C





On 12/1/15, 7:26 AM, "Python-Dev on behalf of Stewart, David C" 
 wrote:

>
>Fabio – my advice to you is to check out the daily emails sent to 
>python-checkins. An example is 
>https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. 
>If you still have questions, Stefan can answer (he is copied).

Whoops - silly me - today is a national holiday in Romania where Stefan lives, 
so you might not get an answer until tomorrow. :-/



Re: [Python-Dev] Deleting with setting C API functions

2015-12-01 Thread Serhiy Storchaka

On 25.11.15 08:39, Nick Coghlan wrote:

On 25 November 2015 at 07:33, Guido van Rossum  wrote:

Ooooh, that's probably really old code. I guess for the slots the
reasoning is to save on slots. For the public functions, alas it will
be hard to know if anyone is depending on it, even if it's
undocumented. Perhaps add a deprecation warning to these if the value
is NULL for one release cycle?


I did a quick scan for "PyObject_SetAttr", and it turns out
PyObject_DelAttr is only a convenience macro for calling
PyObject_SetAttr with NULL as the value argument. bltinmodule.c and
ceval.c also both include direct calls to PyObject_SetAttr with
"(PyObject *)NULL" as the value argument.

Investigating some of the uses that passed a variable as the value
argument, one case is the weakref proxy implementation, which uses
PyObject_SetAttr on the underlying object in its implementation of the
setattr slot in the proxy.

So it looks to me like replicating the NULL-handling behaviour of the
slots in the public Set* APIs was intentional, and it's just the
documentation of that detail that was missed (since most folks
presumably use the Del* convenience APIs instead).


I'm not sure. This looks rather like an implementation detail to me. The 
cases you found are the only cases in the core/stdlib that call 
PyObject_SetAttr with a NULL third argument. The tests pass after 
replacing the Set* functions with Del* functions in these cases and 
making the Set* functions reject a NULL value. [1]


Wouldn't it be worth deprecating deleting with the Set* functions? Neither 
the other abstract Set* APIs nor the concrete Set* APIs support deleting. 
Deleting with a Set* API can be unintentional and hide a bug.


[1] http://bugs.python.org/issue25773
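For what it's worth, the delete-via-set behaviour under discussion can be observed from pure Python on CPython via ctypes (a CPython-only sketch; the `Box` class is just an illustration):

```python
import ctypes

# CPython-only: call the C-level PyObject_SetAttr directly.
set_attr = ctypes.pythonapi.PyObject_SetAttr
set_attr.argtypes = [ctypes.py_object, ctypes.py_object, ctypes.c_void_p]
set_attr.restype = ctypes.c_int

class Box:
    pass

box = Box()
box.x = 1

# Passing NULL as the value takes the delete path, which is exactly
# what the PyObject_DelAttr convenience macro expands to.
rc = set_attr(box, "x", None)  # None -> NULL for a c_void_p argument

print(rc, hasattr(box, "x"))  # 0 False
```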



[Python-Dev] "python.exe is not a valid Win32 app"

2015-12-01 Thread Alexei Belenki via Python-Dev
Installed python 3.5 (from https://www.python.org/downloads/) on Windows 
XPsp3/32
On starting python.exe, I got the text above in a Windows message box.
Any suggestions? Thanks. AB


Re: [Python-Dev] "python.exe is not a valid Win32 app"

2015-12-01 Thread Ryan Gonzalez
Did you get the x86-64 version or x86? If you had gotten the former, it would 
lead to that error.
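One quick way to check which flavour is installed, from any Python that does start (a small sketch; run it with the interpreter in question):

```python
import struct
import sys

# Pointer size distinguishes a 32-bit (x86) build from a 64-bit (x86-64)
# build; a 64-bit python.exe cannot start at all on 32-bit Windows.
bits = struct.calcsize("P") * 8
print("%d-bit Python %s" % (bits, sys.version.split()[0]))
```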

On December 1, 2015 8:30:25 AM CST, Alexei Belenki via Python-Dev 
 wrote:
>Installed python 3.5 (from https://www.python.org/downloads/) on
>Windows XPsp3/32
>On starting python.exe, I got the text above in a Windows message box.
>Any suggestions? Thanks. AB

-- 
Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.


Re: [Python-Dev] "python.exe is not a valid Win32 app"

2015-12-01 Thread Mark Lawrence

On 01/12/2015 14:30, Alexei Belenki via Python-Dev wrote:

Installed python 3.5 (from https://www.python.org/downloads/) on Windows
XPsp3/32

On starting python.exe, I got the text above in a Windows message box.

Any suggestions?
Thanks.
AB




This isn't really the place to ask questions such as this.  However 
Python 3.5 is *NOT* supported on XP.  Work has been done for 3.5.1 to 
improve the user experience in this scenario.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence



Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Maciej Fijalkowski
Hi David.

Any reason you run a tiny tiny subset of benchmarks?

On Tue, Dec 1, 2015 at 5:26 PM, Stewart, David C wrote:
>
>
> From: Fabio Zadrozny <fabi...@gmail.com>
> Date: Tuesday, December 1, 2015 at 1:36 AM
> To: David Stewart <david.c.stew...@intel.com>
> Cc: "R. David Murray" <rdmur...@bitdance.com>,
> "python-dev@python.org" <python-dev@python.org>
> Subject: Re: [Python-Dev] Avoiding CPython performance regressions
>
>
> On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C
> <david.c.stew...@intel.com> wrote:
>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray"
> <rdmur...@bitdance.com> wrote:
>
>>
>>There's also an Intel project posted about here recently that checks
>>individual benchmarks for performance regressions and posts the results
>>to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results are 
> indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to 
> Romania National Day holiday!)
>
> There is also a graphic dashboard at http://languagesperformance.intel.com/
>
> Hi Dave,
>
> Interesting, but I'm curious: which benchmark set are you running? From the 
> graphs it seems it has a really high standard deviation, so, I'm curious to 
> know if that's really due to changes in the CPython codebase / issues in the 
> benchmark set or in how the benchmarks are run... (it doesn't seem to be the 
> benchmarks from https://hg.python.org/benchmarks/ right?).
>
> Fabio – my advice to you is to check out the daily emails sent to 
> python-checkins. An example is 
> https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. 
> If you still have questions, Stefan can answer (he is copied).
>
> The graphs are really just a manager-level indicator of trends, which I find 
> very useful (I have it running continuously on one of the monitors in my 
> office) but core developers might want to see day-to-day the effect of their 
> changes. (Particularly if they thought one was going to improve performance. 
> It's nice to see if you get community confirmation).
>
> We do run nightly a subset of https://hg.python.org/benchmarks/ and run the 
> full set when we are evaluating our performance patches.
>
> Some of the "benchmarks" really do have a high standard deviation, which 
> makes them hardly useful for measuring incremental performance 
> improvements, IMHO. I like to see it spelled out so I can tell whether I 
> should be worried or not about a particular delta.
>
> Dave


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Stewart, David C
On 12/1/15, 10:56 AM, "Maciej Fijalkowski"  wrote:



>Hi David.
>
>Any reason you run a tiny tiny subset of benchmarks?

We could always run more. There are so many in the full set in 
https://hg.python.org/benchmarks/ with such divergent results that it's 
hard to see the forest for the trees. I'm more interested in gradually 
adding to the set rather than a huge blast of all of them in a daily 
email. Would you disagree?

Part of the reason that I monitor ssbench so closely on Python 2 is that Swift 
is a major element in cloud computing (and OpenStack in particular) and has 
~70% of its cycles in Python.

We are really interested in workloads which are representative of the way 
Python is used by a lot of people, which produce repeatable results, and 
which are open source. Do you have any suggestions?

Dave


Re: [Python-Dev] "python.exe is not a valid Win32 app"

2015-12-01 Thread Laura Creighton
In a message of Tue, 01 Dec 2015 10:13:10 -0600, Ryan Gonzalez writes:
>Did you get the x86-64 version or x86? If you had gotten the former, it would 
>lead to that error.

No, his problem is his Windows XP.

Python 3.5 is not supported on Windows XP.  Upgrade your OS or
stick with 3.4.

Laura Creighton


>
>On December 1, 2015 8:30:25 AM CST, Alexei Belenki via Python-Dev 
> wrote:
>>Installed python 3.5 (from https://www.python.org/downloads/) on
>>Windows XPsp3/32
>>On starting python.exe, I got the text above in a Windows message box.
>>Any suggestions? Thanks. AB


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Maciej Fijalkowski
On Tue, Dec 1, 2015 at 9:04 PM, Stewart, David C wrote:
> On 12/1/15, 10:56 AM, "Maciej Fijalkowski"  wrote:
>
>
>
>>Hi David.
>>
>>Any reason you run a tiny tiny subset of benchmarks?
>
> We could always run more. There are so many in the full set in 
> https://hg.python.org/benchmarks/ with such divergent results that it seems 
> hard to see the forest because there are so many trees. I'm more interested 
> in gradually adding to the set rather than the huge blast of all of them in 
> daily email. Would you disagree?
>
> Part of the reason that I monitor ssbench so closely on Python 2 is that 
> Swift is a major element in cloud computing (and OpenStack in particular) and 
> has ~70% of its cycles in Python.

Last time I checked, Swift was quite a bit faster under pypy :-)


>
> We are really interested in workloads which are representative of the way 
> Python is used by a lot of people and which produce repeatable results. (and 
> which are open source). Do you have a suggestions?

You know our benchmark suite (https://bitbucket.org/pypy/benchmarks);
we're gradually incorporating what people report. That means that
(typically) it'll be open source library benchmarks, if they get to
the point of writing some. I have, for example, a Django ORM benchmark
coming; I can show you if you want. I don't think there is a
"representative benchmark" or maybe even a "representative set", also
because open source code tends to be higher quality and less
spaghetti-like than the closed source code I've seen, but we're
adding and adding.

Cheers,
fijal


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Stewart, David C





On 12/1/15, 11:38 AM, "Maciej Fijalkowski"  wrote:

>On Tue, Dec 1, 2015 at 9:04 PM, Stewart, David C
> wrote:
>>
>> Part of the reason that I monitor ssbench so closely on Python 2 is that 
>> Swift is a major element in cloud computing (and OpenStack in particular) 
>> and has ~70% of its cycles in Python.
>
>Last time I checked, Swift was quite a bit faster under pypy :-)

There is some porting required, but it's very promising. :-)


Re: [Python-Dev] Deleting with setting C API functions

2015-12-01 Thread Nick Coghlan
On 2 December 2015 at 01:50, Serhiy Storchaka  wrote:
> On 25.11.15 08:39, Nick Coghlan wrote:
>> So it looks to me like replicating the NULL-handling behaviour of the
>> slots in the public Set* APIs was intentional, and it's just the
>> documentation of that detail that was missed (since most folks
>> presumably use the Del* convenience APIs instead).
>
> I'm not sure. This looks rather like an implementation detail to me. The
> cases you found are the only cases in the core/stdlib that call
> PyObject_SetAttr with a NULL third argument. The tests pass after
> replacing the Set* functions with Del* functions in these cases and
> making the Set* functions reject a NULL value. [1]

Which means at the very least, folks relying on the current behaviour
are relying on untested functionality, and would be better off
switching to the tested APIs regardless of what happens on the
deprecation front.

> Wouldn't it be worth deprecating deleting with the Set* functions? Neither
> the other abstract Set* APIs nor the concrete Set* APIs support deleting.
> Deleting with a Set* API can be unintentional and hide a bug.

Since the behaviour is currently neither documented nor tested, and it
doesn't raise any new Python 2/3 migration issues, I don't personally
mind deprecating the "delete via set" APIs for 3.6 - as you say,
having "set this field/attribute to this value" occasionally mean
"delete this field/attribute" if a pointer is NULL offers a surprising
second way to do something that already has a more explicit spelling.

Regards,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia