[Python-Dev] Deprecated __cmp__ and total ordering

2009-03-10 Thread Mart Sõmermaa
__cmp__ used to provide a convenient way to make all ordering operators work
by defining a single method. For better or worse, it's gone in 3.0.

To provide total ordering without __cmp__ one has to implement all of
__lt__, __gt__, __le__, __ge__, __eq__ and __ne__. However, in all but a few
cases it suffices only to provide a "real" implementation for e.g. __lt__
and define all the other methods in terms of it as follows:

class TotalOrderMixin(object):
    def __lt__(self, other):
        raise NotImplementedError  # override this

    def __gt__(self, other):
        return other < self

    def __le__(self, other):
        return not (other < self)

    def __ge__(self, other):
        return not (self < other)

__eq__ and __ne__ are somewhat special, although it is possible to define
them in terms of __lt__ as well:

def __eq__(self, other):
    return not (self < other or other < self)

def __ne__(self, other):
    return self < other or other < self

though this may be inefficient.
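For reference, this is essentially what the standard library later provided: functools.total_ordering (added in Python 2.7/3.2) derives the missing comparison methods from __eq__ plus one ordering method. A minimal usage sketch (the example class is hypothetical):

```python
from functools import total_ordering

@total_ordering
class Account:
    """Hypothetical example class, ordered by balance."""
    def __init__(self, balance):
        self.balance = balance

    def __eq__(self, other):
        return self.balance == other.balance

    def __lt__(self, other):
        return self.balance < other.balance

# total_ordering fills in __le__, __gt__ and __ge__ from __lt__ and __eq__
assert Account(1) < Account(2)
assert Account(2) <= Account(2)
assert Account(3) > Account(1)
```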

So, to avoid the situation where all the projects that match
http://www.google.com/codesearch?q=__cmp__+lang%3Apython have to implement
their own TotalOrderMixin, perhaps one could be provided in the stdlib? Or
even better, shouldn't a class grow automagic __gt__, __le__, __ge__ if
__lt__ is provided, and, in a similar vein, __ne__ if __eq__ is provided?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecated __cmp__ and total ordering

2009-03-10 Thread Mart Sõmermaa
On Tue, Mar 10, 2009 at 3:57 PM, Michael Foord wrote:

> Is there something you don't like about this one:
> http://code.activestate.com/recipes/576529/
>

Yes -- it is not in the standard library. As I said, eventually all the
15,000 matches on Google Code need to update their code and copy that
snippet to their util/, write tests for it etc.


Re: [Python-Dev] version compare function into main lib

2009-03-27 Thread Mart Sõmermaa
See http://wiki.python.org/moin/ApplicationInfrastructure , "Version
handling" below for a possible strict version API.

The page is relevant for the general packaging discussion as well, although
it's not fully fleshed out yet.

MS

On Fri, Mar 27, 2009 at 5:11 PM, "Martin v. Löwis" wrote:

> Correct me if I wrong, but shouldn't Python include function for
>> version comparisons?
>>
>
> On the packaging summit yesterday, people agreed that yes, we should
> have something like that in the standard library, and it should be more
> powerful than what distutils currently offers.
>
> There was no conclusion of how specifically that functionality should
> be offered; several people agreed that Python should mandate a standard
> format, which it is then able to compare. So you might not be able to
> spell it "10.3.40-beta", but perhaps "10.3.40b1" or "10.3.40~beta".
>
> Regards,
> Martin


Re: [Python-Dev] version compare function into main lib

2009-03-27 Thread Mart Sõmermaa
> Instead of trying to parse some version string, distutils should
> require defining the version as tuple with well-defined entries -
> much like what we have in sys.version_info for Python.
>
> The developer can then still use whatever string format s/he wants.
>
> The version compare function would then work on this version tuple
> and probably be called cmp() (at least in Python 2.x ;-).
>


Except that there need to be functions for parsing the tuple from a string,
and preferably a canonical string representation to ease that parsing. Hence
the Version class in "Version Handling" referred to above.
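The round trip being discussed can be sketched in a few lines (function names are hypothetical, not the proposed API):

```python
def parse_version(s):
    """Hypothetical: parse a dotted numeric version string into a tuple."""
    return tuple(int(part) for part in s.split('.'))

def format_version(t):
    """Hypothetical canonical string representation of a version tuple."""
    return '.'.join(str(part) for part in t)

# Tuples compare lexicographically component by component, so '1.2.10'
# sorts after '1.2.9' -- which naive string comparison gets wrong.
assert parse_version('1.2.10') > parse_version('1.2.9')
assert '1.2.10' < '1.2.9'  # string comparison gives the wrong order here
assert format_version(parse_version('1.2.10')) == '1.2.10'
```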


Re: [Python-Dev] version compare function into main lib

2009-03-28 Thread Mart Sõmermaa
On Sat, Mar 28, 2009 at 12:37 AM, Ben Finney <
bignose+hates-s...@benfinney.id.au >wrote:

> "Martin v. Löwis"  writes:
>
> > I don't mind the setuptools implementation being used as a basis
> > (assuming it gets contributed), but *independently* I think a
> > specfication is needed what version strings it actually understands.
> > Such specification must precede the actual implementation (in
> > distutils).
>
> Yes, please. The comparison of version strings needs to be easily done
> by non-Python programs (e.g. tools for packaging Python
> distributions), so a specification that can be implemented in other
> languages or environments is a must.


There's a specification in
http://wiki.python.org/moin/ApplicationInfrastructure , see "Version API"
below (at least, it's a start).


Re: [Python-Dev] [Python-ideas] Proposed addition to urllib.parse in 3.1 (and urlparse in 2.7)

2009-04-12 Thread Mart Sõmermaa
The general consensus in python-ideas is that the following is needed, so I
bring it to python-dev for final discussion before I file a feature request
in bugs.python.org.

Proposal: add add_query_params(), for appending query parameters to a URL, to
urllib.parse and urlparse.

Implementation:
http://github.com/mrts/qparams/blob/83d1ec287ec10934b5e637455819cf796b1b421c/qparams.py
(feel free to fork and comment).

Behaviour (longish, guided by "simple things are simple, complex things
possible"):

In the simplest form, parameters can be passed via keyword arguments:

>>> add_query_params('foo', bar='baz')
'foo?bar=baz'

>>> add_query_params('http://example.com/a/b/c?a=b', b='d')
'http://example.com/a/b/c?a=b&b=d'

Note that '/', if given in arguments, is encoded:

>>> add_query_params('http://example.com/a/b/c?a=b', b='d', foo='/bar')
'http://example.com/a/b/c?a=b&b=d&foo=%2Fbar'

Duplicates are discarded:

>>> add_query_params('http://example.com/a/b/c?a=b', a='b')
'http://example.com/a/b/c?a=b'

>>> add_query_params('http://example.com/a/b/c?a=b&c=q', a='b', b='d',
...  c='q')
'http://example.com/a/b/c?a=b&c=q&b=d'

But different values for the same key are supported:

>>> add_query_params('http://example.com/a/b/c?a=b', a='c', b='d')
'http://example.com/a/b/c?a=b&a=c&b=d'

Pass different values for a single key in a list (again, duplicates are
removed):

>>> add_query_params('http://example.com/a/b/c?a=b', a=('q', 'b', 'c'),
... b='d')
'http://example.com/a/b/c?a=b&a=q&a=c&b=d'

Keys with no value are respected; pass ``None`` to create one:

>>> add_query_params('http://example.com/a/b/c?a', b=None)
'http://example.com/a/b/c?a&b'

But if a value is given, the empty key is considered a duplicate (i.e. the
case of a&a=b is considered nonsensical):

>>> add_query_params('http://example.com/a/b/c?a', a='b', c=None)
'http://example.com/a/b/c?a=b&c'

If you need to pass in key names that are not allowed in keyword arguments,
pass them via a dictionary in second argument:

>>> add_query_params('foo', {"+'|äüö": 'bar'})
'foo?%2B%27%7C%C3%A4%C3%BC%C3%B6=bar'

Order of original parameters is retained, although similar keys are grouped
together. Order of keyword arguments is not (and cannot be) retained:

>>> add_query_params('foo?a=b&b=c&a=b&a=d', a='b')
'foo?a=b&a=d&b=c'

>>> add_query_params('http://example.com/a/b/c?a=b&q=c&e=d',
... x='y', e=1, o=2)
'http://example.com/a/b/c?a=b&q=c&e=d&e=1&x=y&o=2'

If you need to retain the order of the added parameters, use an
:class:`OrderedDict` as the second argument (*params_dict*):

>>> from collections import OrderedDict
>>> od = OrderedDict()
>>> od['xavier'] = 1
>>> od['abacus'] = 2
>>> od['janus'] = 3
>>> add_query_params('http://example.com/a/b/c?a=b', od)
'http://example.com/a/b/c?a=b&xavier=1&abacus=2&janus=3'

If both *params_dict* and keyword arguments are provided, values from the
former are used before the latter:

>>> add_query_params('http://example.com/a/b/c?a=b', od, xavier=1.1,
... zorg='a', alpha='b', watt='c', borg='d')
'http://example.com/a/b/c?a=b&xavier=1&xavier=1.1&abacus=2&janus=3&zorg=a&borg=d&watt=c&alpha=b'

Do nothing with a single argument:

>>> add_query_params('a')
'a'

>>> add_query_params('arbitrary strange stuff?öäüõ*()+-=42')
'arbitrary strange stuff?\xc3\xb6\xc3\xa4\xc3\xbc\xc3\xb5*()+-=42'
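A minimal sketch of how such a function could be built on the existing urllib.parse machinery; note this ignores the duplicate-pruning and empty-key rules described above and only shows the basic append path:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def add_query_params(url, params_dict=None, **kwargs):
    """Hypothetical sketch: append query parameters to a URL."""
    parts = urlsplit(url)
    # keep_blank_values preserves keys that have no value in the original URL
    pairs = parse_qsl(parts.query, keep_blank_values=True)
    for source in (params_dict or {}, kwargs):
        pairs.extend(source.items())
    return urlunsplit(parts._replace(query=urlencode(pairs)))

assert add_query_params('foo', bar='baz') == 'foo?bar=baz'
assert add_query_params('http://example.com/a?a=b', b='d') == 'http://example.com/a?a=b&b=d'
```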


Re: [Python-Dev] [Python-ideas] Proposed addition to urllib.parse in 3.1 (and urlparse in 2.7)

2009-04-12 Thread Mart Sõmermaa
On Sun, Apr 12, 2009 at 3:23 PM, Jacob Holm  wrote:

> Hi Mart
>
>>>> add_query_params('http://example.com/a/b/c?a=b', b='d', foo='/bar')
>> 'http://example.com/a/b/c?a=b&b=d&foo=%2Fbar'
>>
>> Duplicates are discarded:
>>
>
> Why discard duplicates?  They are valid and have a well-defined meaning.



The bad thing about reasoning about query strings is that there is no
comprehensive documentation about their meaning. Both RFC 1738 and RFC 3986
are rather vague in that matter. But I agree that duplicates actually have a
meaning (an ordered list of identical values), so I'll remove the bits that
prune them unless anyone opposes (which I doubt).


>> But if a value is given, the empty key is considered a duplicate (i.e. the
>> case of a&a=b is considered nonsensical):
>>
>
> Again, it is a valid url and this will change its meaning.  Why?


I'm uncertain whether a&a=b has a meaning, but don't see any harm in
supporting it, so I'll add the feature.


>>>>> add_query_params('http://example.com/a/b/c?a', a='b', c=None)
>> 'http://example.com/a/b/c?a=b&c'
>>
>> If you need to pass in key names that are not allowed in keyword
>> arguments,
>> pass them via a dictionary in second argument:
>>
>>>>> add_query_params('foo', {"+'|äüö": 'bar'})
>>'foo?%2B%27%7C%C3%A4%C3%BC%C3%B6=bar'
>>
>> Order of original parameters is retained, although similar keys are
>> grouped
>> together.
>>
>
> Why the grouping?  Is it a side effect of your desire to discard
> duplicates?   Changing the order like that changes the meaning of the url.
>  A concrete case where the order of field names matters is the ":records"
> converter in http://pypi.python.org/pypi/zope.httpform/1.0.1 (a small
> independent package extracted from the form handling code in zope).


It's also related to duplicate handling, but it mostly stems from the data
structure used in the initial implementation (an OrderedDict). Re-grouping
is removed now, and not having to deal with duplicates simplified the code
considerably (it now uses a simple list of key-value tuples).

> If you change it to keep duplicates and not unnecessarily mangle the field
> order I am +1, else I am -0.


Thanks for your input! Changes pushed to github (see the updated behaviour
there as well):

http://github.com/mrts/qparams/blob/4f32670b55082f8d0ef01c33524145c3264c161a/qparams.py

MS


Re: [Python-Dev] [Python-ideas] Proposed addition to urllib.parse in 3.1 (and urlparse in 2.7)

2009-04-13 Thread Mart Sõmermaa
On Mon, Apr 13, 2009 at 12:56 AM, Antoine Pitrou wrote:

> Mart Sõmermaa  gmail.com> writes:
> >
> > Proposal: add add_query_params() for appending query parameters to an URL
> to
> urllib.parse and urlparse.
>
> Is there anything to /remove/ a query parameter?


I'd say this is outside the scope of add_query_params().

As for the duplicate handling, I've implemented a threefold strategy that
should address all use cases raised before:

def add_query_params(*args, **kwargs):
    """
    add_query_params(url, [allow_dups, [args_dict, [separator]]], **kwargs)

    Appends query parameters to a URL and returns the result.

    :param url: the URL to update, a string.
    :param allow_dups: if
        * True: plainly append new parameters, allowing all duplicates
          (default),
        * False: disallow duplicates in values and regroup keys so that
          different values for the same key are adjacent,
        * None: disallow duplicates in keys -- each key can have a single
          value and later values override the value (like dict.update()).
    :param args_dict: optional dictionary of parameters, default is {}.
    :param separator: either ';' or '&', the separator between key-value
        pairs, default is '&'.
    :param kwargs: parameters as keyword arguments.

    :return: original URL with updated query parameters or the original URL
        unchanged if no parameters given.
    """

The commit is at

http://github.com/mrts/qparams/blob/b9bdbec46bf919d142ff63e6b2b822b5d57b6f89/qparams.py

and an extensive description of the behaviour is in the doctests.
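The three duplicate-handling modes can be sketched independently of the URL plumbing (helper name hypothetical; the key-regrouping part of the False mode is omitted for brevity):

```python
def merge_params(pairs, new_pairs, allow_dups=True):
    """Hypothetical helper illustrating the three allow_dups modes."""
    if allow_dups is True:
        # keep every pair, in order
        return pairs + new_pairs
    if allow_dups is None:
        # dict.update() semantics: each key keeps only its last value
        merged = dict(pairs)
        merged.update(new_pairs)
        return list(merged.items())
    # allow_dups is False: drop duplicate (key, value) pairs
    seen, result = set(), []
    for pair in pairs + new_pairs:
        if pair not in seen:
            seen.add(pair)
            result.append(pair)
    return result

assert merge_params([('a', 'b')], [('a', 'b')], True) == [('a', 'b'), ('a', 'b')]
assert merge_params([('a', 'b')], [('a', 'b')], False) == [('a', 'b')]
assert merge_params([('a', 'b')], [('a', 'c')], None) == [('a', 'c')]
```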


Re: [Python-Dev] [Python-ideas] Proposed addition to urllib.parse in 3.1 (and urlparse in 2.7)

2009-04-13 Thread Mart Sõmermaa
On Mon, Apr 13, 2009 at 8:23 PM, Steven Bethard
 wrote:
>
> On Mon, Apr 13, 2009 at 2:29 AM, Mart Sõmermaa  wrote:
> >
> >
> > On Mon, Apr 13, 2009 at 12:56 AM, Antoine Pitrou 
> > wrote:
> >>
> >> Mart Sõmermaa  gmail.com> writes:
> >> >
> >> > Proposal: add add_query_params() for appending query parameters to an
> >> > URL to
> >> urllib.parse and urlparse.
> >>
> >> Is there anything to /remove/ a query parameter?
> >
> > I'd say this is outside the scope of add_query_params().
> >
> > As for the duplicate handling, I've implemented a threefold strategy that
> > should address all use cases raised before:
> >
> >  def add_query_params(*args, **kwargs):
> > """
> > add_query_parms(url, [allow_dups, [args_dict, [separator]]], **kwargs)
> >
> > Appends query parameters to an URL and returns the result.
> >
> > :param url: the URL to update, a string.
> > :param allow_dups: if
> > * True: plainly append new parameters, allowing all duplicates
> >   (default),
> > * False: disallow duplicates in values and regroup keys so that
> >   different values for the same key are adjacent,
> > * None: disallow duplicates in keys -- each key can have a single
> >   value and later values override the value (like dict.update()).
>
> Unnamed flag parameters are unfriendly to the reader. If I see something like:
>
>  add_query_params(url, True, dict(a=b, c=d))
>
> I can pretty much guess what the first and third arguments are, but I
> have no clue for the second. Even if I have read the documentation
> before, I may not remember whether the middle argument is "allow_dups"
> or "keep_dups".

Keyword arguments are already used for specifying the arguments to the
query, so naming can't be used. Someone may need an 'allow_dups' key
in their query and forget to pass it in params_dict.

A default behaviour should be found that works according to most
users' expectations, so that they generally don't need to use the
positional arguments.

Antoine Pitrou wrote:
> You could e.g. rename the function to update_query_params() and decide that
> every parameter whose specified value is None must atcually be removed from
> the URL.

I agree that removing parameters is useful. Currently, None is used
for signifying a key with no value. Instead, booleans could be used:
if a key's value is True (but not any other value that merely evaluates
to True), it is a key with no value; if it is False (under the same
restriction), the key should be removed from the query if present. None
would not be treated specially under that scheme. As an example:

>>> update_query_params('http://example.com/?q=foo', q=False, a=True, b='c', 
>>> d=None)
'http://example.com/?a&b=c&d=None'

However,
1) I'm not sure about the implications of 'foo is True', I have never
used it and PEP 8 explicitly warns against it -- does it work
consistently across different Python implementations? (Assuming on the
grounds that True should be a singleton no different from None that it
should work.)
2) the API gets overly complicated -- as per the complaint above, it's
usability-challenged already.


Re: [Python-Dev] [Python-ideas] Proposed addition to urllib.parse in 3.1 (and urlparse in 2.7)

2009-04-18 Thread Mart Sõmermaa
On Sat, Apr 18, 2009 at 3:41 PM, Nick Coghlan  wrote:
> Yep - Guido has pointed out in a few different API design discussions
> that a boolean flag that is almost always set to a literal True or False
> is a good sign that there are two functions involved rather than just
> one. There are exceptions to that guideline (e.g. the reverse argument
> for sorted and list.sort), but they aren't common, and even when they do
> crop up, making them keyword-only arguments is strongly recommended.

As you yourself previously noted -- "it is often
better to use *args for the two positional arguments - it avoids
accidental name conflicts between the positional arguments and arbitrary
keyword arguments" -- kwargs may cause name conflicts.

But I also agree, that the current proliferation of positional args is ugly.

add_query_params_no_dups() would be suboptimal though, as there are
currently three different ways to handle the duplicates:
* allow duplicates everywhere (True),
* remove duplicate *values* for the same key (False),
* behave like dict.update -- remove duplicate *keys*, unless
explicitly passed a list (None).

(See the documentation at
http://github.com/mrts/qparams/blob/bf1b29ad46f9d848d5609de6de0bfac1200da310/qparams.py
).

Additionally, as proposed by Antoine Pitrou, removing keys could be implemented.

It feels awkward to start a PEP for such a marginal feature, but
clearly a couple of enlightened design decisions are required.


Re: [Python-Dev] [Python-ideas] Proposed addition to urllib.parse in 3.1 (and urlparse in 2.7)

2009-04-19 Thread Mart Sõmermaa
On Sun, Apr 19, 2009 at 2:06 AM, Nick Coghlan  wrote:
> That said, I'm starting to wonder if an even better option may be to
> just drop the kwargs support from the function and require people to
> always supply a parameters dictionary. That would simplify the signature
> to the quite straightforward:
>
>  def add_query_params(url, params, allow_dups=True, sep='&')

That's the most straightforward and I like this more than the one below.

> I agree that isn't a good option, but mapping True/False/None to those
> specific behaviours also seems rather arbitrary (specifically, it is
> difficult to remember which of "allow_dups=False" and "allow_dups=None"
> means to ignore any duplicate keys and which means to ignore only
> duplicate items).

I'd say it's less of a problem when using named arguments, i.e. you read it as:

allow_dups=True : yes
allow_dups=False : effeminately no :),
allow_dups=None : strictly no

which more or less corresponds to the behaviour.

> It also doesn't provide a clear mechanism for
> extension (e.g. what if someone wanted duplicate params to trigger an
> exception?)
>
> Perhaps the extra argument should just be a key/value pair filtering
> function, and we provide functions for the three existing behaviours
> (i.e. allow_duplicates(), ignore_duplicate_keys(),
> ignore_duplicate_items()) in the urllib.parse module.

This would be the most flexible and conceptually right (ye olde
strategy pattern), but would clutter the API.

> Note that your implementation and docstring currently conflict with each
> other - the docstring says "pass them via a dictionary in second
> argument:" but the dictionary is currently the third argument (the
> docstring also later refers to passing OrderedDictionary as the second
> argument).

It's a mistake that exemplifies once again that positional args are awkward :).

---

So, gentlemen, either

def add_query_params(url, params, allow_dups=True, sep='&')

or

def allow_duplicates(...)

def remove_duplicate_values(...)

...

def add_query_params(url, params, strategy=allow_duplicates, sep='&')
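The second option can be sketched as follows (sep handling omitted for brevity; all names hypothetical, illustrating the strategy-function idea rather than any agreed API):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def allow_duplicates(pairs):
    return pairs

def remove_duplicate_keys(pairs):
    # later values win, like dict.update()
    return list(dict(pairs).items())

def add_query_params(url, params, strategy=allow_duplicates):
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query, keep_blank_values=True) + list(params.items())
    return urlunsplit(parts._replace(query=urlencode(strategy(pairs))))

assert add_query_params('http://e.com/?a=b', {'a': 'c'}) == 'http://e.com/?a=b&a=c'
assert add_query_params('http://e.com/?a=b', {'a': 'c'},
                        strategy=remove_duplicate_keys) == 'http://e.com/?a=c'
```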


Re: [Python-Dev] A wordcode-based Python

2009-11-04 Thread Mart Sõmermaa
Usage over time: http://tinyurl.com/yh7vmlh


slowunpickle:
Min: 0.411744 -> 0.356784: 15.40% faster
Avg: 0.444638 -> 0.393261: 13.06% faster
Significant (t=7.009269, a=0.95)
Stddev: 0.04147 -> 0.06044: 31.38% larger

Mem max: 7132.000 -> 7848.000: 9.12% larger
Usage over time: http://tinyurl.com/yfwvz3g


startup_nosite:
Min: 0.664456 -> 0.598770: 10.97% faster
Avg: 0.933034 -> 0.761228: 22.57% faster
Significant (t=20.660776, a=0.95)
Stddev: 0.09645 -> 0.06728: 43.37% smaller

Mem max: 1940.000 -> 1940.000: -0.00% smaller
Usage over time: http://tinyurl.com/yzzxcmd


threaded_count:
Min: 0.220059 -> 0.138708: 58.65% faster
Avg: 0.232347 -> 0.156120: 48.83% faster
Significant (t=23.804797, a=0.95)
Stddev: 0.01889 -> 0.02586: 26.96% larger

Mem max: 6460.000 -> 7664.000: 15.71% larger
Usage over time: http://tinyurl.com/yzm3awu


unpack_sequence:
Min: 0.000129 -> 0.000120: 7.57% faster
Avg: 0.000218 -> 0.000194: 12.14% faster
Significant (t=3.946194, a=0.95)
Stddev: 0.00139 -> 0.00128: 8.13% smaller

Mem max: 18948.000 -> 19056.000: 0.57% larger
Usage over time: http://tinyurl.com/yf8es3f


unpickle:
Min: 1.191468 -> 1.206198: 1.22% slower
Avg: 1.248471 -> 1.281957: 2.61% slower
Significant (t=-2.658526, a=0.95)
Stddev: 0.05513 -> 0.11325: 51.32% larger

Mem max: 7776.000 -> 8676.000: 10.37% larger
Usage over time: http://tinyurl.com/yz96gw2


unpickle_list:
Min: 0.922200 -> 0.861167: 7.09% faster
Avg: 0.955964 -> 0.976829: 2.14% slower
Not significant
Stddev: 0.04374 -> 0.21061: 79.23% larger

Mem max: 6820.000 -> 8324.000: 18.07% larger
Usage over time: http://tinyurl.com/yjbraxg

---

The diff between the two trees is at
http://dpaste.org/RpIv/

Best,
Mart Sõmermaa


Re: [Python-Dev] A wordcode-based Python

2009-11-04 Thread Mart Sõmermaa
On Wed, Nov 4, 2009 at 5:54 PM, Collin Winter  wrote:
> Do note that the --track_memory option to perf.py imposes some
> overhead that interferes with the performance figures.

Thanks for the notice, without -m/--track_memory the deviation in
results is indeed much smaller.

> I'd recommend
> running the benchmarks again without --track_memory.

Done:

$ python unladen-tests/perf.py -r --benchmarks=-2to3,all py261/python wpy/python

Report on Linux zeus 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16
14:05:01 UTC 2009 x86_64
Total CPU cores: 2

ai:
Min: 0.629343 -> 0.576259: 9.21% faster
Avg: 0.634689 -> 0.581551: 9.14% faster
Significant (t=39.404870, a=0.95)
Stddev: 0.01259 -> 0.00484: 160.04% smaller


call_simple:
Min: 1.796710 -> 1.700046: 5.69% faster
Avg: 1.801533 -> 1.716367: 4.96% faster
Significant (t=137.452069, a=0.95)
Stddev: 0.00522 -> 0.00333: 56.64% smaller


django:
Min: 1.280840 -> 1.275350: 0.43% faster
Avg: 1.287179 -> 1.287233: 0.00% slower
Not significant
Stddev: 0.01055 -> 0.00581: 81.60% smaller


iterative_count:
Min: 0.211744 -> 0.123271: 71.77% faster
Avg: 0.213148 -> 0.128596: 65.75% faster
Significant (t=88.510311, a=0.95)
Stddev: 0.00233 -> 0.00926: 74.80% larger


normal_startup:
Min: 0.520829 -> 0.516412: 0.86% faster
Avg: 0.559170 -> 0.554678: 0.81% faster
Not significant
Stddev: 0.02031 -> 0.02093: 2.98% larger


pickle:
Min: 1.988127 -> 1.926643: 3.19% faster
Avg: 2.000676 -> 1.936185: 3.33% faster
Significant (t=36.712505, a=0.95)
Stddev: 0.01650 -> 0.00603: 173.67% smaller


pickle_dict:
Min: 1.681116 -> 1.619192: 3.82% faster
Avg: 1.701952 -> 1.629548: 4.44% faster
Significant (t=34.513963, a=0.95)
Stddev: 0.01721 -> 0.01200: 43.46% smaller


pickle_list:
Min: 0.918128 -> 0.884967: 3.75% faster
Avg: 0.925534 -> 0.891200: 3.85% faster
Significant (t=60.451407, a=0.95)
Stddev: 0.00496 -> 0.00276: 80.00% smaller


pybench:
Min: 58692 -> 51128: 14.79% faster
Avg: 59914 -> 52316: 14.52% faster

regex_compile:
Min: 0.894190 -> 0.816447: 9.52% faster
Avg: 0.900353 -> 0.826003: 9.00% faster
Significant (t=24.974080, a=0.95)
Stddev: 0.00448 -> 0.02943: 84.78% larger


regex_effbot:
Min: 0.124442 -> 0.123750: 0.56% faster
Avg: 0.134908 -> 0.126137: 6.95% faster
Significant (t=5.496357, a=0.95)
Stddev: 0.01581 -> 0.00218: 625.68% smaller


regex_v8:
Min: 0.132730 -> 0.143494: 7.50% slower
Avg: 0.134287 -> 0.147387: 8.89% slower
Significant (t=-40.654627, a=0.95)
Stddev: 0.00108 -> 0.00304: 64.34% larger


rietveld:
Min: 0.754050 -> 0.737335: 2.27% faster
Avg: 0.770227 -> 0.754642: 2.07% faster
Significant (t=7.547765, a=0.95)
Stddev: 0.01434 -> 0.01486: 3.49% larger


slowpickle:
Min: 0.858494 -> 0.795162: 7.96% faster
Avg: 0.862350 -> 0.799479: 7.86% faster
Significant (t=133.690989, a=0.95)
Stddev: 0.00394 -> 0.00257: 52.92% smaller


slowspitfire:
Min: 0.955587 -> 0.909843: 5.03% faster
Avg: 0.965960 -> 0.925845: 4.33% faster
Significant (t=16.351067, a=0.95)
Stddev: 0.01237 -> 0.02119: 41.63% larger


slowunpickle:
Min: 0.409312 -> 0.346982: 17.96% faster
Avg: 0.412381 -> 0.349148: 18.11% faster
Significant (t=242.889869, a=0.95)
Stddev: 0.00198 -> 0.00169: 17.61% smaller


startup_nosite:
Min: 0.195620 -> 0.194328: 0.66% faster
Avg: 0.230811 -> 0.238523: 3.23% slower
Significant (t=-3.869944, a=0.95)
Stddev: 0.01932 -> 0.02052: 5.87% larger


threaded_count:
Min: 0.222133 -> 0.133764: 66.06% faster
Avg: 0.236670 -> 0.147750: 60.18% faster
Significant (t=57.472693, a=0.95)
Stddev: 0.01317 -> 0.00813: 61.98% smaller


unpack_sequence:
Min: 0.000129 -> 0.000119: 8.43% faster
Avg: 0.000132 -> 0.000123: 7.22% faster
Significant (t=24.614061, a=0.95)
Stddev: 0.3 -> 0.00011: 77.02% larger


unpickle:
Min: 1.191255 -> 1.149132: 3.67% faster
Avg: 1.218023 -> 1.162351: 4.79% faster
Significant (t=21.222711, a=0.95)
Stddev: 0.02242 -> 0.01362: 64.54% smaller


unpickle_list:
Min: 0.880991 -> 0.965611: 8.76% slower
Avg: 0.898949 -> 0.985231: 8.76% slower
Significant (t=-17.387537, a=0.95)
Stddev: 0.04838 -> 0.01103: 338.79% smaller


[Python-Dev] V8, TraceMonkey, SquirrelFish and Python

2009-01-27 Thread Mart Sõmermaa
As most of you know, there's a constant struggle on the JavaScript front to
get even faster performance out of interpreters.
V8, TraceMonkey and SquirrelFish have brought novel ideas to interpreter
design; wouldn't it make sense to reap the best bits and bring them to
Python?

Has anyone delved into the designs and considered their applicability to
Python?

Hoping-to-see-some-V8-and-Python-teams-collaboration-in-Mountain-View-ly
yours,
Mart Sõmermaa


Re: [Python-Dev] V8, TraceMonkey, SquirrelFish and Python

2009-01-27 Thread Mart Sõmermaa
On Tue, Jan 27, 2009 at 5:04 PM, Jesse Noller  wrote:

> Hi Mart,
>
> This is a better discussion for the python-ideas list. That being
> said, there was a thread discussing this last year, see:
>
> http://mail.python.org/pipermail/python-dev/2008-October/083176.html
>
> -jesse
>

Indeed, sorry. Incidentally, there is a similar discussion going on just
now.