Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
What if you broke up the read and built the final string object up
incrementally? I always assumed this is where the real gain was with
readinto.
On Nov 25, 2011 5:55 AM, "Eli Bendersky"  wrote:

> On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou  wrote:
>
>> On Thu, 24 Nov 2011 20:15:25 +0200
>> Eli Bendersky  wrote:
>> >
>> > Oops, readinto takes the same time as copying. This is a real shame,
>> > because readinto in conjunction with the buffer interface was supposed
>> to
>> > avoid the redundant copy.
>> >
>> > Is there a real performance regression here, is this a well-known
>> issue, or
>> > am I just missing something obvious?
>>
>> Can you try with latest 3.3 (from the default branch)?
>>
>
> Sure. Updated the default branch just now and built:
>
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
> 1000 loops, best of 3: 1.14 msec per loop
> $1 -m timeit -s'import fileread_bytearray'
> 'fileread_bytearray.readandcopy()'
> 100 loops, best of 3: 2.78 msec per loop
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
> 1000 loops, best of 3: 1.6 msec per loop
>
> Strange. Here, as in Python 2, the performance of readinto is
> close to justread and much faster than readandcopy, but justread itself is
> much slower than in 2.7 and 3.2!
>
> Eli
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
Eli,

Example coming shortly, the differences are quite significant.

On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky  wrote:
> On Fri, Nov 25, 2011 at 00:02, Matt Joiner  wrote:
>>
>> What if you broke up the read and built the final string object up
>> incrementally? I always assumed this is where the real gain was with
>> readinto.
>
> Matt, I'm not sure what you mean by this - can you suggest the code?
>
> Also, I'd be happy to know if anyone else reproduces this as well on other
> machines/OSes.
>
> Eli
>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
It's my impression that the readinto method does not fully support the
buffer interface I was expecting. I've never had cause to use it until
now. I've created a question on SO that describes my confusion:

http://stackoverflow.com/q/8263899/149482

Also, I saw some comments on "top-posting"; am I guilty of this? Gmail
defaults to putting my response above the previous email.

On Fri, Nov 25, 2011 at 11:49 AM, Matt Joiner  wrote:
> Eli,
>
> Example coming shortly, the differences are quite significant.
>
> On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky  wrote:
>> On Fri, Nov 25, 2011 at 00:02, Matt Joiner  wrote:
>>>
>>> What if you broke up the read and built the final string object up
>>> incrementally? I always assumed this is where the real gain was with
>>> readinto.
>>
>> Matt, I'm not sure what you mean by this - can you suggest the code?
>>
>> Also, I'd be happy to know if anyone else reproduces this as well on other
>> machines/OSes.
>>
>> Eli
>>
>>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
On Fri, Nov 25, 2011 at 12:07 PM, Antoine Pitrou  wrote:
> On Fri, 25 Nov 2011 12:02:17 +1100
> Matt Joiner  wrote:
>> It's my impression that the readinto method does not fully support the
>> buffer interface I was expecting. I've never had cause to use it until
>> now. I've created a question on SO that describes my confusion:
>>
>> http://stackoverflow.com/q/8263899/149482
>
> Just use a memoryview and slice it:
>
> b = bytearray(...)
> m = memoryview(b)
> n = f.readinto(m[some_offset:])

Cheers, this seems to be what I wanted. Unfortunately it doesn't
perform noticeably better if I do this.

Eli, the use pattern I was referring to is when you read in chunks
and append to a running buffer. Presumably if you know in advance
the size of the data, you can readinto directly into a region of a
bytearray, thereby avoiding having to allocate a temporary buffer for
the read and then create a new buffer containing the running buffer
plus the new data.

Strangely, I find that your readandcopy is faster at this than
readinto, but not by much. Here's the code; it's a bit explicit, but
then so was the original:

BUFSIZE = 0x1

def justread():
    # Just read a file's contents into a string/bytes object
    f = open(FILENAME, 'rb')
    s = b''
    while True:
        b = f.read(BUFSIZE)
        if not b:
            break
        s += b

def readandcopy():
    # Read a file's contents and copy them into a bytearray.
    # An extra copy is done here.
    f = open(FILENAME, 'rb')
    s = bytearray()
    while True:
        b = f.read(BUFSIZE)
        if not b:
            break
        s += b

def readinto():
    # Read a file's contents directly into a bytearray,
    # hopefully employing its buffer interface
    f = open(FILENAME, 'rb')
    s = bytearray(os.path.getsize(FILENAME))
    o = 0
    while True:
        b = f.readinto(memoryview(s)[o:o+BUFSIZE])
        if not b:
            break
        o += b

And the timings:

$ python3 -O -m timeit 'import fileread_bytearray'
'fileread_bytearray.justread()'
10 loops, best of 3: 298 msec per loop
$ python3 -O -m timeit 'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 9.22 msec per loop
$ python3 -O -m timeit 'import fileread_bytearray'
'fileread_bytearray.readinto()'
100 loops, best of 3: 9.31 msec per loop

The file was 10MB. I expected readinto to perform much better than
readandcopy. I expected readandcopy to perform slightly better than
justread. This clearly isn't the case.

>
>> Also, I saw some comments on "top-posting"; am I guilty of this?

If there's a magical option in Gmail someone knows about, please tell.

>
> Kind of :)
>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-25 Thread Matt Joiner
On Fri, Nov 25, 2011 at 5:41 PM, Eli Bendersky  wrote:
>> Eli, the use pattern I was referring to is when you read in chunks
>> and append to a running buffer. Presumably if you know in advance
>> the size of the data, you can readinto directly into a region of a
>> bytearray, thereby avoiding having to allocate a temporary buffer for
>> the read and then create a new buffer containing the running buffer
>> plus the new data.
>>
>> Strangely, I find that your readandcopy is faster at this, but not by
>> much, than readinto. Here's the code, it's a bit explicit, but then so
>> was the original:
>>
>> BUFSIZE = 0x1
>>
>> def justread():
>>    # Just read a file's contents into a string/bytes object
>>    f = open(FILENAME, 'rb')
>>    s = b''
>>    while True:
>>        b = f.read(BUFSIZE)
>>        if not b:
>>            break
>>        s += b
>>
>> def readandcopy():
>>    # Read a file's contents and copy them into a bytearray.
>>    # An extra copy is done here.
>>    f = open(FILENAME, 'rb')
>>    s = bytearray()
>>    while True:
>>        b = f.read(BUFSIZE)
>>        if not b:
>>            break
>>        s += b
>>
>> def readinto():
>>    # Read a file's contents directly into a bytearray,
>>    # hopefully employing its buffer interface
>>    f = open(FILENAME, 'rb')
>>    s = bytearray(os.path.getsize(FILENAME))
>>    o = 0
>>    while True:
>>        b = f.readinto(memoryview(s)[o:o+BUFSIZE])
>>        if not b:
>>            break
>>        o += b
>>
>> And the timings:
>>
>> $ python3 -O -m timeit 'import fileread_bytearray'
>> 'fileread_bytearray.justread()'
>> 10 loops, best of 3: 298 msec per loop
>> $ python3 -O -m timeit 'import fileread_bytearray'
>> 'fileread_bytearray.readandcopy()'
>> 100 loops, best of 3: 9.22 msec per loop
>> $ python3 -O -m timeit 'import fileread_bytearray'
>> 'fileread_bytearray.readinto()'
>> 100 loops, best of 3: 9.31 msec per loop
>>
>> The file was 10MB. I expected readinto to perform much better than
>> readandcopy. I expected readandcopy to perform slightly better than
>> justread. This clearly isn't the case.
>>
>
> What is 'python3' on your machine? If it's 3.2, then this is consistent with
> my results. Try it with 3.3 and for a larger file (say ~100MB and up), you
> may see the same speed as on 2.7

It's Python 3.2. I tried it for larger files and got some interesting results.

readinto() for 10MB files, reading 10MB all at once:

readinto/2.7 100 loops, best of 3: 8.6 msec per loop
readinto/3.2 10 loops, best of 3: 29.6 msec per loop
readinto/3.3 100 loops, best of 3: 19.5 msec per loop

With 100KB chunks for the 10MB file (annotated with #):

matt@stanley:~/Desktop$ for f in read bytearray_read readinto; do for
v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import
readinto' "readinto.$f()"; done; done
read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually
faster than the 10MB read
read/3.2 10 loops, best of 3: 253 msec per loop # wtf?
read/3.3 10 loops, best of 3: 747 msec per loop # wtf??
bytearray_read/2.7 100 loops, best of 3: 7.9 msec per loop
bytearray_read/3.2 100 loops, best of 3: 7.48 msec per loop
bytearray_read/3.3 100 loops, best of 3: 15.8 msec per loop # wtf?
readinto/2.7 100 loops, best of 3: 8.93 msec per loop
readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2
is performing well?
readinto/3.3 10 loops, best of 3: 20.4 msec per loop

Here's the code: http://pastebin.com/nUy3kWHQ

>
> Also, why do you think chunked reads are better here than slurping the whole
> file into the bytearray in one go? If you need it wholly in memory anyway,
> why not just issue a single read?

Sometimes the data isn't available all at once; I do a lot of socket
programming, so this case is of interest to me. As shown above, it's
also faster on Python 2.7. readinto() should be significantly faster
for this case too, though it isn't.
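
To make the socket use case concrete, here is a minimal sketch of the
same idea on the receiving side: filling a preallocated buffer via
socket.recv_into() and a memoryview offset instead of concatenating
chunks. This isn't taken from my test code; it just illustrates the
pattern, and assumes the peer will send exactly 'total' bytes.

import socket

def recv_exactly(sock, total):
    # Preallocate the whole buffer and receive directly into it,
    # avoiding a temporary bytes object per chunk plus a concatenation.
    buf = bytearray(total)
    view = memoryview(buf)
    received = 0
    while received < total:
        n = sock.recv_into(view[received:])
        if n == 0:
            raise EOFError("connection closed before %d bytes arrived" % total)
        received += n
    return buf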

>
> Eli
>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-25 Thread Matt Joiner
You can see in the tests that at the largest buffer size tested, 8192,
the naive read() actually outperforms readinto(). It's possible that
readinto() only falls behind when extrapolating to significantly larger
buffer sizes. It's also reasonable to assume that this wasn't tested
thoroughly.

On Fri, Nov 25, 2011 at 9:55 PM, Antoine Pitrou  wrote:
> On Fri, 25 Nov 2011 08:38:48 +0200
> Eli Bendersky  wrote:
>>
>> Just to be clear, there were two separate issues raised here. One is the
>> speed regression of readinto() from 2.7 to 3.2, and the other is the
>> relative slowness of justread() in 3.3
>>
>> Regarding the second, I'm not sure it's an issue because I tried a larger
>> file (100MB and then also 300MB) and the speed of 3.3 is now on par with
>> 3.2 and 2.7
>>
>> However, the original question remains - on the 100MB file also, although
>> in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the
>> same speed (even a few % slower). That said, I now observe with Python 3.3
>> the same speed as with 2.7, including the readinto() speedup - so it
>> appears that the readinto() regression has been solved in 3.3? Any clue
>> about where it happened (i.e. which bug/changeset)?
>
> It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/
>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-25 Thread Matt Joiner
On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou  wrote:
> On Fri, 25 Nov 2011 20:34:21 +1100
> Matt Joiner  wrote:
>>
>> It's Python 3.2. I tried it for larger files and got some interesting 
>> results.
>>
>> readinto() for 10MB files, reading 10MB all at once:
>>
>> readinto/2.7 100 loops, best of 3: 8.6 msec per loop
>> readinto/3.2 10 loops, best of 3: 29.6 msec per loop
>> readinto/3.3 100 loops, best of 3: 19.5 msec per loop
>>
>> With 100KB chunks for the 10MB file (annotated with #):
>>
>> matt@stanley:~/Desktop$ for f in read bytearray_read readinto; do for
>> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import
>> readinto' "readinto.$f()"; done; done
>> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually
>> faster than the 10MB read
>> read/3.2 10 loops, best of 3: 253 msec per loop # wtf?
>> read/3.3 10 loops, best of 3: 747 msec per loop # wtf??
>
> No "wtf" here, the read() loop is quadratic since you're building a
> new, larger, bytes object every iteration.  Python 2 has a fragile
> optimization for concatenation of strings, which can avoid the
> quadratic behaviour on some systems (depends on realloc() being fast).

Is there any way to bring back that optimization? A 30x to 100x
slowdown on probably one of the most common operations, string
concatenation, is very noticeable. In Python 3.3 this represents a
0.7s stall building a 10MB string; Python 2.7 did it in 0.007s.

>
>> readinto/2.7 100 loops, best of 3: 8.93 msec per loop
>> readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2
>> is performing well?
>> readinto/3.3 10 loops, best of 3: 20.4 msec per loop
>
> What if you allocate the bytearray outside of the timed function?

This change makes readinto() faster for 100K chunks than the other two
methods, and removes the differences between the versions.
readinto/2.7 100 loops, best of 3: 6.54 msec per loop
readinto/3.2 100 loops, best of 3: 7.64 msec per loop
readinto/3.3 100 loops, best of 3: 7.39 msec per loop

Updated test code: http://pastebin.com/8cEYG3BD
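
Roughly, the change is just to hoist the allocation out of the timed
function. A sketch of what the adjusted readinto() test looks like (the
actual updated code is in the pastebin above; FILENAME is a hypothetical
10MB test file):

import os

FILENAME = 'large.bin'      # hypothetical 10MB test file
BUFSIZE = 100 * 1024        # 100KB chunks, as in the timings above

# Allocated once at import time, so timeit no longer measures the allocation.
PREALLOCATED = bytearray(os.path.getsize(FILENAME))

def readinto_prealloc():
    # Read the file in chunks directly into the preallocated bytearray.
    f = open(FILENAME, 'rb')
    view = memoryview(PREALLOCATED)
    offset = 0
    while True:
        n = f.readinto(view[offset:offset + BUFSIZE])
        if not n:
            break
        offset += n
    f.close()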

>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>

So, as I think Eli suggested, the readinto() performance issue goes
away with large enough reads; I'd put the remaining differences down
to some unrelated language changes.

However, the performance drop on read() remains: Python 3.2 is 30x
slower than 2.7, and 3.3 is 100x slower than 2.7.
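
For what it's worth, the usual way to sidestep the quadratic bytes
concatenation (rather than relying on CPython 2's realloc() trick) is to
collect the chunks and join once, or to append to a bytearray; both are
linear. A sketch, not from the benchmark code:

def read_chunks_join(f, bufsize):
    # Collect chunks in a list and join once: O(n) total copying.
    chunks = []
    while True:
        b = f.read(bufsize)
        if not b:
            break
        chunks.append(b)
    return b''.join(chunks)

def read_chunks_bytearray(f, bufsize):
    # bytearray concatenation is amortised linear, unlike bytes.
    buf = bytearray()
    while True:
        b = f.read(bufsize)
        if not b:
            break
        buf += b
    return bytes(buf)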


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-25 Thread Matt Joiner
I was under the impression this is already in 3.3?

On Nov 25, 2011 10:58 PM, "Eli Bendersky"  wrote:
>
>
>> > However, the original question remains - on the 100MB file also, although
>> > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the
>> > same speed (even a few % slower). That said, I now observe with Python 3.3
>> > the same speed as with 2.7, including the readinto() speedup - so it
>> > appears that the readinto() regression has been solved in 3.3? Any clue
>> > about where it happened (i.e. which bug/changeset)?
>>
>> It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/
>
>
> Great, thanks. This is an important change, definitely something to wait
> for in 3.3
> Eli
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] Deprecation policy

2011-11-28 Thread Matt Joiner
On Mon, Nov 28, 2011 at 11:14 PM, Steven D'Aprano  wrote:
> Xavier Morel wrote:
>
>> Not being too eager to kill APIs is good, but giving rise to this kind of
>> living-dead APIs is no better in my opinion, even more so since Python has
>> lost one of the few tools it had to manage them (as DeprecationWarning was
>> silenced by default). Both choices are harmful to users, but in the long
>> run I do think zombie APIs are worse.
>
> I would much rather have my code relying on "zombie" APIs and keep working,
> than to have that code suddenly stop working when the zombie is removed.
> Working code should stay working. Unless the zombie is actively harmful,
> what's the big deal if there is a newer, better way of doing something? If
> it works, and if it's fast enough, why force people to "fix" it?
>
> It is a good thing that code or tutorials from Python 1.5 still (mostly)
> work, even when there are newer, better ways of doing something. I see a lot
> of newbies, and the frustration they suffer when they accidentally
> (carelessly) try following 2.x instructions in Python3, or vice versa, is
> great. It's bad enough (probably unavoidable) that this happens during a
> major transition like 2 to 3, without it also happening during minor
> releases.
>
> Unless there is a good reason to actively remove an API, it should stay as
> long as possible. "I don't like this and it should go" is not a good reason,
> nor is "but there's a better way you should use". When in doubt, please
> don't break people's code.

This is a great argument. But people want to see new, bigger, better
things in the standard library, and the #1 reason cited against this
is "we already have too much". I think that's where the issue lies:
either lots of cool new stuff is added and supported (we all want our
favourite things in the standard lib for this reason), or the old
stuff lingers...

I'm sure a while ago there was mention of a "staging" area for modules
proposed for inclusion in the standard library. This attracts interest,
stabilization, and quality from potential modules. Better yet, the
existing standard library ownership could somehow be detached from the
CPython core, so that changes enabling easier customization to fit
other implementations (Jython, PyPy, etc.) are possible.

tl;dr: old stuff blocks new hotness. Make room, or separate standard
library concerns from CPython.

>
>
>
> --
> Steven
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] Deprecation policy

2011-11-29 Thread Matt Joiner
I like this article on it:

http://semver.org/

The following snippets being relevant here:

Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards
compatible functionality is introduced to the public API. It MUST be
incremented if any public API functionality is marked as deprecated.

Major version X (X.y.z | X > 0) MUST be incremented if any backwards
incompatible changes are introduced to the public API.

With the exception of actually dropping stuff (though that only occurs
at the level of whole modules, which hardly counts except in special
cases?), Python already conforms to this standard very well.
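
As a rough illustration, the quoted rules boil down to something like
this toy helper (not anything Python itself implements):

def next_version(major, minor, patch,
                 breaks_api=False, adds_features=False, deprecates=False):
    # Backwards-incompatible changes bump the major version; new
    # backwards-compatible features or deprecations bump the minor one;
    # everything else is a patch release.
    if breaks_api:
        return (major + 1, 0, 0)
    if adds_features or deprecates:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)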

On Wed, Nov 30, 2011 at 11:00 AM, Benjamin Peterson  wrote:
> 2011/11/29 Nick Coghlan :
>> On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw  wrote:
>>> On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote:
>>>
Well, that's why I think the version number components are not
correctly named. I don't think any of the 2.x or 3.x releases can be
called "minor" by any stretch of the word. A quick glance at
http://docs.python.org/dev/whatsnew/index.html should be enough.
>>>
>>> Agreed, but it's too late to change it.  I look at it as the attributes of 
>>> the
>>> namedtuple being evocative of the traditional names for the digit positions,
>>> not the assignment of those positions to Python's semantics.
>>
>> Hmm, I wonder about that. Perhaps we could add a second set of names
>> in parallel with the "major.minor.micro" names:
>> "series.feature.maint".
>>
>> That would, after all, reflect what is actually said in practice:
>> - release series: 2.x, 3.x  (usually used in a form like "In the 3.x
>> series, X is true. In 2.x, Y is true)
>> - feature release: 2.7, 3.2, etc
>> - maintenance release: 2.7.2, 3.2.1, etc
>>
>> I know I tend to call feature releases major releases and I'm far from
>> alone in that. The discrepancy in relation to sys.version_info is
>> confusing, but we can't make 'major' refer to a different field
>> without breaking existing programs. But we *can* change:
>>
> sys.version_info
>> sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0)
>>
>> to instead read:
>>
>> sys.version_info(series=2, feature=7, maint=2, releaselevel='final', 
>> serial=0)
>>
>> while allowing 'major' as an alias of 'series', 'minor' as an alias of
>> 'feature' and 'micro' as an alias of 'maint'. Nothing breaks, and we'd
>> have started down the path towards coherent terminology for the three
>> fields in the version numbers (by accepting that 'major' has now
>> become irredeemably ambiguous in the context of CPython releases).
>>
>> This idea of renaming all three fields has come up before, but I
>> believe we got stuck on the question of what to call the first number
>> (i.e. the one I'm calling the "series" here).
>
> Can we drop this now? Too much effort for very little benefit. We call
> releases what we call releases.
>
>
>
> --
> Regards,
> Benjamin
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com


Re: [Python-Dev] LZMA support has landed

2011-11-29 Thread Matt Joiner
Congrats, this is an excellent feature.
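
For anyone who hasn't tried the new module yet, a minimal usage sketch
based on the documented 3.3 lzma API (the file name is made up):

import lzma

data = b'example payload' * 1000

# One-shot compression/decompression of in-memory bytes.
compressed = lzma.compress(data)
assert lzma.decompress(compressed) == data

# File objects, mirroring the gzip/bz2 modules.
f = lzma.LZMAFile('example.xz', 'wb')
f.write(data)
f.close()
f = lzma.LZMAFile('example.xz', 'rb')
assert f.read() == data
f.close()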

On Wed, Nov 30, 2011 at 10:34 AM, Amaury Forgeot d'Arc
 wrote:
> 2011/11/29 Nadeem Vawda 
>>
>> I'm pleased to announce that as of changeset 74d182cf0187, the
>> standard library now includes support for the LZMA compression
>> algorithm
>
>
> Congratulations!
>
>>
>> I'd like to ask the owners of (non-Windows) buildbots to install the
>> XZ Utils development headers so that they can build the new module.
>
>
> And don't worry about Windows builbots, they will automatically download
> the XZ prebuilt binaries from the usual place.
> (svn export http://svn.python.org/projects/external/xz-5.0.3)
>
> Next step: add support for tar.xz files (issue5689)...
>
> --
> Amaury Forgeot d'Arc
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


[Python-Dev] STM and python

2011-11-30 Thread Matt Joiner
Given GCC's announcement that Intel's STM will be an extension for C
and C++ in GCC 4.7, what does this mean for Python, and the GIL?

I've seen efforts made to make STM available as a context, and for use
in user code. I've also read about the "old attempts way back" that
attempted to use finer-grained locking. They understandably failed due
to the heavy costs involved in both the locking mechanisms used and the
overhead of a reference-counting garbage collection system.

However, given advances in locking and garbage collection in the last
decade, what attempts have been made recently to try these new ideas
out? In particular, how unlikely is it that all the thread-safe
primitives, global contexts, and reference-counting functions could be
made __transaction_atomic, with magical parallelism performance boosts
ensuing?

I'm aware that C89, platforms without STM/GCC, and single threaded
performance are concerns. Please ignore these for the sake of
discussion about possibilities.

http://gcc.gnu.org/wiki/TransactionalMemory
http://linux.die.net/man/4/futex


Re: [Python-Dev] STM and python

2011-11-30 Thread Matt Joiner
I did see this; I'm not convinced it's only relevant to PyPy.

On Thu, Dec 1, 2011 at 2:25 AM, Benjamin Peterson  wrote:
> 2011/11/30 Matt Joiner :
>> Given GCC's announcement that Intel's STM will be an extension for C
>> and C++ in GCC 4.7, what does this mean for Python, and the GIL?
>>
>> I've seen efforts made to make STM available as a context, and for use
>> in user code. I've also read about the "old attempts way back" that
>> attempted to use finer grain locking. The understandably failed due to
>> the heavy costs involved in both the locking mechanisms used, and the
>> overhead of a reference counting garbage collection system.
>>
>> However given advances in locking and garbage collection in the last
>> decade, what attempts have been made recently to try these new ideas
>> out? In particular, how unlikely is it that all the thread safe
>> primitives, global contexts, and reference counting functions be made
>> __transaction_atomic, and magical parallelism performance boosts
>> ensue?
>
> Have you seen 
> http://morepypy.blogspot.com/2011/08/we-need-software-transactional-memory.html
> ?
>
>
> --
> Regards,
> Benjamin


Re: [Python-Dev] STM and python

2011-11-30 Thread Matt Joiner
I saw this; I believe it just exposes an STM primitive to user code.
It doesn't make use of STM for Python internals.

Explicit STM doesn't seem particularly useful for a language that
doesn't expose raw memory in its normal usage.

On Thu, Dec 1, 2011 at 4:41 PM, Nick Coghlan  wrote:
> On Thu, Dec 1, 2011 at 10:58 AM, Gregory P. Smith  wrote:
>> Azul has been using hardware transactional memory on their custom CPUs (and
>> likely STM in their current x86 virtual machine based products) to great
>> effect for their massively parallel Java VM (700+ cpu cores and gobs of ram)
>> for over 4 years.  I'll leave it to the reader to do the relevant searching
>> to read more on that.
>>
>> My point is: This is up to any given Python VM implementation to take
>> advantage of or not as it sees fit.  Shoe horning it into an existing VM may
>> not make much sense but anyone is welcome to try.
>
> There's a patch somewhere on the tracker to add an "Armin Rigo hook"
> to the CPython eval loop so he can play with STM in Python as well (at
> least, I think it was STM he wanted it for - it might have been
> something else).
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] STM and python

2011-12-01 Thread Matt Joiner
Armin, thanks for weighing in on this. I'm keen to see a CPython making
use of STM; maybe I'll give it a try over the Christmas break. I'm
willing to take the single-threaded performance hit, as I have several
applications that degrade due to significant contention on the GIL.

The other benefits of STM you describe make it a lot more appealing. I
actually tried out Haskell recently to make use of many of the
advanced features but came crawling back.

If anyone else is keen to try this, I'm happy to receive patches for
testing and review.

On Thu, Dec 1, 2011 at 10:01 PM, Armin Rigo  wrote:
> Hi,
>
> On Thu, Dec 1, 2011 at 07:06, Matt Joiner  wrote:
>> I saw this, I believe it just exposes an STM primitive to user code.
>> It doesn't make use of STM for Python internals.
>
> That's correct.
>
>> Explicit STM doesn't seem particularly useful for a language that
>> doesn't expose raw memory in its normal usage.
>
> In my opinion, that sentence could not be more wrong.
>
> It is true that, as I discuss on the blog post cited a few times in
> this thread, the first goal I see is to use STM to replace the GIL as
> an internal way of keeping the state of the interpreter consistent.
> This could quite possibly be achieved using the new GCC
> __transaction_atomic keyword, although I see already annoying issues
> (e.g. the keyword can only protect a _syntactically nested_ piece of
> code as a transaction).
>
> However there is another aspect: user-exposed STM, which I didn't
> explore much.  While it is potentially even more important, it is a
> language design question, so I'm happy to delegate it to python-dev.
> In my opinion, explicit STM (like Clojure) is not only *a* way to
> write multithreaded Python programs, but it seems to be *the only* way
> that really makes sense in general, for more than small examples and
> more than examples where other hacks are enough (see
> http://en.wikipedia.org/wiki/Software_transactional_memory#Composable_operations
> ).  In other words, locks are low-level and should not be used in a
> high-level language, like direct memory accesses, just because it
> forces the programmer to think about increasingly complicated
> situations.
>
> And of course there is the background idea that TM might be available
> in hardware someday.  My own guess is that it will occur, and I bet
> that in 5 to 10 years all new Intel and AMD CPUs will have Hybrid TM.
> On such hardware, the performance penalty mostly disappears (which is
> also, I guess, the reasoning behind GCC 4.7, offering a future path to
> use Hybrid TM).
>
> If python-dev people are interested in exploring the language design
> space in that direction, I would be most happy to look in more detail
> at GCC 4.7.  If we manage to make use of it, then we could get a
> version of CPython using STM internally with a very minimal patch.  If
> it seems useful we can then turn that patch into #ifdefs into the
> normal CPython.  It would of course be off by default because of the
> performance hit; still, it would give an optional alternate
> "CPythonSTM" to play with in order to come up with good user-level
> abstractions.  (This is what I'm already trying to do with PyPy
> without using GCC 4.7, and it's progressing nicely.)  (My existing
> patch to CPython emulating user-level STM with the GIL is not really
> satisfying, also for the reason that it cannot emulate some other
> potentially useful user constructs, like abort_and_retry().)
>
>
> A bientôt,
>
> Armin.
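
To make the "user-exposed STM" idea concrete: the user-level construct
under discussion looks roughly like a "with atomic():" block. A toy
emulation in the spirit of what Armin describes (serialising
transactions with a single lock rather than doing real STM; purely
illustrative, not his patch):

import threading

# A single global lock stands in for "stop all other transactions":
# it gives the atomicity semantics, but none of the concurrency.
_transaction_lock = threading.RLock()

class atomic:
    """Toy 'with atomic():' block, emulated by locking out other transactions."""
    def __enter__(self):
        _transaction_lock.acquire()
    def __exit__(self, exc_type, exc, tb):
        _transaction_lock.release()
        return False

counter = 0

def add(amount):
    global counter
    with atomic():
        # Everything in this block appears atomic to other threads
        # that also use 'with atomic():'.
        counter += amount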



-- 
ಠ_ಠ


Re: [Python-Dev] STM and python

2011-12-06 Thread Matt Joiner
This is very interesting, cheers for the link.

On Tue, Dec 6, 2011 at 8:55 PM, Armin Rigo  wrote:
> Hi,
>
> Actually, not even one month ago, Intel announced that its processors
> will offer Hardware Transactional Memory in 2013:
>
> http://www.h-online.com/newsticker/news/item/Processor-Whispers-About-Haskell-and-Haswell-1389507.html
>
> So yes, obviously, it's going to happen.
>
>
> A bientôt,
>
> Armin.



-- 
ಠ_ಠ


Re: [Python-Dev] readd u'' literal support in 3.3?

2011-12-08 Thread Matt Joiner
Nobody is using 3 yet ;)

Sure, I use it for some personal projects, and other people pretend to
support it. Not really.

The worst of the pain in porting to Python 3000 has yet to even begin!

On Thu, Dec 8, 2011 at 6:33 PM, Nick Coghlan  wrote:
> Such code still won't work on 3.2, hence restoring the redundant notation
> would be ultimately pointless.
>
> --
> Nick Coghlan (via Gmail on Android, so likely to be more terse than usual)
>
> On Dec 8, 2011 4:34 PM, "Chris McDonough"  wrote:
>>
>> On Thu, 2011-12-08 at 01:18 -0500, Benjamin Peterson wrote:
>> > 2011/12/8 Chris McDonough :
>> > > On Thu, 2011-12-08 at 01:02 -0500, Benjamin Peterson wrote:
>> > >> 2011/12/8 Chris McDonough :
>> > >> > On the heels of Armin's blog post about the troubles of making the
>> > >> > same
>> > >> > codebase run on both Python 2 and Python 3, I have a concrete
>> > >> > suggestion.
>> > >> >
>> > >> > It would help a lot for code that straddles both Py2 and Py3 to be
>> > >> > able
>> > >> > to make use of u'' literals.
>> > >>
>> > >> Helpful or not helpful, I think that ship has sailed. The earliest it
>> > >> could see the light of day is 3.3, which would leave people trying to
>> > >> support 3.1 and 3.2 in a bind.
>> > >
>> > > Right.. the title does say "readd ... support in 3.3".  Are you
>> > > suggesting "the ship has sailed" for eternity because it can't be
>> > > supported in Python < 3.3?
>> >
>> > I'm questioning the real utility of it.
>>
>> All I can really offer is my own experience here based on writing code
>> that needs to straddle Python 2.5, 2.6, 2.7 and 3.2 without use of 2to3.
>> Having u'' work across all of these would mean porting would not require
>> as much eyeballing as code modified via "from future import
>> unicode_literals", it would let more code work on 2.5 unchanged, and the
>> resulting code would execute faster than code that required us to use a
>> u() function.
>>
>> What's the case against?
>>
>> - C
>>
>>
>>
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>



-- 
ಠ_ಠ


Re: [Python-Dev] Fixing the XML batteries

2011-12-09 Thread Matt Joiner
+1

On Sat, Dec 10, 2011 at 2:09 AM, Dirkjan Ochtman  wrote:
> On Fri, Dec 9, 2011 at 09:02, Stefan Behnel  wrote:
>> a) The stdlib documentation should help users to choose the right tool right
>> from the start.
>> b) cElementTree should finally loose it's "special" status as a separate
>> library and disappear as an accelerator module behind ElementTree.
>
> An at least somewhat informed +1 from me. The ElementTree API is a
> very good way to deal with XML from Python, and it deserves to be
> promoted over the included alternatives.
>
> Let's deprecate the NiCad batteries and try to guide users toward the
> Li-Ion ones.
>
> Cheers,
>
> Dirkjan
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com



-- 
ಠ_ಠ


Re: [Python-Dev] [PATCH] Adding braces to __future__

2011-12-09 Thread Matt Joiner
If braces were introduced I would switch to Haskell; I can't stand the
noise. If you want to see a language that allows whitespace, semicolons
and braces, take a look at it. Nails it.
On Dec 10, 2011 9:31 AM, "Cedric Sodhi"  wrote:

> On Fri, Dec 09, 2011 at 02:21:42PM -0800, Guido van Rossum wrote:
> >On Fri, Dec 9, 2011 at 12:26 PM, Cedric Sodhi <[1]man...@gmx.net>
> wrote:
> >
> >  IF YOU THINK YOU MUST REPLY SOMETHING WITTY, ITERATE THAT THIS HAD
> BEEN
> >  DISCUSSED BEFORE, REPLY THAT "IT'S SIMPLY NOT GO'NNA HAPPEN", THAT
> "WHO
> >  DOESN'T LIKE IT IS FREE TO CHOOSE ANOTHER LANGUAGE" OR SOMETHING
> >  SIMILAR, JUST DON'T.
> >
> >Every single response in this thread so far has ignored this request.
> The
> >correct response honoring this should have been deafening silence.
> >
> >For me, if I had to design a new language today, I would probably use
> >braces, not because they're better than whitespace, but because pretty
> >much every other lanugage uses them, and there are more interesting
> >concepts to distinguish a new language. That said, I don't regret that
> >Python uses indentation, and the rest I have to say about the topic
> would
> >violate the above request.
> >
>
> I think this deserves a reply. Thank you for contributing your opinion
> and respecting my request and therefore honoring the rules of a
> civilized debate.
>
> -- Cedric
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] Fixing the XML batteries

2011-12-09 Thread Matt Joiner
I second this. The doco is very bad.
On Dec 10, 2011 6:34 AM, "Bill Janssen"  wrote:

> Xavier Morel  wrote:
>
> > On 2011-12-09, at 19:15 , Bill Janssen wrote:
> > > I use ElementTree for parsing valid XML, but minidom for producing it.
> > Could you expand on your reasons to use minidom for producing XML?
>
> Inertia, I guess.  I tried that first, and it seems to work.
>
> I tend to use html5lib and/or BeautifulSoup instead of ElementTree, and
> that's mainly because I find the documentation for ElementTree is
> confusing and partial and inconsistent.  Having various undated but
> obsolete tutorials and documentation still up on effbot.org doesn't
> help.
>
>
> Bill
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] [PATCH] Adding braces to __future__

2011-12-09 Thread Matt Joiner
Ditch the colon too. Also you're a troll.
On Dec 10, 2011 9:58 AM, "Cedric Sodhi"  wrote:

> I reply to your contribution mainly because I see another, valid
> argument hidden in what you formulated as an opinion:
>
> Readability would be reduced by such "noise". To anticipate other people
> agreeing with that, let me say, that it would be exactly one more
> character, and the same amount of key presses. All that, assuming you
> use editor automatisms, which particularly the advocates of WSB tend to
> bring forth in defense of WSB and aforementioned problems associated
> with it.
>
> Only one more character and not more key presses? Yes, instead of
> opening a block with a colon, you open it with an opening bracket. And
> you close it with a closing one.
>
> Referring to "noise", I take it you are preferring naturally expressed
> languages (what Roff's PIC, for example, exemplifies to banality).
>
> How is a COLON, which, in natural language, PUNCTUATES a context, any
> more suited than braces, which naturally ENCLOSE a structure?
>
> Obviously, it by far is not, even from the standpoint of not
> intersparsing readable code with unnatural characters.
>
> On Sat, Dec 10, 2011 at 09:40:54AM +1100, Matt Joiner wrote:
> >If braces were introduced I would switch to Haskell, I can't stand the
> >noise. If you want to see a language that allows both whitespace, semi
> >colons and braces take a look at it. Nails it.
> >
> >On Dec 10, 2011 9:31 AM, "Cedric Sodhi" <[1]man...@gmx.net> wrote:
> >
> >  On Fri, Dec 09, 2011 at 02:21:42PM -0800, Guido van Rossum wrote:
> >  >On Fri, Dec 9, 2011 at 12:26 PM, Cedric Sodhi
> >  <[1][2]man...@gmx.net> wrote:
> >  >
> >  >  IF YOU THINK YOU MUST REPLY SOMETHING WITTY, ITERATE THAT
> THIS
> >  HAD BEEN
> >  >  DISCUSSED BEFORE, REPLY THAT "IT'S SIMPLY NOT GO'NNA HAPPEN",
> >  THAT "WHO
> >  >  DOESN'T LIKE IT IS FREE TO CHOOSE ANOTHER LANGUAGE" OR
> SOMETHING
> >  >  SIMILAR, JUST DON'T.
> >  >
> >  >Every single response in this thread so far has ignored this
> >  request. The
> >  >correct response honoring this should have been deafening
> silence.
> >  >
> >  >For me, if I had to design a new language today, I would
> probably
> >  use
> >  >braces, not because they're better than whitespace, but because
> >  pretty
> >  >much every other lanugage uses them, and there are more
> interesting
> >  >concepts to distinguish a new language. That said, I don't
> regret
> >  that
> >  >Python uses indentation, and the rest I have to say about the
> topic
> >  would
> >  >violate the above request.
> >  >
> >
> >  I think this deserves a reply. Thank you for contributing your
> opinion
> >  and respecting my request and therefore honoring the rules of a
> >  civilized debate.
> >
> >  -- Cedric
> >  ___
> >  Python-Dev mailing list
> >  [3]Python-Dev@python.org
> >  [4]http://mail.python.org/mailman/listinfo/python-dev
> >  Unsubscribe:
> >  [5]
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
> >
> > References
> >
> >Visible links
> >1. mailto:man...@gmx.net
> >2. mailto:man...@gmx.net
> >3. mailto:Python-Dev@python.org
> >4. http://mail.python.org/mailman/listinfo/python-dev
> >5.
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>


Re: [Python-Dev] Potential NULL pointer dereference in descrobject.c

2011-12-17 Thread Matt Joiner
ಠ_ಠ

On Sat, Dec 17, 2011 at 8:55 PM, Michael Mueller
 wrote:
> Hi Guys,
>
> We've been analyzing CPython with our static analysis tool (Sentry)
> and a NULL pointer dereference popped up the other day, in
> Objects/descrobject.c:
>
>    if (descr != NULL) {
>        Py_XINCREF(type);
>        descr->d_type = type;
>        descr->d_name = PyUnicode_InternFromString(name);
>        if (descr->d_name == NULL) {
>            Py_DECREF(descr);
>            descr = NULL;
>        }
>        descr->d_qualname = NULL; // Possible NULL pointer dereference
>    }
>
> If the inner conditional block can be reached, descr will be set NULL
> and then dereferenced on the next line.  The commented line above was
> added in this commit: http://hg.python.org/cpython/rev/73948#l4.92
>
> Hopefully someone can take a look and determine the appropriate fix.
>
> Best,
> Mike
>
> --
> Mike Mueller
> Phone: (401) 405-1525
> Email: mmuel...@vigilantsw.com
>
> http://www.vigilantsw.com/
> Static Analysis for C and C++
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com



-- 
ಠ_ಠ


Re: [Python-Dev] Anyone still using Python 2.5?

2011-12-21 Thread Matt Joiner
I'm paid to write Python3. I've also been writing Python3 for hobby
projects since mid 2010. I'm on the verge of going back to 2.7 due to
compatibility issues :(

On Thu, Dec 22, 2011 at 1:45 PM, Mike Meyer  wrote:
> On Thu, 22 Dec 2011 01:49:37 +
> Michael Foord  wrote:
>> These figures can't possibly be true. No-one is using Python 3 yet. ;-)
>
> Since you brought it up. Is anyone paying people (or trying to hire
> people) to write Python 3?
>
>        Thanks,
>         --
> Mike Meyer               http://www.mired.org/
> Independent Software developer/SCM consultant, email for more information.
>
> O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com



-- 
ಠ_ಠ


Re: [Python-Dev] PEP 7 clarification request: braces

2012-01-03 Thread Matt Joiner
FWIW I'm against forcing braces to be used. Readability is the highest
concern, and this should be at the discretion of the contributor. A
code formatting tool or compiler extension is the only proper way to
handle this, and neither is in use or available.

On Tue, Jan 3, 2012 at 7:44 PM, "Martin v. Löwis"  wrote:
>> He keeps leaving them out, I occasionally tell him they should always
>> be included (most recently this came up when we gave conflicting
>> advice to a patch contributor). He says what he's doing is OK, because
>> he doesn't consider the example in PEP 7 as explicitly disallowing it,
>> I think it's a recipe for future maintenance hassles when someone adds
>> a second statement to one of the clauses but doesn't add the braces.
>> (The only time I consider it reasonable to leave out the braces is for
>> one liner if statements, where there's no else clause at all)
>
> While this appears to be settled, I'd like to add that I sided with
> Benjamin here all along.
>
> With Python, I accepted a style of "minimal punctuation". Examples
> of extra punctuation are:
> - parens around expression in Python's if (and while):
>
>    if (x < 10):
>      foo ()
>
> - parens around return expression (C and Python)
>
>    return(*p);
>
> - braces around single-statement blocks in C
>
> In all these cases, punctuation can be left out without changing
> the meaning of the program.
>
> I personally think that a policy requiring braces would be (mildly)
> harmful, as it decreases readability of the code. When I read code,
> I read every character: not just the identifiers, but also every
> punctuation character. If there is extra punctuation, I stop and wonder
> what the motivation for the punctuation is - is there any hidden
> meaning that required the author to put the punctuation?
>
> There is a single case where I can accept extra punctuation in C:
> to make the operator precedence explicit. Many people (including
> myself) don't know how
>
>   a | b << *c * *d
>
> would group, so I readily accept extra parens as a clarification.
>
> Wrt. braces, I don't share the concern that there is a risk of
> somebody being confused when adding a second statement to a braceless
> block. An actual risk is stuff like
>
>   if (cond)
>     MACRO(argument);
>
> when MACRO expands to multiple statements. However, we should
> accept that this is a bug in MACRO (which should have used the
> do-while(0)-idiom), not in the application of the macro.
>
> Regards,
> Martin
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com



-- 
ಠ_ಠ


Re: [Python-Dev] usefulness of Python version of threading.RLock

2012-01-05 Thread Matt Joiner
I'm pretty sure the Python version of RLock is in use in several
alternative implementations that provide an alternative _thread.lock. I
think gevent would fall into this camp, as well as a personal project of
mine in a similar vein that operates on python3.

2012/1/6 Charles-François Natali 

> Hi,
>
> Issue #13697 (http://bugs.python.org/issue13697) deals with a problem
> with the Python version of threading.RLock (a signal handler which
> tries to acquire the same RLock is called right at the wrong time)
> which doesn't affect the C version.
> Whether such a use case can be considered good practise or the best
> way to fix this is not settled yet, but the question that arose to me
> is: "why do we have both a C and Python version?".
> Here's Antoine answer (he suggested to me to bring this up on python-dev":
> """
> The C version is quite recent, and there's a school of thought that we
> should always provide fallback Python implementations.
> (also, arguably a Python implementation makes things easier to
> prototype, although I don't think it's the case for an RLock)
> """
>
> So, what do you guys think?
> Would it be okay to nuke the Python version?
> Do you have more details on this "school of thought"?
>
> Also, while we're at it, Victor created #13550 to try to rewrite the
> "logging hack" of the threading module: there again, I think we could
> just remove this logging altogether. What do you think?
>
> Cheers,
>
> cf
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>



-- 
ಠ_ಠ


Re: [Python-Dev] usefulness of Python version of threading.RLock

2012-01-06 Thread Matt Joiner
_PyRLock is not used directly. Instead, such implementations provide no
_CRLock, so the threading.RLock factory falls back to _PyRLock.

It's done this way because green threading libraries may only provide a
greened lock. _CRLock would not work in these contexts: it would block
the entire native thread.

I suspect that if you removed _PyRLock, these implementations would have to
expose their own RLock primitive which works the same way as the one just
removed from the standard library. I don't know if this is a good thing.

I would recommend checking with at least the gevent and eventlet developers.
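
For context, the pure-Python version is essentially the classic recipe
for building a reentrant lock out of a non-reentrant one, parameterised
on whatever lock primitive the environment provides. A simplified
sketch (not the actual threading.py code):

from _thread import get_ident

class SimpleRLock:
    """Reentrant lock built on top of any non-reentrant lock factory."""
    def __init__(self, allocate_lock):
        # allocate_lock could be _thread.allocate_lock, or a "greened" lock.
        self._block = allocate_lock()
        self._owner = None
        self._count = 0

    def acquire(self):
        me = get_ident()
        if self._owner == me:
            self._count += 1        # already held by this thread: bump the count
            return True
        self._block.acquire()       # blocks, or yields if the lock is greened
        self._owner = me
        self._count = 1
        return True

    def release(self):
        if self._owner != get_ident():
            raise RuntimeError("cannot release un-acquired lock")
        self._count -= 1
        if self._count == 0:
            self._owner = None
            self._block.release()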

2012/1/7 Charles-François Natali 

> Thanks for those precisions, but I must admit it doesn't help me much...
> Can we drop it? A yes/no answer will do it ;-)
>
> > I'm pretty sure the Python version of RLock is in use in several
> alternative
> > implementations that provide an alternative _thread.lock. I think gevent
> > would fall into this camp, as well as a personal project of mine in a
> > similar vein that operates on python3.
>
> Sorry, I'm not sure I understand. Do those projects use _PyRLock directly?
> If yes, then aliasing it to _CRLock should do the trick, no?
>



-- 
ಠ_ಠ


Re: [Python-Dev] usefulness of Python version of threading.RLock

2012-01-07 Thread Matt Joiner
Nick, did you mean to say "wrap python code around a reentrant lock to
create a non-reentrant lock"? Isn't that what PyRLock is doing?

FWIW having now read issues 13697 and 13550, I'm +1 for dropping Python
RLock, and all the logging machinery in threading.

2012/1/8 Nick Coghlan 

> 2012/1/7 Charles-François Natali :
> > Thanks for those precisions, but I must admit it doesn't help me much...
> > Can we drop it? A yes/no answer will do it ;-)
>
> The yes/no answer is "No, we can't drop it".
>
> Even though CPython no longer uses the Python version of RLock in
> normal operation, it's still the reference implementation for everyone
> else that has to perform the same task (i.e. wrap Python code around a
> non-reentrant lock to create a reentrant one).
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>



-- 
ಠ_ಠ


Re: [Python-Dev] Python C API: Problem sending tuple to a method of a python Class

2012-01-10 Thread Matt Joiner
Perhaps the python-dev mailing list should be renamed to python-core.

On Tue, Jan 10, 2012 at 7:35 PM, Stefan Behnel  wrote:
> Hi,
>
> sorry for hooking into this off-topic thread.
>
> Amaury Forgeot d'Arc, 09.01.2012 19:09:
>> 2012/1/9 
>>> I am trying to send a tuple to a method of a python class and I got a Run
>>> failed from netbeans compiler
>>> when I want to send a tuple to a simple method in a module it works,when I
>>> want to send a simple parameter to a method of a clas it works also but not
>>> a tuple to a method of a class
>>
>> This mailing list is for the development *of* python.
>> For development *with* python, please ask your questions on
>> the comp.lang.python group or the python-l...@python.org mailing list.
>> There you will find friendly people willing to help.
>
> It's also worth mentioning the cython-users mailing list here, in case the
> OP cares about simplifying these kinds of issues from the complexity of
> C/C++ into Python. Cython is a really good and simple way to implement
> these kinds of language interactions, also for embedding Python.
>
>
>> [for your particular question: keep in mind that PyObject_Call takes
>> arguments as a tuple;
>> if you want to pass one tuple, you need to build a 1-tuple around your
>> tuple]
>
> The presented code also requires a whole lot of fixes (specifically in the
> error handling parts) that Cython would basically just handle for you already.
>
> Stefan
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com



-- 
ಠ_ಠ


Re: [Python-Dev] devguide: Backporting is obsolete. Add details that I had to learn.

2012-01-10 Thread Matt Joiner
http://semver.org/

This has made sense since Gentoo days.

On Tue, Jan 10, 2012 at 11:57 PM, Antoine Pitrou  wrote:
> On Tue, 10 Jan 2012 08:49:04 +
> Rob Cliffe  wrote:
>> But "minor version" and "major version" are readily understandable to
>> the general reader, e.g. me, whereas "feature release" and "release
>> series" I find are not.  Couldn't the first two terms be defined once
>> and then used throughout?
>
> To me "minor" is a bugfix release, e.g. 2.7.2, and "major" is a feature
> release, e.g. 3.3.  I have a hard time considering 3.2 or 3.3 "minor".
>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com


Re: [Python-Dev] Python C API: Problem sending tuple to a method of a python Class

2012-01-10 Thread Matt Joiner
I suspect it actually would fix the confusion. "dev" usually means
development in general, not "core implementation development". People
float past looking for dev help and land on python-dev; python-list is
a bit generic.

On Tue, Jan 10, 2012 at 11:17 PM, Stefan Behnel  wrote:
> Matt Joiner, 10.01.2012 09:40:
>> Perhaps the python-dev mailing list should be renamed to python-core.
>
> Well, there *is* a rather visible warning on the list subscription page
> that tells people that it's most likely not the list they actually want to
> use. If they manage to ignore that, I doubt that a different list name
> would fix it for them.
>
> Stefan
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Proposed PEP on concurrent programming support

2012-01-11 Thread Matt Joiner
On Thu, Jan 12, 2012 at 11:01 AM, Mike Meyer  wrote:
> On Wed, 4 Jan 2012 00:07:27 -0500
> PJ Eby  wrote:
>
>> On Tue, Jan 3, 2012 at 7:40 PM, Mike Meyer  wrote:
>> > A suite is marked
>> > as a `transaction`, and then when an unlocked object is modified,
>> > instead of indicating an error, a locked copy of it is created to be
>> > used through the rest of the transaction. If any of the originals
>> > are modified during the execution of the suite, the suite is rerun
>> > from the beginning. If it completes, the locked copies are copied
>> > back to the originals in an atomic manner.
>> I'm not sure if "locked" is really the right word here.  A private
>> copy isn't "locked" because it's not shared.
>
> Do you have a suggestion for a better word? Maybe the "safe" state
> used elsewhere?
>
>> > For
>> > instance, combining STM with explicit locking would allow explicit
>> > locking when IO was required,
>> I don't think this idea makes any sense, since STM's don't really
>> "lock", and to control I/O in an STM system you just STM-ize the
>> queues. (Generally speaking.)
>
> I thought about that. I couldn't convince myself that STM by itself
> sufficient. If you need to make irreversible changes to the state of
> an object, you can't use STM, so what do you use? Can every such
> situation be handled by creating "safe" values then using an STM to
> update them?
>
>        ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com

IMHO STM by itself isn't sufficient. Either immutability, or careful
use of references protected by STM (which amounts to the same thing),
is the only reasonable way to do it. Both also perform much better
than the alternatives.
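
For illustration, a minimal sketch of the optimistic scheme quoted above
(private copies, validate on commit, rerun the suite on conflict). Ref,
atomically, read and write are names invented for the example, not
anything proposed in the PEP:

import threading

_commit_lock = threading.Lock()

class Ref:
    # hypothetical mutable cell managed by the transaction machinery
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(transaction):
    # rerun the transaction until it commits without conflict
    while True:
        copies = {}   # private, per-transaction copies
        seen = {}     # versions observed at first read
        def read(ref):
            if ref not in copies:
                with _commit_lock:
                    copies[ref] = ref.value
                    seen[ref] = ref.version
            return copies[ref]
        def write(ref, value):
            read(ref)
            copies[ref] = value
        transaction(read, write)
        with _commit_lock:
            # commit only if none of the originals changed meanwhile
            if all(ref.version == v for ref, v in seen.items()):
                for ref, value in copies.items():
                    ref.value = value
                    ref.version += 1
                return

# usage: a transfer retries automatically instead of locking explicitly
a, b = Ref(100), Ref(0)
atomically(lambda read, write: (write(a, read(a) - 10),
                                write(b, read(b) + 10)))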
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 380 ("yield from") is now Final

2012-01-13 Thread Matt Joiner
Great work Nick, I've been looking forward to this one. Thanks all for
putting the effort in.

On Fri, Jan 13, 2012 at 11:14 PM, Nick Coghlan  wrote:
> I marked PEP 380 as Final this evening, after pushing the tested and
> documented implementation to hg.python.org:
> http://hg.python.org/cpython/rev/d64ac9ab4cd0
>
> As the list of names in the NEWS and What's New entries suggests, it
> was quite a collaborative effort to get this one over the line, and
> that's without even listing all the people that offered helpful
> suggestions and comments along the way :)
>
> print("\n".join(list((lambda:(yield from ("Cheers,", "Nick")))(
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 407: New release cycle and introducing long-term support versions

2012-01-17 Thread Matt Joiner
If minor/feature releases are introducing breaking changes, perhaps it's
time to adopt an accelerated major versioning schedule. For instance, there
are breaking ABI changes between 3.0/3.1 and 3.2, and while acceptable for
the early-adoption state of Python 3, such changes should normally be
reserved for major versions.

If every 4th or so feature release is sufficiently different to be worthy
of an LTS, consider it a major release, albeit with smaller breaking
changes than Python 3.

Aside from this, given the radical features of 3.3, and the upcoming Ubuntu
12.04 LTS, I would recommend adopting 2.7 and 3.2 as the first LTSs, to be
reviewed 2 years hence should this go ahead.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Coroutines and PEP 380

2012-01-17 Thread Matt Joiner
Just to clarify, this differs in functionality from enhanced generators by
allowing you to yield from an arbitrary call depth rather than having to
"yield from" through a chain of calling generators? Furthermore, there's no
syntactic change except to the bottommost frame doing a co_yield? Does
this capture the major differences?
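
If I've understood correctly, today's PEP 380 version of this looks like
the sketch below, where every intermediate generator has to delegate
explicitly (the names are made up for the illustration):

def leaf():
    value = yield "suspended in leaf"   # the actual suspension point
    return value * 2

def middle():
    return (yield from leaf())          # must delegate explicitly

def top():
    return (yield from middle())        # ...and so must every frame above

g = top()
print(next(g))             # -> suspended in leaf
try:
    g.send(21)
except StopIteration as stop:
    print(stop.value)      # -> 42

Whereas with the coroutines being discussed, only leaf() would need a
suspension primitive like co_yield; middle() and top() could be ordinary
calls.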
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 407: New release cycle and introducing long-term support versions

2012-01-18 Thread Matt Joiner
On Wed, Jan 18, 2012 at 6:55 PM, Georg Brandl  wrote:
> The main reason is changes in the library.  We have been getting complaints
> about the standard library bitrotting for years now, and one of the main
> reasons it's so hard to a) get decent code into the stdlib and b) keep it
> maintained is that the release cycles are so long.  It's a tough thing for
> contributors to accept that the feature you've just implemented will only
> be in a stable release in 16 months.
>
> If the stdlib does not get more reactive, it might just as well be cropped
> down to a bare core, because 3rd-party libraries do everything as well and
> do it before we do.  But you're right that if Python came without batteries,
> the current release cycle would be fine.

I think this is the real issue here. The batteries in Python are so
important because:
1) The stability and quality of 3rd party libraries is not guaranteed.
2) The mechanism used to obtain 3rd party libraries, is not popular or
considered reliable.

Much of the "bitrot" is that standard library modules have been
superseded by third party ones that offer much higher functionality.
Rather than pulling these libraries into the stdlib, it needs to be
trivial to obtain them.

Putting some of these higher quality 3rd party modules into lock step
with Python is an unpopular move, and hampers their future growth.

Off the top of my head, LXML, argparse, and requests are examples of
popular libraries that shouldn't be baked in. In the
long term, it would be nice to see these kinds of libraries dropped
from the standard installation, and made available through the new
distribute package systems etc.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Coroutines and PEP 380

2012-01-18 Thread Matt Joiner
PEP380 and Mark's coroutines could coexist, so I really don't think
"it's too late" matters. Furthermore, PEP380 has utility in its own
right without considering its use for "explicit coroutines".

I would like to see these coroutines considered, but as someone else
mentioned, coroutines via PEP380 enhanced generators have some
interesting characteristics; from my experimentation they feel
monadic.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Coroutines and PEP 380

2012-01-19 Thread Matt Joiner
On Fri, Jan 20, 2012 at 8:41 AM, Greg  wrote:
> Glyph wrote:
>>
>> [Guido] mentions the point that coroutines that can implicitly switch out
>> from under you have the same non-deterministic property as threads: you
>> don't know where you're going to need a lock or lock-like construct to
>> update any variables, so you need to think about concurrency more deeply
>> than if you could explicitly always see a 'yield'.
>
>
> I'm not convinced that being able to see 'yield's will help
> all that much. In any system that makes substantial use of
> generator-based coroutines, you're going to see 'yield from's
> all over the place, from the lowest to the highest levels.
> But that doesn't mean you need a correspondingly large
> number of locks. You can't look at a 'yield' and conclude
> that you need a lock there or tell what needs to be locked.
>
> There's no substitute for deep thought where any kind of
> theading is involved, IMO.
>
> --
> Greg
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com

I wasn't aware that Guido had brought this up, and I believe what he
says to be true. Preemptive coroutines are just a hack around the
GIL, and reduce OS overheads. It's the explicit nature of the enhanced
generators that is their greatest value.

FWIW, I wrote a Python 3 compatible equivalent to gevent (also
greenlet based, and also very similar to the coroutine proposal from
Brett et al.), which didn't really solve the concurrency problems I
hoped it would. There were no guarantees about whether functions would
"switch out", so all the locking and threading issues simply
re-emerged. On top of that, every call had to be non-blocking, which
lost compatibility with any routine that didn't use non-blocking calls
and/or expose its "yield" in the correct way; the only gain was
reduced GIL contention. Overall not worth it.

In short, implicit coroutines are just a GIL workaround that breaks
compatibility for little gain.

Thanks Glyph for those links.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Coroutines and PEP 380

2012-01-21 Thread Matt Joiner
> My concern is that you will end up with vastly more 'yield from's
> than places that require locks, so most of them are just noise.
> If you bite your nails over whether a lock is needed every time
> you see one, they will cause you a lot more anxiety than they
> alleviate.

Not necessarily. The yield from's follow the blocking control flow,
which crops up less often than you might think. Parts of your code
naturally emerge as not requiring blocking behaviour, much as parts of
a Haskell program are identified as requiring the IO monad.

>> Sometimes there's no alternative, but wherever I can, I avoid thinking,
>> especially hard thinking.  This maxim has served me very well throughout my
>> programming career ;-).

I'd replace "hard thinking" with "future confusion" here.

> There are already well-known techniques for dealing with
> concurrency that minimise the amount of hard thinking required.
> You devise some well-behaved abstractions, such as queues, and
> put all your hard thinking into implementing them. Then you
> build the rest of your code around those abstractions. That
> way you don't have to rely on crutches such as explicitly
> marking everything that might cause a task switch, because
> it doesn't matter.

It's my firm belief that this isn't sufficient. If this were true,
then the Python internals could be improved by replacing the GIL with
a series of channels/queues or what have you. State is complex, and
without guarantees of immutability, it's just not practical to try to
wrap every state object in some protocol to be passed back and forth
on queues.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] io module types

2012-01-24 Thread Matt Joiner
Can calls to the C types in the io module be made into module lookups
more akin to how it would work were it written in Python? The C
implementation for io_open invokes the C type objects for FileIO, and
friends, instead of looking them up on the io or _io modules. This
makes it difficult to subclass and/or modify the behaviour of those
classes from Python.

http://hg.python.org/cpython/file/0bec943f6778/Modules/_io/_iomodule.c#l413
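
A small demonstration of the limitation being described; the subclass and
the temporary file are purely for illustration:

import io, _io, os, tempfile

class LoggingFileIO(io.FileIO):
    def read(self, *args, **kwargs):
        print("read", args)
        return super().read(*args, **kwargs)

# Rebinding the names has no visible effect on open(), because the C
# implementation references the FileIO type object directly rather than
# looking it up on the io/_io modules at call time:
io.FileIO = _io.FileIO = LoggingFileIO

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "rb") as f:
    print(type(f.raw))   # still the original FileIO, not LoggingFileIO
os.remove(path)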
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Coroutines and PEP 380

2012-01-24 Thread Matt Joiner
After much consideration, and playing with PEP380, I've changed my
stance on this. Full blown coroutines are the proper way forward.
greenlet doesn't cut it because the Python interpreter isn't aware of
the context switches. Profiling, debugging and tracebacks are
completely broken by this. Stackless would need to be merged, and
that's clearly not going to happen.

I built a basic scheduler and had a go at "enhancing" the stdlib using
PEP380, here are some examples making use of this style:
https://bitbucket.org/anacrolix/green380/src/8f7fdc20a8ce/examples

After realising it was a dead-end, I read up on Mark's ideas, there's
some really good stuff in there:
http://www.dcs.gla.ac.uk/~marks/
http://hotpy.blogspot.com/

If someone can explain what's stopping real coroutines getting into
Python (3.3), that would be great.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-27 Thread Matt Joiner
+0. I think the idea is right, and will help to get good quality
modules in at a faster rate. However it is compensating for a lack of
interface and packaging standardization in the 3rd party module world.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-27 Thread Matt Joiner
> A more normal incantation, as is often the way for packages that became
> parts of the standard library after first being a third party library
> (sometimes under a different name, e.g. simplejson -> json):
>
> try:
>    from __preview__ import thing
> except ImportError:
>    import thing
>
> So no need to target a very specific version of Python.

I think this is suboptimal; having to guess where modules are located,
you end up with this in every module:

try:
    import cjson as json
except ImportError:
    try:
        import simplejson as json
    except ImportError:
        import json

Perhaps the versioned import stuff could be implemented (whatever the
syntax may be), in order that something like this can be done instead:

import regex('__preview__')
import regex('3.4')

Where clearly the __preview__ version makes no guarantees about
interface or implementation whatsoever.

etc.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-27 Thread Matt Joiner
On Fri, Jan 27, 2012 at 12:26 PM, Alex  wrote:
> I think a significantly healthier process (in terms of maximizing feedback and
> getting something into it's best shape) is to let a project evolve naturally 
> on
> PyPi and in the ecosystem, give feedback to it from an inclusion perspective,
> and then include it when it becomes ready on it's own merits. The counter
> argument to  this is that putting it in the stdlib gets you signficantly more
> eyeballs (and hopefully more feedback, therefore), my only response to this 
> is:
> if it doesn't get eyeballs on PyPi I don't think there's a great enough need 
> to
> justify it in the stdlib.

Strongly agree.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-27 Thread Matt Joiner
FWIW I'm now -1 for this idea. Stronger integration with PyPI and
packaging systems is much preferable. Python core public releases are
no place for testing.

On Sat, Jan 28, 2012 at 2:42 AM, Matt Joiner  wrote:
> On Fri, Jan 27, 2012 at 12:26 PM, Alex  wrote:
>> I think a significantly healthier process (in terms of maximizing feedback 
>> and
>> getting something into it's best shape) is to let a project evolve naturally 
>> on
>> PyPi and in the ecosystem, give feedback to it from an inclusion perspective,
>> and then include it when it becomes ready on it's own merits. The counter
>> argument to  this is that putting it in the stdlib gets you signficantly more
>> eyeballs (and hopefully more feedback, therefore), my only response to this 
>> is:
>> if it doesn't get eyeballs on PyPi I don't think there's a great enough need 
>> to
>> justify it in the stdlib.
>
> Strongly agree.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-28 Thread Matt Joiner
> +1. I'd much rather just use the module from PyPI.
>
> It would be good to have a practical guide on how to manage the
> transition from third-party to core library module though. A PEP with
> a list of modules earmarked for upcoming inclusion in the standard
> library (and which Python version they're intended to be included in)
> might focus community effort on using, testing and fixing modules
> before they make it into core and fixing becomes a lot harder.

+1 for your +1, and for "earmarking". That's the word I was looking for
when I instead chose "advocacy".
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-28 Thread Matt Joiner
> __preview__ would fall into this category as well). And yet I have
> essentially no means of gaining access to any 3rd party modules,
> whether they are packaged by the distro or obtained from PyPI.  (And
> "build your own" isn't an option in many cases, if only because a C
> compiler may well not be available!) This is essentially due to
> corporate inertia and bogged down "do-nothing" policies rather than
> due dilligence or supportability concerns. But it is a reality for me
> (and many others, I suspect).
>
> Having said this, of course, the same corporate inertia means that
> Python 3.3 is a pipe-dream for me in those environments for many years
> yet. So ignoring them may be reasonable.

You clearly want access to external modules sooner. A preview
namespace addresses this indirectly. The separated stdlib versioning
concept is far superior for this use case.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-29 Thread Matt Joiner
I think advocacy of 3rd party modules would start with modules such as
ipaddr, requests, and regex: link directly to them from the Python core
documentation, while requesting that they hold a successful moratorium in
order to be included in a later standard module release.
On Jan 30, 2012 10:47 AM, "Nick Coghlan"  wrote:

> On Mon, Jan 30, 2012 at 8:44 AM, Barry Warsaw  wrote:
> > Nothing beats people beating on it heavily for years in production code
> to
> > shake things out.  I often think a generic answer to "did I get the API
> right"
> > could be "no, but it's okay" :)
>
> Heh, my answer to complaints about the urrlib (etc) APIs being
> horrendous in the modern web era is to point out that they were put
> together in an age where "web" mostly meant "unauthenticated HTTP GET
> requests".
>
> They're hard to use for modern authentication protocols because they
> *predate* widespread use of such things...
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] threading.Semaphore()'s counter can become negative for non-ints

2012-01-30 Thread Matt Joiner
It's also potentially lossy if you increment and decrement until
integer precision is lost. My vote is for an int type check. No casting.
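
A quick illustration of the precision loss, assuming the counter were
allowed to become a float:

# beyond 2**53 a float can no longer represent every integer, so an
# increment can silently vanish and a release() would effectively be lost
counter = float(2**53)
print(counter + 1 == counter)   # True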
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Store timestamps as decimal.Decimal objects

2012-01-30 Thread Matt Joiner
Sounds good, but I also prefer Alexander's method. The type information is
already encoded in the class object. This way you don't need to maintain a
mapping of strings to classes, and other functions/third parties can join
in the fun without needing access to the latest canonical mapping. Lastly,
there will be no confusion or contention over duplicate keys.
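
A rough sketch of the difference; the function and registry names here are
invented just for the comparison:

import decimal

# string-keyed: every new format needs an entry in one canonical registry
_FORMATS = {"float": float, "decimal": decimal.Decimal}

def time_by_name(raw, fmt="float"):
    return _FORMATS[fmt](raw)

# class-based: callers pass the type itself, so third parties can supply
# their own conversions without touching any central mapping
def time_by_type(raw, fmt=float):
    return fmt(raw)

print(time_by_type("1327611708.988419", decimal.Decimal))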
On Jan 31, 2012 10:32 AM, "Victor Stinner" 
wrote:

> Hi,
>
> In issues #13882 and #11457, I propose to add an argument to functions
> returning timestamps to choose the timestamp format. Python uses float
> in most cases whereas float is not enough to store a timestamp with a
> resolution of 1 nanosecond. I added recently time.clock_gettime() to
> Python 3.3 which has a resolution of a nanosecond. The (first?) new
> timestamp format will be decimal.Decimal because it is able to store
> any timestamp in any resolution without loosing bits. Instead of
> adding a boolean argument, I would prefer to support more formats. My
> last patch provides the following formats:
>
>  - "float": float (used by default)
>  - "decimal": decimal.Decimal
>  - "datetime": datetime.datetime
>  - "timespec": (sec, nsec) tuple # I don't think that we need it, it
> is just another example
>
> The proposed API is:
>
>  time.time(format="datetime")
>  time.clock_gettime(time.CLOCK_REALTIME, format="decimal")
>  os.stat(path, timestamp="datetime)
>  etc.
>
> This API has an issue: importing the datetime or decimal object is
> implicit, I don't know if it is really an issue. (In my last patch,
> the import is done too late, but it can be fixed, it is not really a
> matter.)
>
> Alexander Belopolsky proposed to use
> time.time(format=datetime.datetime) instead.
>
> --
>
> The first step would be to add an argument to functions returning
> timestamps. The second step is to accept these new formats (Decimal?)
> as input, for datetime.datetime.fromtimestamp() and os.utime() for
> example.
>
> (Using decimal.Decimal, we may remove os.utimens() and use the right
> function depending on the timestamp resolution.)
>
> --
>
> I prefer Decimal over a dummy tuple like (sec, nsec) because you can
> do arithmetic on it: t2-t1, a+b, t/k, etc. It stores also the
> resolution of the clock: time.time() and time.clock_gettime() have for
> example different resolution (sec, ms, us for time.time() and ns for
> clock_gettime()).
>
> The decimal module is still implemented in Python, but there is
> working implementation in C which is much faster. Store timestamps as
> Decimal can be a motivation to integrate the C implementation :-)
>
> --
>
> Examples with the time module:
>
> $ ./python
> Python 3.3.0a0 (default:52f68c95e025+, Jan 26 2012, 21:54:31)
> >>> import time
> >>> time.time()
> 1327611705.948446
> >>> time.time('decimal')
> Decimal('1327611708.988419')
> >>> t1=time.time('decimal'); t2=time.time('decimal'); t2-t1
> Decimal('0.000550')
> >>> t1=time.time('float'); t2=time.time('float'); t2-t1
> 5.9604644775390625e-06
> >>> time.clock_gettime(time.CLOCK_MONOTONIC, 'decimal')
> Decimal('1211833.389740312')
> >>> time.clock_getres(time.CLOCK_MONOTONIC, 'decimal')
> Decimal('1E-9')
> >>> time.clock()
> 0.12
> >>> time.clock('decimal')
> Decimal('0.12')
>
> Examples with os.stat:
>
> $ ./python
> Python 3.3.0a0 (default:2914ce82bf89+, Jan 30 2012, 23:07:24)
> >>> import os
> >>> s=os.stat("setup.py", timestamp="datetime")
> >>> s.st_mtime - s.st_ctime
> datetime.timedelta(0)
> >>> print(s.st_atime - s.st_ctime)
> 52 days, 1:44:06.191293
> >>> os.stat("setup.py", timestamp="timespec").st_ctime
> (1323458640, 702327236)
> >>> os.stat("setup.py", timestamp="decimal").st_ctime
> Decimal('1323458640.702327236')
>
> Victor
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Store timestamps as decimal.Decimal objects

2012-01-31 Thread Matt Joiner
Nick mentioned using a single type and converting upon return; I'm starting
to like that more. A limited set of time formats is mostly arbitrary, and
there will always be a performance hit deciding which type to return.

The goal here is to allow high precision timings with minimal cost. A
separate module, and agreement on what the best performing high precision
type is, are I think the best way forward.
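
A sketch of the "single high precision type, convert on return" idea; the
function names and the choice of integer nanoseconds as the internal
representation are only assumptions for the example:

from decimal import Decimal

def _read_clock_ns():
    # placeholder for whatever the highest-resolution source provides
    return 1327611708988419000

def timestamp(convert=float):
    ns = _read_clock_ns()
    return convert(Decimal(ns) / Decimal(10**9))

print(timestamp())           # float: cheap and convenient, but lossy
print(timestamp(Decimal))    # full precision preserved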
On Feb 1, 2012 8:47 AM, "Victor Stinner" 
wrote:

> > (I removed the timespec format, I consider that we don't need it.)
> >
> > Rather, I guess you removed it because it didn't fit the "types as flags"
> > pattern.
>
> I removed it because I don't like tuple: you cannot do arithmetic on
> tuple, like t2-t1. Print a tuple doesn't give you a nice output. It is
> used in C because you have no other choice, but in Python, we can do
> better.
>
> > As I said in another message, another hint that this is the wrong API
> design:
> > Will the APIs ever support passing in types other than these five?
>  Probably
> > not, so I strongly believe they should not be passed in as types.
>
> I don't know if we should only support 3 types today, or more, but I
> suppose that we will add more later (e.g. if datetime is replaced by
> another new and better datetime module).
>
> You mean that we should use a string instead of type, so
> time.time(format="decimal")? Or do something else?
>
> Victor
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Store timestamps as decimal.Decimal objects

2012-01-31 Thread Matt Joiner
Analysis paralysis commence. +1 for separate module using decimal.
On Feb 1, 2012 1:44 PM, "PJ Eby"  wrote:

> On Tue, Jan 31, 2012 at 7:35 PM, Nick Coghlan  wrote:
>
>> Such a protocol can easily be extended to any other type - the time
>> module could provide conversion functions for integers and float
>> objects (meaning results may have lower precision than the underlying
>> system calls), while the existing "fromtimestamp" APIs in datetime can
>> be updated to accept the new optional arguments (and perhaps an
>> appropriate class method added to timedelta, too). A class method
>> could also be added to the decimal module to construct instances from
>> integer components (as shown above), since that method of construction
>> isn't actually specific to timestamps.
>>
>
> Why not just make it something like __fromfixed__() and make it a standard
> protocol, implemented on floats, ints, decimals, etc.  Then the API is just
> "time.time(type)", where type is any object providing a __fromfixed__
> method.  ;-)
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 409 - final?

2012-02-01 Thread Matt Joiner
"raise ... from None" seems pretty "in band". A NoException class could
have many other uses and would leave no confusion about intent.
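
For reference, the form PEP 409 proposes; the exception class and the
function are invented for the example:

class KeyNotFound(Exception):
    pass

def lookup(mapping, key):
    try:
        return mapping[key]
    except KeyError:
        # "from None" suppresses the implicit exception context; that
        # sentinel use of None is the "in band" signalling meant above
        raise KeyNotFound(key) from None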
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-02-03 Thread Matt Joiner
Woohoo! :)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-02-04 Thread Matt Joiner
+1
On Feb 4, 2012 8:37 PM, "Paul Moore"  wrote:
>
> On 4 February 2012 11:25, Steven D'Aprano  wrote:
> > It strikes me that it would be helpful sometimes to programmatically
> > recognise "preview" modules in the std lib. Could we have a
recommendation
> > in PEP 8 that such modules should have a global variable called
PREVIEW, and
> > non-preview modules should not, so that the recommended way of telling
them
> > apart is with hasattr(module, "PREVIEW")?
>
> In what situation would you want that when you weren't referring to a
> specific module? If you're referring to a specific module and you
> really care, just check sys.version. (That's annoying and ugly enough
> that it'd probably make you thing about why you are doing it - I
> cannot honestly think of a case where I'd actually want to check in
> code if a module is a preview - hence my question as to what your use
> case is).
>
> Feels like YAGNI to me.
> Paul.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 394 request for pronouncement (python2 symlink in *nix systems)

2012-02-15 Thread Matt Joiner
+1 for using symlinks where possible. In deploying Python to different
operating systems and filesystems I've often had to run a script to "fix"
the hardlinking done by make install because the deployment mechanism or
system couldn't be trusted to do the right thing with respect to minimising
installation size. Symlinks are total win when disk use is a concern, and
make intent clear. I'm not aware of any mainstream systems that don't
support them.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP czar for PEP 3144?

2012-02-20 Thread Matt Joiner
On Mon, Feb 20, 2012 at 11:27 PM, Antoine Pitrou  wrote:
> IMHO, nesting without a good, consistent, systematic categorization
> leads to very unpleasant results (e.g. "from urllib.request import
> urlopen").
>
> Historically, our stdlib has been flat and I think it should stay so,
> short of redoing the whole hierarchy.

I concur. Arbitrary nesting should be avoided.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 413: Faster evolution of the Python Standard Library

2012-02-24 Thread Matt Joiner
I think every minor release should be fully supported. The current rate of
change is very high and there's a huge burden on implementers and
production users to keep up, so much so that upgrading is undesirable
except for serious enthusiasts.

Include just the basics and CPython specific modules in the core release
and version the stdlib separately. The stdlib should be supported such that
it can be installed to an arbitrary version of Python.

Better yet, I'd like to see the stdlib become a list of vetted external
libraries that meet some requirements on usefulness, stability and
compatibility (set out in a PEP) and that get cut at regular intervals.
This takes the burden away from core, improves innovation, allows for
different implementations, and ensures that the Python package management
system is actually useful.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 413: Faster evolution of the Python Standard Library

2012-02-24 Thread Matt Joiner
Why not cut the (external) stdlib before each minor release?

Testing new language features is not the role of a public release; this is
no reason to require ownership of a module.

Evidently some modules have to ship with core if they are required (sys) or
expose internals (os, gc). Others are clearly extras (asyncore, asynchat,
subprocess, unittest, select).

There are so many third party modules languishing because inferior forms
exist in the stdlib, and there is no centralized method for their
recommendation and discovery. Breaking out optional parts of the stdlib is
an enabling step towards addressing this. I would suggest Haskell, node.js
and golang as examples of stdlibs that are minimal enough to define basic
idiomatic interfaces but still allow and encourage extension.
On Feb 25, 2012 10:53 AM, "Brett Cannon"  wrote:

>
>
> On Fri, Feb 24, 2012 at 21:08, Matt Joiner  wrote:
>
>> I think every minor release should be fully supported. The current rate
>> of change is very high and there's a huge burden on implementers and
>> production users to keep up, so much so that upgrading is undesirable
>> except for serious enthusiasts.
>>
>> Include just the basics and CPython specific modules in the core release
>> and version the stdlib separately. The stdlib should be supported such that
>> it can be installed to an arbitrary version of Python.
>>
>
> That idea has been put forth and shot down. The stdlib has to be tied to
> at least some version of Python just like any other project. Plus the
> stdlib is where we try out new language features to make sure they make
> sense. Making it a separate project is not that feasible.
>
>
>>  Better yet I'd like to see the stdlib become a list of vetted external
>> libraries that meet some requirements on usefulness, stability and
>> compatibility (PEP), that get cut at regular intervals. This takes the
>> burden away from core, improves innovation, allows for different
>> implementations, and ensures that the Python package management system is
>> actually useful.
>>
>
> That's been called a sumo release and proposed before, but no one has
> taken the time to do it (although the 3rd-party releases of Python somewhat
> take this view). Thinning out the stdlib in favour of the community
> providing solutions is another can of worms which does not directly impact
> the discussion of how to handle stdlib releases unless you are pushing to
> simply drop the stdlib which is not possible as Python itself depends on it.
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Versioning proposal: syntax.stdlib.bugfix

2012-02-25 Thread Matt Joiner
Chrome does something similar; all digits keep rising in that scheme.
However, in your examples, how does one identify whether bug fixes are to
the stdlib or the interpreter?
On Feb 26, 2012 10:07 AM, "Terry Reedy"  wrote:

> We have two similar proposals, PEPs 407 and 413, to speed up the release
> of at least library changes. To me, both have major problems with version
> numbering.
>
> I think the underlying problem is starting with a long-term fixed leading
> '3', which conveys no information about current and future changes (at
> least for another decade).
>
> So I propose for consideration that we use the first digit to indicate a
> version of core python with fixed grammar/syntax and corresponding
> semantics. I would have this be stable for at least two years. It seems
> that most current syntax proposals amount to duplication of current
> function to suite someone's or some people's stylistic preference. My
> current view is that current syntax in mostly good enough, the
> implementation thereof is close to bug-free, and we should think carefully
> about changes.
>
> We could then use the second digit to indicate library version. The .0
> library version would be for a long-term support version. The library
> version could change every six months, but I would not necessarily fix it
> at any particular interval. If we have some important addition or upgrade
> at four months, release it. If we need another month to include an
> important change, perhaps wait.
>
> The third digit would be for initial (.0) and bugfix releases, as at
> present. Non .0 bugfix releases would mostly be for x.0 long-term
> syntax+library versions. x.(y!=0).0 library-change-only releases would only
> get x.(y!=0).1 bugfix releases on an 'emergency' basis.
>
> How this would work:
>
> Instead of 3.3.0, release 4.0.0. That would be followed by 4.0.1, 4.0.2,
> etc, bugfixes, however often we feel like it, until 5.0.0 is released.
>
> 4.0.0 would also be followed by 4.1.0 with updated stdlib in about 6
> months, then barring mistakes, 4.2.0, etc, again until 5.0.0.
>
> A variation of this proposal would be to prefix '3.' to core.lib.fix. I
> disfavor that for 3 reasons.
> 1. It is not needed to indicate 'not Python 2' as *any* leading digit
> greater than 2 says the same.
> 2. It makes for a more awkward 4 level number.
> 3. It presupposes a 3 to 4 transition something like the 2 to 3
> transition. However, I am dubious about for more than one reason (another
> topic for another post).
>
> --
> Terry Jan Reedy
>
> __**_
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/**mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/**mailman/options/python-dev/**
> anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status regarding Old vs. Advanced String Formating

2012-02-26 Thread Matt Joiner
Big +1
On Feb 26, 2012 4:41 PM, "Eli Bendersky"  wrote:

>
> On Sat, Feb 25, 2012 at 12:20, "Martin v. Löwis" wrote:
>
>> > I find that strange, especially for an expert Python dev. I, a newbie,
>> > find it far friendlier (and easier for a new programmer to grasp).
>> > Maybe it's because I use it all the time, and you don't?
>>
>> That is most likely the case. You learn by practice. For that very
>> reason, the claim "and easier for a new programmer to grasp" is
>> difficult to prove. It was easier for *you*, since you started using
>> it, and then kept using it. I don't recall any particular obstacles
>> learning % formatting (even though I did for C, not for C++).
>> Generalizing that it is *easier* is invalid: you just didn't try
>> learning that instead first, and now you can't go back in a state
>> where either are new to you.
>>
>> C++ is very similar here: they also introduced a new way of output
>> (iostreams, and << overloading). I used that for a couple of years,
>> primarily because people said that printf is "bad" and "not object-
>> oriented". I then recognized that there is nothing wrong with printf
>> per so, and would avoid std::cout in C++ these days, in favor of
>> std::printf (yes, I know that it does have an issue with type safety).
>>
>
> Not to mention that the performance of iostreams is pretty bad, to the
> extent that some projects actively discourage using them in favor of either
> C-style IO (fgets, printf, etc.) or custom IO implementations. This is
> marginally off-topic, although it does show that an initial thought of
> deprecating an existing functionality for new one doesn't always work out
> in the long run, even for super-popular languages like C++.
>
> Eli
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] State of PEP-3118 (memoryview part)

2012-02-26 Thread Matt Joiner
+1 for won't fix.
On Feb 26, 2012 9:46 PM, "Antoine Pitrou"  wrote:

> On Sun, 26 Feb 2012 14:27:21 +0100
> Stefan Krah  wrote:
> >
> > The underlying problems with memoryview were intricate and required
> > a long discussion (issue #10181) that led to a complete rewrite
> > of memoryobject.c.
> >
> >
> > We have several options with regard to 2.7 and 3.2:
> >
> >   1) Won't fix.
>
> Given the extent of the rewrite, this one has my preference.
>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives

2012-03-12 Thread Matt Joiner
Definitely think some library vetting needs to occur. Superior alternatives
do exist and are difficult to find and choose from. Stuff like LXML,
Requests, Tornado are clear winners.

The more of this that is done externally (i.e. via PyPI), the better. I
still think a set of requirements for "official approval" would be good.
This could outline things like requiring that certain stable Python
versions are supported, interface stability, a demonstrated user base,
documentation, etc.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives

2012-03-13 Thread Matt Joiner
On Mar 14, 2012 5:27 AM, "Antoine Pitrou"  wrote:
>
> On Tue, 13 Mar 2012 14:16:40 -0700
> Guido van Rossum  wrote:
>
> > On Tue, Mar 13, 2012 at 12:49 PM, Terry Reedy  wrote:
> > > Authors of separately maintained packages are, from our viewpoint, as
> > > eligible to help with tracker issues as anyone else, even while they
> > > continue work on their external package. Some of them are more likely
than
> > > most contributors to have the knowledge needed for some particular
issues.
> >
> > This is a good idea. I was chatting w. Senthil this morning about
> > adding improvements to urllib/request.py based upon ideas from
> > urllib3, requests, httplib2 (?), and we came to the conclusion that it
> > might be a good idea to let those packages' authors review the
> > proposed stdlib improvements.
>
> We don't have any provisions against reviewal by third-party
> developers already. I think the main problem (for us, of course) is that
> these people generally aren't interested enough to really dive in
> stdlib patches and proposals.
>
> For example, for the ssl module, I have sometimes tried to involve
> authors of third-party packages such as pyOpenSSL (or, IIRC, M2Crypto),
> but I got very little or no reviewing.

Rather than indicating apathy on the part of third party developers, this
might be a sign that core Python is unapproachable or not worth the effort.

For instance, I have several one-line patches languishing; I can't imagine
how disappointing it would be to have significantly larger patches ignored,
but it happens.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives

2012-03-13 Thread Matt Joiner
Thanks for the suggestions.
On Mar 14, 2012 12:03 PM, "Eli Bendersky"  wrote:

> > Rather than indicating apathy on the party of third party developers,
> this
> > might be a sign that core Python is unapproachable or not worth the
> effort.
> >
> > For instance I have several one line patches languishing, I can't imagine
> > how disappointing it would be to have significantly larger patches
> ignored,
> > but it happens.
> >
>
> A one-line patch for a complex module or piece of code may require
> much more than looking at that single line to really review. I hope
> you understand that.
>
> That said, if you find any issues in the bug tracker that in your
> opinion need only a few minutes of attention from a core developer,
> feel free to send a note to the mentorship mailing list. People
> sometimes come there asking for precisely this thing (help reviewing a
> simple patch they submitted), and usually get help quickly if their
> request is justified.
>
> Eli
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-14 Thread Matt Joiner
I have some observations regarding this:

Victor's existing time.monotonic and time.wallclock make use of
QueryPerformanceCounter and CLOCK_MONOTONIC_RAW where available. Both of
these are hardware-based counters; their monotonicity is just a
convenient property of the timer sources. Furthermore, time values can
actually vary depending on the processor the call is served on.
time.hardware()? time.monotonic_raw()?

There are bug reports on Linux that CLOCK_MONOTONIC isn't always
monotonic. This is why CLOCK_MONOTONIC_RAW was created. There's also
the issue of time leaps (forward), which also isn't a problem with the
latter form. time.monotonic(raw_only=False)?

The real value of "monotonic" timers is that they don't leap
backwards, and preferably don't leap forwards. Whether they are
absolute is of no consequence. I would suggest that the API reflect
this, and that more specific time values be obtained using the proper
raw syscall wrapper (like time.clock_gettime) if required.
time.relative(strict=False)?

The ultimate use of the function whose name is being established is in
timeouts and relative timings.

Where an option is present, it disallows fallbacks like
CLOCK_MONOTONIC and other weaker forms:
 * time.hardware(fallback=True) -> reflects the source of the timing
   impeccably. Alerts users to possible affinity issues.
 * time.monotonic_raw() -> a bit Linux specific...
 * time.relative(strict=False) -> matches the use case. A warning
   could be added regarding hardware timings.
 * time.monotonic(raw_only=False) -> closest to the existing
   implementation. The keyword name, I think, is better.
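
To make the options concrete, a hedged sketch of the relative/strict idea.
The name and flag are only proposals, and clock_gettime /
CLOCK_MONOTONIC_RAW are only exposed on some platforms in 3.3:

import time

def relative(strict=False):
    # strict=True disallows the weaker fallbacks entirely
    try:
        return time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
    except (AttributeError, OSError):
        if strict:
            raise
    try:
        return time.clock_gettime(time.CLOCK_MONOTONIC)
    except (AttributeError, OSError):
        pass
    return time.time()   # weakest fallback: subject to clock adjustment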
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Docs of weak stdlib modules should encourage exploration of 3rd-party alternatives

2012-03-14 Thread Matt Joiner
> Can you give a pointer to these one-liners?
> Once a patch gets a month old or older, it tends to disappear from
> everyone's radar unless you somehow "ping" on the tracker, or post a
> message to the mailing-list.

All of these can be verified with a few minutes of checking the
described code paths.

http://bugs.python.org/issue13839
http://bugs.python.org/issue13872
http://bugs.python.org/issue12684
http://bugs.python.org/issue13694

Thanks
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-14 Thread Matt Joiner
On Thu, Mar 15, 2012 at 12:22 AM, Guido van Rossum  wrote:
> I have a totally different observation. Presumably the primary use
> case for these timers is to measure real time intervals for the
> purpose of profiling various operations. For this purpose we want them
> to be as "steady" as possible: tick at a constant rate, don't jump
> forward or backward. (And they shouldn't invoke the concept of "CPU"
> time -- we already have time.clock() for that, besides it's often
> wrong, e.g. you might be measuring some sort of I/O performance.) If
> this means that a second as measured by time.time() is sometimes not
> the same as a second measured by this timer (due to time.time()
> occasionally jumping due to clock adjustments), that's fine with me.
> If this means it's unreliable inside a VM, well, it seems that's a
> property of the underlying OS timer, and there's not much we can do
> about it except letting a knowledgeable user override the timer user.
> As for names, I like Jeffrey's idea of having "steady" in the name.

In that case I'd suggest either time.hardware(strict=True) or
time.steady(strict=True), since the only timers exposed natively that
are both high resolution and steady are on the hardware. A warning
about CPU affinity is also still wise methinks.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-14 Thread Matt Joiner
FWIW the name is quite important, because these kinds of timings matter,
so I think it's worth the effort.

> - By default, it should fall back to time.time if a better source is
>  not available, but there should be a flag that can disable this
>  fallback for users who really *need* a monotonic/steady time source.

Agreed. As Guido mentioned, some platforms might not be able to access
hardware timers, so falling back should be the default, lest unaware
users trigger unexpected errors.

> - Proposed names for the function:
>  * monotonic

Doesn't indicate that the timing is also prevented from leaping forward.

>  * steady_clock

I think the use of "clock" might suggest CPU time to a doc-skimming user.
"clock" is overloaded here.

> For the flag name, I'm -1 on "monotonic" -- it sounds like a flag to
> decide whether to use a monotonic time source always or never, while
> it actually decides between "always" and "sometimes". I think "strict"
> is nicer than "fallback", but I'm fine with either one.

I agree; "strict" fits in with existing APIs.

I think time.hardware() and time.steady() are still okay here.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-14 Thread Matt Joiner
I also can live with steady, with strict for the flag.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-14 Thread Matt Joiner
Victor, I think that steady can always be monotonic; there are enough time
sources to ensure this on the platforms I am aware of. Strict in this sense
refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs
CLOCK_MONOTONIC_RAW.

Non-monotonicity of this call should be considered a bug. Strict would be
used for profiling, where forward leaps would disqualify the timing.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-15 Thread Matt Joiner
On Mar 15, 2012 4:23 PM, "Paul Moore"  wrote:
>
> On 15 March 2012 01:58, Matt Joiner  wrote:
> > Victor, I think that steady can always be monotonic, there are time
sources
> > enough to ensure this on the platforms I am aware of. Strict in this
sense
> > refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs
> > CLOCK_MONOTONIC_RAW.
>
> I agree - Kristján pointed out that you can ensure that backward jumps
> never occur by implementing a cache of the last value.

Without knowing more, either QPC was buggy on his platform, or he didn't
account for processor affinity (QPC derives from a per-processor counter).

>
> > Non monotonicity of this call should be considered a bug.
>
> +1
>
> > Strict would be used for profiling where forward leaps would disqualify
the timing.
>
> I'm baffled as to how you even identify "forward leaps". In relation
> to what? A more accurate time source? I thought that by definition
> this was the most accurate time source we have!

Monotonic clocks are not necessarily hardware based, and may be adjusted
forward by NTP.

>
> +1 on a simple time.steady() with guaranteed monotonicity and no flags
> to alter behaviour.
>
> Paul.

I don't mind since I'll be using it for timeouts, but clearly the strongest
possible guarantee should be made. If forward leaps are okay, then by
definition the timer is monotonic but not steady.
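
For reference, the "cache of the last value" approach mentioned above could
be as simple as the sketch below; note that clamping only hides backward
jumps, it does nothing about forward leaps:

import time

_last = None

def never_backwards():
    # clamp against the previously returned value so callers never see
    # the clock move backwards, whatever the underlying source does
    global _last
    now = time.time()   # stand-in for whichever clock is actually chosen
    if _last is not None and now < _last:
        now = _last
    _last = now
    return now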
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-15 Thread Matt Joiner
+1. I now prefer time.monotonic(), no flags. It attempts to be as high
precision as possible and guarantees never to jump backwards. Russell's
comment is right: the only steady sources are from hardware, and these are
too equipment- and operating-system-specific (for this call, anyway).
On Mar 16, 2012 3:23 AM, "Russell E. Owen"  wrote:

> In article
> ,
>  Kristján Valur Jónsson  wrote:
>
> > What does "jumping forward" mean?  That's what happens with every clock
> at
> > every time quantum.  The only effect here is that this clock will be
> slightly
> > noisy, i.e. its precision becomes worse.  On average it is still correct.
> > Look at the use cases for this function
> > 1) to enable timeouts for certaing operations, like acquiring locks:
> >   Jumping backwards is bad, because that may cause infinite wait
> time.  But
> > jumping forwards is ok, it may just mean that your lock times out a bit
> early
> > 2) performance measurements:
> >   If you are running on a platform with a broken runtime clock, you
> are not
> > likely to be running performance measurements.
> >
> > Really, I urge you to skip the "strict" keyword.  It just adds confusion.
> > Instead, let's just give the best monotonic clock we can do which doesn't
> > move backwards.
> > Let's just provide a "practical" real time clock with high resolution
> that is
> > appropriate for providing timeout functionality and so won't jump
> backwards
> > for the next 20 years.  Let's simply point out to people that it may not
> be
> > appropriate for high precision timings on old and obsolete hardware and
> be
> > done with it.
>
> I agree. I prefer the name time.monotonic with no flags. It will suit
> most use cases. I think supplying truly steady time is a low level
> hardware function (e.g. buy a GPS timer card) with a driver.
>
> -- Russell
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-15 Thread Matt Joiner
Windows also has this, albeit coarse-grained and 32-bit. I don't think
"ticks" reflects the reason why using the timer is desirable.

monotonic_time seems reasonable; there's no reason to insist on short names
when users can import it however they like.
On Mar 16, 2012 7:20 AM, "Steven D'Aprano"  wrote:

> Terry Reedy wrote:
>
>> On 3/15/2012 5:27 PM, Alexander Belopolsky wrote:
>>
>>> On Thu, Mar 15, 2012 at 3:55 PM, Matt Joiner
>>>  wrote:
>>>
>>>> +1. I now prefer time.monotonic(), no flags.
>>>>
>>>
>>> Am I alone thinking that an adjective is an odd choice for a function
>>> name?
>>>
>>
>> I would normally agree, but in this case, it is a function of a module
>> whose short name names what the adjective is modifying. I expect that this
>> will normally be called with the module name.
>>
>>  I think monotonic_clock or monotonic_time would be a better option.
>>>
>>
>> time.monotonic_time seems redundant.
>>
>
> Agreed. Same applies to "steady_time", and "steady" on its own is weird.
> Steady what?
>
> While we're bike-shedding, I'll toss in another alternative. Early Apple
> Macintoshes had a system function that returned the time since last reboot
> measured in 1/60th of a second, called "the ticks".
>
> If I have understood correctly, the monotonic timer will have similar
> properties: guaranteed monotonic, as accurate as the hardware can provide,
> but not directly translatable to real (wall-clock) time. (Wall clocks
> sometimes go backwards.)
>
> The two functions are not quite identical: Mac "ticks" were 32-bit
> integers, not floating point numbers. But the use-cases seem to be the same.
>
> time.ticks() seems right as a name to me. It suggests a steady heartbeat
> ticking along, without making any suggestion that it returns "the time".
>
>
>
> --
> Steven
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] cpython: Issue #10278: Add an optional strict argument to time.steady(), False by default

2012-03-20 Thread Matt Joiner
I believe we should make a monotonic_time function that ensures monotonicity
and be done with it. Forward steadiness cannot be guaranteed. No parameters.
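
Roughly what I mean is a variation on the sketch quoted below, with a lock so
that concurrent callers can't observe a backward step either (a sketch only;
time.time() stands in for whatever clock the OS actually provides):

import threading
import time

_lock = threading.Lock()
_last = None

def monotonic_time():
    """Best-effort monotonic clock: clamps any backward step of the OS clock."""
    global _last
    with _lock:
        t = time.time()  # stand-in for the best clock the OS provides
        if _last is not None and t < _last:
            t = _last    # never report a value lower than the previous one
        _last = t
        return t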
On Mar 20, 2012 2:56 PM, "Steven D'Aprano"  wrote:

> On Mon, Mar 19, 2012 at 01:35:49PM +0100, Victor Stinner wrote:
>
> > Said differently: time.steady(strict=True) is always monotonic (*),
> > whereas time.steady() may or may not be monotonic, depending on what
> > is avaiable.
> >
> > time.steady() is a best-effort steady clock.
> >
> > (*) time.steady(strict=True) relies on the OS monotonic clock. If the
> > OS provides a "not really monotonic" clock, Python cannot do better.
>
> I don't think that is true. Surely Python can guarantee that the clock
> will never go backwards by caching the last value. A sketch of an
> implementation:
>
> def monotonic(_last=[None]):
>     t = system_clock()  # best effort, but sometimes goes backwards
>     if _last[0] is not None:
>         t = max(t, _last[0])
>     _last[0] = t
>     return t
>
> Overhead if done in Python may be excessive, in which case do it in C.
>
> Unless I've missed something, that guarantees monotonicity -- it may not
> advance from one call to the next, but it will never go backwards.
>
> There's probably even a cleverer implementation that will not repeat the
> same value more than twice in a row. I leave that as an exercise :)
>
> As far as I can tell, "steady" is a misnomer. We can't guarantee that
> the timer will tick at a steady rate. That will depend on the quality of
> the hardware clock.
>
>
> --
> Steven
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Playing with a new theme for the docs

2012-03-21 Thread Matt Joiner
Turn your monitor to portrait or make the window smaller :)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Setting up a RHEL6 buildbot

2012-03-22 Thread Matt Joiner
The 24-core machine at my last workplace could configure and make the tip
in 45 seconds from a clean checkout.

Lots of cores? :)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-23 Thread Matt Joiner
Yes, call it what it is: monotonic or monotonic_time, because that's what I'm
using it for. No flags.

I've followed this thread throughout, and I'm still not sure that "steady"
gives the real guarantees it claims. It's trying to be too much. Existing
bugs complain about backward jumps and demand a clock that doesn't do this.
The function should guarantee monotonicity only, and not get
overcomplicated.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Playing with a new theme for the docs, iteration 2

2012-03-25 Thread Matt Joiner
Is nice, yes?! When I shrink the nav bar, then embiggen it again, the text
centers vertically. It's in the wrong place. The new theme is very minimal;
perhaps a new color should be chosen. We've done green, what about orange,
brown or blue?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Playing with a new theme for the docs, iteration 2

2012-03-25 Thread Matt Joiner
Not sure if you addressed this in your answers to other comments...

Scroll down the page. Minimize the nav bar on the left. Bring it back
out again. Now the text in the nav bar permanently starts at an offset
from the top of the page.

On Sun, Mar 25, 2012 at 7:44 PM, Matt Joiner  wrote:
> Is nice, yes?! When I shrink the nav bar, then embiggen it again, the text
> centers vertically. It's in the wrong place. The new theme is very minimal;
> perhaps a new color should be chosen. We've done green, what about orange,
> brown or blue?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Playing with a new theme for the docs, iteration 2

2012-03-25 Thread Matt Joiner
FWIW it doesn't hurt to err on the side of what worked. I generally have
issues with low contrast, and the current stable design is very good in this
respect.

I've just built the docs from tip, and the nav bar issue is fixed. Nicely
done.

I also don't see any reason to backport theme changes. +0
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Playing with a new theme for the docs, iteration 2

2012-03-26 Thread Matt Joiner
The text in the nav bar is too small, particularly in the search box.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Drop the new time.wallclock() function?

2012-03-26 Thread Matt Joiner
Inside time.steady, there are two different clocks trying to get out.

I think this steady business should be removed sooner rather than later.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: Add monotonic clock

2012-03-26 Thread Matt Joiner
So does anyone care to dig into the libstdc++/Boost/windoze implementations
to see how they each did steady_clock?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: Add monotonic clock

2012-03-26 Thread Matt Joiner
Cheers, that clears things up.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: Add monotonic clock

2012-03-27 Thread Matt Joiner
> I renamed time.steady() to time.try_monotonic() in the PEP. It's a
> temporary name until we decide what to do with this function.

How about getting rid of it?

Also, monotonic should either not exist if it's not available, or always
guarantee an (artificially) monotonic value. Finding out that something is
already known not to work shouldn't require a call and a faked OSError.
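
In other words, callers should be able to feature-test once, up front, along
the lines of this sketch (assuming the function is simply absent when no
suitable clock exists):

import time

if hasattr(time, "monotonic"):
    clock = time.monotonic
else:
    clock = time.time  # explicit, documented fallback chosen by the caller

print(clock())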
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: Add monotonic clock

2012-03-27 Thread Matt Joiner
On Mar 28, 2012 8:38 AM, "Victor Stinner"  wrote:
>
> Scott wrote:
>
> << The Boost implementation can be summarized as:
>
> system_clock:
>
>  mac = gettimeofday
>  posix = clock_gettime(CLOCK_REALTIME)
>  win = GetSystemTimeAsFileTime
>
> steady_clock:
>
>  mac = mach_absolute_time
>  posix = clock_gettime(CLOCK_MONOTONIC)
>  win = QueryPerformanceCounter
>
> high_resolution_clock:
>
>  * = { steady_clock, if available
>   system_clock, otherwise } >>
>
> I read again the doc of the QElapsedTimer class of the Qt library. So Qt
and Boost agree to say that QueryPerformanceCounter() *is* monotonic.
>
> I was confused because of a bug found in 2006 in Windows XP on multicore
processors. QueryPerformanceCounter() gave a different value on each core.
The bug was fixed in Windows and is known as KB896256 (I already added a
link to the bug in the PEP).
>
>>> I added a time.hires() clock to the PEP for the benchmarking/profiling
>>> use case (...)
>>
>>
>> It is this always-having-to-manually-fallback-depending-on-os that I was
>> hoping your new functionality would avoid. Is time.try_monotonic()
>> suitable for this usecase?
>
>
> If QueryPerformanceCounter() is monotonic, the API can be simplified to:
>
>  * time.time() = system clock
>  * time.monotonic() = monotonic clock
>  * time.hires() = monotonic clock or fallback to system clock
>
> time.hires() definition is exactly what I was trying to implement with
"time.steady(strict=True)" / "time.try_monotonic()".
>
> --
>
> Scott> monotonic_clock = always goes forward but can be adjusted
> Scott> steady_clock = always goes forward and cannot be adjusted
>
> I don't know if the monotonic clock should be called time.monotonic() or
time.steady(). The clock speed can be adjusted by NTP, at least on Linux <
2.6.28.

Monotonic. It's still monotonic if it is adjusted forward, and that's okay.

>
> I don't know if other clocks used by my time.monotonic() proposition can
be adjusted or not.
>
> If I understand correctly, time.steady() cannot be implemented using
CLOCK_MONOTONIC on Linux because CLOCK_MONOTONIC can be adjusted?
>
> Does it really matter if a monotonic speed is adjusted?
>
>
> Victor
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: Add monotonic clock

2012-03-28 Thread Matt Joiner
time.monotonic(): The uneventful and colorless function.
On Mar 28, 2012 9:30 PM, "Larry Hastings"  wrote:

> On 03/28/2012 01:56 PM, R. David Murray wrote:
>
>> On Wed, 28 Mar 2012 23:05:59 +1100, Steven D'Aprano
>>  wrote:
>>
>>> +1 on Nick's suggestion of try_monotonic. It is clear and obvious and
>>> doesn't
>>> mislead.
>>>
>> How about "monotonicest".
>>
>> (No, this is not really a serious suggestion.)
>>
>
> "monotonish".
>
> Thus honoring the Principle Of Least Monotonishment,
>
>
> //arry/
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: Add monotonic clock

2012-03-28 Thread Matt Joiner
time.timeout_clock?

Everyone knows what that would be for, and we won't have to make silly
theoretical claims about its properties and expected uses.

If no one else looks before I next get to a PC, I'll dig up the clock/timing
source used for select and friends, and find any corresponding syscall that
retrieves it on Linux.
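
The pattern I have in mind is recomputing the remaining timeout across
wakeups, roughly like this sketch (assuming a monotonic clock; sock is any
socket or file descriptor you care about):

import select
import time

def wait_readable(sock, timeout):
    # Recompute the remaining time on every wakeup; with a clock that can be
    # stepped backwards this loop can badly overshoot the caller's timeout.
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        readable, _, _ = select.select([sock], [], [], remaining)
        if readable:
            return True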
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?

2012-04-03 Thread Matt Joiner
The discussion has completely degenerated. There are several different
clocks here, and several different agendas.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418 is too divisive and confusing and should be postponed

2012-04-03 Thread Matt Joiner
Finally! We've come full circle.

+1 for monotonic as just described by Victor.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 418 is too divisive and confusing and should be postponed

2012-04-03 Thread Matt Joiner
Lock it in before the paint dries.
On Apr 4, 2012 10:05 AM, "Matt Joiner"  wrote:

> Finally! We've come full circle.
>
> +1 for monotonic as just described by Victor.
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] cpython: #14533: if a test has no test_main, use loadTestsFromModule.

2012-04-09 Thread Matt Joiner
On Apr 10, 2012 2:36 AM, "Terry Reedy"  wrote:
>
>
> On 4/9/2012 9:13 AM, r.david.murray wrote:
>>
>> http://hg.python.org/cpython/rev/eff551437abd
>> changeset:   76176:eff551437abd
>> user:R David Murray
>> date:Mon Apr 09 08:55:42 2012 -0400
>> summary:
>>   #14533: if a test has no test_main, use loadTestsFromModule.
>>
>> This moves us further in the direction of using normal unittest
facilities
>> instead of specialized regrtest ones.  Any test module that can be
correctly
>> run currently using 'python -m unittest test.test_xxx' can now be
converted to
>> use normal unittest test loading by simply deleting its test_main, thus
no
>> longer requiring manual maintenance of the list of tests to run.
>
> ...
>>
>> +   if __name__ == '__main__':
>> +   unittest.main()
>>
>> -   if __name__ == '__main__':
>> -   test_main()
>
>
> Being on Windows, I sometimes run single tests interactively with
>
> from test import test_xxx as t; t.test_main()
>
> Should t.unittest.main(t.__name__) work as well?
> Should this always work even if there is still a test_main?

Both questions have the same answer. Yes, because this is how discovery
works.
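
For example (sticking with the hypothetical test_xxx name), either of these
should collect the module's tests via loadTestsFromModule:

import unittest
from test import test_xxx as t  # hypothetical test module

unittest.main(module=t, exit=False)            # pass the module object
# or, equivalently:
# unittest.main(module=t.__name__, exit=False)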
>
> tjr
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] making the import machinery explicit

2012-04-14 Thread Matt Joiner
+1! Thanks for pushing this.
On Apr 15, 2012 4:04 AM, "Brett Cannon"  wrote:

> To start off, what I am about to propose was brought up at the PyCon
> language summit and the whole room agreed with what I want to do here, so I
> honestly don't expect much of an argument (famous last words).
>
> In the "ancient" import.c days, a lot of import's stuff was hidden deep in
> the C code and in no way exposed to the user. But with importlib finishing
> PEP 302's phase 2 plans of getting imoprt to be properly refactored to use
> importers, path hooks, etc., this need no longer be the case.
>
> So what I propose to do is stop having import have any kind of implicit
> machinery. This means sys.meta_path gets a path finder that does the heavy
> lifting for import and sys.path_hooks gets a hook which provides a default
> finder. As of right now those two pieces of machinery are entirely implicit
> in importlib and can't be modified, stopped, etc.
>
> If this happens, what changes? First, more of importlib will get publicly
> exposed (e.g. the meta path finder would become public instead of private
> like it is along with everything else that is publicly exposed). Second,
> import itself technically becomes much simpler since it really then is
> about resolving module names, traversing sys.meta_path, and then handling
> fromlist w/ everything else coming from how the path finder and path hook
> work.
>
> What also changes is that sys.meta_path and sys.path_hooks cannot be
> blindly reset w/o blowing out import. I doubt anyone is even touching those
> attributes in the common case, and the few that do can easily just stop
> wiping out those two lists. If people really care we can do a warning in
> 3.3 if they are found to be empty and then fall back to old semantics, but
> I honestly don't see this being an issue as backwards-compatibility would
> just require being more careful of what you delete (which I have been
> warning people to do for years now) which is a minor code change which
> falls in line with what goes along with any new Python version.
>
> And lastly, sticking None in sys.path_importer_cache would no longer mean
> "do the implicit thing" and instead would mean the same as NullImporter
> does now (which also means import can put None into sys.path_importer_cache
> instead of NullImporter): no finder is available for an entry on sys.path
> when None is found. Once again, I don't see anyone explicitly sticking None
> into sys.path_importer_cache, and if they are they can easily stick what
> will be the newly exposed finder in there instead. The more common case
> would be people wiping out all entries of NullImporter so as to have a new
> sys.path_hook entry take effect. That code would instead need to clear out
> None on top of NullImporter as well (in Python 3.2 and earlier this would
> just be a performance loss, not a semantic change). So this too could
> change in Python 3.3 as long as people update their code like they do with
> any other new version of Python.
>
> In summary, I want no more magic "behind the curtain" for Python 3.3 and
> import: sys.meta_path and sys.path_hooks contain what they should and if
> they are emptied then imports will fail and None in sys.path_importer_cache
> means "no finder" instead of "use magical, implicit stuff".
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [RFC] PEP 418: Add monotonic time, performance counter and process time functions

2012-04-16 Thread Matt Joiner
This is becoming the Manhattan Project of bike sheds.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Cython for cPickle?

2012-04-19 Thread Matt Joiner
Personally I find the unholy product of C and Python that is Cython to be
more complex than the sum of the complexities of its parts. Is it really
wise to be learning Cython without already knowing C, Python, and the
CPython object model?

While code generation alleviates the burden of tedious languages, it's also
infinitely more complex, makes debugging very difficult and adds to
prerequisite knowledge, among other drawbacks.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Failed issue tracker submission

2012-04-20 Thread Matt Joiner
I'm getting one of these every couple of days. What's the deal?
On Apr 21, 2012 1:03 AM, "Python tracker" <
roundup-ad...@psf.upfronthosting.co.za> wrote:

>
> An unexpected error occurred during the processing
> of your message. The tracker administrator is being
> notified.
>
> Return-Path: 
> X-Original-To: rep...@bugs.python.org
> Delivered-To: roundup+trac...@psf.upfronthosting.co.za
> Received: from mail.python.org (mail.python.org [82.94.164.166])
>by psf.upfronthosting.co.za (Postfix) with ESMTPS id 833861CBB0
>for ; Fri, 20 Apr 2012 19:00:09 +0200
> (CEST)
> Received: from albatross.python.org (localhost [127.0.0.1])
>by mail.python.org (Postfix) with ESMTP id 3VZ3GK10QGzN51
>for ; Fri, 20 Apr 2012 19:00:09 +0200
> (CEST)
> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org;
> s=200901;
>t=1334941209; bh=PuPHOLI7fb95kba5/ecLxSLCC9UpM27v8bYaw31epzE=;
>h=Date:Message-Id:Content-Type:MIME-Version:
> Content-Transfer-Encoding:From:To:Subject;
>b=ZfbTowau33LvKWnJHYtZ8Fy/cAslebBopL912urudimFDYNg5n7CHpPwxlMLlLTv5
> tR2OZmCp3w90e6h937L7R6g7mew3xHaxeRbzP06cEK0JTyOQaekSKHBxivVMuU2hjL
> AE1J6MRlKrxJoqE8dQMyzP7+wM5o39unn76WD6bE=
> Received: from localhost (HELO mail.python.org) (127.0.0.1)
>  by albatross.python.org with SMTP; 20 Apr 2012 19:00:09 +0200
> Received: from dinsdale.python.org (svn.python.org[IPv6:2001:888:2000:d::a4])
>(using TLSv1 with cipher AES256-SHA (256/256 bits))
>(No client certificate requested)
>by mail.python.org (Postfix) with ESMTPS
>for ; Fri, 20 Apr 2012 19:00:09 +0200
> (CEST)
> Received: from localhost
>([127.0.0.1] helo=dinsdale.python.org ident=hg)
>by dinsdale.python.org with esmtp (Exim 4.72)
>(envelope-from )
>id 1SLHBo-00063N-NT
>for rep...@bugs.python.org; Fri, 20 Apr 2012 19:00:08 +0200
> Date: Fri, 20 Apr 2012 19:00:08 +0200
> Message-Id: 
> Content-Type: text/plain; charset="utf8"
> MIME-Version: 1.0
> Content-Transfer-Encoding: base64
> From: python-dev@python.org
> To: rep...@bugs.python.org
> Subject: [issue14633]
>
>
> TmV3IGNoYW5nZXNldCBhMjgxYTY2MjI3MTQgYnkgQnJldHQgQ2Fubm9uIGluIGJyYW5jaCAnZGVm
>
> YXVsdCc6Cklzc3VlICMxNDYzMzogU2ltcGxpZnkgaW1wLmZpbmRfbW9kdWUoKSB0ZXN0IGFmdGVy
>
> IGZpeGVzIGZyb20gaXNzdWUKaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9yZXYvYTI4MWE2
> NjIyNzE0Cg==
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Failed issue tracker submission

2012-04-20 Thread Matt Joiner
Cheers
On Apr 21, 2012 10:25 AM, "R. David Murray"  wrote:

>
> On Sat, 21 Apr 2012 08:54:56 +0800, Matt Joiner 
> wrote:
> > I'm getting one of these every couple of days. What's the deal?
> > On Apr 21, 2012 1:03 AM, "Python tracker" <
> > roundup-ad...@psf.upfronthosting.co.za> wrote:
>
> There is a bug in the interface between roundup and hg that is new
> since roundup was switched to using xapian for indexing.  When an hg
> commit mentions more than one issue number, the second (or subsequent,
> presumably) issue number triggers a write conflict and results in the
> email doing the second issue update being rejected.  Since the email
> address associated with the hg update email is python-dev, the bounce
> gets sent here.
>
> There are currently only three people who do maintenance work on the
> tracker (it used to be just one), and none of us have found time to
> try to figure out a fix yet.
>
> --David
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code?

2012-05-02 Thread Matt Joiner
On May 2, 2012 6:00 PM, "Antoine Pitrou"  wrote:
>
> On Wed, 02 May 2012 01:43:32 -0700
> Larry Hastings  wrote:
> >
> > I realize we can't jump to C99 because of A Certain Compiler.  (Its name
> > rhymes with Bike Row Soft Frizz You All See Muss Muss.)  But even that
> > compiler added this extension in the early 90s.
> >
> > Do we officially support any C compilers that *don't* permit
> > "intermingled variable declarations and code"?  Do we *unofficially*
> > support any?  And if we do, what do we gain?
>
> Well, there's this one called MSVC, which we support quite officially.

Not sure if comic genius or can't rhyme.

>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com