Re: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement
On 25-11-2011 at 01:01, Jesus Cea wrote:
> > PS: The devguide doesn't say anything (AFAIK) about the contributor
> > agreement.

There is info in the Contributing part of the devguide: follow the "How to
Become a Core Developer" link, which points to
http://docs.python.org/devguide/coredev.html, where the Contributor Agreement
is mentioned.

Regards,
Maciej

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On Fri, Nov 25, 2011 at 5:41 PM, Eli Bendersky wrote:
>> Eli, the use pattern I was referring to is when you read in chunks,
>> and append to a running buffer. Presumably if you know in advance
>> the size of the data, you can readinto directly to a region of a
>> bytearray, thereby avoiding having to allocate a temporary buffer for
>> the read, and creating a new buffer containing the running buffer,
>> plus the new.
>>
>> Strangely, I find that your readandcopy is faster at this, but not by
>> much, than readinto. Here's the code, it's a bit explicit, but then so
>> was the original:
>>
>> BUFSIZE = 0x1
>>
>> def justread():
>>     # Just read a file's contents into a string/bytes object
>>     f = open(FILENAME, 'rb')
>>     s = b''
>>     while True:
>>         b = f.read(BUFSIZE)
>>         if not b:
>>             break
>>         s += b
>>
>> def readandcopy():
>>     # Read a file's contents and copy them into a bytearray.
>>     # An extra copy is done here.
>>     f = open(FILENAME, 'rb')
>>     s = bytearray()
>>     while True:
>>         b = f.read(BUFSIZE)
>>         if not b:
>>             break
>>         s += b
>>
>> def readinto():
>>     # Read a file's contents directly into a bytearray,
>>     # hopefully employing its buffer interface
>>     f = open(FILENAME, 'rb')
>>     s = bytearray(os.path.getsize(FILENAME))
>>     o = 0
>>     while True:
>>         b = f.readinto(memoryview(s)[o:o+BUFSIZE])
>>         if not b:
>>             break
>>         o += b
>>
>> And the timings:
>>
>> $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.justread()'
>> 10 loops, best of 3: 298 msec per loop
>> $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.readandcopy()'
>> 100 loops, best of 3: 9.22 msec per loop
>> $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.readinto()'
>> 100 loops, best of 3: 9.31 msec per loop
>>
>> The file was 10MB. I expected readinto to perform much better than
>> readandcopy. I expected readandcopy to perform slightly better than
>> justread. This clearly isn't the case.
>>
>
> What is 'python3' on your machine? If it's 3.2, then this is consistent with
> my results. Try it with 3.3 and for a larger file (say ~100MB and up), you
> may see the same speed as on 2.7

It's Python 3.2. I tried it for larger files and got some interesting results.

readinto() for 10MB files, reading 10MB all at once:

readinto/2.7 100 loops, best of 3: 8.6 msec per loop
readinto/3.2 10 loops, best of 3: 29.6 msec per loop
readinto/3.3 100 loops, best of 3: 19.5 msec per loop

With 100KB chunks for the 10MB file (annotated with #):

matt@stanley:~/Desktop$ for f in read bytearray_read readinto; do for v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import readinto' "readinto.$f()"; done; done
read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually faster than the 10MB read
read/3.2 10 loops, best of 3: 253 msec per loop # wtf?
read/3.3 10 loops, best of 3: 747 msec per loop # wtf??
bytearray_read/2.7 100 loops, best of 3: 7.9 msec per loop
bytearray_read/3.2 100 loops, best of 3: 7.48 msec per loop
bytearray_read/3.3 100 loops, best of 3: 15.8 msec per loop # wtf?
readinto/2.7 100 loops, best of 3: 8.93 msec per loop
readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 is performing well?
readinto/3.3 10 loops, best of 3: 20.4 msec per loop

Here's the code: http://pastebin.com/nUy3kWHQ

>
> Also, why do you think chunked reads are better here than slurping the whole
> file into the bytearray in one go? If you need it wholly in memory anyway,
> why not just issue a single read?
Sometimes it's not available all at once; I do a lot of socket programming, so
this case is of interest to me. As shown above, it's also faster for Python
2.7. readinto() should also be significantly faster for this case, though it
isn't.

>
> Eli
>

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
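For the socket case mentioned above, the same preallocate-and-fill pattern is
usually written with socket.recv_into(); the following is only an illustrative
sketch, not code from this thread, and sock, total_size and CHUNK are assumed
placeholders.

CHUNK = 0x10000  # assumed chunk size, analogous to BUFSIZE above

def recv_exact(sock, total_size):
    # Allocate the result buffer once, then fill it in place, avoiding the
    # temporary bytes object that sock.recv() plus concatenation would create.
    buf = bytearray(total_size)
    view = memoryview(buf)
    offset = 0
    while offset < total_size:
        n = sock.recv_into(view[offset:offset + CHUNK])
        if n == 0:
            raise EOFError("connection closed before all data arrived")
        offset += n
    return buf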
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On Fri, 25 Nov 2011 08:38:48 +0200 Eli Bendersky wrote: > > Just to be clear, there were two separate issues raised here. One is the > speed regression of readinto() from 2.7 to 3.2, and the other is the > relative slowness of justread() in 3.3 > > Regarding the second, I'm not sure it's an issue because I tried a larger > file (100MB and then also 300MB) and the speed of 3.3 is now on par with > 3.2 and 2.7 > > However, the original question remains - on the 100MB file also, although > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the > same speed (even a few % slower). That said, I now observe with Python 3.3 > the same speed as with 2.7, including the readinto() speedup - so it > appears that the readinto() regression has been solved in 3.3? Any clue > about where it happened (i.e. which bug/changeset)? It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ Regards Antoine. ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On Fri, 25 Nov 2011 20:34:21 +1100 Matt Joiner wrote: > > It's Python 3.2. I tried it for larger files and got some interesting results. > > readinto() for 10MB files, reading 10MB all at once: > > readinto/2.7 100 loops, best of 3: 8.6 msec per loop > readinto/3.2 10 loops, best of 3: 29.6 msec per loop > readinto/3.3 100 loops, best of 3: 19.5 msec per loop > > With 100KB chunks for the 10MB file (annotated with #): > > matt@stanley:~/Desktop$ for f in read bytearray_read readinto; do for > v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import > readinto' "readinto.$f()"; done; done > read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually > faster than the 10MB read > read/3.2 10 loops, best of 3: 253 msec per loop # wtf? > read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? No "wtf" here, the read() loop is quadratic since you're building a new, larger, bytes object every iteration. Python 2 has a fragile optimization for concatenation of strings, which can avoid the quadratic behaviour on some systems (depends on realloc() being fast). > readinto/2.7 100 loops, best of 3: 8.93 msec per loop > readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 > is performing well? > readinto/3.3 10 loops, best of 3: 20.4 msec per loop What if you allocate the bytearray outside of the timed function? Regards Antoine. ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
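Antoine's point about the quadratic read() loop can be illustrated directly;
the following is only a sketch (not from the thread), with arbitrary
placeholder data:

chunks = [b'x' * 0x10000] * 160   # roughly 10MB of arbitrary placeholder data

s = b''
for c in chunks:
    s += c        # bytes are immutable: each += copies everything read so far
                  # into a new object, so total copying grows quadratically
                  # (CPython 2 can sometimes avoid this via a realloc() trick).

buf = bytearray()
for c in chunks:
    buf += c      # bytearray resizes in place with over-allocation,
                  # so the total copying stays roughly linear.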
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
You can see in the tests on the largest buffer size tested, 8192, that the naive "read" actually outperforms readinto(). It's possibly by extrapolating into significantly larger buffer sizes that readinto() gets left behind. It's also reasonable to assume that this wasn't tested thoroughly. On Fri, Nov 25, 2011 at 9:55 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 08:38:48 +0200 > Eli Bendersky wrote: >> >> Just to be clear, there were two separate issues raised here. One is the >> speed regression of readinto() from 2.7 to 3.2, and the other is the >> relative slowness of justread() in 3.3 >> >> Regarding the second, I'm not sure it's an issue because I tried a larger >> file (100MB and then also 300MB) and the speed of 3.3 is now on par with >> 3.2 and 2.7 >> >> However, the original question remains - on the 100MB file also, although >> in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the >> same speed (even a few % slower). That said, I now observe with Python 3.3 >> the same speed as with 2.7, including the readinto() speedup - so it >> appears that the readinto() regression has been solved in 3.3? Any clue >> about where it happened (i.e. which bug/changeset)? > > It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > > Regards > > Antoine. > > > ___ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou wrote:
> On Fri, 25 Nov 2011 20:34:21 +1100 Matt Joiner wrote:
>>
>> It's Python 3.2. I tried it for larger files and got some interesting
>> results.
>>
>> readinto() for 10MB files, reading 10MB all at once:
>>
>> readinto/2.7 100 loops, best of 3: 8.6 msec per loop
>> readinto/3.2 10 loops, best of 3: 29.6 msec per loop
>> readinto/3.3 100 loops, best of 3: 19.5 msec per loop
>>
>> With 100KB chunks for the 10MB file (annotated with #):
>>
>> matt@stanley:~/Desktop$ for f in read bytearray_read readinto; do for
>> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import
>> readinto' "readinto.$f()"; done; done
>> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually
>> faster than the 10MB read
>> read/3.2 10 loops, best of 3: 253 msec per loop # wtf?
>> read/3.3 10 loops, best of 3: 747 msec per loop # wtf??
>
> No "wtf" here, the read() loop is quadratic since you're building a
> new, larger, bytes object every iteration. Python 2 has a fragile
> optimization for concatenation of strings, which can avoid the
> quadratic behaviour on some systems (depends on realloc() being fast).

Is there any way to bring back that optimization? A 30 to 100x slowdown on
probably one of the most common operations (string concatenation) is very
noticeable. In Python 3.3, this represents a 0.7s stall building a 10MB
string. Python 2.7 did this in 0.007s.

>
>> readinto/2.7 100 loops, best of 3: 8.93 msec per loop
>> readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2
>> is performing well?
>> readinto/3.3 10 loops, best of 3: 20.4 msec per loop
>
> What if you allocate the bytearray outside of the timed function?

This change makes readinto() faster for 100KB chunks than the other 2 methods
and clears up the differences between the versions:

readinto/2.7 100 loops, best of 3: 6.54 msec per loop
readinto/3.2 100 loops, best of 3: 7.64 msec per loop
readinto/3.3 100 loops, best of 3: 7.39 msec per loop

Updated test code: http://pastebin.com/8cEYG3BD

>
> Regards
>
> Antoine.
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>

So as I think Eli suggested, the readinto() performance issue goes away with
large enough reads; I'd put the remaining differences down to some unrelated
language changes. However, the performance drop on read() remains: Python 3.2
is 30x slower than 2.7, and 3.3 is 100x slower than 2.7.

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
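What "allocating the bytearray outside of the timed function" can look like,
as a rough reconstruction only (the actual updated test is at the pastebin
link above; FILENAME and the sizes below are assumed placeholders):

import timeit

FILENAME = 'testfile'      # assumed placeholder: path to a ~10MB test file
BUFSIZE = 100 * 1024       # 100KB chunks, matching the timings above

def readinto(buf):
    # Only the read loop is timed; the buffer is created once in timeit's setup.
    with open(FILENAME, 'rb') as f:
        offset = 0
        while True:
            n = f.readinto(memoryview(buf)[offset:offset + BUFSIZE])
            if not n:
                break
            offset += n

if __name__ == '__main__':
    t = timeit.Timer(
        'readinto(buf)',
        setup='from __main__ import readinto, FILENAME\n'
              'import os\n'
              'buf = bytearray(os.path.getsize(FILENAME))')
    print(min(t.repeat(repeat=3, number=100)))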
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
> > However, the original question remains - on the 100MB file also, although > > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the > > same speed (even a few % slower). That said, I now observe with Python > 3.3 > > the same speed as with 2.7, including the readinto() speedup - so it > > appears that the readinto() regression has been solved in 3.3? Any clue > > about where it happened (i.e. which bug/changeset)? > > It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > Great, thanks. This is an important change, definitely something to wait for in 3.3 Eli ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
I was under the impression this is already in 3.3? On Nov 25, 2011 10:58 PM, "Eli Bendersky" wrote: > > >> > However, the original question remains - on the 100MB file also, although >> > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the >> > same speed (even a few % slower). That said, I now observe with Python 3.3 >> > the same speed as with 2.7, including the readinto() speedup - so it >> > appears that the readinto() regression has been solved in 3.3? Any clue >> > about where it happened (i.e. which bug/changeset)? >> >> It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > > > Great, thanks. This is an important change, definitely something to wait for in 3.3 > Eli > > > ___ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On Fri, Nov 25, 2011 at 14:02, Matt Joiner wrote: > I was under the impression this is already in 3.3? > Sure, but 3.3 wasn't released yet. Eli P.S. Top-posting again ;-) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On Fri, 25 Nov 2011 22:37:49 +1100 Matt Joiner wrote: > On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou wrote: > > On Fri, 25 Nov 2011 20:34:21 +1100 > > Matt Joiner wrote: > >> > >> It's Python 3.2. I tried it for larger files and got some interesting > >> results. > >> > >> readinto() for 10MB files, reading 10MB all at once: > >> > >> readinto/2.7 100 loops, best of 3: 8.6 msec per loop > >> readinto/3.2 10 loops, best of 3: 29.6 msec per loop > >> readinto/3.3 100 loops, best of 3: 19.5 msec per loop > >> > >> With 100KB chunks for the 10MB file (annotated with #): > >> > >> matt@stanley:~/Desktop$ for f in read bytearray_read readinto; do for > >> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import > >> readinto' "readinto.$f()"; done; done > >> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually > >> faster than the 10MB read > >> read/3.2 10 loops, best of 3: 253 msec per loop # wtf? > >> read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? > > > > No "wtf" here, the read() loop is quadratic since you're building a > > new, larger, bytes object every iteration. Python 2 has a fragile > > optimization for concatenation of strings, which can avoid the > > quadratic behaviour on some systems (depends on realloc() being fast). > > Is there any way to bring back that optimization? a 30 to 100x slow > down on probably one of the most common operations... string > contatenation, is very noticeable. In python3.3, this is representing > a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. Well, extending a bytearray() (as you saw yourself) is the proper solution in such cases. Note that you probably won't see a difference when concatenating very small strings. It would be interesting if you could run the same benchmarks on other OSes (Windows or OS X, for example). Regards Antoine. ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On 25 November 2011 11:37, Matt Joiner wrote: >> No "wtf" here, the read() loop is quadratic since you're building a >> new, larger, bytes object every iteration. Python 2 has a fragile >> optimization for concatenation of strings, which can avoid the >> quadratic behaviour on some systems (depends on realloc() being fast). > > Is there any way to bring back that optimization? a 30 to 100x slow > down on probably one of the most common operations... string > contatenation, is very noticeable. In python3.3, this is representing > a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. It's a fundamental, but sadly not well-understood, consequence of having immutable strings. Concatenating immutable strings in a loop is quadratic. There are many ways of working around it (languages like C# and Java have string builder classes, I believe, and in Python you can use StringIO or build a list and join at the end) but that's as far as it goes. The optimisation mentioned was an attempt (by mutating an existing string when the runtime determined that it was safe to do so) to hide the consequences of this fact from end-users who didn't fully understand the issues. It was relatively effective, but like any such case (floating point is another common example) it did some level of harm at the same time as it helped (by obscuring the issue further). It would be nice to have the optimisation back if it's easy enough to do so, for quick-and-dirty code, but it is not a good idea to rely on it (and it's especially unwise to base benchmarks on it working :-)) Paul. ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
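The usual workarounds Paul mentions (collecting the pieces and joining once,
or writing into an in-memory buffer such as StringIO/BytesIO) look roughly
like the sketch below, which is not code from the thread; the data is an
arbitrary placeholder.

import io

chunks = [b'x' * 1024 for _ in range(1000)]   # arbitrary placeholder data

# Collect the pieces in a list and join once at the end (linear overall).
parts = []
for c in chunks:
    parts.append(c)
data = b''.join(parts)

# Or write into an in-memory buffer (BytesIO for bytes, StringIO for text).
buf = io.BytesIO()
for c in chunks:
    buf.write(c)
data = buf.getvalue()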
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
2011/11/25 Paul Moore > The optimisation mentioned was an attempt (by mutating an existing > string when the runtime determined that it was safe to do so) to hide > the consequences of this fact from end-users who didn't fully > understand the issues. It was relatively effective, but like any such > case (floating point is another common example) it did some level of > harm at the same time as it helped (by obscuring the issue further). > > It would be nice to have the optimisation back if it's easy enough to > do so, for quick-and-dirty code, but it is not a good idea to rely on > it (and it's especially unwise to base benchmarks on it working :-)) > Note that this string optimization hack is still present in Python 3, but it now acts on *unicode* strings, not bytes. -- Amaury Forgeot d'Arc ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] webmas...@python.org address not working
On Fri, Nov 25, 2011, Jesus Cea wrote: > > When mailing there, I get this error. Not sure where to report. > > """ > Final-Recipient: rfc822; sdr...@sdrees.de > Original-Recipient: rfc822;webmas...@python.org > Action: failed > Status: 5.1.1 > Remote-MTA: dns; stefan.zinzdrees.de > Diagnostic-Code: smtp; 550 5.1.1 : Recipient address > rejected: User unknown in local recipient table > """ You reported it to the correct place, I pinged Stefan at the contact address listed by whois. Note that webmas...@python.org is a plain alias, so anyone whose e-mail isn't working will generate a bounce. -- Aahz (a...@pythoncraft.com) <*> http://www.pythoncraft.com/ WiFi is the SCSI of the 21st Century -- there are fundamental technical reasons for sacrificing a goat. (with no apologies to John Woods) ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On 25 November 2011 15:07, Amaury Forgeot d'Arc wrote: > 2011/11/25 Paul Moore >> It would be nice to have the optimisation back if it's easy enough to >> do so, for quick-and-dirty code, but it is not a good idea to rely on >> it (and it's especially unwise to base benchmarks on it working :-)) > > Note that this string optimization hack is still present in Python 3, > but it now acts on *unicode* strings, not bytes. Ah, yes. That makes sense. Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?
On 25/11/2011 15:48, Paul Moore wrote:
> On 25 November 2011 15:07, Amaury Forgeot d'Arc wrote:
>> 2011/11/25 Paul Moore
>>> It would be nice to have the optimisation back if it's easy enough to
>>> do so, for quick-and-dirty code, but it is not a good idea to rely on
>>> it (and it's especially unwise to base benchmarks on it working :-))
>>
>> Note that this string optimization hack is still present in Python 3,
>> but it now acts on *unicode* strings, not bytes.
>
> Ah, yes. That makes sense.

Although for concatenating immutable bytes presumably the same hack would be
*possible*.

Michael

> Paul
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk

--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing
http://www.sqlite.org/different.html

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] Summary of Python tracker Issues
ACTIVITY SUMMARY (2011-11-18 - 2011-11-25)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    3134 (+19)
  closed 22128 (+31)
  total  25262 (+50)

Open issues with patches: 1328

Issues opened (41)
==================

#2286: Stack overflow exception caused by test_marshal on Windows x64
http://bugs.python.org/issue2286  reopened by brian.curtin
#13387: suggest assertIs(type(obj), cls) for exact type checking
http://bugs.python.org/issue13387  reopened by eric.araujo
#13433: String format documentation contains error regarding %g
http://bugs.python.org/issue13433  opened by Christian.Iversen
#13434: time.xmlrpc.com dead
http://bugs.python.org/issue13434  opened by pitrou
#13435: Copybutton does not hide tracebacks
http://bugs.python.org/issue13435  opened by lehmannro
#13436: compile() doesn't work on ImportFrom with level=None
http://bugs.python.org/issue13436  opened by Janosch.Gräf
#13437: Provide links to the source code for every module in the docum
http://bugs.python.org/issue13437  opened by Julian
#13438: "Delete patch set" review action doesn't work
http://bugs.python.org/issue13438  opened by Oleg.Plakhotnyuk
#13439: turtle: Errors in docstrings of onkey and onkeypress
http://bugs.python.org/issue13439  opened by smichr
#13440: Explain the "status quo wins a stalemate" principle in the dev
http://bugs.python.org/issue13440  opened by ncoghlan
#13441: TestEnUSCollation.test_strxfrm() fails on Solaris
http://bugs.python.org/issue13441  opened by haypo
#13443: wrong links and examples in the functional HOWTO
http://bugs.python.org/issue13443  opened by eli.bendersky
#13444: closed stdout causes error on stderr when the interpreter unco
http://bugs.python.org/issue13444  opened by Ronny.Pfannschmidt
#13445: Enable linking the module pysqlite with Berkeley DB SQL instea
http://bugs.python.org/issue13445  opened by Lauren.Foutz
#13446: imaplib, fetch: improper behaviour on read-only selected mailb
http://bugs.python.org/issue13446  opened by char.nikolaou
#13447: Add tests for Tools/scripts/reindent.py
http://bugs.python.org/issue13447  opened by eric.araujo
#13448: PEP 3155 implementation
http://bugs.python.org/issue13448  opened by pitrou
#13449: sched - provide an "async" argument for run() method
http://bugs.python.org/issue13449  opened by giampaolo.rodola
#13450: add assertions to implement the intent in ''.format_map test
http://bugs.python.org/issue13450  opened by akira
#13451: sched.py: speedup cancel() method
http://bugs.python.org/issue13451  opened by giampaolo.rodola
#13452: PyUnicode_EncodeDecimal: reject error handlers different than
http://bugs.python.org/issue13452  opened by haypo
#13453: Tests and network timeouts
http://bugs.python.org/issue13453  opened by haypo
#13454: crash when deleting one pair from tee()
http://bugs.python.org/issue13454  opened by PyryP
#13455: Reorganize tracker docs in the devguide
http://bugs.python.org/issue13455  opened by ezio.melotti
#13456: Providing a custom HTTPResponse class to HTTPConnection
http://bugs.python.org/issue13456  opened by r.david.murray
#13461: Error on test_issue_1395_5 with Python 2.7 and VS2010
http://bugs.python.org/issue13461  opened by sable
#13462: Improve code and tests for Mixin2to3
http://bugs.python.org/issue13462  opened by eric.araujo
#13463: Fix parsing of package_data
http://bugs.python.org/issue13463  opened by eric.araujo
#13464: HTTPResponse is missing an implementation of readinto
http://bugs.python.org/issue13464  opened by r.david.murray
#13465: A Jython section in the dev guide would be great
http://bugs.python.org/issue13465  opened by fwierzbicki
#13466: new timezones
http://bugs.python.org/issue13466  opened by Rioky
#13467: Typo in doc for library/sysconfig
http://bugs.python.org/issue13467  opened by naoki
#13471: setting access time beyond Jan. 2038 on remote share failes on
http://bugs.python.org/issue13471  opened by Thorsten.Simons
#13472: devguide doesn't list all build dependencies
http://bugs.python.org/issue13472  opened by eric.araujo
#13473: Add tests for files byte-compiled by distutils[2]
http://bugs.python.org/issue13473  opened by eric.araujo
#13474: Mention of "-m" Flag Missing From Doc on Execution Model
http://bugs.python.org/issue13474  opened by eric.snow
#13475: Add '-p'/'--path0' command line option to override sys.path[0]
http://bugs.python.org/issue13475  opened by ncoghlan
#13476: Simple exclusion filter for unittest autodiscovery
http://bugs.python.org/issue13476  opened by ncoghlan
#13477: tarfile module should have a command line
http://bugs.python.org/issue13477  opened by brandon-rhodes
#13478: No documentation for timeit.default_timer
http://bugs.python.org/issue13478  opened by sandro.tosi
#13479: pickle too picky on re-defined classes
http://bugs.python.org/issue13479  opened by kxroberto

Most
Re: [Python-Dev] PyPy 1.7 - widening the sweet spot
On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > wrote: > > The problem is not with maintaining the modified directory. The > > problem was always things like changing interface between the C > > version and the Python version or introduction of new stuff that does > > not run on pypy because it relies on refcounting. I don't see how > > having a subrepo helps here. > > Indeed, the main thing that can help on this front is to get more > modules to the same state as heapq, io, datetime (and perhaps a few > others that have slipped my mind) where the CPython repo actually > contains both C and Python implementations and the test suite > exercises both to make sure their interfaces remain suitably > consistent (even though, during normal operation, CPython users will > only ever hit the C accelerated version). > > This not only helps other implementations (by keeping a Python version > of the module continuously up to date with any semantic changes), but > can help people that are porting CPython to new platforms: the C > extension modules are far more likely to break in that situation than > the pure Python equivalents, and a relatively slow fallback is often > going to be better than no fallback at all. (Note that ctypes based > pure Python modules *aren't* particularly useful for this purpose, > though - due to the libffi dependency, ctypes is one of the extension > modules most likely to break when porting). > And the other reason I plan to see this through before I die is to help distribute the maintenance burden. Why should multiple VMs fix bad assumptions made by CPython in their own siloed repos and then we hope the change gets pushed upstream to CPython when it could be fixed once in a single repo that everyone works off of? ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
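The dual C/Python arrangement Nick describes generally follows the pattern
below. This is a simplified sketch rather than the exact heapq source, and the
test snippet assumes test.support.import_fresh_module as used by test_heapq at
the time.

# In Lib/heapq.py (simplified sketch): pure Python definitions come first,
# e.g. heappush(), heappop(), and so on, then:

try:
    from _heapq import *      # override with the C accelerator when it built
except ImportError:
    pass                      # e.g. on a fresh port: keep the Python versions

# In the test suite, both implementations can then be exercised (sketch):
from test.support import import_fresh_module

py_heapq = import_fresh_module('heapq', blocked=['_heapq'])   # pure Python
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])      # C-accelerated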
Re: [Python-Dev] PyPy 1.7 - widening the sweet spot
On Fri, 25 Nov 2011 12:37:59 -0500 Brett Cannon wrote: > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > > > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > > wrote: > > > The problem is not with maintaining the modified directory. The > > > problem was always things like changing interface between the C > > > version and the Python version or introduction of new stuff that does > > > not run on pypy because it relies on refcounting. I don't see how > > > having a subrepo helps here. > > > > Indeed, the main thing that can help on this front is to get more > > modules to the same state as heapq, io, datetime (and perhaps a few > > others that have slipped my mind) where the CPython repo actually > > contains both C and Python implementations and the test suite > > exercises both to make sure their interfaces remain suitably > > consistent (even though, during normal operation, CPython users will > > only ever hit the C accelerated version). > > > > This not only helps other implementations (by keeping a Python version > > of the module continuously up to date with any semantic changes), but > > can help people that are porting CPython to new platforms: the C > > extension modules are far more likely to break in that situation than > > the pure Python equivalents, and a relatively slow fallback is often > > going to be better than no fallback at all. (Note that ctypes based > > pure Python modules *aren't* particularly useful for this purpose, > > though - due to the libffi dependency, ctypes is one of the extension > > modules most likely to break when porting). > > > > And the other reason I plan to see this through before I die Uh! Any bad news? :/ ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PyPy 1.7 - widening the sweet spot
2011/11/25 Brett Cannon > > > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > >> On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski >> wrote: >> > The problem is not with maintaining the modified directory. The >> > problem was always things like changing interface between the C >> > version and the Python version or introduction of new stuff that does >> > not run on pypy because it relies on refcounting. I don't see how >> > having a subrepo helps here. >> >> Indeed, the main thing that can help on this front is to get more >> modules to the same state as heapq, io, datetime (and perhaps a few >> others that have slipped my mind) where the CPython repo actually >> contains both C and Python implementations and the test suite >> exercises both to make sure their interfaces remain suitably >> consistent (even though, during normal operation, CPython users will >> only ever hit the C accelerated version). >> >> This not only helps other implementations (by keeping a Python version >> of the module continuously up to date with any semantic changes), but >> can help people that are porting CPython to new platforms: the C >> extension modules are far more likely to break in that situation than >> the pure Python equivalents, and a relatively slow fallback is often >> going to be better than no fallback at all. (Note that ctypes based >> pure Python modules *aren't* particularly useful for this purpose, >> though - due to the libffi dependency, ctypes is one of the extension >> modules most likely to break when porting). >> > > And the other reason I plan to see this through before I die is to help > distribute the maintenance burden. Why should multiple VMs fix bad > assumptions made by CPython in their own siloed repos and then we hope the > change gets pushed upstream to CPython when it could be fixed once in a > single repo that everyone works off of? > PyPy copied the CPython stdlib in a directory named "2.7", which is never modified; instead, adaptations are made by copying the file into "modified-2.7", and fixed there. Both directories appear in sys.path This was done for this very reason: so that it's easy to identify the differences and suggest changes to push upstream. But this process was not very successful for several reasons: - The definition of "bad assumptions" used to be very strict. It's much much better nowadays, thanks to the ResourceWarning in 3.x for example (most changes in modified-2.7 are related to the garbage collector), and wider acceptance by the core developers of the "@impl_detail" decorators in tests. - 2.7 was already in maintenance mode, and such changes were not considered as bug fixes, so modified-2.7 never shrinks. It was a bit hard to find the motivation to fix only the 3.2 version of the stdlib, which you can not even test with PyPy! - Some modules in the stdlib rely on specific behaviors of the VM or extension modules that are not always easy to implement correctly in PyPy. The ctypes module is the most obvious example to me, but also the pickle/copy modules which were modified because of subtle differences around built-in methods (or was it the __builtins__ module?) And oh, I almost forgot distutils, which needs to parse some Makefile which of course does not exist in PyPy. - Differences between C extensions and pure Python modules are sometimes considered "undefined behaviour" and are rejected. 
See issue13274; this one has a happy ending, but I remember that the _pyio.py module chose not to fix some obscure reentrancy issues (which I completely agree with). -- Amaury Forgeot d'Arc ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 380
On 24 Nov 2011, at 04:06, Nick Coghlan wrote: > On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >> Mea culpa for not keeping track, but what's the status of PEP 380? I >> really want this in Python 3.3! > > There are two relevant tracker issues (both with me for the moment). > > The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 > > That's really just missing the doc updates - I haven't had a chance to > look at Zbyszek's latest offering on that front, but it shouldn't be > far off being complete (the *text* in his previous docs patch actually > seemed reasonable - I mainly objected to way it was organised). > > However, the PEP 380 test suite updates have a dependency on a new dis > module feature that provides an iterator over a structured description > of bytecode instructions: http://bugs.python.org/issue11816 Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. Michael > > I find Meador's suggestion to change the name of the new API to > something involving the word "instruction" appealing, so I plan to do > that, which will have a knock-on effect on the tests in the PEP 380 > branch. However, even once I get that done, Raymond specifically said > he wanted to review the dis module patch before I check it in, so I > don't plan to commit it until he gives the OK (either because he > reviewed it, or because he decides he's OK with it going in without > his review and he can review and potentially update it in Mercurial > any time before 3.3 is released). > > I currently plan to update my working branches for both of those on > the 3rd of December, so hopefully they'll be ready to go within the > next couple of weeks. > > Cheers, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > ___ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS"
On 12/11/11 16:56, Éric Araujo wrote:
> Ezio and I chatted a bit about this on IRC and he may try to write
> a Python parser for Misc/NEWS in order to write a fully automated
> merge tool.

Anything new on this front? :-)

- -- Jesus Cea Avion _/_/ _/_/_/_/_/_/ j...@jcea.es - http://www.jcea.es/ _/_/_/_/ _/_/_/_/ _/_/ jabber / xmpp:j...@jabber.org _/_/_/_/ _/_/_/_/_/ . _/_/ _/_/_/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/_/_/ _/_/_/_/ _/_/ "My name is Dump, Core Dump" _/_/_/_/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 380
On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord wrote: > > On 24 Nov 2011, at 04:06, Nick Coghlan wrote: > >> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>> really want this in Python 3.3! >> >> There are two relevant tracker issues (both with me for the moment). >> >> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >> >> That's really just missing the doc updates - I haven't had a chance to >> look at Zbyszek's latest offering on that front, but it shouldn't be >> far off being complete (the *text* in his previous docs patch actually >> seemed reasonable - I mainly objected to way it was organised). >> >> However, the PEP 380 test suite updates have a dependency on a new dis >> module feature that provides an iterator over a structured description >> of bytecode instructions: http://bugs.python.org/issue11816 > > > Is it necessary to test parts of PEP 380 through bytecode structures rather > than semantics? Those tests aren't going to be usable by other > implementations. The affected tests aren't testing the PEP 380 semantics, they're specifically testing CPython's bytecode generation for yield from expressions and disassembly of same. Just because they aren't of any interest to other implementations doesn't mean *we* don't need them :) There are plenty of behavioural tests to go along with the bytecode specific ones, and those *will* be useful to other implementations. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
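To make the distinction concrete, a behavioural test of the new syntax might
look like the sketch below; this is illustrative only, not the actual tests
from the PEP 380 branch. The bytecode-level checks would sit alongside it,
guarded as a CPython implementation detail (e.g. with test.support.cpython_only).

import unittest

class YieldFromBehaviour(unittest.TestCase):
    # Portable across implementations: checks what 'yield from' does,
    # not which bytecode CPython emits for it.
    def test_delegation_and_return_value(self):
        def inner():
            yield 1
            yield 2
            return 'done'            # PEP 380: carried via StopIteration.value

        def outer():
            result = yield from inner()
            yield result

        self.assertEqual(list(outer()), [1, 2, 'done'])

if __name__ == '__main__':
    unittest.main()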
Re: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS"
On Nov 25, 2011, at 6:18 PM, Jesus Cea wrote:
> On 12/11/11 16:56, Éric Araujo wrote:
>> Ezio and I chatted a bit about this on IRC and he may try to write
>> a Python parser for Misc/NEWS in order to write a fully automated
>> merge tool.
>
> Anything new on this front? :-)

To me, it would make more sense to split the file into a Misc/NEWS3.2 and
Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a piece
of cake and would avoid adding a parser (and its idiosyncrasies) to the
toolchain.

Raymond

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS"
On Sat, Nov 26, 2011 at 3:14 PM, Raymond Hettinger wrote: > > On Nov 25, 2011, at 6:18 PM, Jesus Cea wrote: > > On 12/11/11 16:56, Éric Araujo wrote: > > Ezio and I chatted a bit about his on IRC and he may try to write > > a Python parser for Misc/NEWS in order to write a fully automated > > merge tool. > > Anything new in this front? :-) > > To me, it would make more sense to split the file into a Misc/NEWS3.2 and > Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a > piece of cake and would avoid adding a parser (and its idiosyncracies) to the > toolchain. +1 A simple-but-it-works approach to this problem sounds good to me. We'd still need to work out a few conventions about how changes that affect both versions get recorded (I still favour putting independent entries in both files), but simply eliminating the file name collision will also eliminate most of the merge conflicts. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 380
On Sat, Nov 26, 2011 at 6:39 AM, Nick Coghlan wrote: > On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord > wrote: >> >> On 24 Nov 2011, at 04:06, Nick Coghlan wrote: >> >>> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: Mea culpa for not keeping track, but what's the status of PEP 380? I really want this in Python 3.3! >>> >>> There are two relevant tracker issues (both with me for the moment). >>> >>> The main tracker issue for PEP 380 is here: >>> http://bugs.python.org/issue11682 >>> >>> That's really just missing the doc updates - I haven't had a chance to >>> look at Zbyszek's latest offering on that front, but it shouldn't be >>> far off being complete (the *text* in his previous docs patch actually >>> seemed reasonable - I mainly objected to way it was organised). >>> >>> However, the PEP 380 test suite updates have a dependency on a new dis >>> module feature that provides an iterator over a structured description >>> of bytecode instructions: http://bugs.python.org/issue11816 >> >> >> Is it necessary to test parts of PEP 380 through bytecode structures rather >> than semantics? Those tests aren't going to be usable by other >> implementations. > > The affected tests aren't testing the PEP 380 semantics, they're > specifically testing CPython's bytecode generation for yield from > expressions and disassembly of same. Just because they aren't of any > interest to other implementations doesn't mean *we* don't need them :) > > There are plenty of behavioural tests to go along with the bytecode > specific ones, and those *will* be useful to other implementations. > > Cheers, > Nick. > I'm with nick on this one, seems like a very useful test, just remember to mark it as @impl_detail (or however the decorator is called). Cheers, fijal ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com