[Python-Dev] What's required to keep OS/2 support in Python 3.3
Hi All,

I'm a little slow in responding to http://blog.python.org/2011/05/python-33-to-drop-support-for-os2.html, but I'm interested in stepping up to help maintain OS/2 support in Python 3.3 and above.

I've been building Python 2.x for a while, and currently have binaries of 2.6.5 available from http://os2ports.smedley.info

Unlike Andrew MacIntyre, I'm using libc for development (http://svn.netlabs.org/libc) rather than emx. libc is still being developed, whereas emx hasn't been updated in about 10 years.

I haven't attempted a build of 3.x yet, but will grab the latest 3.x release and see what it takes to get it building here. I expect I'll hit the same problem with sysconfig.get_config_var("CONFIG_ARGS") as with 2.7.2, but we'll wait and see.

Cheers,
Paul

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Hash collision security issue (now public)
Serhiy Storchaka wrote:
> On 06.01.12 02:10, Nick Coghlan wrote:
>> Not a good idea - a lot of the 3rd party tests that depend on dict
>> ordering are going to be using those modules anyway, so scattering our
>> solution across half the standard library is needlessly creating
>> additional work without really reducing the incompatibility problem.
>> If we're going to change anything, it may as well be the string
>> hashing algorithm itself.
>
> Changing the string hashing algorithm will hit general performance and
> will also break any code that depends on dict ordering. A specialized
> dict slows down only the parts of an application that need it.

The minimal proposed change of seeding the hash from a global value (a single memory read and an addition) will have such a minimal performance effect that it will be undetectable even in the most noise-free testing environment.

Cheers,
Mark
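For concreteness, "seeding the hash from a global value" amounts to something like the following toy sketch (purely illustrative Python; the real implementation is C code in CPython's string object, and the names here are hypothetical):

```python
import os

# Process-wide random seed, read once at interpreter startup.
# Hypothetical name; not CPython's actual code.
_HASH_SEED = int.from_bytes(os.urandom(8), "little")

def seeded_hash(s):
    """Old-style multiplicative string hash, plus one seed addition."""
    if not s:
        return 0
    # The seed costs exactly one memory read and one addition:
    x = (_HASH_SEED + (ord(s[0]) << 7)) & (2**64 - 1)
    for ch in s:
        x = ((1000003 * x) ^ ord(ch)) & (2**64 - 1)
    x ^= len(s)
    return x

# Stable within one process, but different between interpreter runs:
print(seeded_hash("spam") == seeded_hash("spam"))
```

The per-string loop is untouched, which is why the overhead is a constant per hash call rather than per character.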
Re: [Python-Dev] [Python-checkins] cpython (2.7): Issue #12042: a queue is only used to retrive results; preliminary patch by
On Thu, Jan 5, 2012 at 23:45, Terry Reedy wrote:
> On 1/5/2012 1:51 PM, sandro.tosi wrote:
>> http://hg.python.org/cpython/rev/3353f9747a39
>> changeset: 74282:3353f9747a39
>> branch: 2.7
>>
>> Doc/whatsnew/2.6.rst | 4 ++--
>
> should that have been whatsnew/2.7.rst?

The wording correction was in the 2.6 "What's New", when describing multiprocessing (which was added in 2.6).

-- Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi
Re: [Python-Dev] Hash collision security issue (now public)
Glenn Linderman wrote:
> On 1/5/2012 5:52 PM, Steven D'Aprano wrote:
>> At some point, presuming that there is no speed penalty, the behaviour
>> will surely become not just enabled by default but mandatory. Python
>> has never promised that hashes must be predictable or consistent, so
>> apart from backwards-compatibility concerns for old versions, future
>> versions of Python should make it mandatory. Presuming that there is
>> no speed penalty, I'd argue in favour of making it mandatory for 3.3.
>> Why do we need a flag for something that is going to be always on?
>
> I think the whole paragraph is invalid, because it presumes there is no
> speed penalty. I presume there will be a speed penalty, until
> benchmarking shows otherwise.

There *may* be a speed penalty, but I draw your attention to Paul McMillan's email of 1 January:

> Empirical testing shows that this unoptimized python implementation
> produces ~10% slowdown in the hashing of ~20 character strings.

and Christian Heimes' email of 3 January:

> The changeset adds the murmur3 hash algorithm with some minor changes,
> for example more random seeds. At first I was worried that murmur might
> be slower than our old hash algorithm. But in fact it seems to be
> faster!

So I think it's a fairly safe bet that there will be a solution that is as fast as, or at worst trivially slower than, the current hash function. But of course, benchmarks will be needed.

-- Steven
Re: [Python-Dev] Hash collision security issue (now public)
Using my patch (random-2.patch), the overhead is 0%. I cannot see a difference with and without my patch. Numbers:

---
unpatched:

== 3 characters ==
1 loops, best of 3: 459 usec per loop
== 10 characters ==
1 loops, best of 3: 575 usec per loop
== 500 characters ==
1 loops, best of 3: 1.36 msec per loop

patched:

== 3 characters ==
1 loops, best of 3: 458 usec per loop
== 10 characters ==
1 loops, best of 3: 575 usec per loop
== 500 characters ==
1 loops, best of 3: 1.36 msec per loop
---

(The patched version looks faster just because the timer is not reliable enough for such a fast test.)

Script:

---
echo "== 3 characters =="
./python -m timeit -n 1 -s 'text=(("%03i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
./python -m timeit -n 1 -s 'text=(("%03i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
./python -m timeit -n 1 -s 'text=(("%03i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
echo "== 10 characters =="
./python -m timeit -n 1 -s 'text=(("%010i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
./python -m timeit -n 1 -s 'text=(("%010i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
./python -m timeit -n 1 -s 'text=(("%010i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
echo "== 500 characters =="
./python -m timeit -n 1 -s 'text=(("%0500i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
./python -m timeit -n 1 -s 'text=(("%0500i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
./python -m timeit -n 1 -s 'text=(("%0500i" % x) for x in range(1,1000))' 'sum(hash(x) for x in text)'
---

(Take the smallest timing for each test.) "-n 1" is needed because the hash value is only computed once (it is cached). It may be possible to get more reliable results by completely disabling the hash cache (comment out the "PyUnicode_HASH(self) = x;" line).

Victor
Re: [Python-Dev] What's required to keep OS/2 support in Python 3.3
Hi Paul,

> I'm a little slow in responding to
> http://blog.python.org/2011/05/python-33-to-drop-support-for-os2.html,
> but I'm interested in stepping up to help maintain OS/2 support in
> Python 3.3 and above.
>
> I've been building Python 2.x for a while, and currently have binaries
> of 2.6.5 available from http://os2ports.smedley.info
>
> Unlike Andrew MacIntyre, I'm using libc for development
> (http://svn.netlabs.org/libc) rather than emx. libc is still being
> developed whereas emx hasn't been updated in about 10 years.
>
> I haven't attempted a build of 3.x yet, but will grab the latest 3.x
> release and see what it takes to get it building here.

I would suggest you start from the Mercurial repository instead. There you'll find both the current stable branch (named "3.2") and the current development branch (named "default"). It will also make it easier for you to write and maintain patches.

Let me point you to the devguide, even though it doesn't talk specifically about porting: http://docs.python.org/devguide/

Regards,
Antoine.
[Python-Dev] Summary of Python tracker Issues
ACTIVITY SUMMARY (2011-12-30 - 2012-01-06)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message.

Issues counts and deltas:
  open    3180 ( +2)
  closed 22322 (+34)
  total  25502 (+36)

Open issues with patches: 1366

Issues opened (24)
==

#13685: argparse does not sanitize help strings for % signs
   http://bugs.python.org/issue13685  opened by Jeff.Yurkiw

#13686: Some notes on the docs of multiprocessing
   http://bugs.python.org/issue13686  opened by eli.bendersky

#13689: fix CGI Web Applications with Python link in howto/urllib2
   http://bugs.python.org/issue13689  opened by sandro.tosi

#13691: pydoc help (or help('help')) claims to run a help utility; doe
   http://bugs.python.org/issue13691  opened by Devin Jeanpierre

#13692: 2to3 mangles from . import frobnitz
   http://bugs.python.org/issue13692  opened by holmbie

#13694: asynchronous connect in asyncore.dispatcher does not set addr
   http://bugs.python.org/issue13694  opened by anacrolix

#13695: "type specific" to "type-specific"
   http://bugs.python.org/issue13695  opened by Retro

#13697: python RLock implementation unsafe with signals
   http://bugs.python.org/issue13697  opened by rbcollins

#13698: Mailbox module should support other mbox formats in addition t
   http://bugs.python.org/issue13698  opened by endolith

#13700: imaplib.IMAP4.authenticate authobject fails with PLAIN mechani
   http://bugs.python.org/issue13700  opened by etukia

#13701: Remove Decimal Python 2.3 Compatibility
   http://bugs.python.org/issue13701  opened by ramchandra.apte

#13702: relative symlinks in tarfile.extract broken (windows)
   http://bugs.python.org/issue13702  opened by Patrick.von.Reth

#13703: Hash collision security issue
   http://bugs.python.org/issue13703  opened by barry

#13704: Random number generator in Python core
   http://bugs.python.org/issue13704  opened by christian.heimes

#13706: non-ascii fill characters no longer work in formatting
   http://bugs.python.org/issue13706  opened by skrah

#13708: Document ctypes.wintypes
   http://bugs.python.org/issue13708  opened by ramchandra.apte

#13709: Capitalization mistakes in the documentation for ctypes
   http://bugs.python.org/issue13709  opened by ramchandra.apte

#13712: pysetup create should not convert package_data to extra_files
   http://bugs.python.org/issue13712  opened by christian.heimes

#13715: typo in unicodedata documentation
   http://bugs.python.org/issue13715  opened by eli.collins

#13716: distutils doc contains lots of XXX
   http://bugs.python.org/issue13716  opened by flox

#13718: Format Specification Mini-Language does not accept comma for p
   http://bugs.python.org/issue13718  opened by mkesper

#13719: bdist_msi upload fails
   http://bugs.python.org/issue13719  opened by schmir

#13720: argparse print_help() fails if COLUMNS is set to a low value
   http://bugs.python.org/issue13720  opened by zbysz

#818201: distutils: clean does not use build_base option from build
   http://bugs.python.org/issue818201  reopened by eric.araujo

Most recent 15 issues with no replies (15)
==

#13720: argparse print_help() fails if COLUMNS is set to a low value
   http://bugs.python.org/issue13720

#13718: Format Specification Mini-Language does not accept comma for p
   http://bugs.python.org/issue13718

#13715: typo in unicodedata documentation
   http://bugs.python.org/issue13715

#13708: Document ctypes.wintypes
   http://bugs.python.org/issue13708

#13691: pydoc help (or help('help')) claims to run a help utility; doe
   http://bugs.python.org/issue13691

#13689: fix CGI Web Applications with Python link in howto/urllib2
   http://bugs.python.org/issue13689

#13682: Documentation of os.fdopen() refers to non-existing bufsize ar
   http://bugs.python.org/issue13682

#13668: mute ImportError in __del__ of _threading_local module
   http://bugs.python.org/issue13668

#13665: TypeError: string or integer address expected instead of str i
   http://bugs.python.org/issue13665

#13649: termios.ICANON is not documented
   http://bugs.python.org/issue13649

#13638: PyErr_SetFromErrnoWithFilenameObject is undocumented
   http://bugs.python.org/issue13638

#13633: Handling of hex character references in HTMLParser.handle_char
   http://bugs.python.org/issue13633

#13631: readline fails to parse some forms of .editrc under editline (
   http://bugs.python.org/issue13631

#13608: remove born-deprecated PyUnicode_AsUnicodeAndSize
   http://bugs.python.org/issue13608

#13605: document argparse's nargs=REMAINDER
   http://bugs.python.org/issue13605

Most recent 15 issues waiting for review (15)
=

#13719: bdist_msi upload fails
   http://bugs.python.org/issue13719

#13715: typo in unicodedata documentation
   http://bugs.python.org/issue13715

#13712: pysetup create should not convert package_data to extra_files
   http://bugs.python.org/issue13712

#13704: Random number generator in Python core
   http://bugs.python.org/issue13704
Re: [Python-Dev] usefulness of Python version of threading.RLock
Thanks for those precisions, but I must admit it doesn't help me much... Can we drop it? A yes/no answer will do ;-)

> I'm pretty sure the Python version of RLock is in use in several
> alternative implementations that provide an alternative _thread.lock. I
> think gevent would fall into this camp, as well as a personal project of
> mine in a similar vein that operates on python3.

Sorry, I'm not sure I understand. Do those projects use _PyRLock directly? If yes, then aliasing it to _CRLock should do the trick, no?
Re: [Python-Dev] What's required to keep OS/2 support in Python 3.3
Hi All,

On 06/01/12 19:22, Paul Smedley wrote:
> I'm a little slow in responding to
> http://blog.python.org/2011/05/python-33-to-drop-support-for-os2.html,
> but I'm interested in stepping up to help maintain OS/2 support in
> Python 3.3 and above.
> [...]
> I haven't attempted a build of 3.x yet, but will grab the latest 3.x
> release and see what it takes to get it building here. I expect I'll
> hit the same problem with sysconfig.get_config_var("CONFIG_ARGS") as
> with 2.7.2, but we'll wait and see.

I now have a dll and exe - however, when it tried to build the modules, it dies with:

  Could not find platform independent libraries
  Could not find platform dependent libraries
  Consider setting $PYTHONHOME to [:]
  Fatal Python error: Py_Initialize: Unable to get the locale encoding
  LookupError: no codec search functions registered: can't find encoding

I have done a small amount of debugging: in get_codeset(),

  char* codeset = nl_langinfo(CODESET);

returns ISO8859-1, which can't be found by:

  codec = _PyCodec_Lookup(encoding);

in get_codec_name(const char *encoding).

Where is the list of valid codepages read from? Should ISO8859-1 be valid? I see some references to ISO-8859-1 in the code, but not ISO8859-1.

TIA,
Paul
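[To answer the codec-name question directly: on a working build, the hyphen-less spelling is accepted, because codecs.lookup() lower-cases the name and the encodings package's search function normalizes punctuation and consults the alias table in Lib/encodings/aliases.py. A quick check:]

```python
import codecs

# All of these spellings resolve, via name normalization and the alias
# table, to the codec whose canonical name is "iso8859-1":
for name in ("ISO8859-1", "iso8859-1", "ISO-8859-1", "latin-1"):
    print(name, "->", codecs.lookup(name).name)
```

So the spelling itself is not the problem; the failure above happens because the lookup runs before the encodings package has been imported, which is why no codec search function is registered yet.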
[Python-Dev] Hash collision security issue (now public)
In http://mail.python.org/pipermail/python-dev/2012-January/115350.html, Mark Shannon wrote:
> The minimal proposed change of seeding the hash from a global value (a
> single memory read and an addition) will have such a minimal performance
> effect that it will be undetectable even on the most noise-free testing
> environment.

(1) Is it established that this (a single initial add, with no per-loop operations) would be sufficient? I thought that was in the gray area of "We don't yet have a known attack, but there are clearly safer options."

(2) Even if the direct cost (fetch and add) were free, it might be expensive in practice. The current hash function is designed to send "similar" strings (and similar numbers) to similar hashes.

(2a) That guarantees they won't (initially) collide, even in very small dicts.

(2b) It keeps them nearby, which has an effect on cache hits. The exact effect (and even direction) would of course depend on the workload, which makes me distrust micro-benchmarks.

If this were a problem in practice, I could understand accepting a little slowdown as the price of safety, but ... it isn't. Even in theory, the only way to trigger this is to take unreasonable amounts of user input and turn it directly into an unreasonable number of keys (as opposed to values, or list elements) placed in the same dict (as opposed to a series of smaller dicts).

-jJ
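[The locality point in (2a)/(2b) is easy to see with a toy model of the pre-randomization string hash (illustrative Python only; the real function is C and word-size dependent). Strings that differ only in their last character differ only in the low bits of their hash, so in a small table they spread perfectly instead of colliding:]

```python
def legacy_hash(s):
    """Toy model of the classic CPython string hash (no randomization)."""
    if not s:
        return 0
    x = ord(s[0]) << 7
    for ch in s:
        x = ((1000003 * x) ^ ord(ch)) & (2**64 - 1)
    return x ^ len(s)

# "key0" .. "key7" differ only in the final character, so their hashes
# differ only in the low bits -- in a table with 8 slots they occupy
# all 8 slots with no collisions at all:
buckets = [legacy_hash("key%d" % i) % 8 for i in range(8)]
print(sorted(buckets))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

Adding a seed shifts all the hash values together, so this clustering of related keys survives; what changes is that an attacker can no longer predict *which* keys share a bucket.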
Re: [Python-Dev] Hash collision security issue (now public)
Hi,

It seems to me that half the folk discussing this issue want a super-strong, resist-all-hypothetical-attacks hash with little regard for performance. The other half want no change, or a change that will have no observable effect. (I may be exaggerating a little.)

Can I propose the following half-way proposal:

1. Since there is a published vulnerability, we fix it with the most efficient solution proposed so far: http://bugs.python.org/file24143/random-2.patch

2. Decide which versions of Python this should be applied to. 3.3 seems a given; the others are open to debate.

3. If, and only if (and I think this unlikely), the solution chosen is shown to be vulnerable to a more sophisticated attack, then a new issue should be opened and dealt with separately.

Cheers,
Mark.
Re: [Python-Dev] What's required to keep OS/2 support in Python 3.3
On Sat, 07 Jan 2012 06:28:00 +1030, Paul Smedley wrote:
> I now have a dll and exe - however when it tried to build the modules,
> it dies with:
> Could not find platform independent libraries
> Could not find platform dependent libraries
> Consider setting $PYTHONHOME to [:]
> Fatal Python error: Py_Initialize: Unable to get the locale encoding

I would look at this line:

> LookupError: no codec search functions registered: can't find encoding

Normally the standard codec search function is registered when importing the "encodings" module (see Lib/encodings/__init__.py), which is done at the end of _PyCodecRegistry_Init() in Python/codecs.c. There's this comment there:

  /* Ignore ImportErrors... this is done so that distributions can
     disable the encodings package. Note that other errors are not
     masked, e.g. SystemErrors raised to inform the user of an error
     in the Python configuration are still reported back to the user. */

For the purpose of debugging, you could *not* ignore the error and instead print it out or bail out.

Regards,
Antoine.
Re: [Python-Dev] Hash collision security issue (now public)
On 6 January 2012 20:25, Mark Shannon wrote:
> Hi,
>
> It seems to me that half the folk discussing this issue want a
> super-strong, resist-all-hypothetical-attacks hash with little regard to
> performance. The other half want no change or a change that will have no
> observable effect. (I may be exaggerating a little.)
>
> Can I propose the following, half-way proposal:
>
> 1. Since there is a published vulnerability, that we fix it with the
> most efficient solution proposed so far:
> http://bugs.python.org/file24143/random-2.patch
>
> 2. Decide which versions of Python this should be applied to.
> 3.3 seems a given, the other are open to debate.
>
> 3. If and only if (and I think this unlikely) the solution chosen is
> shown to be vulnerable to a more sophisticated attack then a new issue
> should be opened and dealt with separately.

+1

Paul
Re: [Python-Dev] What's required to keep OS/2 support in Python 3.3
Hi Antoine,

On 07/01/12 06:58, Antoine Pitrou wrote:
> Normally the standard codec search function is registered when
> importing the "encodings" module (see Lib/encodings/__init__.py), which
> is done at the end of _PyCodecRegistry_Init() in Python/codecs.c.
> [...]
> For the purpose of debugging you could *not* ignore the error and
> instead print it out or bail out.

Thanks - commenting out the ImportErrors block, I get:

  ImportError: No module named encodings

So it seems it's not finding modules - possibly related to the warnings about:

> Could not find platform independent libraries
> Could not find platform dependent libraries

Seems getenv() may not be working correctly...
Re: [Python-Dev] Hash collision security issue (now public)
On 1/5/2012 4:10 PM, Nick Coghlan wrote:
> On Fri, Jan 6, 2012 at 8:15 AM, Serhiy Storchaka wrote:
>> On 05.01.12 21:14, Glenn Linderman wrote:
>>> So, fixing the vulnerable packages could be a sufficient response,
>>> rather than changing the hash function. How to fix? Each of those
>>> above allocates and returns a dict. Simply have each of those allocate
>>> and return a wrapped dict, which has the following behaviors:
>>> i) during __init__, create a local, random, string.
>>> ii) for all key values, prepend the string, before passing it to the
>>> internal dict.
>>
>> Good idea.

Thanks for the implementation, Serhiy. That is the sort of thing I had in mind, indeed.

> Not a good idea - a lot of the 3rd party tests that depend on dict
> ordering are going to be using those modules anyway,

Stats? Didn't someone post a list of tests that fail when changing the hash? Oh, those were stdlib tests, not 3rd-party tests. I'm not sure how to gather the stats, then; are you?

> so scattering our solution across half the standard library is
> needlessly creating additional work without really reducing the
> incompatibility problem.

Half the standard library? No one has cared to augment my list of modules, but I have seen reference to JSON in addition to cgi and urllib.parse. I think there are more than 6 modules in the standard library...

> If we're going to change anything, it may as well be the string hashing
> algorithm itself.

Changing the string hashing algorithm is known (or at least no one has argued otherwise) to be a source of backward incompatibility that will break programs. My proposal (and Serhiy's implementation, assuming it works, or can be easily tweaked to work; I haven't reviewed it in detail or attempted to test it) will only break programs that have vulnerabilities.

I failed to mention one other benefit of my proposal: every web request would have a different random prefix, so attempting to gather info is futile: the next request has a different random prefix, so different strings would collide.

> Cheers, Nick.

Indeed, it is nice when we can be cheery even when arguing, for the most part :) I've enjoyed reading the discussions in this forum, because most folks have respect for other people's opinions, even when they differ.
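[For readers following along, the wrapped-dict idea above can be sketched in a few lines — a hypothetical class, not Serhiy's actual patch. Each instance salts its keys with its own random string, so an attacker cannot precompute a colliding key set:]

```python
import os

class PrefixedDict(dict):
    """Dict that mixes a per-instance random prefix into every key."""

    def __init__(self, items=()):
        super().__init__()
        self._prefix = os.urandom(8).hex()   # fresh secret per instance
        for key, value in dict(items).items():
            self[key] = value

    def __setitem__(self, key, value):
        super().__setitem__(self._prefix + key, value)

    def __getitem__(self, key):
        return super().__getitem__(self._prefix + key)

    def __contains__(self, key):
        return super().__contains__(self._prefix + key)

d = PrefixedDict({"a": 1})
d["b"] = 2
print(d["a"], d["b"])   # 1 2
```

A real implementation would also have to unwrap keys for iteration, len() is fine but keys() is not, and only string keys are handled here; the point is only that the set of colliding keys changes with every instance (or, in the web case, every request).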
Re: [Python-Dev] usefulness of Python version of threading.RLock
_PyRLock is not used directly. Instead, no _CRLock is provided, so the threading.RLock function calls _PyRLock. It's done this way because green-threading libraries may only provide a greened lock; _CRLock in these contexts would not work: it would block the entire native thread.

I suspect that if you removed _PyRLock, these implementations would have to expose their own RLock primitive which works the same way as the one just removed from the standard library. I don't know if this is a good thing. I would recommend checking with at least the gevent and eventlet developers.

2012/1/7 Charles-François Natali:
> Thanks for those precisions, but I must admit it doesn't help me much...
> Can we drop it? A yes/no answer will do ;-)
>
>> I'm pretty sure the Python version of RLock is in use in several
>> alternative implementations that provide an alternative _thread.lock. I
>> think gevent would fall into this camp, as well as a personal project of
>> mine in a similar vein that operates on python3.
>
> Sorry, I'm not sure I understand. Do those projects use _PyRLock directly?
> If yes, then aliasing it to _CRLock should do the trick, no?

-- ಠ_ಠ
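[The dispatch being discussed can be seen directly from the threading module; `_PyRLock` and `_CRLock` are its real internal names, though the comments below are a simplified reading of Lib/threading.py:]

```python
import threading

# threading.RLock() returns the C implementation when the _thread module
# provides one; otherwise it falls back to the pure-Python _RLock class,
# exported as threading._PyRLock. A greened environment that substitutes
# its own _thread.lock relies on that pure-Python layer.
print(threading._CRLock)         # the C class, or None if unavailable

lock = threading._PyRLock()      # pure-Python reentrant lock, built on
assert lock.acquire()            # top of a plain (possibly greened) lock
assert lock.acquire()            # reentrant: the owner may acquire again
lock.release()
lock.release()
```

Which is Matt's point: remove _PyRLock, and each of those libraries has to reimplement this reentrancy layer on top of its own lock.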
[Python-Dev] "Sort attacks" (was Re: Hash collision security issue (now public))
I can't find it now, but I believe Marc-Andre mentioned that CPython's list.sort() was vulnerable to attack too, because of its O(n log n) worst-case behavior. I wouldn't worry about that, because nobody could stir up anguish about it by writing a paper ;-)

1. O(n log n) is enormously more forgiving than O(n**2).

2. An attacker need not be clever at all: O(n log n) is not only sort()'s worst case, it's also its _expected_ case when fed randomly ordered data.

3. It's provable that no comparison-based sorting algorithm can have better worst-case asymptotic behavior when fed randomly ordered data.

So if anyone whines about this, tell 'em to go do something useful instead :-)

still-solving-problems-not-in-need-of-attention-ly y'rs - tim
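[Point 2 is easy to spot-check by counting comparisons on randomly ordered input — a rough sketch, and the exact count varies from run to run:]

```python
import random

class Counting:
    """Value wrapper that counts every comparison sort() performs."""
    count = 0

    def __init__(self, v):
        self.v = v

    def __lt__(self, other):          # list.sort()/sorted() use only "<"
        Counting.count += 1
        return self.v < other.v

n = 10000
data = [Counting(random.random()) for _ in range(n)]
sorted(data)
# For random input the count lands near n*log2(n) (~130,000 here),
# nowhere near the n*(n-1)/2 ~ 50,000,000 an O(n**2) algorithm allows.
print(Counting.count)
```

In other words, an attacker feeding "worst-case" data to sort() gains almost nothing over feeding it random data, which is exactly why it isn't paper-worthy.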