Re: [Python-Dev] Request for CPython 3.5.3 release
Hi Nick,

First, thanks a lot for your detailed answer, it was very instructive to me. My answers below.

2016-07-03 6:09 GMT+02:00 Nick Coghlan :
> On 2 July 2016 at 16:17, Ludovic Gasc wrote:
> > Hi everybody,
> >
> > I fully understand that AsyncIO is a drop in the ocean of CPython; you're working to prepare the entire 3.5.3 release for December, not yet ready. However, you might create a 3.5.2.1 release with only this AsyncIO fix?
>
> That would be more work than just doing a 3.5.3 release, though - the problem isn't with the version number bump, it's with asking the release team to do additional work without clearly explaining the rationale for the request (more on that below). While some parts of the release process are automated, there's still a lot of steps to run through by a number of different people: https://www.python.org/dev/peps/pep-0101/

Thanks for the link, I didn't know this PEP, it was interesting to read.

> The first key question to answer in this kind of situation is: "Is there code that will run correctly on 3.5.1 that will now fail on 3.5.2?" (i.e. it's a regression introduced by the asyncio and coroutine changes in the point release rather than something that was already broken in 3.5.0 and 3.5.1).
>
> If the answer is "No", then it doesn't inhibit the 3.5.2 rollout in any way, and folks can wait until 3.5.3 for the fix.
>
> However, if the answer is "Yes, it's a new regression in 3.5.2" (as in this case), then the next question becomes "Is there an agreed resolution for the regression?"
>
> The answer to that is currently "No" - Yury's PR against the asyncio repo is still being discussed.
>
> Once the answer to that question is "Yes", *then* the question of releasing a high priority fix in a Python 3.5.3 release can be properly considered by answering the question "Of the folks using asyncio, what proportion of them are likely to encounter problems in upgrading to Python 3.5.2, and is there a workaround they can apply or alternate approach they can use to avoid the problem?".
>
> At the moment, Yury's explanation of the fix in the PR is (understandably) aimed at getting the problem resolved within the context of asyncio, and hence just describes the particular APIs affected and the details of the incorrect behaviour. While that's an important step in the process, it doesn't provide a clear assessment of the *consequences* of the bug for folks that aren't themselves deeply immersed in using asyncio, so we can't tell if the problem is "Some idiomatic code frequently recommended in user facing examples and used in third party asyncio based libraries may hang client processes" (which would weigh in favour of an early 3.5.3 release before people start encountering the regression in practice) or "Some low level APIs not recommended for general use may hang if used in a particular non-idiomatic combination only likely to be encountered by event loop implementors" (which would suggest it may be OK to stick with the normal maintenance release cadence).

To my basic understanding, it seems there are race conditions when opening sockets.
If my understanding is true, then it's a little bit the heart of AsyncIO that is affected ;-) If you search for loop.sock_connect on GitHub, you'll find a lot of results: https://github.com/search?l=python&q=loop.sock_connect&ref=searchresults&type=Code&utf8=%E2%9C%93

Moreover, if Yury, one of the contributors of AsyncIO (https://github.com/python/asyncio/graphs/contributors) and the creator of uvloop, has sent an e-mail about that, I'm tempted to believe him. That's why I'm a little bit scared by this, even if we don't have a lot of AsyncIO users, especially with the latest release. However, Google Trends might give us a good overview of the relative number of users we have, compared to Twisted, Gevent and Tornado: https://www.google.com/trends/explore#q=asyncio%2C%20%2Fm%2F02xknvd%2C%20gevent%2C%20%2Fm%2F07s58h4&date=1%2F2016%2012m&cmpt=q&tz=Etc%2FGMT-2

> > If 3.5.2.1 or 3.5.3 are impossible to release before december,
>
> Early maintenance releases are definitely possible, but the consequences of particular regressions need to be put into terms that make sense to the release team, which generally means stepping up from "APIs X, Y, and Z broke in this way" to "Users doing A, B, and C will be affected in this way".
>
> As an example of a case where an early maintenance release took place: several years ago, Python 2.6.3 happened to break both "from logging import *" (due to a missing entry in test___all__ letting an error in logging.__all__ through) and building extension modules with setuptools (due to a change in a private API that setuptools was monkeypatching). Those were considered significant enough for the 2.6.4 release to happen early.

Ok, we'll see first what decision will emerge about this pull request in AsyncIO.

> > what are the
> > alternative solutions for
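For reference, the call pattern at issue looks roughly like the following minimal sketch (illustrative only; the exact failure mode in 3.5.2 was still being worked out in Yury's PR):

    import asyncio
    import socket

    async def fetch_head(loop, host, port):
        # Hand a non-blocking socket to the event loop and await the
        # connection - the loop.sock_connect() pattern searched for above.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setblocking(False)
        await loop.sock_connect(sock, (host, port))
        await loop.sock_sendall(sock, b"HEAD / HTTP/1.0\r\n\r\n")
        data = await loop.sock_recv(sock, 1024)
        sock.close()
        return data

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(fetch_head(loop, "example.com", 80)))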
Re: [Python-Dev] PEP487: Simpler customization of class creation
On 2 July 2016 at 10:50, Martin Teichmann wrote:
> Hi list,
>
> so this is the next round for PEP 487. During the last round, most of the comments were in the direction that a two step approach for integrating into Python, first in pure Python, later in C, was not a great idea and everything should be in C directly. So I implemented it in C, put it onto the issue tracker here: http://bugs.python.org/issue27366, and also modified the PEP accordingly.
>
> For those who had not been in the discussion, PEP 487 proposes to add two hooks, __init_subclass__ which is a classmethod called whenever a class is subclassed, and __set_owner__, a hook in descriptors which gets called once the class the descriptor is part of is created.

I'm +1 for this part of the proposal. One potential documentation issue is that __init_subclass__ adds yet a third special magic method behaviour:

- __new__ is implicitly a static method
- __prepare__ isn't implicitly anything (but in hindsight should have implicitly been a class method)
- __init_subclass__ is implicitly a class method

I think making __init_subclass__ implicitly a class method is still the right thing to do if this proposal gets accepted, we'll just want to see if we can do something to tidy up that aspect of the documentation at the same time.

> While implementing PEP 487 I realized that there is an oddity in the type base class: type.__init__ forbids the use of keyword arguments, even for the usual three arguments it has (name, bases and dict), while type.__new__ allows for keyword arguments. As I plan to forward any keyword arguments to the new __init_subclass__, I stumbled over that. As I write in the PEP, I think it would be a good idea to forbid using keyword arguments for type.__new__ as well. But if people think this would be too big of a change, it would be possible to do it differently.

I *think* I'm in favour of cleaning this up, but I also think the explanation of the problem with the status quo could stand to be clearer, as could the proposed change in behaviour. Some example code at the interactive prompt may help with that.

Positional arguments already either work properly, or give a helpful error message:

>>> type("Example", (), {})
<class '__main__.Example'>
>>> type.__new__("Example", (), {})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__new__(X): X is not a type object (str)
>>> type.__new__(type, "Example", (), {})
<class '__main__.Example'>
>>> type.__init__("Example", (), {})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor '__init__' requires a 'type' object but received a 'str'
>>> type.__init__(type, "Example", (), {})

By contrast, attempting to use keyword arguments produces a fair collection of implementation-defined "Uh, what just happened?":

>>> type(name="Example", bases=(), dict={})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__init__() takes no keyword arguments
>>> type.__new__(name="Example", bases=(), dict={})  # Huh?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__new__(): not enough arguments
>>> type.__new__(type, name="Example", bases=(), dict={})
<class '__main__.Example'>
>>> type.__init__(name="Example", bases=(), dict={})  # Huh?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor '__init__' of 'type' object needs an argument
>>> type.__init__(type, name="Example", bases=(), dict={})  # Huh?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type.__init__() takes no keyword arguments

I think the PEP could be accepted without cleaning this up, though - it would just mean __init_subclass__ would see the "name", "bases" and "dict" keys when someone attempted to use keyword arguments with the dynamic type creation APIs.

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
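For context, a minimal sketch of how the proposed hook consumes extra class keywords (assuming the draft PEP's semantics; the "label" keyword is purely illustrative):

    class Base:
        # Implicitly a classmethod under the proposal.
        def __init_subclass__(cls, label=None, **kwargs):
            super().__init_subclass__(**kwargs)
            cls.label = label

    class Widget(Base, label="demo"):
        pass

    print(Widget.label)  # prints: demo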
Re: [Python-Dev] Request for CPython 3.5.3 release
Another thought recently occurred to me. Do releases really have to be such big productions? A recent ACM article by Tom Limoncelli[1] reminded me that we're doing releases the old-fashioned way -- infrequently, and with lots of manual labor.

Maybe we could (eventually) try to strive for a lighter-weight, more automated release process? It would be less work, and it would reduce stress for authors of stdlib modules and packages -- there's always the next release. I would think this wouldn't obviate the need for carefully planned and timed "big deal" feature releases, but it could make the bug fix releases *less* of a deal, for everyone.

[1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract (sadly requires login)

--
--Guido van Rossum (python.org/~guido)
Re: [Python-Dev] Request for CPython 3.5.3 release
Many of our users prefer stability (the sort who plan operating system updates years in advance), but generally I'm in favor of more frequent releases. It will likely require more complex branching though, presumably based on the LTS model everyone else uses.

One thing we've discussed before is separating core and stdlib releases. I'd be really interested to see a release where most of the stdlib is just preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle wheels for stable releases and provide a fast track via pip to update individual packages.

Probably no better opportunity to make such a fundamental change as we move to a new VCS...

Cheers,
Steve

Top-posted from my Windows Phone

-Original Message-
From: "Guido van Rossum"
Sent: 7/3/2016 7:42
To: "Python-Dev"
Cc: "Nick Coghlan"
Subject: Re: [Python-Dev] Request for CPython 3.5.3 release

> Another thought recently occurred to me. Do releases really have to be such big productions? [...]
[Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
[forking the conversation since the subject has shifted]

On Sun, 3 Jul 2016 at 09:50 Steve Dower wrote:
> Many of our users prefer stability (the sort who plan operating system updates years in advance), but generally I'm in favour of more frequent releases.

So there's our 18 month cadence for feature/minor releases, and then there's the 6 month cadence for bug-fix/micro releases. At the language summit there was the discussion kicked off by Ned about our release schedule, and a group of us had a discussion afterward where a stricter release cadence of 12 months -- with the release date tied to a consistent month, e.g. September of every year -- was proposed instead of our hand-wavy "about 18 months after the last feature release"; people in the discussion seemed to like the 12-month consistency idea. I think making releases on a regular, annual schedule simply requires a decision by us to do it, since the time scale we are talking about is still so large it shouldn't impact the workload of RMs & friends *that* much (I think).

As for upping the bug-fix release cadence, if we can automate that then perhaps we can up the frequency (maybe once every quarter), but I'm not sure what kind of overhead that would add and thus how much would need to be automated to make that release cadence work. Doing this kind of shrunken cadence for bug-fix releases would require the RM & friends to decide what would need to be automated to shrink the release schedule to make it viable (e.g. "if we automated steps N & M of the release process then I would be okay releasing every 3 months instead of 6").

For me, I say we shift to an annual feature release in a specific month every year, and switch to quarterly bug-fix releases only if we can add zero extra work for RMs & friends.

> It will likely require more complex branching though, presumably based on the LTS model everyone else uses.

Why is that? You can almost view our feature releases as LTS releases, at which point our current branching structure is no different.

> One thing we've discussed before is separating core and stdlib releases. I'd be really interested to see a release where most of the stdlib is just preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle wheels for stable releases and provide a fast track via pip to update individual packages.
>
> Probably no better opportunity to make such a fundamental change as we move to a new VCS...

Topic 1
===
If we separate out the stdlib, we first need to answer why we are doing this. The arguments supporting the idea are: (1) it might simplify more frequent releases of Python (but that's a guess), (2) it would make the stdlib less CPython-dependent (if purely by the fact of perception and ease of testing using CI against other interpreters when they have matching version support), and (3) it might make it easier for us to get more contributors who are comfortable helping with just the stdlib vs CPython itself (once again, this might simply be through perception).

So if we really wanted to go this route of breaking out the stdlib, I think we have two options. One is to have the cpython repo represent the CPython interpreter and then have a separate stdlib repo. The other option is to still have cpython represent the interpreter but have each stdlib module in its own repository.
Since the single repo for the stdlib is not that crazy, I'll talk about the crazier N-repo idea (in all scenarios we would probably have a repo that pulled in cpython and the stdlib through either git submodules or subtrees, and that would represent a CPython release repo).

In this scenario, having each module/package in its own repo could get us a couple of things. One is that it might help simplify module maintenance by allowing each module to have its own issue tracker, set of contributors, etc. This also means it will make it obvious which modules are being neglected, which will either draw attention and get help, or honestly lead to a deprecation if no one is willing to help maintain it.

Separate repos would also allow for easier backport releases (e.g. what asyncio and typing have been doing since they were created). If a module is maintained as if it were its own project, then it makes it easier to make releases separated from the stdlib itself (although the usefulness is minimized as long as sys.path has site-packages as its last entry). Separate releases allow for faster releases of the stand-alone module, e.g. if only asyncio has a bug then asyncio can cut its own release and the rest of the stdlib doesn't need to care. Then when a new CPython release is done we can simply bundle up the stable releases at that moment and essentially make our mythical sumo release be the stdlib release itself (and this would help stop modules like asyncio and typing from simply copying modules into the stdlib from their external repo if we just pulled in their repo using submodules or subtrees in a master
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
On 3 July 2016 at 21:22, Brett Cannon wrote:
> Topic 2
> ===
> Independent releases of the stdlib could be done, although if we break the stdlib up into individual repos then it shifts the conversation, as individual modules could simply do their own releases independent of the big stdlib release. Personally I don't see a point of doing a stdlib release separate from CPython, but I could see doing a more frequent release of CPython where the only thing that changed is the stdlib itself (but I don't know if that would even alleviate the RM workload).

The one major downside of independent stdlib releases is that it significantly increases the number of permutations of things 3rd parties have to support. It can be hard enough to get a user to report the version of Python they are having an issue with - to get them to report both Python and stdlib versions would be even trickier. And testing against all the combinations, and deciding which combinations are supported, becomes a much bigger problem.

Furthermore, pip/setuptools are just getting to the point of allowing for dependencies conditional on Python version. If independent stdlib releases were introduced, we'd need to implement dependencies based on stdlib version as well - consider depending on a backport of a new module if the user has an older stdlib version that doesn't include it.

Changing the principle that the CPython version is a well-defined label for a specific language level and stdlib is a major change with very wide implications, and I don't see sufficient benefits to justify it. On the other hand, simply decoupling the internal development cycles for the language and the stdlib (or independent stdlib modules), without adding extra "release" cycles, is not that big a deal - in many ways, we do that already with projects like asyncio.

Paul
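For what it's worth, the "dependencies conditional on Python version" support Paul mentions looks something like this hypothetical setup.py fragment (the package name and version bound are illustrative; the marker syntax is the PEP 508 style that newer pip/setuptools understand):

    from setuptools import setup

    setup(
        name="example-app",
        version="1.0",
        install_requires=[
            # Pull in the PyPI backport only where the stdlib lacks the module
            'typing; python_version < "3.5"',
        ],
    )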
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
As an observer and user—

It may be worth asking the Rust team what the main pain points are in coordinating and managing their releases.

Some context for those unfamiliar: Rust uses a Chrome- or Firefox-like release train approach, with stable and beta releases every six weeks. Each release cycle includes both the compiler and the standard library. They use feature flags on "nightly" (the master branch) and cut release branches for what actually gets shipped in each release. This has the advantage of letting new features and functionality ship whenever they're ready, rather than waiting for Big Bang releases. Because of strong commitments to stability and backwards compatibility as part of that, it hasn't led to any substantial breakage along the way, either.

There is also some early discussion of how they might add LTS releases into that mix.

The Rust standard library is currently bundled into the same repository as the compiler. Although the stdlib is currently being modularized and somewhat decoupled from the compiler, I don't believe they intend to separate it from the compiler repository or release it separately in that process (not least because there's no need to further speed up their release cadence!).

None of that is meant to suggest Python adopt that specific cadence (though I have found it *quite* nice), but simply to observe that the Rust team might have useful info on upsides, downsides, and particular gotchas as Python considers changing its own release process.

Regards,
Chris Krycho

> On Jul 3, 2016, at 16:22, Brett Cannon wrote:
>
> [forking the conversation since the subject has shifted]
>
> [...]
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
On Sun, Jul 3, 2016, 13:43 Paul Moore wrote:
> On 3 July 2016 at 21:22, Brett Cannon wrote:
> > Topic 2
> > ===
> [...]
>
> Changing the principle that the CPython version is a well-defined label for a specific language level and stdlib is a major change with very wide implications, and I don't see sufficient benefits to justify it. On the other hand, simply decoupling the internal development cycles for the language and the stdlib (or independent stdlib modules), without adding extra "release" cycles, is not that big a deal - in many ways, we do that already with projects like asyncio.

This last bit is what I would advocate if we broke the stdlib out, unless an emergency patch release is warranted for a specific module (e.g. like asyncio that started this discussion). Obviously backporting is its own thing.

-Brett

> Paul
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
On 3 July 2016 at 22:04, Brett Cannon wrote:
> This last bit is what I would advocate if we broke the stdlib out unless an emergency patch release is warranted for a specific module (e.g. like asyncio that started this discussion). Obviously backporting is its own thing.

It's also worth noting that pip has no mechanism for installing an updated stdlib module, as everything goes into site-packages, and the stdlib takes precedence over site-packages unless you get into sys.path hacking abominations like setuptools uses (or at least used to use, I don't know if it still does). So as things stand, independent patch releases of stdlib modules would need to be manually copied into place.

Allowing users to override the stdlib opens up a different can of worms - not necessarily one that we couldn't resolve, but IIRC, it was always a deliberate policy that overriding the stdlib wasn't possible (that's why backports have names like unittest2...)

Paul
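The precedence Paul describes is easy to see by printing sys.path; the entries below are illustrative, but the ordering is the point - stdlib directories come before site-packages, so a pip-installed "asyncio" could never shadow the bundled one without path manipulation:

    import sys

    for entry in sys.path:
        print(entry)

    # Typical output (exact paths vary by platform):
    #   /usr/lib/python35.zip
    #   /usr/lib/python3.5               <- stdlib wins the import search
    #   /usr/lib/python3.5/lib-dynload
    #   /usr/lib/python3.5/site-packages <- pip installs here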
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
On Jul 3, 2016 1:45 PM, "Paul Moore" wrote:
> [...]
> Furthermore, pip/setuptools are just getting to the point of allowing for dependencies conditional on Python version. If independent stdlib releases were introduced, we'd need to implement dependencies based on stdlib version as well - consider depending on a backport of a new module if the user has an older stdlib version that doesn't include it.

Regarding this particular point: right now, yeah, there's an annoying thing where you have to know that a dependency on stdlib/backported library X has to be written as "X >= 1.0 [py_version <= 3.4]" or whatever, and every package with this dependency has to encode some complicated indirect knowledge of what versions of X ship with what versions of python. (And life is even more complicated if you want to support pypy/jython/..., who are generally shipping manually maintained stdlib forks, and whose nominal "python version equivalent" is only an approximation.)

In the extreme, one can imagine a module like typing still being distributed as part of the standard python download, BUT not in the stdlib, but rather as a "preinstalled package" in site-packages/ that could then be upgraded normally after install. In addition to whatever maintenance advantages this might (or might not) have, with regards to Paul's concerns this would actually be a huge improvement, since if a package needs typing 1.3 or whatever then they could just declare that, without having to know a priori which versions of python shipped which version. (Note that linux distributions already split up the stdlib into pieces, and you're not guaranteed to have all of it available.)

Or if we want to be less aggressive and keep the stdlib monolithic, then it would still be great if there were some .dist-info metadata somewhere that said "this version of the stdlib provides typing 1.3, asyncio 1.4, ...". I haven't thought through all the details of how this would work and how pip could best take advantage, though.

-n
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
On Sun, Jul 3, 2016, 14:22 Paul Moore wrote:
> On 3 July 2016 at 22:04, Brett Cannon wrote:
> > This last bit is what I would advocate if we broke the stdlib out unless an emergency patch release is warranted for a specific module (e.g. like asyncio that started this discussion). Obviously backporting is its own thing.
>
> It's also worth noting that pip has no mechanism for installing an updated stdlib module, as everything goes into site-packages, and the stdlib takes precedence over site-packages unless you get into sys.path hacking abominations like setuptools uses (or at least used to use, I don't know if it still does). So as things stand, independent patch releases of stdlib modules would need to be manually copied into place.

I thought I mentioned this depends on changing sys.path; sorry if I didn't.

> Allowing users to override the stdlib opens up a different can of worms - not necessarily one that we couldn't resolve, but IIRC, it was always a deliberate policy that overriding the stdlib wasn't possible (that's why backports have names like unittest2...)

I think it could be considered less of an issue now thanks to being able to declare dependencies and the version requirements for pip.

-brett

> Paul
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
I actually thought about Rust when thinking about 3 month releases (I know they release faster though). What I would want to know is whether the RMs for Rust are employed by Mozilla and thus have work time to do it, vs. Python RMs & friends, who vary on whether they get work time.

On Sun, Jul 3, 2016, 13:54 Chris Krycho wrote:
> As an observer and user—
>
> It may be worth asking the Rust team what the main pain points are in coordinating and managing their releases.
>
> [...]
Re: [Python-Dev] PEP487: Simpler customization of class creation
Hi Nick,

thanks for the nice review!

> I think making __init_subclass__ implicitly a class method is still the right thing to do if this proposal gets accepted, we'll just want to see if we can do something to tidy up that aspect of the documentation at the same time.

I could write some documentation, I just don't know where to put it. I personally have no strong feelings whether __init_subclass__ should be implicitly a @classmethod or not - but as the general consensus here seemed to hint that making it implicit is better, this is how I wrote it.

>> While implementing PEP 487 I realized that there is an oddity in the type base class: type.__init__ forbids the use of keyword arguments, even for the usual three arguments it has (name, bases and dict), while type.__new__ allows for keyword arguments. As I plan to forward any keyword arguments to the new __init_subclass__, I stumbled over that. As I write in the PEP, I think it would be a good idea to forbid using keyword arguments for type.__new__ as well. But if people think this would be too big of a change, it would be possible to do it differently.
>
> [some discussion cut out]
>
> I think the PEP could be accepted without cleaning this up, though - it would just mean __init_subclass__ would see the "name", "bases" and "dict" keys when someone attempted to use keyword arguments with the dynamic type creation APIs.

Yes, this would be possible, albeit a bit ugly. I'm not so sure whether backwards compatibility is so important in this case. It is very easy to change the code to the fully cleaned-up version.

Looking through old stuff I found http://bugs.python.org/issue23722, which describes the following problem: at the time __init_subclass__ is called, super() doesn't work yet for the new class. It does work within __init_subclass__ itself, because that is called on the base class, but it does not work in other classmethods called from there. This is a pity, especially because the two-argument form of super() cannot be used either, as the new class has no name yet.

The problem is solvable, though. The initializations necessary for super() to work properly simply need to be moved before the call to __init_subclass__. I implemented that by putting a new attribute into the class's namespace to keep the cell which will later be used by super(). This new attribute would be removed by type.__new__ again, but transiently it would be visible. This technique has already been used for __qualname__.

The issue contains a patch that fixes that behavior, and back in the day you proposed I add the problem to the PEP. Should I?

Greetings

Martin
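Roughly, the timing problem described in issue23722 looks like this minimal sketch (assuming the draft PEP's semantics; the classmethod name is illustrative):

    class Base:
        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            # The new class object exists here, but its __class__ cell is
            # only filled in after type.__new__ returns - so a method of
            # the new class that used zero-argument super() would fail if
            # it ran at this point.
            cls.announce()

        @classmethod
        def announce(cls):
            print("created subclass:", cls.__name__)

    class Child(Base):   # prints: created subclass: Child
        pass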
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
My thinking on this issue was that some/most packages from the stdlib would move into site-packages. Certainly I'd expect asyncio to be in this category, and probably typing. Even going as far as email and urllib would potentially be beneficial (to those packages, is my thinking). Obviously not every single module can do this, but there are plenty that aren't low-level dependencies for other modules that could.

Depending on particular versions of these then becomes a case of adding normal package version constraints - we could even bundle version information for non-updateable packages so that installs fail on incompatible Python versions. The "Uber repository" could be a requirements.txt that pulls down wheels for the selected stable versions of each package, so that we still distribute all the same code with the same stability, but users have much more ability to patch their own stdlib after install.

(FWIW, we use a system similar to this at Microsoft for building Visual Studio, so I can vouch that it works on much more complicated software than Python.)

Cheers,
Steve

Top-posted from my Windows Phone

-Original Message-
From: "Paul Moore"
Sent: 7/3/2016 14:23
To: "Brett Cannon"
Cc: "Guido van Rossum" ; "Nick Coghlan" ; "Python-Dev" ; "Steve Dower"
Subject: Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)

> On 3 July 2016 at 22:04, Brett Cannon wrote:
> [...]
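To make the "requirements.txt as release manifest" idea concrete, it might look something like the following hypothetical pin set (every package name and version here is illustrative only - none of these stdlib packages actually exist on PyPI in this form):

    # Stable stdlib pins for a hypothetical CPython 3.6.0 "sumo" release
    asyncio==3.6.0
    typing==3.6.0
    email==3.6.0
    urllib==3.6.0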
Re: [Python-Dev] PEP 487: Simpler customization of class creation
Hi Guido,

sorry I missed your post...

>> One of the big issues that makes library authors reluctant to use metaclasses (even when they would be appropriate) is the risk of metaclass conflicts.
>
> Really? I've written and reviewed a lot of metaclasses and this has never worried me. The problem is limited to multiple inheritance, right? I worry a lot about MI being imposed on classes that weren't written with MI in mind, but I've never particularly worried about the special case of metaclasses.

Yes, the problem only arises with MI. Unfortunately, that's not uncommon: if you want to implement an ABC with a class from a framework which uses metaclasses, you have a metaclass conflict. So then you start making MyFrameworkABCMeta-classes.

The worst is if you already have a framework with users out there. There is no way you can add a metaclass to your class, however convenient it would be, because you never know if some user out there has gotten the idea to implement an ABC with it. Sure, you could let your metaclass inherit from ABCMeta, but is this really how it should be done? (This has already been mentioned by others over at python-ideas: https://mail.python.org/pipermail/python-ideas/2016-February/038506.html)

Greetings

Martin
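The conflict Martin describes is easy to reproduce with a minimal sketch (the framework names are made up):

    import abc

    class FrameworkMeta(type):
        pass

    class FrameworkClass(metaclass=FrameworkMeta):
        pass

    class MyABC(abc.ABC):   # metaclass is ABCMeta
        pass

    # Raises "TypeError: metaclass conflict": the derived class's metaclass
    # must be a (non-strict) subclass of the metaclasses of all its bases,
    # and neither FrameworkMeta nor ABCMeta is a subclass of the other.
    class Combined(FrameworkClass, MyABC):
        pass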
Re: [Python-Dev] release cadence
On 7/3/2016 4:22 PM, Brett Cannon wrote:
> So if we really wanted to go this route of breaking out the stdlib, I think we have two options. One is to have the cpython repo represent the CPython interpreter and then have a separate stdlib repo. The other option is to still have cpython represent the interpreter but then each stdlib module have their own repository.

Option 3 is something in between: groups of stdlib modules in their own repository. An obvious example: a gui group with _tkinter, tkinter, idlelib, turtle, turtledemo, and their doc files.

Having 100s of repositories would not work well with TortoiseHg.

--
Terry Jan Reedy
Re: [Python-Dev] release cadence
On 03Jul2016 1556, Terry Reedy wrote:
> On 7/3/2016 4:22 PM, Brett Cannon wrote:
>> So if we really wanted to go this route of breaking out the stdlib, I think we have two options. One is to have the cpython repo represent the CPython interpreter and then have a separate stdlib repo. The other option is to still have cpython represent the interpreter but then each stdlib module have their own repository.
>
> Option 3 is something in between: groups of stdlib modules in their own repository. An obvious example: a gui group with _tkinter, tkinter, idlelib, turtle, turtledemo, and their doc files.
>
> Having 100s of repositories would not work well with TortoiseHg.

A rough count of how I'd break up the current 3.5 Lib folder (which I happened to have handy) suggests no more than 50 repos. But there'd be no need to have all of them checked out just to build - only the ones you want to modify. And in that case, you'd probably have a stable Python to work against the separate package repo and wouldn't need to clone the core one.

(I'm envisioning a build process that generates wheels from online sources and caches them. So updating the stdlib wheel cache would be part of the build process, but then the local wheels are used to install.)

I personally would only have about 5 repos cloned on any of my dev machines (core, ctypes, distutils, possibly tkinter, ssl), as I rarely touch any other packages. (Having those separate from core is mostly for the versioning benefits - I doubt we could ever release Python without them, but it'd be great to be able to update distutils, ctypes or ssl in place with a simple pip/package mgr command.)

Cheers,
Steve
Re: [Python-Dev] PEP487: Simpler customization of class creation
On Sat, Jul 2, 2016 at 10:50 AM, Martin Teichmann wrote:
> Hi list,
>
> so this is the next round for PEP 487. During the last round, most of the comments were in the direction that a two step approach for integrating into Python, first in pure Python, later in C, was not a great idea and everything should be in C directly. So I implemented it in C, put it onto the issue tracker here: http://bugs.python.org/issue27366, and also modified the PEP accordingly.

Thanks! Reviewing inline below.

> For those who had not been in the discussion, PEP 487 proposes to add two hooks, __init_subclass__ which is a classmethod called whenever a class is subclassed, and __set_owner__, a hook in descriptors which gets called once the class the descriptor is part of is created.
>
> While implementing PEP 487 I realized that there is an oddity in the type base class: type.__init__ forbids the use of keyword arguments, even for the usual three arguments it has (name, bases and dict), while type.__new__ allows for keyword arguments. As I plan to forward any keyword arguments to the new __init_subclass__, I stumbled over that. As I write in the PEP, I think it would be a good idea to forbid using keyword arguments for type.__new__ as well. But if people think this would be too big of a change, it would be possible to do it differently.

This is an area of exceeding subtlety (and also not very well documented/specified, probably). I'd worry that changing anything here might break some code. When a metaclass overrides neither __init__ nor __new__, keyword args will not work because type.__init__ forbids them. However, when a metaclass overrides them and calls them using super(), it's quite possible that someone ended up calling super().__init__() with three positional args but super().__new__() with keyword args, since the call sites are distinct (in the overrides for __init__ and __new__ respectively). What's your argument for changing this, apart from a desire for more regularity?

> Hoping for good comments
>
> Greetings
>
> Martin
>
> The PEP follows:
>
> PEP: 487
> Title: Simpler customisation of class creation
> Version: $Revision$
> Last-Modified: $Date$
> Author: Martin Teichmann ,
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 27-Feb-2015
> Python-Version: 3.6
> Post-History: 27-Feb-2015, 5-Feb-2016, 24-Jun-2016, 2-Jul-2016
> Replaces: 422
>
>
> Abstract
> ========
>
> Currently, customising class creation requires the use of a custom metaclass. This custom metaclass then persists for the entire lifecycle of the class, creating the potential for spurious metaclass conflicts.
>
> This PEP proposes to instead support a wide range of customisation scenarios through a new ``__init_subclass__`` hook in the class body, and a hook to initialize attributes.
>
> The new mechanism should be easier to understand and use than implementing a custom metaclass, and thus should provide a gentler introduction to the full power of Python's metaclass machinery.
>
>
> Background
> ==========
>
> Metaclasses are a powerful tool to customize class creation. They have, however, the problem that there is no automatic way to combine metaclasses. If one wants to use two metaclasses for a class, a new metaclass combining those two needs to be created, typically manually.
>
> This need often occurs as a surprise to a user: inheriting from two base classes coming from two different libraries suddenly raises the necessity to manually create a combined metaclass, where typically one is not interested in those details about the libraries at all. This becomes even worse if one library starts to make use of a metaclass which it has not done before. While the library itself continues to work perfectly, suddenly every code combining those classes with classes from another library fails.
>
> Proposal
> ========
>
> While there are many possible ways to use a metaclass, the vast majority of use cases falls into just three categories: some initialization code running after class creation, the initialization of descriptors, and keeping the order in which class attributes were defined.
>
> The first two categories can easily be achieved by having simple hooks into the class creation:
>
> 1. An ``__init_subclass__`` hook that initializes all subclasses of a given class.
> 2. Upon class creation, a ``__set_owner__`` hook is called on all the attributes (descriptors) defined in the class.
>
> The third category is the topic of another PEP, PEP 520.
>
> As an example, the first use case looks as follows::
>
>     >>> class SpamBase:
>     ...     # this is implicitly a @classmethod
>     ...     def __init_subclass__(cls, **kwargs):
>     ...         cls.class_args = kwargs
>     ...         super().__init_subclass__(**kwargs)
>
>     >>> class Spam(SpamBase, a=1, b="b"):
>     ...     pass
>
>     >>> Spam.class_args
>     {'a': 1, 'b': 'b'}
>
> The base class ``object``
[Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release)
On 4 July 2016 at 00:39, Guido van Rossum wrote:
> Another thought recently occurred to me. Do releases really have to be such big productions? A recent ACM article by Tom Limoncelli[1] reminded me that we're doing releases the old-fashioned way -- infrequently, and with lots of manual labor. Maybe we could (eventually) try to strive for a lighter-weight, more automated release process? It would be less work, and it would reduce stress for authors of stdlib modules and packages -- there's always the next release. I would think this wouldn't obviate the need for carefully planned and timed "big deal" feature releases, but it could make the bug fix releases *less* of a deal, for everyone.

Yes, getting the maintenance releases to the point of being largely automated would be beneficial. However, I don't think the problem is lack of desire for that outcome, it's that maintaining the release toolchain pretty much becomes a job at that point, as you really want to be producing nightly builds (since the creation of those nightlies in effect becomes the regression test suite for the release toolchain), and you also need to more strictly guard against even temporary regressions in the maintenance branches.

There are some variants we could pursue around that model (e.g. automating Python-only updates without automating updates that require rebuilding the core interpreter binaries for Windows and Mac OS X), but none of it is the kind of thing likely to make anyone say "I want to work on improving this in my free time". Even for commercial redistributors, it isn't easy for us to make the business case for assigning someone to work on it, since we're generally working from the source trees rather than the upstream binary releases.

I do think it's worth putting this into our bucket of "ongoing activities we could potentially propose to the PSF for funding", though. I know Ewa (Jodlowska, the PSF's Director of Operations) is interested in better supporting the Python development community directly (hence https://donate.pypi.io/ ) in addition to the more indirect community building efforts like PyCon US and the grants program, so I've been trying to build up a mental list of CPython development pain points where funded activities could potentially improve the contributor experience for volunteers.

So far I have:

- issue triage (including better acknowledging folks that help out with triage efforts)
- patch review (currently "wait and see" pending the impact of the GitHub migration)
- nightly pre-release builds (for ease of contribution without first becoming a de facto C developer, and to help make life easier for release managers)

That last one is a new addition to my list based on this thread, and I think it's particularly interesting in that it would involve a much smaller set of target users than the first two (with the primary stakeholders being the release managers and the folks preparing the binary installers), but also a far more concrete set of deliverables (i.e. nightly binary builds being available for active development and maintenance branches for at least Windows and Mac OS X, and potentially for the manylinux1 baseline API defined in PEP 513).

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
Re: [Python-Dev] release cadence (was: Request for CPython 3.5.3 release)
On 4 July 2016 at 06:22, Brett Cannon wrote:
> [forking the conversation since the subject has shifted]
>
> On Sun, 3 Jul 2016 at 09:50 Steve Dower wrote:
>> Many of our users prefer stability (the sort who plan operating system updates years in advance), but generally I'm in favour of more frequent releases.
>
> So there's our 18 month cadence for feature/minor releases, and then there's the 6 month cadence for bug-fix/micro releases. At the language summit there was the discussion kicked off by Ned about our release schedule and a group of us had a discussion afterward where a more strict release cadence of 12 months with the release date tied to a consistent month -- e.g. September of every year -- instead of our hand-wavy "about 18 months after the last feature release"; people in the discussion seemed to like the 12 months consistency idea.

While we liked the "consistent calendar cadence that is some multiple of 6 months" idea, several of us thought 12 months was way too short, as it makes for too many entries in third party support matrices. I'd also encourage folks to go back and read the two PEPs that were written the last time we had a serious discussion about changing the release cadence, since many of the concerns raised then remain relevant today:

* PEP 407 (faster cycle with LTS releases): https://www.python.org/dev/peps/pep-0407/
* PEP 413 (separate stdlib versioning): https://www.python.org/dev/peps/pep-0413/

In particular, the "unsustainable community support matrix" problem I describe in PEP 413 is still a major point of concern for me - we know from PyPI's download metrics that Python 2.6 is still in heavy use, so many folks have only started to bump their "oldest supported version" up to Python 2.7 in the last year or so (5+ years after it was released). People have been a bit more aggressive in dropping compatibility with older Python 3 versions, but it's also been the case that availability and adoption of LTS versions of Python 3 has been limited to date (mainly just the 5 years for 3.2 in Ubuntu 12.04 and 3.4 in Ubuntu 14.04 - the longest support lifecycle I'm aware of after that is Red Hat's 3 years for Red Hat Software Collections).

The reason I withdrew PEP 413 as a prospective answer to that problem is that I think there's generally only a limited number of libraries that are affected by the challenge of sometimes getting too old to be useful to cross-platform library and framework developers (mostly network protocol and file format related, but also the ones still marked as provisional), and the introduction of ensurepip gives us a new way of dealing with them: treating *those particular libraries* as independently upgradable bundled libraries, where the CPython build process creates wheel files for them, and then uses ensurepip's internally bundled pip to install those wheels at install time, even if pip itself is left out of the installation.

In the specific case that prompted this thread, for example, I don't think the problem is really that the standard library release cadence is too slow in general: it's that "pip install --upgrade asyncio" *isn't an available option* in Python 3.5, even if you're using a virtual environment. For other standard library modules, we've tackled that by letting people do things like "pip install contextlib2" to get the newer versions, even on older Python releases - individual projects are then responsible for opting in to using either the stdlib version or the potentially-newer backported version.
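(A minimal sketch of that opt-in pattern, with contextlib2 standing in for any backported module:)

    try:
        import contextlib2 as contextlib  # newer PyPI backport, if installed
    except ImportError:
        import contextlib                 # otherwise the stdlib version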
However, aside from the special case of ensurepip, what we've yet to do is
make a standard library package *itself* independently upgradable (such
that the Python version merely implies a *minimum* version of that
library, rather than an exact version). Since it has core developers
actively involved in its development, and already provides a PyPI package
for the sake of Python 3.3 users, perhaps "asyncio" could make a good
guinea pig for designing such a bundling process?

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
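A rough sketch of what such an install-time step might look like, reusing
the wheels ensurepip already bundles (this is only the general shape, not
a worked-out design: the _bundled directory is a private ensurepip
implementation detail, and the idea of an "asyncio wheel built alongside
CPython" is hypothetical):

    # Sketch: run the pip wheel that ensurepip bundles directly off
    # sys.path, without installing pip itself, and use it to install
    # independently-upgradable stdlib wheels.
    import glob
    import os
    import subprocess
    import sys

    import ensurepip  # used only to locate its bundled wheels

    BUNDLE_DIR = os.path.join(os.path.dirname(ensurepip.__file__), "_bundled")

    def install_stdlib_wheels(wheels):
        pip_wheel = glob.glob(os.path.join(BUNDLE_DIR, "pip-*.whl"))[0]
        # A wheel is importable as a zip file, so pip can run from it
        # in place without being installed first:
        env = dict(os.environ, PYTHONPATH=pip_wheel)
        for wheel in wheels:  # e.g. a hypothetical asyncio wheel
            subprocess.check_call(
                [sys.executable, "-m", "pip", "install", "--upgrade", wheel],
                env=env,
            )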
Re: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release)
The bots Mozilla runs for both Rust and Servo should make a lot of this
much lower overhead if they can be repurposed (as I believe several other
communities have already done).

Homu, the build manager tool, runs CI (including buildbots, Travis, etc.),
is integrated with GitHub PRs so maintainers can trigger it with a comment
there, and can also roll up a bunch of changes into one (handy for pulling
together e.g. a bunch of small documentation changes like typo fixes):
https://github.com/barosl/homu

That seems to keep the pain level of having an
always-building-and-passing-tests nightly version much lower.

Aside: I don't want to flood these discussions with "yay Rust!" stuff, so
this will probably be my last such response unless something else really
jumps out. ;-) Thanks for the work you're all doing here.

Regards,
Chris Krycho

> On Jul 3, 2016, at 7:34 PM, Nick Coghlan wrote:
>
>> On 4 July 2016 at 00:39, Guido van Rossum wrote:
>> Another thought recently occurred to me. Do releases really have to be
>> such big productions? A recent ACM article by Tom Limoncelli[1]
>> reminded me that we're doing releases the old-fashioned way --
>> infrequently, and with lots of manual labor. Maybe we could
>> (eventually) strive for a lighter-weight, more automated release
>> process? It would be less work, and it would reduce stress for authors
>> of stdlib modules and packages -- there's always the next release. I
>> would think this wouldn't obviate the need for carefully planned and
>> timed "big deal" feature releases, but it could make the bug fix
>> releases *less* of a deal, for everyone.
>
> Yes, getting the maintenance releases to the point of being largely
> automated would be beneficial. However, I don't think the problem is
> lack of desire for that outcome; it's that maintaining the release
> toolchain pretty much becomes a job at that point, as you really want
> to be producing nightly builds (since the creation of those nightlies
> in effect becomes the regression test suite for the release toolchain),
> and you also need to guard more strictly against even temporary
> regressions in the maintenance branches.
>
> There are some variants we could pursue around that model (e.g.
> automating Python-only updates without automating updates that require
> rebuilding the core interpreter binaries for Windows and Mac OS X), but
> none of it is the kind of thing likely to make anyone say "I want to
> work on improving this in my free time". Even for commercial
> redistributors, it isn't easy to make the business case for assigning
> someone to work on it, since we're generally working from the source
> trees rather than the upstream binary releases.
>
> I do think it's worth putting this into our bucket of "ongoing
> activities we could potentially propose to the PSF for funding",
> though. I know Ewa (Jodlowska, the PSF's Director of Operations) is
> interested in better supporting the Python development community
> directly (hence https://donate.pypi.io/ ) in addition to the more
> indirect community building efforts like PyCon US and the grants
> program, so I've been trying to build up a mental list of CPython
> development pain points where funded activities could potentially
> improve the contributor experience for volunteers.
> So far I have:
>
> - issue triage (including better acknowledging folks that help out
>   with triage efforts)
> - patch review (currently "wait and see" pending the impact of the
>   GitHub migration)
> - nightly pre-release builds (for ease of contribution without first
>   becoming a de facto C developer, and to help make life easier for
>   release managers)
>
> That last one is a new addition to my list based on this thread, and I
> think it's particularly interesting in that it would involve a much
> smaller set of target users than the first two (with the primary
> stakeholders being the release managers and the folks preparing the
> binary installers), but also a far more concrete set of deliverables
> (i.e. nightly binary builds being available for the active development
> and maintenance branches for at least Windows and Mac OS X, and
> potentially for the manylinux1 baseline API defined in PEP 513)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
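For readers unfamiliar with Homu's model, the core merge-queue idea it
implements can be sketched as follows (a toy illustration only - this is
in no way Homu's actual implementation, and the test command stands in
for a real CI run):

    # Toy sketch of the merge-queue/rollup idea behind tools like Homu:
    # batch approved changes onto a staging branch, test the batch, and
    # only advance the target branch when the tests pass.
    import subprocess

    def git(*args):
        subprocess.check_call(["git"] + list(args))

    def try_rollup(approved_branches, staging="staging", target="master"):
        git("checkout", "-B", staging, target)
        for branch in approved_branches:
            git("merge", "--no-ff", branch)  # roll several changes into one batch
        if subprocess.call(["python", "-m", "pytest"]) == 0:  # stand-in for CI
            git("checkout", target)
            git("merge", "--ff-only", staging)  # target only moves when green

This is what keeps the target branch always building and passing tests:
nothing lands on it until the combined batch has already gone green.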
Re: [Python-Dev] PEP 487: Simpler customization of class creation
OK, I see this point now. Still looking for time to review the rest of
your PEP!

--Guido (mobile)

On Jul 3, 2016 3:29 PM, "Martin Teichmann" wrote:
> Hi Guido,
>
> sorry I missed your post...
>
> >> One of the big issues that makes library authors reluctant to use
> >> metaclasses (even when they would be appropriate) is the risk of
> >> metaclass conflicts.
> >
> > Really? I've written and reviewed a lot of metaclasses and this has
> > never worried me. The problem is limited to multiple inheritance,
> > right? I worry a lot about MI being imposed on classes that weren't
> > written with MI in mind, but I've never particularly worried about
> > the special case of metaclasses.
>
> Yes, the problem only arises with MI. Unfortunately, that's not
> uncommon: if you want to implement an ABC with a class from a framework
> which uses metaclasses, you have a metaclass conflict. So then you
> start making MyFrameworkABCMeta classes.
>
> The worst case is if you already have a framework with users out there.
> There's no way you can add a metaclass to your class then, however
> convenient it would be, because you never know if some user out there
> has gotten the idea to implement an ABC with it. Sure, you could let
> your metaclass inherit from ABCMeta, but is this really how it should
> be done?
>
> (This has already been mentioned by others over at python-ideas:
> https://mail.python.org/pipermail/python-ideas/2016-February/038506.html)
>
> Greetings
>
> Martin
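To make the conflict Martin describes concrete, here is a minimal
self-contained example (the framework and class names are invented purely
for illustration):

    import abc

    class FrameworkMeta(type):
        """Stand-in for a framework's metaclass."""

    class FrameworkBase(metaclass=FrameworkMeta):
        pass

    class Runnable(abc.ABC):
        @abc.abstractmethod
        def run(self):
            ...

    # Deriving from both fails, because neither FrameworkMeta nor
    # ABCMeta is a subclass of the other:
    try:
        class Impl(FrameworkBase, Runnable):
            def run(self):
                return "ok"
    except TypeError as exc:
        print(exc)  # metaclass conflict: the metaclass of a derived class ...

    # The workaround Martin mentions: hand-write a combined metaclass.
    class FrameworkABCMeta(FrameworkMeta, abc.ABCMeta):
        pass

    class Impl(FrameworkBase, Runnable, metaclass=FrameworkABCMeta):
        def run(self):
            return "ok"

The workaround works, but every new framework/ABC pairing needs its own
combined metaclass, which is exactly the proliferation being complained
about.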
Re: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release)
Once the GH migration occurs I think we will take a look at Homu (it's
been brought up previously).

On Sun, Jul 3, 2016, 17:35 Chris Krycho wrote:
> The bots Mozilla runs for both Rust and Servo should make a lot of this
> much lower overhead if they can be repurposed (as I believe several
> other communities have already done).
>
> Homu, the build manager tool, runs CI (including buildbots, Travis,
> etc.), is integrated with GitHub PRs so maintainers can trigger it with
> a comment there, and can also roll up a bunch of changes into one:
> https://github.com/barosl/homu
> [...]
Re: [Python-Dev] Automating the maintenance release pipeline (was Re: Request for CPython 3.5.3 release)
On 4 July 2016 at 10:34, Chris Krycho wrote:
> The bots Mozilla runs for both Rust and Servo should make a lot of this
> much lower overhead if they can be repurposed (as I believe several
> other communities have already done).
>
> Homu, the build manager tool, runs CI (including buildbots, Travis,
> etc.), is integrated with GitHub PRs so maintainers can trigger it with
> a comment there, and can also roll up a bunch of changes into one
> (handy for pulling together e.g. a bunch of small documentation changes
> like typo fixes): https://github.com/barosl/homu
>
> That seems to keep the pain level of having an
> always-building-and-passing-tests nightly version much lower.

Aye, as Brett mentioned, we're definitely interested in the work
Rust/Mozilla have been doing, and it's come up in previous discussions on
the core-workflow list, like
https://mail.python.org/pipermail/core-workflow/2016-February/000480.html

However, automating the Mac OS X and Windows installer builds and the
subsequent uploads to python.org gets more challenging: at that point
you're looking at either producing unsigned binaries, or else automating
the creation of signed binaries, and the latter means you start running
into secrets management problems that don't exist for plain CI builds.

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
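To illustrate the secrets management problem Nick mentions, here is a
rough sketch of an automated Windows signing step (the certificate path,
environment variable, and timestamp server are illustrative assumptions,
not CPython's actual release tooling):

    # Illustrative sketch only - not CPython's release tooling. Signing
    # an installer in CI means the signing key and its password must be
    # available on the build machine, which is exactly the secrets
    # management problem that plain (unsigned) CI builds avoid.
    import os
    import subprocess

    def sign_windows_installer(installer_path):
        # Hypothetical: the CI system injects the certificate password
        # from a secrets store into an environment variable.
        password = os.environ["SIGNING_CERT_PASSWORD"]
        subprocess.check_call([
            "signtool", "sign",
            "/f", r"C:\secrets\release-cert.pfx",  # hypothetical key location
            "/p", password,
            "/t", "http://timestamp.example.com",  # hypothetical timestamp server
            installer_path,
        ])

An unsigned nightly build needs none of this, which is why automating
nightlies is so much easier than automating signed releases.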